Whether or not I use Claude is not going to change society
This gives me shopping cart theory vibes. I don’t usually base my moral compass on whether my action will have some kind of measurable impact, but on whether I believe it’s the right thing to do. After the intense doubling down in that discussion thread I’m definitely steering clear of lutris. It costs me very little effort to avoid projects that do icky things I don’t want to encourage (even though it may not have a measurable impact~)
I can’t fix the problem, therefore I’ll be part of the problem.
At my job we have been told that we have to start using AI more. I can’t really see any point. The only tasks AI can help me with are pointless tasks from HR that shouldn’t exist in the first place. Monthly forms with questions like “how are you feeling emotionally” used to take me ages to come up with corpo-bullshit-friendly answers for, but locally hosted DeepSeek does it in seconds.
The HR department will see that it’s not quality human HR-slop and the thought police will be with you shortly
Oh LLMs are great at writing HR slop
But then there’s no suffering
When my work enabled Gemini, I asked it how to disable it. It said it couldn’t help me and asked if I had another question. I didn’t.
That’s the only interaction I’ve willingly had with it.
In my experience, AI models are fairly good at contextual search. That’s the only thing I use them for.
Yes, if we had documentation then I suspect AI tools could be good for finding information in that.
Lutris has always been a bit hit-or-miss for me, so I avoided it unless it was the only option, since it only worked half the time. I don’t want to come off like I think it shouldn’t exist, as stuff making Linux easier to use is great, but I don’t use it at all in my current workflows.
I guess I’ve just been behind the times, but I’ve never had an incentive to switch. I just installed faugus and transferred everything over and it seems very slick. It seems to be missing 1 or 2 things, like per-game environment variables, but all the other important stuff seems to be here. I know what I’m doing with prefixes so having all the knobs to turn is great, but honestly Linux gaming does not need most of those knobs nowadays.
How does transferring work?
I only have 2 or 3 things in lutris.
I just did it manually, pointing faugus at the old prefixes and setting the launch options the same
Sick. Thanks. I’ll do the same.
Also, it is one thing to decide that something is not an ethical issue of concern, it is another thing to act with disrespect to everyone with a different opinion.
Unless that opinion is ‘I like using AI’, in which case they deserved the disrespect.
virtue ethics > utilitarianism
Utilitarianism really falls at the first hurdle of any kind of evaluation of a moral system.
It has no real prescriptive power, because it demands that you correctly foresee the outcomes of your actions. That problem was already addressed by “The road to hell is paved with good intentions”, an adage at least 400 years old, and yet people still gravitate towards utilitarianism as if society hadn’t been explicitly cautioning us about that mindset forever.
At this point I can’t help but look down on those who genuinely identify as utilitarian as either too young, too stupid, or actively malevolent and trying to find a way to justify their bad behaviours as errors rather than malice or negligence.
I’d offer you a counterpoint (ignoring the issue with Lutris and AI for a minute):
If you choose not to judge your own actions by the expected consequences of those actions for everyone involved, then how exactly are you supposed to judge them? If you’re following some rule that disagrees with the utilitarian view, then by definition it’s a rule that in your own opinion leads to a worse outcome for everyone.
It’s of course completely fine to not be utilitarian, but trying to claim that all utilitarians are either stupid or evil is just incorrect.