• 1 Post
  • 513 Comments
Joined 2 years ago
Cake day: June 10th, 2023



  • Saw people talking in comments in several places now, expressing animosity towards them to say the least, always presented as something that everyone seems to already know about.

    tl;dr It’s YouTuber drama. Consider yourself lucky you’re not so terminally online that you understand it.

    Piratesoftware is a Twitch streamer/YouTuber who speaks his mind quite bluntly and doesn’t back down after doing so. Because of that, he’s often involved in streamer drama and has a lot of people who dislike him. Haters love to bring up these past dramas whenever his name comes up.

    10 months ago, he was involved in drama when he was asked his opinion on the Stop Killing Games EU citizens’ initiative. His opinion was that he didn’t like it, and he expressed that in a crass and crude way, as he normally does. Supporters of the initiative didn’t like that, and it spawned a lot of back-and-forth arguments before dying down.

    Currently, the citizens’ initiative is short of the required signatures to move forward and the deadline is only a few weeks away. The lead guy behind the movement put out a video saying the initiative will likely fail, that he will be ending his organizing efforts when it does, and blaming it on Piratesoftware’s video from 10 months ago. That has restarted the drama.





  • why don’t they program them

    AI models aren’t programmed in the traditional sense. They’re generated by machine learning. Essentially, the model is given test prompts and then given a rating on its answers. The model’s calculations are adjusted so that its answer to each test prompt gets closer to the expected answer. Repeat this a few billion times with a few billion prompts and you end up with a model that scores very high on all the test prompts (there’s a toy sketch of this loop at the end of this comment).

    Then someone asks it how many R’s are in “strawberry” and it gets the wrong answer. The only way to fix that is to add it as a test prompt and redo the machine-learning process, which takes an enormous amount of time and computational power each time it’s done, only for people to once again quickly find some other prompt it doesn’t answer well.

    There are already AI models that play chess incredibly well. Using machine learning to solve a complex problem isn’t the issue; it’s trying to get one model to be good at absolutely everything.
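
    To make the “rate the answer, nudge the calculations” loop above concrete, here’s a toy Python sketch. Everything in it is hypothetical and drastically simplified: the “model” is a single linear function and the “prompts” are just numbers, whereas real LLMs are neural networks with billions of parameters. The shape of the loop is the point, not the details.

    ```python
    # Toy sketch of the training loop described above (not how any real LLM is trained).
    import random

    # Hypothetical training data: (prompt, expected_answer) pairs.
    test_prompts = [(x, 2 * x + 1) for x in range(-10, 11)]

    weight, bias = random.random(), random.random()
    learning_rate = 0.01

    for step in range(10_000):                  # "repeat a few billion times", scaled down
        prompt, expected = random.choice(test_prompts)
        answer = weight * prompt + bias         # the model's answer to the test prompt
        error = answer - expected               # the "rating" of that answer
        # Adjust the model's calculations so its next answer is closer to the expected one.
        weight -= learning_rate * error * prompt
        bias -= learning_rate * error

    print(f"learned answer for prompt 7: {weight * 7 + bias:.2f} (expected {2 * 7 + 1})")
    ```

    The sketch also shows why fixing one bad answer is expensive: the only lever you have is the training data and the loop itself, so patching a single wrong answer means adding it to the test prompts and running the whole adjustment process again.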








  • I have a friend who has worked for 3 companies over 6 years. She has never once released a game as they were all cancelled before release. She found out she lost her job at one company after reading an interview about a bunch of studios being shut down. One of them was the place she worked. Even her boss apparently didn’t know.

    The studio she works at now initially hired her for completely remote work, but they’ve since changed their minds and now she has to drive over 100km to work every day. She was going to quit but she’s sticking with it for now in the hopes of finishing at least one game.