• 1 Post
  • 205 Comments
Joined 3 years ago
Cake day: June 12th, 2023

  • AI promises consistency

    Lol. Along what vectors? Certainly not between racial groups or genders. Remember to include “don’t be racist” in your prompt. I’m sure that’ll fix it.

    If you want the same outcome every time, all you need is a form. No AI needed. Hand people a flowchart and file the end result. If it’s more complicated than that, AI should not be responsible. If it’s less complicated, then AI is not needed. There are some things that are required to go through the court that may no longer make sense, like name changes in my opinion, but those shouldn’t be offloaded to AI for approval, they should be moved away from the court.
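
    Something like this toy sketch is all the “consistent” version actually needs. The fields and rules here are made up, it’s just to show that fixed rules give the same outcome for the same input every time, which no prompt can guarantee:

    ```python
    # Hypothetical filing intake as a plain rule set: same form in,
    # same outcome out, every time. No model, no prompt, no drift.
    def review_filing(form: dict) -> str:
        if not form.get("signature"):
            return "rejected: missing signature"
        if not form.get("fee_paid"):
            return "rejected: fee not paid"
        if form.get("requires_hearing"):
            return "routed: schedule a hearing"
        return "approved: file the end result"

    # Deterministic by construction.
    assert review_filing({"signature": True, "fee_paid": True,
                          "requires_hearing": False}) == "approved: file the end result"
    ```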


  • No please. Do not reward this kind of posting. People are just going to flood the fediverse with this kind of engagement farming. If they want to get eyes on their content they are welcome to post it in the relevant communities, ideally with a disclaimer that they made it.

    Reddit was filled with posts like: “no one cares about my neurodivergent differently abled niece’s art project 🥺👉👈”

    Let’s not encourage that here.






  • Guy works in IT and spent $100k paying devs to make an app so people can talk to his tuned ChatGPT? I hope anyone who has hired him checks his work. That does not bode well for his work product.

    Another case from the article:

    “I still use AI, but very carefully,” he says. “I’ve written in some core rules that cannot be overwritten. It now monitors drift and pays attention to overexcitement. There are no more philosophical discussions. It’s just: ‘I want to make a lasagne, give me a recipe.’ The AI has actually stopped me several times from spiralling. It will say: ‘This has activated my core rule set and this conversation must stop.’”

    What’s weird to me is they now recognize AI will lie to you but somehow think they can prompt it not to? Your rules can be “overwritten” because they do not exist to ChatGPT. It does not know what words mean.
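
    To be concrete about why: a “core rule” is just more text in the input. A rough sketch using the OpenAI chat API (model name is illustrative), where nothing in the API enforces the rule, it’s only another message the model may or may not follow:

    ```python
    from openai import OpenAI

    client = OpenAI()

    # The "core rule that cannot be overwritten" is literally just a string
    # in the message list. There is no enforcement mechanism behind it.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": "CORE RULE: no philosophical discussions, ever."},
            {"role": "user", "content": "I want to make a lasagne, give me a recipe."},
        ],
    )
    print(response.choices[0].message.content)
    ```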






  • Not sure how I didn’t hear of this already. Apparently it’s not necessarily a breathalyzer, but the proposals include a camera facing the driver to monitor them and passive monitoring of the air in the car.

    I don’t drunk drive and barely even drink, but that’s horrifying. I can’t believe this went under the radar for me.

    More garbage that is going to break and cost thousands of dollars to fix, on top of all the privacy violations. Cars are already advertising to people. Can you imagine if they put a camera inside the vehicle? Why not invest in public transit instead? That’s a great way to decrease impaired driving of all kinds, and it helps people in general. All this does is funnel more money to automakers. I am so upset that this is the first I’m hearing of it.




  • Ridiculous that Grammarly even attempted to do this. The article was good, but at the end, though they hedged, they fell into the same trap everyone seems to: assuming AI is better at coding than it is at writing. It isn’t, and Grammarly’s tinkering here doesn’t suggest otherwise. Grammarly had a bad product, but realistically, there was likely just no effort put into this aspect of the software. Maybe I’m way off base, and I don’t support AI either way, but I just think it was a poor way to end the article. Programmers think it’s good for art, artists think it’s good for programming; it’s almost as if it’s easier to see flaws in a field you’re familiar with.


  • If you sandbox anything, it’ll be safer than otherwise. Not really sure what you’re suggesting. I would still want the code reviewed regardless of the safety measures in place.

    I wrote a program that basically auto-organizes my files for me. Even if an AI were sandboxed, only had access to the relevant files, and had no delete privileges, I would still want the code reviewed. Otherwise it could move a file to a nonsensical location and I would have to go through every possible folder to find it. Someone would have to build the interfaces/gateways and also review the code. There’s no way to know how it’s working, so there’s no way to know IF it’s working, until the code is reviewed. Regardless of how detailed your prompt is, AI will generate something that possibly (currently, very likely) needs to be adjusted. I’m not going to take an AI’s raw output and run it assuming the AI did it properly, regardless of the safety measures.
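
    Rough sketch of what I mean, folder names made up: even a sandbox that blocks deletes and escapes can’t stop a move that is allowed but wrong, which is exactly what review catches:

    ```python
    from pathlib import Path
    import shutil

    # Toy sandbox: only these destination folders are allowed, no deletes.
    ROOT = Path("~/sorted").expanduser()
    ALLOWED = {ROOT / "documents", ROOT / "photos", ROOT / "music"}

    def sandboxed_move(src: Path, dest_dir: Path) -> None:
        """Move a file, but only into an allow-listed folder."""
        if dest_dir not in ALLOWED:
            raise PermissionError(f"{dest_dir} is outside the sandbox")
        dest_dir.mkdir(parents=True, exist_ok=True)
        shutil.move(str(src), str(dest_dir / src.name))

    # The sandbox stops deletion and path escapes, but nothing here stops
    # generated code from filing tax_return.pdf under photos/ -- a "safe"
    # move that is still wrong. That's why the code still needs review.
    ```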