

This was only done because the editors pushed to minimize AI involvement. There’s a comment here already mentioning that: https://lemmy.world/comment/22826863


The images I am referring to are likely distinct from the ones in the title, since they came from his iPhone and it was Google that reported him. Regardless, the article says the detective looked at one of the Google-reported images. Whether they just referenced a known hash I don’t know for sure, but I think it’s pretty well known that FAANG companies scan basically all images for CSAM nowadays.


The detective alleges that that photograph and others she examined appeared to be stored in a folder on the iPhone titled “Girls I Drugged And Raped.”


This link was shared on Lemmy just recently about this book. I watched some of the person’s other videos, and though we have different tastes, it seems like they can identify “bad” books.


Not sure how I didn’t hear of this already. Apparently it’s not necessarily a breathalyzer, but the proposals include a camera facing the driver to monitor them and passive monitoring of the air in the car.
I don’t drunk drive and barely even drink, but that’s horrifying. I can’t believe this went under the radar for me.
More garbage that is going to break and cost thousands of dollars to fix in addition to all the violations of privacy. Cars are already advertising to people. Can you imagine if they put a camera inside the vehicle? Why not invest in public transit? That’s a great way to decrease impaired drivers of all stripes as well as help people in general. All this does is funnel more money into auto makers. I am so upset that this is the first I’m hearing of it.


Wow. What a terrible idea. There was a woman who was sent to jail in a different state for several months and lost her house, car, and dog because AI misidentified her and cops didn’t give a fuck. Cops should need a warrant for facial recognition at the very least, if it’s allowed at all. Can’t wait for “give me a smile” to be codified into law.
What a completely accurate description. The nuance of the issues being subtle yet catastrophic is always the part that I find the funniest, because how are they so incapable of seeing how that might be a universal issue? Thank you for the chuckle.
Ridiculous that Grammarly even attempted to do this. The article was good, but at the end, though they hedged, they fell into the same trap everyone seems to. AI is not better at coding than it is at writing, and their tinkering with this does not suggest that. Grammarly had a bad product, but realistically, there was likely just no effort put into this aspect of the software. Maybe I’m way off base, and I don’t support AI either way, but I just think it was a poor way to end the article. Programmers think it’s good for art, artists think it’s good for programming; it’s almost like it’s easier to see flaws in a field you’re familiar with.


If you sandbox anything, it’ll be safer than otherwise. Not really sure what you’re suggesting. I would still want the code reviewed regardless of the safety measures in place.
I wrote a program that basically auto-organizes my files for me. Even if an AI were sandboxed, had access only to the relevant files, and had no delete privileges, I would still want the code reviewed. Otherwise it could move a file into a nonsensical location and I would have to go through every possible folder to find it. Someone would have to build the interfaces/gateways and also review the code. There’s no way to know how it’s working, so there’s no way to know IF it’s working, until the code is reviewed. Regardless of how detailed your prompt is, AI will generate something that possibly (currently, very likely) needs to be adjusted. I’m not going to take an AI’s raw output and run it assuming the AI did it properly, regardless of the safety measures.
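To make that concrete, here’s a minimal sketch (in Python, with entirely hypothetical names) of the kind of gateway I mean: the organizer can only move files between allowlisted folders and can never delete or overwrite anything. Even with this in place, the code calling it still needs human review.

```python
import shutil
from pathlib import Path

def safe_move(src: str, dst_dir: str, allowed_dirs: set[Path]) -> Path:
    """Move src into dst_dir, refusing anything outside the allowlist.

    There is deliberately no delete path at all: the worst a bad
    suggestion can do is file something in the wrong (allowed) folder.
    """
    src_path = Path(src).resolve()
    dst_path = Path(dst_dir).resolve()
    if src_path.parent not in allowed_dirs or dst_path not in allowed_dirs:
        raise PermissionError(f"refusing to move {src_path} -> {dst_path}")
    target = dst_path / src_path.name
    if target.exists():
        # Never clobber an existing file; a human has to resolve collisions.
        raise FileExistsError(f"refusing to overwrite {target}")
    shutil.move(str(src_path), str(target))
    return target
```

Note this only narrows the blast radius; it says nothing about whether the moves themselves make sense, which is exactly why review is still needed.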


While I personally don’t like AI, I do think it is changing things. I don’t think it’s ever safe to run code without oversight from an actual programmer, but AI will likely affect the number of programmers being hired in a non-negligible way.


I keep seeing this, but I think people forget what things were like before we had a formalized education system. They were not good. The modern system can certainly be improved, but it has overall improved upon itself essentially since its inception. I’m not sure if there’s some kind of golden age of US education people are imagining, or they’re just pointing out current flaws, but it really is (unfortunately or not) the best it’s ever been.


That’s really not fair to universities or the inventors. Knowledge is not inherently evil, and things that have far reaching positive impacts can be used for nefarious purposes. Modern society has perverse incentives, but individuals adding to a corpus of humanity’s knowledge are not the ones at fault.


Often times these purchases are not for the product itself, but how it can be incorporated into an existing product. I imagine if Meta makes bot accounts for people to follow/engage with, they can increase user retention and therefore ad revenue.


Yang is a grifter and no one should listen to him. Companies will happily use any excuse to fire employees and create a perception of job scarcity so that they can rehire workers who are scared and desperate and willing to take less compensation for more work.
All of that said, AI is definitely being incorporated quite heavily into a lot of products. It’s already caused issues with services we all rely on, and I hope we are able to hold companies accountable and stop patronizing them wherever possible. AI cannot do a lot of the things they are pretending it can and we are paying the price, not the companies responsible.


That behaviour would probably have the opposite effect that the people who created this rule would want.
Why are you suggesting that? Ignoring capitalistic incentives, the rule is theoretically in place to increase safety. Your decision would have no impact on safety so I’m not sure why you think it would have the opposite effect.


Uber is not society. It is making a decision it thinks will increase its revenue. It is indeed a sign of a lack of progress, but the people responsible for the progress you want are us. I’m not chiming in on the policy itself, but your comment makes it feel like you are not as committed to the progress you want made. There are men in this thread saying that they hate what women have to put up with, but understand it and want them to feel safe. That’s not what I’m getting from your comment. If I had to choose between being in a forest with you vs. them, I’d choose them, because it seems like you’re more concerned with how you’re perceived than with how other people are actually affected. I can imagine that being viewed as a predator must be uncomfortable, but women are often viewed as prey, and that’s not great either. I don’t want to start playing at oppression olympics, but a post about a move to theoretically increase women’s safety has you responding about your feelings as a perceived predator. That makes it seem like you don’t think we as a society should do things that make women feel safer, because doing so makes you feel like you’re being viewed as a predator.
I for the most part don’t mind being around male strangers, but the ones that give me extra room on a sidewalk or in a bar are undoubtedly the ones I’m most comfortable around and ones I’d be most likely to engage with. Not because the others make me feel unsafe but because they make me feel safe. It’s like if you invite someone into your house you can offer them food or a drink to help them feel comfortable or you can just not. You’re not necessarily a bad person for not offering something, just potentially perceived as less inviting. Society is still seen and felt as the dominion of men for a lot of people, so when men go out of their way to make space for us, it signals that they are friendly and welcoming and want us to feel safe. I think if you want to work on that divide, the best thing to do is make the women you’re around feel safe. It’s unfortunate, but it’s up to us to destigmatize our own identities. I just don’t think your comment does that.


Per RAINN, 57% of perpetrators are white. I’ll charitably imagine you’re attempting to point out perceived hypocrisy in gender vs. race selection, but you’re perpetuating racist and xenophobic stereotypes. White men commit rape at more than twice the rate of black men, and native-born citizens commit crimes at higher rates than both documented and undocumented immigrants.
If you want to make the case that it’s a discriminatory policy, you’re welcome to do so, but tying it to false perceptions of race is probably not the best move. It’s coming off as reactionary at best.


Thank you for this comment. I have backups I tested on implementation and rummaged through two years ago after a weird corruption issue, but not once since. I still get alerts about them, so I just assume they’re fine, but first thing Monday I’m gonna test them. I feel stupid for not having implemented regular checks already, but will do so now.
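For anyone in the same boat, a restore test doesn’t have to be elaborate. Here’s a rough sketch (Python, hypothetical names, assuming plain tar archives; adapt it to whatever your backup tool actually is) that extracts each file from the archive in memory and compares its SHA-256 against the live copy. This only makes sense for files that shouldn’t have changed since the backup was taken.

```python
import hashlib
import tarfile
from pathlib import Path

def verify_backup(archive: str, originals_dir: str) -> list[str]:
    """Restore-test a tar backup: read each member back out and compare
    its SHA-256 against the live copy. Returns the names that mismatch
    or are missing, so an empty list means the archive checks out."""
    mismatches = []
    base = Path(originals_dir)
    with tarfile.open(archive, "r:*") as tar:
        for member in tar.getmembers():
            if not member.isfile():
                continue
            data = tar.extractfile(member).read()
            live = base / member.name
            if (not live.exists()
                    or hashlib.sha256(data).hexdigest()
                    != hashlib.sha256(live.read_bytes()).hexdigest()):
                mismatches.append(member.name)
    return mismatches
```

Run something like this on a schedule and alert on a non-empty result, and you at least know the archive is readable and the bytes match, instead of trusting the backup tool’s own “success” alerts.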


Yea, I mentioned in my comment that there was a confluence of issues, but the article does point out that the AI translation made the statement more definitive.
Edit to add:
As part of our post-mortem on this article’s evolution, PCWelt’s executive editor pointed out that the translation makes the article sound more definitive than its native German. He says that in the context of the article, the German word “soll” signals a rumored expectation, but the English translation used “will” instead of something more akin to “is rumored to.”
A guy working in IT spent $100k paying devs to make an app so people can talk to his tuned ChatGPT? I hope anyone who has hired him checks his work. That does not bode well for his work product.
Another case from the article:
What’s weird to me is that they now recognize AI will lie to you, but somehow think they can prompt it not to? Your rules can be “overwritten” because they do not exist to ChatGPT. It does not know what words mean.