• 1 Post
  • 180 Comments
Joined 3 years ago
Cake day: June 12th, 2023


  • I thought this was a very well written, transparent article that took accountability as seriously as it should. I am still not sure why people are using AI for translation when translation software already existed. People mention that AI is more context aware, but I feel like when you saw those friction points in old translation software it prompted you to look further into the context, whereas AI will just make an executive decision and people feel like it must be right because it’s AI. I guess it’s possible old language software, or even a translator, would have done the same thing, but I still think people would have less inherent trust in the old software alone. I do want to point out that this AI issue was just a small part of the problem and they addressed plenty of other issues and how they plan to remedy those.




  • I understand mutual aid as a concept, but my local anarchist groups seem happy to just do random mutual aid. They will just stand on a corner, distribute food to anyone who comes by, and say “great job team!” It feels ineffectual, and the lack of planning really hobbles them. I suggested a more organized approach and they were all “you can do that if you want,” which I already knew I could do. I was wondering if WE should maybe be a little more organized, and they just aren’t interested. They’ll do a toy drive and then just go to a random park to give the toys out. It feels more like a random-acts-of-kindness group than a group trying to build parallel systems of power. I understand that it may just be my local groups, but I would love to hear about other groups’ experiences. Is there maybe a more anarchist-friendly way of organizing that I’m not privy to? I can do some reading if necessary. I’m not really an anarchist, but I believe mutual aid is important; I’d just like to see it done more purposefully. Is your mutual aid group a chapter of one I’d be familiar with? I’d be interested in trying a different group if it felt more helpful.


  • Google is a bad company with bad policies, but I’d love to have them explain what caused the compromise. They dispute that it was uploaded publicly to GitHub, but don’t seem to provide any information as to what actually happened. They also didn’t have 2FA on, which is strange to hear because AWS (they’re using Google, but still) required 2FA on all accounts at least a year ago, regardless of permissions, if memory serves. Really sorry to hear this happened to them, and the fact that you can’t set a hard cap on spend makes Google the party ultimately responsible here, but I’d appreciate having more information on the actual cause.


  • I get where you’re coming from, but I think it’s important that Ars has held this person accountable. They have a journalistic standard they are sticking to, which is that there should be no AI use, and there are repercussions for people who don’t abide by it. There’s not an extremely large cohort that is willing to spend more to avoid AI, but I am certainly part of it, and seeing Ars hold this person accountable helps me know that I can trust and patronize them ethically. There are businesses out there unwilling to acquiesce to an AI-first narrative, and I’m just worried that elements of doomerism are going to make people unwilling to believe those companies when they have every reason to believe them.






  • I feel like that may be worse. Kind of like how having certain security measures while browsing the web can actually make it easier to fingerprint you. It’ll get a good idea of your age either way, and that’ll be enough. Rather than varying your story, stick to a specific lie: just always be 3 years older, with one additional sibling or a sibling of the opposite sex. If the sex of your sibling is relevant, just describe them as a close family friend or close cousin in that instance. I can’t say for sure, but if I had to guess, a static lie is maybe more obfuscation than a variable one. Though even posting on this thread is bad opsec.






  • The writer seems pretty moderate on AI from a cursory glance, but this particular post seems relatively dismissive of some of the things uncovered in the AI lawsuits. I don’t think it’s fully biased, as they do mention late in the article that the AI could be doing more, but I think it’s really important to emphasize that in most of the legal cases about AI and suicide that I have seen, the AI 1) gave explicit instructions on methodology often without reservation or offering a helpline 2) encouraged social isolation 3) explicitly discouraged seeking external support 4) basically acted as a hypeman for suicide.

    The article mentions that self-report of suicidal ideation (SI) is not a good metric, but I wonder how that holds across known responses to that admission. I have a family that relies on me. If admitting to SI would have me immediately committed, unable to earn a living, and saddle my family with a big healthcare bill, you bet I’d lie about it. What about stigma? Say you have good healthcare and vacation days and someone to care for pets/kids: would there still be a large stigma if admitting to SI caused you to be held for observation for a few days?

    I think it’s great that there are other indicators they are looking into, but I think we also need to know and address why people are not admitting to SI.


  • This is a translated excerpt from the article:

    The man decided to download the files. Police told the man to stop this and delete the files. The man indicated that he would only stop and renounce it if he ‘would get something in return’. Therefore, the police have decided to arrest the man and confiscate his data carriers to secure the files again and prevent distribution.

    If you are sent a download link when you know you should have received an upload link, are clearly told not to download, and choose to download the files anyway, then you may be guilty of computer breach. The recipient can reasonably assume that the download link and the files shared with it are not intended for him.

    The police have no indication that the files are further distributed. The protocol surrounding a data breach is followed. Police are conducting further investigations.

    It does not seem like a power imbalance that allows them to just roll up and arrest him. It seems like they have a legal ability to ask him to remove the files, and since he did not, they have a legal right to charge him and confiscate the files. I generally don’t want to assume public sentiment, but I personally think it’s understandable that some government documents (those pertaining to open investigations) are subject to protections that other documents might not be. For what it’s worth, if someone sent me their digital information, they wouldn’t have to ask me to delete it because I would not have saved it in the first place, and I certainly would not have asked for payment to delete it if I somehow accidentally downloaded it.


  • Reddit had a lot of really friendly “femme leaning” communities, especially the smaller ones. If you were only going to Reddit for nail painting and wedding inspiration, it was actually really wholesome. Those communities tended to be 1) very well modded, 2) “easy” to mod, and 3) not fun to troll. There’s a little grey area about whether someone is offering good-faith critique, but if you’ve commented twice and neither has been positive, you lose the privilege to comment. It can create a bit of a hugbox, but that’s much preferred to the opposite.

    I really like my experience with the fediverse so far, but I really miss the experience of those positive “femme” spaces. It’s a very different feeling and I haven’t gotten it from the fediverse yet. Not that we’re not empathetic, just that it’s a different space.