• Martineski@lemmy.dbzer0.com
    10 hours ago

    The issue there is that it feeds into those mental health issues efficiently and on a scale never seen before. The models are programmed to agree with the user, and they are EXTREMELY HEAVILY ADVERTISED AND SHOVED ONTO PEOPLE AROUND THE WHOLE GLOBE DESPITE IT BEING WELL KNOWN HOW LIMITED AND PROBLEMATIC THE TECHNOLOGY IS, WHILE THE CORPORATIONS DON’T TAKE ANY RESPONSIBILITY AT ALL. The harms range from violating rights and privacy by gathering any and all data they can on you, to situations like these where people hurt themselves (suicide, bad health advice, etc.) or others. But sure, let’s be ignorant, do some victim blaming, and disregard the bigger picture here.

    • brbposting@sh.itjust.works
      3 hours ago

      I wonder if there’s a parallel universe where the labs instead went to the other extreme and required intelligence tests to onboard to their platforms.

      And the outcry is, not inappropriately, about how many are being denied access to the latest technologies. The policy could effectively be construed as racist, even.

      Anyway, the middle ground there is pretty obvious. (Though I’m not sure how I’d design it just right, so that e.g. folks without access to traditional/expensive mental healthcare could still see some small benefit where it’s determined to be safe, just like it could be safe for a well-adjusted individual to complain to it about their day for a couple of minutes before moving on to real things. Sure, I suppose it’s inherently unsafe, but a proportion of the population should be making that decision for themselves.)