• 0 Posts
  • 87 Comments
Joined 1 year ago
Cake day: June 5th, 2023


  • People developing local models generally have to know what they’re doing on some level, and I’d hope they understand what their model is and isn’t appropriate for by the time they have it up and running.

    Don’t get me wrong, I think LLMs can be useful in some scenarios, and they can be a worthwhile jumping-off point for someone who doesn’t know where to start. My concern is with the cultural issues and the expectations/hype surrounding “AI”. Given how the tech is marketed, it’s pretty clear the end goal is for people to use the product as a virtual-assistant endpoint for as much information (and interaction) as can possibly be shoehorned through it.

    Addendum: local models can help with this issue, since they run on one’s own hardware, but they still need to be deployed and used with reasonable expectations: they are fallible aggregation tools, not to be taken as an authority in any way, shape, or form.


  • On the whole, maybe LLMs do make these subjects more accessible in a way that’s a net positive, but there are a lot of moneyed interests that make positive, transparent design choices unlikely. The companies that create and tweak these generalized models want to make a return in the long run. Consequently, they have deliberately made their products speak in authoritative, neutral tones to make them seem more correct, unbiased, and trustworthy to people.

    The problem is that LLMs ‘hallucinate’ details as an unavoidable consequence of their design. People can tell untruths as well, but if a person lies or misspeaks about a scientific study, they can be called out on it. An LLM can’t be held accountable in the same way, because it’s essentially a complex statistical prediction algorithm. Non-savvy users can be fed misinfo straight from the tap, and bad actors can easily generate correct-sounding misinformation to deliberately sway others.

    ChatGPT completely fabricating authors, titles, and even (fake) links to studies is a known problem. Far too often, unsuspecting users take its output at face value because it sounds correct. This is bad, and part of the issue is marketing these models as though they’re intelligent. They’re very good at generating plausible responses, but that should never be construed as being good at generating correct ones.
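    To make the “statistical prediction” point concrete, here’s a toy sketch (the probabilities are invented for illustration, not taken from any real model): decoding just samples whichever continuation is statistically likely given the context, and no step anywhere checks the output against reality.

    ```python
    import random

    # Toy probabilities, made up for this example -- not any real model's numbers.
    # Note that "is it true?" appears nowhere in the data or the algorithm.
    next_token_probs = {
        '"Smith et al., 2019"': 0.41,                  # paper-shaped, possibly nonexistent
        '"Jones et al., 2021"': 0.35,                  # equally paper-shaped, equally unverified
        '"I could not find a source for that"': 0.24,  # honest, but less "paper-like"
    }

    def sample(probs):
        """Pick a continuation in proportion to its probability,
        roughly how an LLM decoder picks the next token."""
        r = random.random()
        cumulative = 0.0
        for token, p in probs.items():
            cumulative += p
            if r < cumulative:
                return token
        return token  # guard against floating-point rounding

    # The sampler never consults a citation database or any ground truth:
    print("Model cites:", sample(next_token_probs))
    ```

    The most plausible-sounding citation wins most often, whether or not it exists.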





  • Early voting is an option in many places too!

    Voting early is usually less stressful, and it’s easier to schedule (because election day isn’t a national holiday for some reason). Look up the dates early voting runs in your county, find out which polling stations you can vote early at, and make a plan!

    As far as changing minds though… yeah, everyone is pretty much locked in at this point. I just hope people in the US cast a ballot even if they don’t plan on voting for the president. There are so many downballot positions for local offices that one’s vote can have a huge impact on.

    I think if people are resigned to not picking between outright vs. lite genocide (understandably), the best thing they can do is research their local elections, make a list of who they plan to choose for each office, and decide on the president (including the choice to do a write-in or leave it blank) when they get to the ballot box.



  • Advertising is like the kudzu vine: neat and potentially useful if maintained responsibly, but more than capable of growing out of control and strangling the landscape if you don’t constantly keep it in check. I think, for instance, that a podcast or over-the-air show running an ad read with an affiliate link is fine for the most part, as long as it’s relatively unobtrusive and doesn’t limit what the content would otherwise cover.

    The problem is that advertiser expectations need a reset. Right now, they expect the return on investment that comes from hyper-specific and invasive data, and I don’t think you can get that same level of effectiveness without it. The current advertising model is entrenched, and its parasitic roots have eroded the foundation. Those roots will always be parasitic, because that’s the nature of advertising, and of the profit motive in general when left unchecked.







  • Anything within a sealed loop, such as blood or brain fluid, shouldn’t be boiling. Your body is pretty good at keeping that stuff inside as long as you don’t have any major cuts or something. That said, I don’t think even a minor cut suffered in vacuum could clot or scab without oxygen.

    All of the air in any of your orifices would rapidly get sucked out (including from one’s butt), and pretty much any liquid exposed to the resulting vacuum would boil. With the ambient pressure at essentially zero, body-temperature liquids sit above their boiling point, so they keep flashing to vapor, which then gets sucked out in turn. It’s a feedback loop!

    A space-exposed corpse would likely end up quite dehydrated for the above reason.
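
    To put a rough number on the boiling claim: water’s vapor pressure at body temperature is about 47 mmHg (~6.3 kPa), the so-called Armstrong limit, so at any ambient pressure below that (vacuum very much included) exposed fluids boil. A quick back-of-the-envelope sketch via the Antoine equation, using the commonly tabulated water constants for roughly 1–100 °C:

    ```python
    # Antoine equation for water: log10(P_mmHg) = A - B / (C + T_C).
    # Constants below are the commonly tabulated values for ~1-100 degrees C.
    A, B, C = 8.07131, 1730.63, 233.426

    def water_vapor_pressure_mmhg(temp_c):
        """Equilibrium vapor pressure of water at temp_c, in mmHg."""
        return 10 ** (A - B / (C + temp_c))

    body_temp_c = 37.0
    p_mmhg = water_vapor_pressure_mmhg(body_temp_c)
    p_kpa = p_mmhg * 0.133322  # mmHg -> kPa

    # Prints roughly 47 mmHg (6.3 kPa); below that ambient pressure,
    # body-temperature water is past its boiling point.
    print(f"Vapor pressure at {body_temp_c} C: {p_mmhg:.1f} mmHg ({p_kpa:.1f} kPa)")
    ```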