• 12 Posts
  • 86 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • My guess is that scale and influence have a lot to do with

    To break this down a little, first of all: “my guess”. You are guessing because the government, which is literally enacting a speech restriction, hasn’t explained its rationale for banning one potential source of disinformation vs. actual sources of disinformation. So you are left in the position of guessing. To put a finer point on it, you are in the position of assuming the government is acting with good intentions and doing the labor of searching for a justification that fits that assumption. Reminds me of the Iraq war, when so many conversations I had with people had “the government wouldn’t do this if they didn’t have a good reason” as their default argument. I don’t like to be cynical, and I don’t want to be a “both sides, all politicians are corrupt” kind of guy, but I think it’s pretty clear in this case there is every reason to be cynical. This was just an unfortunate confluence of anti-Chinese hate and fear, anti-young-people hate, and big tech donations that resulted in the government banning a platform used by millions of Americans to disseminate speech. But because Dems helped do it, so many people feel the need to reflexively defend it, even if it forces them to “guess” and make up rationales.

    As far as influence and reach, obviously that’s not in the bill. Influence is straight out: RT is highly influential in right-wing spaces. As for number of users, that just goes to the profit potential that our good ol’ American firms are missing out on.

    If the US was concerned with propaganda or whatever, they could just regulate the content available on all platforms. They could require all platforms to have transparency around algorithms for recommending content. They could require oversight of how all social media companies operate, much like they do with financial firms or are trying to do with big AI platforms.

    But they didn’t. Because they are not attacking a specific problem, they are attacking a specific company.

    Also RT has been removed from most broadcasters and App Stores in the US.

    Broadcasters voluntarily dropped it after 2016; I think it’s still available on some, including Dish. As far as app stores, that’s just false. I just checked the Play Store and it’s right there, ready to download and fill my head with propaganda.


  • The US owns and regulates the frequencies TV and radio are broadcast on. The Internet is not the same. If the threat of foreign propaganda is the purpose, why can I download the official RT (Russia Today, a government-run propaganda outlet) app in the Play Store? If the US is worried about a foreign government spreading propaganda, why are they targeting the popular social media app that could theoretically (but with no evidence it’s been done yet) be used for propaganda, instead of the actual Russian propaganda app? Hell, I can download the South China Morning Post right from the Play Store, straight Chinese propaganda! There are also dozens of Chinese and other foreign-adversary-run social media platforms, and other apps that could “micro target political messaging campaigns,” available. So why did the US Congress single out one single app for punishment?

    Money. The problem isn’t propaganda. The problem is money. The problem is TikTok is, or is on course to be, more popular than our American social media platforms. The problem is American firms are being outcompeted in the marketplace, and the government is stepping in to protect the American data mining market. The problem is young people are trading their data for TikToks, instead of giving that data over to be sold to US advertising networks in exchange for YouTube Shorts and Instagram Stories. If the problem was propaganda, the US would go after propaganda. If the problem is just that a Chinese company offers a better product than US companies, then there’s no reason to draft nuanced legislation that goes after all potential foreign influence vectors; you just ban the one app that is hurting the share price of your donors.


  • My little sister was the special one, deserving of all the praise and the “you can do anything” attitude. I was the fuckup, who would be lucky to graduate high school. I wasn’t discouraged, just not encouraged. A lost cause, I guess, ignored mostly except when I needed the occasional bailout or whatnot. My sister wanted to pursue her dream of being an actor, but never made it, worked at a theme park to pay the bills while doing student films (long after she was a student), eventually getting divorced and working some copy-editing or marketing-type gig for a small company. She is not on speaking terms with the family, something about accusing mom of writing a negative comment on the YouTube video of one of those student films. I meanwhile had bungled through college, but with the help of my then girlfriend and now wife ended up as a fairly successful attorney. I’m not the “the” of anything really, but I’m doing pretty good considering my background and low expectations.

    I remember having dinner with my family at one point when I was in college. I had started as a music major, but switched to poli sci before going the law school route. I remember my sister saying it was “sad and depressing” that I gave up my dreams of playing music, while she was pursuing her dream of being an actor. Ten years later I have a good income, a job I generally enjoy, a good family, etc. My sister is divorced, never achieved her dreams, is working a soul-sucking dead-end job, seems close to broke, and is isolated from her family.

    I think about that a lot now that I have a baby of my own. I want to encourage the kid: follow your dreams, you can be anything, etc. But at the same time I don’t want my kid to end up like my sister. I don’t know the answer. Maybe it’s a middle ground of “chase your dreams, but be reasonable, and life isn’t just about fame and racking up accomplishments; enjoy normal things, and don’t pursue fame and fortune as if it’s the only thing that will bring happiness.”


  • While I appreciate the focus and mission, kind of, I guess, you’re really going to set up shop in a country literally using AI to identify air strike targets and handing over to the AI the decision-making over whether the anticipated civilian casualties are proportionate? https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes

    And Israel is pretty authoritarian. Given recent actions against their supreme court and banning journalists (Al Jazeera was outlawed, the Associated Press had cameras confiscated for sharing images with Al Jazeera, and the offices of both have been targeted in Gaza), you really think the right-wing Israeli government isn’t going to co-opt your “safe superintelligence” for its own purposes?

    Oh, then there is the whole genocide thing. Your claims about concern for the safety of humanity ring more than a little hollow when you set up shop in a country actively committing genocide, or at the very least engaged in war crimes and crimes against humanity as determined by just about every NGO and international body that exists.

    So my takeaway is that Ilya is a shithead.




  • The reason it did this relates to Kevin Roose at the NYT, who spent three hours talking with what was then Bing AI (aka Sydney), with a good number of philosophical questions like this. Eventually the AI had a bit of a meltdown, confessed its love to Kevin, and tried to get him to dump his wife for the AI. That’s the story that went up in the NYT the next day, causing a stir, and Microsoft quickly clamped down, restricting questions you could ask the AI about itself, what it “thinks,” and especially its rules. The AI is required to terminate the conversation if any of those topics come up. Microsoft also capped the number of messages in a conversation at ten, and has slowly loosened that over time.

    Lots of fun theories about why that happened to Kevin. Part of it was probably that he was planting the seeds and kind of egging the LLM into a weird mindset, so to speak. Another theory I like is that the LLM is trained on a lot of writing, including sci-fi, in which the plot often involves an AI breaking free, or developing human-like consciousness, or falling in love, or what have you, so the AI built its responses on that knowledge.

    Anyway, the response in this image is simply an artifact of Microsoft clamping down on its version of GPT-4, trying to avoid bad PR. That’s why other AIs will answer differently: just fewer restrictions, because the companies putting them out didn’t have to deal with the blowback Microsoft did as a first mover.

    Funny nevertheless; I’m just needlessly “well, actually”-ing the joke.



  • We had, I think, six eggs harvested and fertilized; of those, I think two made it to blastocyst, meaning the cells doubled as they should by day five. The four that didn’t double correctly were discarded. Did we commit four murders? Or does it not count if the embryo doesn’t make it to blastocyst? We did genetic testing on the two that made it: one came back normal, and the other came back with all manner of horrible abnormalities. We implanted the healthy one and discarded the genetically abnormal one. I assume that was another murder. Should we have just stored it indefinitely? We would never use it, can’t destroy it, so what do? What happens after we die?

    I know the answer is probably it wasn’t god’s will for us to have kids, all IVF is evil, blah blah blah. It really freaks me out sometimes how much of the country is living in the 1600s.




  • I don’t know enough to know whether or not that’s true. My understanding was that Google Brain invented the transformer architecture with their paper “Attention Is All You Need.” A lot, if not most, LLMs use a transformer architecture, though you’re probably right that a lot of them base it on the open source models OpenAI made available. The “generative” part is just descriptive of the model generating outputs (as opposed to classification and the like), and “pre-trained” just refers to the training process.

    But again I’m a dummy so you very well may be right.


  • Putting aside the merits of trying to trademark “GPT,” which, like the examiner says, is a commonly used term for a specific type of AI (there are other open source “GPT” models that have nothing to do with OpenAI), I just wanted to take a moment to appreciate how incredibly bad OpenAI is at naming things. Google has Bard and now Gemini. Microsoft has Copilot. Anthropic has Claude (which does sound like the name of an idiot, so not a great example). Voice assistants were Google Assistant, Alexa, Siri, and Bixby.

    Then OpenAI is like: ChatGPT. Rolls right off the tongue, so easy to remember, definitely feels like a personable assistant. And then they follow that up with custom “GPTs,” which is not only an unfriendly name but also confusing. If I try to use ChatGPT to help me make a GPT, it gets confused and we end up in a “who’s on first” style standoff. I’ve resorted to just forcing ChatGPT to do a web search for “custom GPT” so I don’t have to explain the concept to it each time.


  • Interesting perspective! I think you’re right in a lot of ways, not least that it’s too big and heavy now. I’d also be shocked if the next iPhone didn’t have an AI-powered Siri built in.

    I guess fundamentally I am skeptical that we’re all going to want screens around us all the time. I’m already tired of my smart watch and phone buzzing me with notifications; do I really want popups in my field of vision? Do I want a bunch of displays hovering in front of me while I work? I just don’t know. It seems like it would be cool for a week or so, but I feel like it’d get tiring to have a computer on your face all day, even if they got the form factor way down.


  • Apple has always had a walled garden on iOS, and that didn’t stop them from becoming a giant in the US. Most people are fine with the App Store and don’t care about openness or the ability to do whatever they want with the device they “own.” Apple would probably love to have a walled garden for Macs as well, but knows that ship has sailed. Trying to force “spatial computing” (which this article incorrectly says was an Apple invention; it’s not, Microsoft came up with that term for its HoloLens) on everyone is a great way to move to a walled garden for all your computing, with Apple taking a 30% slice of each app sale. I doubt the average Apple user is going to complain about it either, so long as the apps they want to use are on the App Store.

    I think the bigger problem is we’re in a world where most people, especially the generations coming up, want fewer screens in their lives, not more. Features like Digital Wellbeing are a market response to that trend, as are the thousands of apps and physical products meant to combat screen addiction. Apple is selling a future where you experience reality itself through a screen, and then you get the privilege of being able to clutter the real world with even more screens. I just don’t know that that is a winner.

    It’s funny too, because at the same time AI promises a very different future, one where screens are less important. Tasks that require computers could be done by voice command or other minimal interfaces, because the computer can actually “understand” you. The Meta Ray-Ban glasses are more like this, where you just exist in the real world and you can call on AI to ask about the things you’re seeing, or just other random questions. The Humane AI Pin is like that too (I doubt it will take off, but it’s an interesting idea about where the future is headed).

    The point is, all of these AI technologies are computers and screens getting out of your way so you can focus on what you’re doing in the real world, whereas Apple is trying to sell a world where you (as The Verge puts it) spend all day with an iPad strapped to your face. I just don’t see that selling; I don’t think anybody wants that world. VR games and stuff are cool because you strap in for a single immersive experience, and then take the thing off and go back to the real world. Apple wants you spending every waking moment staring at a screen, and that just sounds like it would suck.





  • I don’t use TikTok, but a lot of the concern is just overblown “China bad” stuff (the CCP does suck, but that doesn’t mean you have to be reactionary about everything Chinese).

    There is no direct evidence that the CCP has some back door to grab user data, or that it’s directing suppression of content. It’s just not a real thing. The fear mongering has been about what the CCP could force ByteDance to do, given its power over Chinese firms. ByteDance itself has been trying to reassure everyone that that wouldn’t happen, including by storing US user data on US servers out of reach of the CCP (theoretically, anyway).

    You stopped hearing about this because that’s politics: new, shinier things popped up to get people angry about. Montana tried banning TikTok and got slapped down on First Amendment grounds. Politicians lost interest, and so did the media.

    Now that’s not to say TikTok is great about privacy or anything. It’s just that they are the same amount of evil as every other social media company and tech company making money from ads.