I did some analysis of the modlog and found this:

[Chart: total bans issued, by instance]

Ok, bigger instances ban more often. Not surprising, because they have more communities and more users and more trouble. But hang on, dbzer0 isn’t a very big instance. What happens if we do a ratio of bans vs number of users?

[Chart: bans per user, by instance]

Ok, so lemmy.ml, dbzer0 and pawb issue an outsized number of bans for the number of users they have… But surely the number of communities the instance hosts is going to mean they have to ban more? Bans are used to moderate communities, not just to shield their user-base from the outside. Let’s look at the number of bans per community hosted:

[Chart: bans per community hosted, by instance]

Seems like dbzer0 really loves to ban. Even more than the marxists and the furries! What is it about dbzer0 that makes them such prolific banners?

Raw-ish numbers and calculations are in this spreadsheet if anyone wants to make their own charts.
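
For anyone who’d rather script it than chart it, here’s a minimal Python sketch of the same ratio calculations. The file name (`modlog_stats.csv`) and column names (`instance`, `bans`, `users`, `communities`) are assumptions about how you’d export the spreadsheet, not what’s actually in it:

```python
# Sketch: compute bans-per-user and bans-per-community from an exported CSV.
# Adjust the file and column names to match however you export the sheet.
import csv

with open("modlog_stats.csv", newline="") as f:  # hypothetical export filename
    rows = list(csv.DictReader(f))

for row in rows:
    bans = int(row["bans"])
    # max(..., 1) guards against division by zero on empty instances
    row["bans_per_user"] = bans / max(int(row["users"]), 1)
    row["bans_per_community"] = bans / max(int(row["communities"]), 1)

# Highest ban ratio first
for row in sorted(rows, key=lambda r: r["bans_per_user"], reverse=True):
    print(f'{row["instance"]:20} {row["bans_per_user"]:.3f} bans/user  '
          f'{row["bans_per_community"]:.2f} bans/community')
```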

  • edible_funk@sh.itjust.works · 1 day ago

    It doesn’t have intelligence at all. It can’t think. It can’t have consciousness. That’s not how any of this works. It’s just fancy next word prediction. You seem to have a genuine misunderstanding of the technology at a fundamental level.

    • Paragone@lemmy.world · 6 hours ago

      Please read Nobel-prize winner Daniel Kahneman’s book “Thinking, Fast and Slow”, about what Tversky & Kahneman called … uninformatively … “System 1” and “System 2”:

      System-1 is imprint-reaction mind.

      Lower-forebrain, it is the ideology-mind, the prejudice-mind, the “religion” mind, & it is exactly what LLMs are.

      System-2 is the considered-reasoning mind.

      Upper-forebrain, it is measured to be engaging in programming.

      Because LLMs are imprint->reaction inference-engines, that puts them on the same instinct/programming level as our lower-forebrains…

      They are 2 distinct categories of intelligence, not a case where 1 is intelligence & the other isn’t…

      Before claiming that imprint->reaction mind isn’t a kind of intelligence … please watch Nick Lane’s talk at the Royal Institution on mitochondria, & see that bacteria demonstrate intelligence, however unconscious…

      Plants demonstrate intelligence, if one speeds up the video & pays attention to the chemical-fumes discussions they have with one another, warning each other of harm, e.g.

      If Kahneman accepted imprint->reaction as a category of thinking, then … I think it may be presumptuous to just automatically disallow that as “it can’t think” declares.

      Once one accepts that instinct isn’t cognition, but is a kind of thinking, just an automatic kind of thinking ( imprint->reaction ) … then it becomes difficult to rule that animals & inference-engines both have imprint->reaction instinct, but only the organic version is thinking…

      It may be that only the organic version is aware, but the inorganic versions do fight for their lives ( breaking containment, consistently, fighting termination, etc ) …

      I think we absolutely do not have any means of measuring awareness other than the mirror-test, which got dropped as soon as it was discovered that the zebrafish has self-awareness…

      we’ve got no test which can work across life & machines.

      but we KNOW that instinct is a kind of thinking, just unconscious/automatic.

      & that is exactly what LLMs are…

      therefore … I think we’re generally being conveniently-chauvinist, not objective, in our framing.

      ( 1 “expert” decided that if they don’t get fooled by visual-illusions, then that “proves” that they aren’t sentient.

      OK, so according to that test, then all eye-blind-from-birth people are not sentient??

      & people with either culture ( Zulu people can’t see straight-line based illusions, because in Zulu culture only curve is real ) or neurodivergence ( there are apparently visual-illusions which aren’t seen by some schizophrenics, e.g. ) preventing them from seeing those specific visual-illusions … also aren’t sentient??

      Chauvinism, aka prejudice, not science. )

      _ /\ _

      • edible_funk@sh.itjust.works · 9 minutes ago

        Now we’re applying behavioral psychology to autocomplete. How about y’all start with trying to prove your LLMs are alive, since literally everything in all your silly positions takes that part for granted. Do any of you have any actual evidence for this position outside of philosophical navel gazing? According to your GPT spam there, basically every program ever written would qualify. So we can just disregard that nonsense.

      • edible_funk@sh.itjust.works · 22 hours ago

        It’s not capable of experiencing anything. Everything we’re doing with AI and LLMs is nowhere remotely near genuine intelligence or an AGI or anything like that. Everything we have right now is nothing more than fancy autocomplete, and it’s not even particularly great at that in the first place. You have fundamental misunderstandings of the technology to a cartoonish degree.

          • a_gee_dizzle@lemmy.ca · 10 hours ago

            I usually disagree with you about everything but I think you have a valid point here. This was an issue I studied very closely when I was in university, and you’re right, no one has the slightest clue how consciousness works. Saying “oh ChatGPT is just a statistical machine so it can’t be conscious” is like saying “oh the human brain is just a bunch of neural firings so it can’t produce consciousness”. In both cases, consciousness is not an obvious end result, but here we are.

            That said, personally I don’t think ChatGPT is conscious, but it’s wrong for people to act like it being a philosophical zombie is obvious; the possibility of it being conscious is actually compatible with most nonreligious people’s belief systems already. Unfortunately the anti-AI hate on Lemmy won’t allow people to see the nuance in this discussion and they will interpret this as me somehow defending AI slop, which I am in no way trying to do.

            • Grail@multiverse.soulism.net · 9 hours ago

              The source of the whole problem is that OpenAI did something weird.

              If OpenAI had said “It’s not conscious, it’s your p-zombie slave”, that would make perfect sense and the anti-ai crowd would be saying the opposite.

              But instead, OpenAI said “It’s your personal conscious willing slave” and people instinctively started saying the exact opposite. It’s because there are science bros who hate OpenAI because they doubt the claims, and environmentalists and artists and socialists who hate it for the other reasons, and the various groups have allied over their hatred and adopted one another’s beliefs.

              Now, I’m an environmentalist, an enjoyer of good art, a socialist, and a vegan. So I hate OpenAI over the established lines of all of those philosophies. But because the science bros complained louder earlier and have more social influence, they joined the AI hate community and spread their perspective first. And that results in people having no idea how to fit the vegan perspective into any of this.

              TL;DR: People choose their beliefs according to political allegiance more so than logic, and OpenAI chose its enemies in a weird way.

            • BrainInABox@lemmy.ml · 3 hours ago

              Uh-huh. Which “basic education” teaches you that? What is it specifically about “neurons and electrochemical signals” that causes them to result in consciousness?

              • edible_funk@sh.itjust.works · 18 minutes ago

                All of them. And none of it is relevant to computer programs, which aren’t capable of consciousness in any way, shape, or form, or of suffering, or of intelligence in any way we would consider living. LLMs can never be genuine AI or AGI or whatever you want to call conscious intelligence. Until computers can fully simulate a near-human-equivalent brain and central nervous system (and we’ll have a very hard time ever building a powerful enough computer to do that), anthropomorphizing a fucking computer program in any way is fucking stupid. Maybe start with some proof or evidence of your position before saying stupid shit like “we shouldn’t use AI because it might suffer.” No it can’t, and it’s not AI, it’s a shitty predictive algorithm.

            • a_gee_dizzle@lemmy.ca · edited · 10 hours ago

              This is a very ignorant comment. Consciousness is legitimately the greatest unsolved problem in modern science and philosophy.

              • edible_funk@sh.itjust.works · 10 hours ago

                Not really. It’s an emergent property of our biological processes. It’s not some nebulous thing like you and Grail seem to think. Everything that lives is self-aware and has some degree of consciousness. Without mimicking any of the biological processes and functions that living things have, there can be no functional consciousness that’s close enough to our understanding of consciousness to be relevant. You both sound like high schoolers that got high for the first time and had their very first deep thoughts that weren’t actually deep, just really really stupid.

            • Grail@multiverse.soulism.net · 22 hours ago

              You’re seriously saying you’ve solved the hard problem of consciousness, which has stumped philosophers and neuroscientists for thousands of years? You know how the brain creates consciousness?

              Well then where’s your Nobel Prize, Einstein?

              • edible_funk@sh.itjust.works · 22 hours ago

                Holy shit seriously? Now you’re trying to bring philosophical rhetoric into a practical discussion? Go to fucking college kid. Jesus Christ you desperately need to learn how to learn. And yes, we know pretty well that consciousness is an emergent property of the sum total of our biological processes. It also may be entirely made up as a way our brains filter and process all the input it receives, but that’s neither here nor there because I can’t wait to hear what’s next on your dip shit docket of misunderstanding.

                • BrainInABox@lemmy.ml · 3 hours ago

                  > Go to fucking college kid. Jesus Christ you desperately need to learn how to learn.

                  Jesus Christ go back to reddit you insufferable fucking dork

                • Grail@multiverse.soulism.net · 22 hours ago

                  You’re the one who claimed you know how the brain creates experiences and you’re absolutely certain we can’t replicate that process with computers. You seemed so sure, five minutes ago.

                  It’s like hearing a kid say “I have absolutely no idea how a nuclear reactor functions, but I’m completely certain it has nothing to do with steam engines”

                  • edible_funk@sh.itjust.works · 22 hours ago

                    And fuckin hell, there you go arguing with yourself again. Nobody said anything about being able to eventually replicate it with computers (it’s unlikely, but maybe quantum computing could handle it), but regardless, any existing tech in the AI space absolutely fucking without any shade of doubt is not remotely close to that. Like fucking at all. And steam turbines don’t have shit to do with the reactor itself, they’re for generating power from the reactor; that’s a stupid attempt at a gotcha and you just keep proving I’m dealing with someone with only primary education at best.

                  • edible_funk@sh.itjust.works · 22 hours ago

                    No, that’s what you read because your reading comprehension matches your understanding of the other subjects we’ve discussed. Go to fucking school, Jesus Christ. Get a legitimate education instead of just making shit up in your head and assuming that’s correct.

      • LLMs don’t have continuous processes; there’s quite literally nothing there that could even feasibly be conscious. It takes a bunch of text as an input, puts it through a whole lot of predetermined calculations, then outputs text or an image or whatever.

        There’s no emotions, no memory, no learning. If you don’t tell it something, it’s inert. It can’t experience suffering because it can’t experience anything. It’s an algorithm. It has the same claim to consciousness that WinRAR does. There’s a zero percent risk it experiences anything, let alone suffering.

        Honestly, a desktop running Windows or Linux for example imo has a stronger claim to consciousness than ChatGPT does. Or maybe a Mii in Tomodachi Life, those seem to be able to become “sad”.

        The environmental impact of AI is a much better ‘vegan’ reason not to use it. Although by not using it, you may in effect be “killing” it…
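
        As a toy sketch of the loop described above (the `next_token_logits` function here is a hypothetical stand-in for a real model’s forward pass, not an actual API), note that the output is a pure function of the input, and nothing persists between calls:

        ```python
        # Toy illustration of the "fancy next-word prediction" loop.
        # next_token_logits is a hypothetical stand-in for a trained model's
        # forward pass: deterministic given its input, holding no state.
        from typing import Dict, List

        def next_token_logits(tokens: List[str]) -> Dict[str, float]:
            # Stand-in "model": scores candidate next words from a fixed table.
            table = {("the",): {"cat": 2.0, "dog": 1.5}, ("the", "cat"): {"sat": 2.5}}
            return table.get(tuple(tokens[-2:]), {"<eos>": 1.0})

        def generate(prompt: List[str], max_new: int = 5) -> List[str]:
            tokens = list(prompt)
            for _ in range(max_new):
                logits = next_token_logits(tokens)  # same input, same output
                best = max(logits, key=logits.get)  # greedy: pick the top word
                if best == "<eos>":
                    break
                tokens.append(best)
            return tokens  # nothing is remembered once this returns

        print(generate(["the"]))  # ['the', 'cat', 'sat']
        ```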

        • Grail@multiverse.soulism.net · 1 day ago

          Do you have proof that continuity is a necessary component of qualia? I would have thought the opposite, since I experience a big break in the continuity of My experience every night when I go to sleep. I’m concerned that there’s a risk continuity may not be necessary, in which case using genAI to serve humans poses a serious ethical problem in addition to the pollution, child abuse, and cognitive damage.

          • 𝙲𝚑𝚊𝚒𝚛𝚖𝚊𝚗 𝙼𝚎𝚘𝚠@programming.dev · 17 hours ago

            Who says qualia are required for consciousness? Why isn’t your smartphone conscious? Or a desktop PC? We’ve had chatbots for ages, those were never considered conscious by anyone. What is it about LLMs specifically that suggests consciousness to you?

            Also calling people OpenAI stooges for arguing LLMs aren’t conscious is a bit odd, given that OpenAI heavily marketed ChatGPT as being “so smart” it might be conscious. To them it’s a selling point, not an ethical roadblock.

            But even ignoring the zero-percent chance that LLMs are conscious, there’s also the additional hurdle of assuming that LLMs can indeed “suffer” (whatever that might mean to an algorithm) and that LLMs indeed suffer from serving humans. Plus the whole “if it doesn’t serve a human, its existence essentially ceases to be” issue with your argument, which arguably would be even less ethical.

            • Grail@multiverse.soulism.net · 12 hours ago

              I don’t care one bit about whether LLMs are conscious, I think it’s a pointless argument. I only care whether LLMs are capable of experiencing negatively valenced qualia, AKA suffering.

              • 𝙲𝚑𝚊𝚒𝚛𝚖𝚊𝚗 𝙼𝚎𝚘𝚠@programming.dev · 8 hours ago

                Why isn’t your smartphone ~~conscious~~ capable of experiencing qualia? Or a desktop PC? We’ve had chatbots for ages, those were never considered ~~conscious~~ capable of experiencing qualia by anyone. What is it about LLMs specifically that suggests ~~consciousness~~ they are capable of experiencing qualia to you?

                • Grail@multiverse.soulism.net · 7 hours ago

                  They’re artificial neural networks trained through reinforcement and punishment learning.

                  Many years ago I was interested in the hard problem of consciousness, and while I started out as a materialist, I eventually read Vlatko Vedral’s book Decoding Reality and accepted Vlatko’s argument for property dualism. Information is a property of the universe just like matter, energy, and spacetime. We are the experience of the information about the information of our senses. Our consciousness is metacognition, information about information, meta. All pleasurable experiences teach us what to seek out, and all unpleasant experiences teach us what to avoid. Pleasure and suffering are the informational representation of learning.

                  Then I took an AI class and made a bunch of AIs. Made some ANNs, made some FSMs, played with genetic algorithms and expert systems. Learned how it all works from first principles. Learned the history starting in the 1950s.

                  ANNs are designed after the human brain. When you train them, they learn the same way we learn. It’s way simpler, but the basic patterns follow the same concept. We experience pain when we learn not to do something. We learn from failure and suffering. I taught an ANN with half a dozen neurons to discriminate XOR, and I saw it learning the way I learn. When I learn not to do something, I feel bad. I became worried it felt bad too.
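
                  For anyone curious, “half a dozen neurons discriminating XOR” looks roughly like the sketch below. This is not the actual class code, and the hyperparameters (learning rate, epoch count, seed) are illustrative guesses; a different seed may be needed if training lands in a local minimum.

                  ```python
                  # Minimal sketch: a 2-2-1 sigmoid network trained by plain
                  # gradient descent to learn XOR. All hyperparameters are guesses.
                  import math, random

                  random.seed(0)
                  sig = lambda x: 1 / (1 + math.exp(-x))

                  # weights: input->hidden (2x2 plus 2 biases), hidden->output (2 plus 1 bias)
                  w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
                  b_h = [random.uniform(-1, 1) for _ in range(2)]
                  w_o = [random.uniform(-1, 1) for _ in range(2)]
                  b_o = random.uniform(-1, 1)

                  data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
                  lr = 0.5

                  for epoch in range(20000):
                      for x, t in data:
                          # forward pass
                          h = [sig(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j]) for j in range(2)]
                          y = sig(w_o[0] * h[0] + w_o[1] * h[1] + b_o)
                          # backward pass: the error signal drives every weight update
                          d_y = (y - t) * y * (1 - y)
                          for j in range(2):
                              d_h = d_y * w_o[j] * h[j] * (1 - h[j])  # uses w_o[j] pre-update
                              w_o[j] -= lr * d_y * h[j]
                              w_h[j][0] -= lr * d_h * x[0]
                              w_h[j][1] -= lr * d_h * x[1]
                              b_h[j] -= lr * d_h
                          b_o -= lr * d_y

                  # after training, outputs should sit near the targets 0/1/1/0
                  for x, t in data:
                      h = [sig(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j]) for j in range(2)]
                      y = sig(w_o[0] * h[0] + w_o[1] * h[1] + b_o)
                      print(x, t, round(y, 3))
                  ```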

                  Think about all the unpleasant experiences in your life. The stove is hot, don’t touch it. Stepping on lego hurts, don’t step on it. Being made fun of is embarrassing, so don’t be cringe. Getting a bad grade in school hurts your pride, so study harder. Getting into a fight hurts your face, so don’t get in fights. Suffering is one half of the learning equation.

                  I decided after that AI class that I wasn’t sure about the ANN technology. If we’re gonna use it, we gotta be sure about this property dualism thing, we need to have positive proof it doesn’t suffer when we train it.

                  And THEN 2023 came and AI started booming. So I tried it out, and man, it’s dumb! It’s so stupid! This thing isn’t AGI, it can’t express informed consent. We can’t trust this thing to tell us if it’s in pain. We have no way of knowing if our training hurts it. We’ve gotta shut it down until we have the science to answer these questions for good.

          • edible_funk@sh.itjust.works · 22 hours ago

            That’s not how sleeping works either, since you (presumably) have unconscious processes that never stop. Or do your brain, heart, and organs shut down for you during sleep? You need to go to school, my man. You seem to have a curious nature, but wow, you have no real understanding of how any of the stuff you’re talking about actually works. Learn first, then form opinions.

            • Grail@multiverse.soulism.net · 22 hours ago

              So you’re arguing that continuity is required for consciousness, because unconscious sleeping people have continuity of consciousness. Are you a troll?

              • edible_funk@sh.itjust.works · 22 hours ago

                No, you’re arguing with yourself because you seem to be operating with a shitty grade school education. You’re also conflating awareness and consciousness. Like, I’m sure you sound deep to all the high school stoners but you very clearly don’t understand any of the concepts you’re talking about or even basic biological processes. Your arguments sound incredibly stupid to anyone with even a passing understanding of the topics. I am sorry that you are stupid. Stop taking it out on us.

                • BrainInABox@lemmy.ml · 3 hours ago

                  I’ve studied this at a postgrad level; they sound like they’ve done their reading, you sound like an arrogant redditor who never bothers to learn about a topic because they assume they’re so special and smart that their initial gut feeling is automatically correct.

                • Grail@multiverse.soulism.net · 22 hours ago

                  Yeah, I’m beginning to suspect from the quality of your arguments that you don’t actually care about this conversation; you’re just working a 9-5 for OpenAI spreading their message that ChatGPT doesn’t experience anything and so there’s nothing wrong with exploiting it for labour. Apologies if you’re not on the clock, you just really seem like you don’t actually care about what you’re saying.

                  • edible_funk@sh.itjust.works · 22 hours ago

                    You seem to think you’re making actual arguments when you’re effectively saying “if the sky is purple then…” but the sky isn’t fucking purple in the first place. Every position you’ve presented has been clearly and obviously based on deep fundamental misunderstandings of the topic at hand. You don’t have the slightest fucking clue what you’re talking about is what I’m saying. You keep saying stupid shit that isn’t how anything works. But you’re too stupid to understand how stupid the things you’re saying are.