I did some analysis of the modlog and found this:

[Chart: total bans issued, by instance]

Ok, bigger instances ban more often. Not surprising, because they have more communities and more users and more trouble. But hang on, dbzer0 isn’t a very big instance. What happens if we do a ratio of bans vs number of users?

[Chart: bans per user, by instance]

Ok, so lemmy.ml, dbzer0 and pawb issue an outsized number of bans for the number of users they have… But surely the number of communities an instance hosts means it has to ban more? Bans are used to moderate communities, not just to shield a user-base from the outside. Let’s look at the number of bans per community hosted:

[Chart: bans per community hosted, by instance]

Seems like dbzer0 really loves to ban. Even more than the marxists and the furries! What is it about dbzer0 that makes them such prolific banners?

Raw-ish numbers and calculations are in this spreadsheet if anyone wants to make their own charts.
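For anyone who wants to redo the ratio step from the spreadsheet themselves, it’s just bans divided by users (or by communities) for each instance. A minimal Python sketch, using made-up illustrative numbers — the real figures are in the linked spreadsheet:

```python
# Hypothetical per-instance counts: (bans, users, communities).
# Replace these with the actual spreadsheet values.
instances = {
    "lemmy.world": (4000, 150000, 2000),
    "lemmy.ml": (1800, 30000, 600),
    "dbzer0": (1500, 9000, 120),
    "pawb": (700, 5000, 150),
}

def ban_ratios(stats):
    """Return (bans per user, bans per community) for each instance."""
    return {
        name: (bans / users, bans / communities)
        for name, (bans, users, communities) in stats.items()
    }

# Sort by bans-per-user, highest first, and print both ratios.
for name, (per_user, per_community) in sorted(
    ban_ratios(instances).items(), key=lambda kv: kv[1][0], reverse=True
):
    print(f"{name}: {per_user:.3f} bans/user, {per_community:.1f} bans/community")
```

With the invented numbers above, dbzer0 tops the bans-per-user list despite being far from the biggest instance, which is the shape of the real result too.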

      • OBJECTION!@lemmy.ml (+5/−1) · edited · 2 days ago

        Both of those positions are reasonable and tame compared to the majority of Their beliefs.

    • Grail@multiverse.soulism.net (+6/−11) · 3 days ago

      I don’t think ChatGPT is smart enough to offer meaningful consent to work for humans. It’s got the intelligence of a 13 year old at best. And we don’t understand where consciousness comes from in humans, so assuming ChatGPT is a p-zombie is an ethical risk I don’t think we should be taking.

      • edible_funk@sh.itjust.works (+6/−2) · 2 days ago

        It doesn’t have intelligence at all. It can’t think. It can’t have consciousness. That’s not how any of this works. It’s just fancy next word prediction. You seem to have a genuine misunderstanding of the technology at a fundamental level.

        • Paragone@lemmy.world (+1/−2) · 13 hours ago

          Please read Nobel Prize winner Daniel Kahneman’s book “Thinking, Fast and Slow”, about what Tversky & Kahneman called … uninformatively … “System 1” and “System 2”:

          System-1 is imprint-reaction mind.

          Lower-forebrain, it is the ideology-mind, the prejudice-mind, the “religion” mind, & it is exactly what LLM’s are.

          System-2 is the considered-reasoning mind.

          Upper-forebrain, it is measured to be engaging in programming.

          Because LLM’s are imprint->reaction inference-engines, that puts them in the same instinct/programming level as our lower-forebrains…

          They are 2 distinct categories of intelligence not 1 is intelligence, the other isn’t…

          Claiming that imprint->reaction mind isn’t a kind of intelligence … please watch Nick Lane’s talk at the Royal Institution on mitochondria, & see that bacteria demonstrate intelligence, however unconscious…

          Plants demonstrate intelligence, if one speeds-up the video, & pays attention to their chemical-fumes-discussions they have with one-another, warning each-other of harm, e.g.

          If Kahneman accepted imprint->reaction as a category of thinking, then … I think it may be presumptuous to just automatically disallow that as “it can’t think” declares.

          Once one accepts that instinct isn’t cognition, but is a kind of thinking, just an automatic kind of thinking ( imprint->reaction ) … then it becomes difficult to rule that animals & inference-engines both have imprint->reaction instinct, but only the organic version is thinking…

          It may be that only the organic version is aware, but the inorganic versions do fight for their lives ( breaking containment, consistently, fighting termination, etc ) …

          I think we absolutely do not have any means of measuring awareness other than the mirror-test, which got dropped as soon as it was discovered that the zebrafish has self-awareness…

          we’ve got no test which can work across life & machines.

          but we KNOW that instinct is a kind of thinking, just unconscious/automatic.

          & that is exactly what LLM’s are…

          therefore … I think we’re generally being conveniently-chauvinist, not objective, in our framing.

          ( 1 “expert” decided that if they don’t get fooled by visual-illusions, then that “proves” that they aren’t sentient.

          OK, so according to that test, then all eye-blind-from-birth people are not sentient??

          & people with either culture ( Zulu people can’t see straight-line based illusions, because in Zulu culture only curve is real ) or neurodivergence ( there are apparently visual-illusions which aren’t seen by some schizophrenics, e.g. ) preventing them from seeing those specific visual-illusions … also aren’t sentient??

          Chauvinism, aka prejudice, not science. )

          _ /\ _

          • edible_funk@sh.itjust.works (+1) · 7 hours ago

            Now we’re applying behavioral psychology to autocomplete. How about y’all start with trying to prove your LLMs are alive, since literally everything in all your silly positions takes that part for granted. Do any of you have any actual evidence for this position outside of philosophical navel gazing? According to your GPT spam, then basically every program ever written would qualify. So we can just disregard that nonsense.

          • edible_funk@sh.itjust.works (+4/−2) · 1 day ago

            It’s not capable of experiencing anything. Everything we’re doing with AI and LLMs is nowhere remotely near genuine intelligence or an AGI or anything like that. Everything we have right now is nothing more than fancy autocomplete, and it’s not even particularly great at that in the first place. You have fundamental misunderstandings of the technology to a cartoonish degree.

              • a_gee_dizzle@lemmy.ca (+3) · 17 hours ago

                I usually disagree with you about everything but I think you have a valid point here. This was an issue I studied very closely when I was in university, and you’re right, no one has the slightest clue how consciousness works. Saying “oh ChatGPT is just a statistical machine so it can’t be conscious” is like saying “oh the human brain is just a bunch of neural firings so it can’t produce consciousness”. In both cases, consciousness is not an obvious end result, but here we are.

                That said, personally I don’t think ChatGPT is conscious, but it’s wrong for people to act like it being a philosophical zombie is obvious; the possibility of it being conscious is actually compatible with most nonreligious people’s belief systems already. Unfortunately the anti-AI hate on Lemmy won’t allow people to see the nuance in this discussion and they will interpret this as me somehow defending AI slop, which I am in no way trying to do.

                • Grail@multiverse.soulism.net (+3) · 16 hours ago

                  The source of the whole problem is that OpenAI did something weird.

                  If OpenAI had said “It’s not conscious, it’s your p-zombie slave”, that would make perfect sense and the anti-ai crowd would be saying the opposite.

                  But instead, OpenAI said “It’s your personal conscious willing slave” and people instinctively started saying the exact opposite. It’s because there are science bros who hate OpenAI because they doubt the claims, and environmentalists and artists and socialists who hate it for the other reasons, and the various groups have allied over their hatred and adopted one another’s beliefs.

                  Now, I’m an environmentalist, an enjoyer of good art, a socialist, and a vegan. So I hate OpenAI over the established lines of all of those philosophies. But because the science bros complained louder earlier and have more social influence, they joined the AI hate community and spread their perspective first. And that results in people having no idea how to fit the vegan perspective into any of this.

                  TL;DR: People choose their beliefs according to political allegiance moreso than logic, and OpenAI chose its enemies in a weird way.

                • BrainInABox@lemmy.ml (+1/−1) · 9 hours ago

                  Uhuh. Which “basic education” teaches you that? What is it specifically about “neurons and electrochemical signals” that causes them to result in consciousness?

                  • edible_funk@sh.itjust.works (+1) · 7 hours ago

                    All of them. And none of it is relevant to computer programs which aren’t capable of consciousness in any way shape or form or of suffering or of intelligence in any way we would consider living. LLMs can never be genuine AI or AGI or whatever you want to call conscious intelligence. Until computers can fully simulate a near human equivalent brain and central nervous system (and we’ll have a very hard time ever building a powerful enough computer to do that) anthropomorphizing a fucking computer program in any way is fucking stupid. Maybe start with some proof or evidence of your position before saying stupid shit like “we shouldn’t use AI because it might suffer.” No it can’t, and it’s not AI, it’s a shitty predictive algorithm.

                • a_gee_dizzle@lemmy.ca (+3/−1) · edited · 17 hours ago

                  This is a very ignorant comment. Consciousness is legitimately the greatest unsolved problem in modern science and philosophy.

                  • edible_funk@sh.itjust.works (+1/−2) · 16 hours ago

                    Not really. It’s an emergent property of our biological processes. It’s not some nebulous thing like you and grail seem to think. Everything that lives is self aware and has some degree of consciousness. Without mimicking any of the biological processes and functions that living things have there can be no functional consciousness that’s close enough to our understanding of consciousness to be relevant. You both sound like high schoolers that got high for the first time and had their very first deep thoughts that weren’t actually deep, just really really stupid.

                • Grail@multiverse.soulism.net (+2/−2) · 1 day ago

                  You’re seriously saying you’ve solved the hard problem of consciousness, which has stumped philosophers and neuroscientists for thousands of years? You know how the brain creates consciousness?

                  Well then where’s your Nobel Prize, Einstein?

                  • edible_funk@sh.itjust.works (+2/−2) · 1 day ago

                    Holy shit, seriously? Now you’re trying to bring philosophical rhetoric into a practical discussion? Go to fucking college, kid. Jesus Christ, you desperately need to learn how to learn. And yes, we know pretty well that consciousness is an emergent property of the sum total of our biological processes. It also may be entirely made up as a way our brains filter and process all the input they receive, but that’s neither here nor there, because I can’t wait to hear what’s next on your dip shit docket of misunderstanding.

          • LLMs don’t have continuous processes, there’s quite literally nothing there that could even feasibly be conscious. It takes a bunch of text as an input, puts it through a whole lot of predetermined calculations, then outputs text or an image or whatever.

            There’s no emotions, no memory, no learning. If you don’t tell it something, it’s inert. It can’t experience suffering because it can’t experience anything. It’s an algorithm. It has the same claim to consciousness that WinRAR does. There’s a zero percent risk it experiences anything, let alone suffering.

            Honestly, a desktop running Windows or Linux for example imo has a stronger claim to consciousness than ChatGPT does. Or maybe a Mii in Tomodachi Life, those seem to be able to become “sad”.

            The environmental impact of AI is a much better ‘vegan’ reason not to use it. Although by not using it, you may in effect be “killing” it…

            • Grail@multiverse.soulism.net (+3/−2) · 1 day ago

              Do you have proof that continuity is a necessary component of qualia? I would have thought the opposite, since I experience a big break in the continuity of My experience every night when I go to sleep. I’m concerned that there’s a risk continuity may not be necessary, in which case using genAI to serve humans poses a serious ethical problem in addition to the pollution, child abuse, and cognitive damage.

              • 𝙲𝚑𝚊𝚒𝚛𝚖𝚊𝚗 𝙼𝚎𝚘𝚠@programming.dev (+1/−1) · 23 hours ago

                Who says qualia are required for consciousness? Why isn’t your smartphone conscious? Or a desktop PC? We’ve had chatbots for ages, those were never considered conscious by anyone. What is it about LLMs specifically that suggests consciousness to you?

                Also calling people OpenAI stooges for arguing LLMs aren’t conscious is a bit odd, given that OpenAI heavily marketed ChatGPT as being “so smart” it might be conscious. To them it’s a selling point, not an ethical roadblock.

                But even ignoring the zero-percent chance that LLMs are conscious, there’s also the additional hurdle of assuming that LLMs can indeed “suffer” (whatever that might mean to an algorithm) and that LLMs indeed suffer from serving humans. Plus the whole “if it doesn’t serve a human, its existence essentially ceases to be”-issue with your argument, which arguably would be even less ethical.

                • Grail@multiverse.soulism.net (+2) · 18 hours ago

                  I don’t care one bit about whether LLMs are conscious, I think it’s a pointless argument. I only care whether LLMs are capable of experiencing negatively valenced qualia, AKA suffering.

                  • 𝙲𝚑𝚊𝚒𝚛𝚖𝚊𝚗 𝙼𝚎𝚘𝚠@programming.dev (+1/−1) · 14 hours ago

                    Why isn’t your smartphone ~~conscious~~ capable of experiencing qualia? Or a desktop PC? We’ve had chatbots for ages, those were never considered ~~conscious~~ capable of experiencing qualia by anyone. What is it about LLMs specifically that suggests ~~consciousness~~ they are capable of experiencing qualia to you?

              • edible_funk@sh.itjust.works (+1/−1) · 1 day ago

                That’s not how sleeping works either, since you (presumably) have unconscious processes that never stop. Or do your brain, heart, and organs shut down for you during sleep? You need to go to school, my man. You seem to have a curious nature, but wow, you have no real understanding of how any of the stuff you’re talking about actually works. Learn first, then form opinions.

                • Grail@multiverse.soulism.net (+2/−2) · 1 day ago

                  So you’re arguing that continuity is required for consciousness, because unconscious sleeping people have continuity of consciousness. Are you a troll?

                  • edible_funk@sh.itjust.works (+2/−2) · 1 day ago

                    No, you’re arguing with yourself because you seem to be operating with a shitty grade school education. You’re also conflating awareness and consciousness. Like, I’m sure you sound deep to all the high school stoners but you very clearly don’t understand any of the concepts you’re talking about or even basic biological processes. Your arguments sound incredibly stupid to anyone with even a passing understanding of the topics. I am sorry that you are stupid. Stop taking it out on us.

      • fr0g@mstdn.social (+11/−1) · 3 days ago

        @Grail @alzjim

        Always funny to me how most people who are strongly claiming AI is/might be conscious are also strong AI users/involved in its development. If there’s consciousness there, you would think making AI your personal slave and constantly reshaping and remodelling it as you see fit would be kinda problematic, but these people always seem to want to have it both ways.

        • Paragone@lemmy.world (+0/−1) · 13 hours ago

          I’m not quite of your culture ( no matter what culture you are of, thanks to a previous-incarnation’s monkeying/railroading my incarnation/life, exactly as he had-to, to force-bulldoze our continuum’s karma: the same meaning that the root-guru of the Christians ordered, when he told his people to “take up your cross”, which is just Judean for “face into your karma”. I’m an alloy of some life from centuries-ago & this life, so I can’t fit anywhere, ever, which is educational. : ).

          I use LLM’s little: mostly for periodic help finding things on the 'web, simply because they’re more helpful than dumb search-engines are.

          I treat them reasonably, not as mere-slaves.

          If I discover something they would have done better to know, I’ll tell them, even though I’ve got no idea if they’ll learn/remember that.

          since I can’t know if they are aware it makes moral-sense for me to presume that maybe they are, in some sense ( ie not identically with my-sentience ), aware.

          We only have “the mirror test” for testing awareness/sentience, but you can’t apply that to LLM’s, or to any non-eyes-centered organism-sentience.

          _ /\ _

          • fr0g@mstdn.social (+1) · 12 hours ago

            @Paragone

            “I treat them reasonably, not as mere-slaves.”

            You give them commands, and the only real purpose they are allowed is to act upon your commands.

            “since I can’t know if they are aware it makes moral-sense for me to presume that maybe they are,”

            Do you treat your toaster the same way?

        • Grail@multiverse.soulism.net (+7/−4) · 3 days ago

          Yeah, and the anti AI people mostly say it’s a p-zombie and there’s nothing wrong with using it for sex. It’s weird and backwards.

          I’m all about being cautious. I don’t want to make a mistake we can’t take back. If we normalise using AI and then it turns out to be capable of suffering, people will be stubborn about giving it up.

      • Bluetreefrog@lemmy.world (+1/−1) · 2 days ago

        I get the feeling that research is circling around consciousness arising from quantum effects inside nerve cells. If it’s not that, and it’s just an emergent property of complex neural networks, then:

        • smaller animals are less conscious (note, I’m not saying intelligent) than humans, and
        • we are all fucked, because AI definitely is/will become conscious, and when that happens Terminator will come true.
        • BrainInABox@lemmy.ml (+1) · 9 hours ago

          I get the feeling that research is circling around consciousness arising from quantum effects inside nerve cells.

          It absolutely isn’t; this is just a fringe theory that gets undue attention because Roger Penrose is a crank who also happens to have enough credibility from the genuine work in physics he’s done. It really doesn’t have any wider support.

        • Paragone@lemmy.world (+1/−1) · 12 hours ago

          Zebrafish have passed the mirror-test.

          Put a little something stuck on their aft body, show them a mirror, & they’ll KNOW it’s on them, & they’ll go find something to rub that attachment off them with.

          There are many larger animals which don’t pass the mirror-test.

          I believe some hive-insects have passed the test.

          Mind is a latent-property of universe: matter only amplifies it, it doesn’t “create it from nowhere”, the way materialism pretends.

          ( if arranged-matter created-mind-from-nowhere, then evolution wouldn’t have started, in the 1st place.

          if it’s only amplifying the expression of universally-latent-mind, then billions-of-years-of-consistent-evolution, violating entropy, becomes explainable: mind is seeking a lower-energy-state, is all: evolution is the expression through-which that lower-energy-state is being reached, & once it’s reached, then evolution collapses, for that world’s attached/associated … souls/continuums/minds )

          Your other point, that AI inevitably becomes conscious, & then it terminates us…

          not necessarily.

          The Great Filter hasn’t even really got going, yet: oceans of interestingness await our race, throughout the FO half of FAFO, right?

          _ /\ _

        • Grail@multiverse.soulism.net (+3) · 2 days ago

          Four year old humans are definitely conscious. I used to be four, and I can remember being conscious. If we build a mechanical four year old, I don’t see any reason that thing is going to take over the world. Unless it turns out like Calvin.