xkcd #3126: Disclaimer

Title text:

You say no human would reply to a forum thread about Tom Bombadil by writing and editing hundreds of words of text, complete with formatting, fancy punctuation, and two separate uses of the word ‘delve’. Unfortunately for both of us, you are wrong.

Transcript:

Transcript will show once it’s been added to explainxkcd.com

Source: https://xkcd.com/3126/

explainxkcd for #3126

  • givesomefucks@lemmy.world · 9 points · 3 days ago

    You just have to say fuck a lot…

    But I’m pretty sure any explanation of Bombadil in under 300 words would fail the Turing test

    • u/lukmly013 💾 (lemmy.sdf.org)@lemmy.sdf.org · 7 points · edited · 3 days ago

      That is an excellent point! Use of the word “fuck” in online conversation may come across to readers as more realistic.

      It is however important to note that use of the word “fuck” does not fully rule out the use of large language models. While most commercial offerings may be trained to avoid profanity, certain models might not be trained the same way.

      Additionally, use of the word “fuck” may be inappropriate in certain human conversations such as:

      • formal conversations
      • conversations with parents
      • conversations with children

      So, while the presence of the word “fuck” may decrease the likelihood of the text being generated by large language models, it is important to keep in mind its limitations, and opt for more robust methods like cryptographic signatures or verbal conversations.

      Is there anything else I can help you with?

      (This was genuinely written by me)

      • Armok_the_bunny@lemmy.world · 2 points · 3 days ago

        The method I (just now) thought up for signaling humanity is responding to accusations of being an LLM with a “fuck you”. The combination of vulgar language and defiance of the sycophantic tendencies of LLMs feels to me like a pretty effective proof of humanity, at least for now.

    • logicbomb@lemmy.world · 1 point · 3 days ago

      You can actually get LLMs to swear, sort of. They just won’t use real swear words. If you set up your LLM parameters to use a specific word for an expletive, but it’s not actually an expletive, then you can replace that word with your choice of expletive after the text is generated.
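The swap described above can be sketched in a few lines. This is a minimal illustration, not anyone's actual setup: the placeholder word “flarg” and the function name are arbitrary choices, and it assumes the model has already been instructed to emit the placeholder wherever it would swear.

```python
import re

def restore_expletives(text: str, placeholder: str = "flarg",
                       expletive: str = "fuck") -> str:
    """Replace every occurrence of the placeholder word with the real
    expletive, preserving the capitalization of the placeholder."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        # Keep a leading capital so sentence-initial swears look natural.
        return expletive.capitalize() if word[0].isupper() else expletive

    return re.sub(re.escape(placeholder), swap, text, flags=re.IGNORECASE)

# Hypothetical model output that used the placeholder instead of swearing:
generated = "Flarg, that meeting could have been an email. What a flarg-up."
print(restore_expletives(generated))
# Fuck, that meeting could have been an email. What a fuck-up.
```

Doing the substitution after generation sidesteps any profanity filtering in the model itself, which is exactly the loophole the comment describes.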