• howrar@lemmy.ca
    14 hours ago

    Two more questions need answering before these findings can become actionable:

    • How do these two groups compare to a third group that can use both? ChatGPT is pretty useless on its own when correctness is important, but it improves a lot when you combine it with ways to verify its output.
    • How much time and effort would this new group need to accomplish the same task? One of ChatGPT’s strengths is being able to communicate a piece of information in many different ways, and in whatever order you ask of it. It’s then much faster to verify it through a legitimate source than it is to learn from those sources in the first place.
    • chicken@lemmy.dbzer0.com
      10 hours ago

      To me the main thing is that this is about the utility of tools for acquiring general domain knowledge in a one-off event. The effects on overall intelligence, which is separate from knowledge or the ability to give effective advice on a topic, are a totally different scope.

      What it’s actually testing doesn’t seem like it’s finding anything surprising, because the information the subjects are getting from ChatGPT is likely lower quality. So it could just be that the people reading blogposts or wikihow articles about starting a garden learned more and/or more accurate things about it, rather than that research using AI negatively affects the way you think. The latter claim would make more sense to test over a longer period of time, and with a greater variety of topics and tasks.