• SleeplessCityLights@programming.dev · ↑3 ↓1 · 2 days ago

    I like your strategy. I use a system prompt that forces it to ask a question if there are options or if it has to make assumptions. Controlling context is key. It will get lost if it has too much, so I start a new chat frequently. I'll also run the same prompts through two models from different providers at the same time and cross-reference the idiots to see if they're lying to me.
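
    For anyone curious, a rough sketch of that combo with the OpenAI and Anthropic Python clients looks like this (model names, prompt wording, and the sample question are placeholders, not my exact setup):

    ```python
    # Rough sketch: same question to two providers, with a system prompt that
    # forces clarifying questions instead of silent assumptions.
    # Model names, prompt wording, and the sample question are placeholders.
    from openai import OpenAI
    import anthropic

    SYSTEM = (
        "If the request has several reasonable options, or you would have to make "
        "an assumption to answer, ask me a clarifying question first. Do not guess silently."
    )

    def ask_openai(question: str) -> str:
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content

    def ask_anthropic(question: str) -> str:
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
        resp = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder model name
            max_tokens=1024,
            system=SYSTEM,
            messages=[{"role": "user", "content": question}],
        )
        return resp.content[0].text

    question = "What is the safest way to rotate these database credentials?"
    for name, answer in [("openai", ask_openai(question)), ("anthropic", ask_anthropic(question))]:
        # Where the two answers disagree is usually where one of them is lying.
        print(f"--- {name} ---\n{answer}\n")
    ```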

    • dejected_warp_core@lemmy.world · ↑2 ↓1 · 2 days ago

      I use a system prompt that forces it to ask a question if there are options or if it has to make assumptions

      I’m kind of amazed that even works. I’ll have to try that. Then again, I’ve asked ChatGPT to “respond to all prompts like a Magic 8-ball” and it knocked it out of the park.

      so I start a new chat frequently.

      I do this as well, and totally forgot to mention it. Yes, I keep the context small and fresh so that prior conversations (and hallucinations) can’t poison new dialogues.
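
      In code terms it's just: build a brand-new message list per task instead of letting one history grow forever. A minimal sketch with the OpenAI Python client (helper name and model are placeholders, not anyone's real setup):

      ```python
      # Minimal sketch of "small and fresh": every task gets a brand-new message
      # list, so nothing from an earlier conversation (or an earlier hallucination)
      # leaks into the next one. The helper name and model are placeholders.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      def fresh_chat(task: str) -> str:
          resp = client.chat.completions.create(
              model="gpt-4o",  # placeholder model name
              messages=[
                  {"role": "system", "content": "Ask before assuming. Keep answers short."},
                  {"role": "user", "content": task},  # no prior history attached
              ],
          )
          return resp.choices[0].message.content

      # Two unrelated tasks, two clean contexts -- never one long-running thread.
      print(fresh_chat("Explain what this regex matches: ^\\d{4}-\\d{2}-\\d{2}$"))
      print(fresh_chat("Draft a commit message for a one-line typo fix."))
      ```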

      I'll also run the same prompts through two models from different providers at the same time and cross-reference the idiots to see if they're lying to me.

      Oooh… straight to my toolbox with that one. Cheers.

      • SleeplessCityLights@programming.dev · ↑2 · 1 day ago

        I forgot another key point. The code snippets they give you are bloated and usually do unnecessary things. You actually have to think, pull out the line(s) you need, and clean them up. I never copy-paste.
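
        A made-up but typical example of the kind of bloat I mean, next to the line you actually wanted:

        ```python
        # Invented, but typical: the model hands back a whole ceremony...
        def read_lines_from_file(file_path):
            lines = []
            try:
                with open(file_path, "r", encoding="utf-8") as f:
                    for line in f:
                        lines.append(line.rstrip("\n"))
            except FileNotFoundError:
                print(f"Error: {file_path} not found")
                return []
            return lines

        # ...when the line you actually needed was just this ("notes.txt" is a placeholder):
        from pathlib import Path
        lines = Path("notes.txt").read_text(encoding="utf-8").splitlines()
        ```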