• PierceTheBubble@lemmy.ml · 22 hours ago

    I don’t know: it’s not just the outputs posing a risk, but also the tools themselves. Stacking technology can only increase the attack surface, it seems to me. The fact that these models seem to auto-fill API values without user interaction is quite unacceptable to me; it shouldn’t require additional tools to check for such common flaws.
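
    A minimal sketch of the kind of flaw being described, and of the sort of extra check it forces, purely as an illustration (the snippet, variable names, and URL are hypothetical, not taken from any real tool):

    ```python
    import re

    # Hypothetical assistant output: an API credential auto-filled as a
    # real-looking literal instead of being left for the user to supply.
    GENERATED_SNIPPET = '''
    import requests
    API_KEY = "sk-live-9f2a7c1d4e"   # auto-filled value
    requests.get("https://api.example.com/v1/items",
                 headers={"Authorization": f"Bearer {API_KEY}"})
    '''

    # A rough stand-in for the "additional tool" in question: flag string
    # literals assigned to key/secret/token-style variable names.
    SECRET_PATTERN = re.compile(
        r'(?i)\b(api[_-]?key|secret|token|password)\b\s*=\s*["\'][^"\']+["\']'
    )

    def flag_hardcoded_secrets(source: str) -> list[str]:
        """Return the lines that look like hardcoded credentials."""
        return [line.strip() for line in source.splitlines()
                if SECRET_PATTERN.search(line)]

    if __name__ == "__main__":
        for hit in flag_hardcoded_secrets(GENERATED_SNIPPET):
            print("possible hardcoded credential:", hit)
    ```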

    Perhaps AI tools in professional contexts are best seen as template search tools: describe the desired template, and the tool provides whatever template it believes most closely matches the prompt. The professional can then “simply” refine that template to match the set specifications, or perhaps use it only as inspiration and start fresh, rather than spending additional time resolving flaws.

    • tal@lemmy.today · 22 hours ago

      I don’t know: it’s not just the outputs posing a risk, but also the tools themselves

      Yeah, that’s true. Poisoning a model’s training corpus is at least a potential risk. There’s a whole field of AI security work out there now aimed specifically at LLMs.

      it shouldn’t require additional tools to check for such common flaws.

      Well, we are using them today for human programmers, so… :-)