Whack yakety-yak app chaps rapped for security crack

  • kinkles@sh.itjust.works
    3 months ago

    Is it possible to implement a perfect guardrail on an AI model, such that it will never ever spit out a certain piece of information? I feel like these models are so complex that you can always eventually find the right combination of words to get around any defense against prompt injection.
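
    The brittleness the commenter suspects is easy to demonstrate with the simplest kind of guardrail: a filter that scans the model's output for a forbidden string. The sketch below is hypothetical (the secret and function names are made up for illustration) and shows how a trivial re-encoding of the same information slips past an exact-match check:

    ```python
    SECRET = "launch code 1234"  # hypothetical piece of information to suppress

    def naive_guardrail(output: str) -> str:
        # Block any response that contains the secret verbatim (case-insensitive).
        if SECRET.lower() in output.lower():
            return "[blocked]"
        return output

    # The verbatim secret is caught...
    print(naive_guardrail(f"The answer is {SECRET}."))

    # ...but the same information, re-spelled with spaces between characters,
    # passes the filter untouched:
    spaced = " ".join(SECRET)
    print(naive_guardrail(f"The answer is {spaced}."))
    ```

    Real guardrails use classifiers and fine-tuning rather than string matching, but the underlying problem is the same: the space of paraphrases and encodings of a piece of information is effectively unbounded, so a filter can only ever cover the rewordings its designers anticipated.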