That’s some War Games shit… The CPU usage grows through each iteration of AI learning to post better reviews and AI debunking those better reviews, until there’s no room for anything else and the LLM becomes consumed with the single task of beating itself.
True, but that’s also a well-known machine learning technique called adversarial training, used in Generative Adversarial Networks (GANs) and in the self-play setups that taught models to play games like chess and Go.
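For anyone curious what that alternating "two models trying to beat each other" loop actually looks like, here is a toy sketch of adversarial training. Everything in it is illustrative: the generator and discriminator are collapsed to scalar parameters, and the "real data" is just a 1-D Gaussian, so don't read it as how a production GAN is built.

```python
import numpy as np

def train_toy_gan(steps=1000, batch=128, lr_d=0.05, lr_g=0.2, seed=0):
    """Toy 1-D adversarial training loop.

    Generator: g(z) = z + theta, trying to mimic real data ~ N(4, 1).
    Discriminator: D(x) = sigmoid(w * x + b), trying to tell real from fake.
    Each iteration alternates one gradient step for each player.
    """
    rng = np.random.default_rng(seed)
    theta, w, b = 0.0, 0.0, 0.0
    sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

    for _ in range(steps):
        real = rng.normal(4.0, 1.0, batch)
        fake = rng.normal(0.0, 1.0, batch) + theta

        # Discriminator step: push D(real) -> 1 and D(fake) -> 0
        # (gradient descent on -log D(real) - log(1 - D(fake))).
        d_real = sigmoid(w * real + b)
        d_fake = sigmoid(w * fake + b)
        w -= lr_d * np.mean(-(1 - d_real) * real + d_fake * fake)
        b -= lr_d * np.mean(-(1 - d_real) + d_fake)

        # Generator step: push D(fake) -> 1 (gradient ascent on log D(g(z))).
        d_fake = sigmoid(w * fake + b)
        theta += lr_g * np.mean((1 - d_fake) * w)

    return theta

theta = train_toy_gan()
print(f"learned shift: {theta:.2f}")  # should drift toward the real mean of 4
```

The point of the sketch is the structure: neither player trains against a fixed target, each trains against the other's latest move, which is exactly the escalation the comment above is describing.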
For a game with relatively simple rules like Go, I think this would work. For something more complicated, like language with its implicit meanings and tones, I see the AI driving off a cliff and learning bad habits from itself, to the point where the model needs to be trashed and redone.
Use LLMs and machine learning to detect the reviews created by LLMs and machine learning.