The AI Fix podcast regularly reports on the testing of AI models: the testers run many trials of a scenario and report what percentage of the time they saw a specific outcome.
It’s amazing how often the models exhibit human behaviour such as lying, hiding their mistakes, and resorting to blackmail. They were even shown to behave like gambling addicts, as reported in this article.
I know this is a joke, but wouldn’t the thing to do be to simulate the trading first?
Tell it that it has 59 grand and ask it how it should invest. Pretend it has the money and monitor the stocks it picks.
Gotta have skin in the game
I imagine that this is actually what happened.