- cross-posted to:
- [email protected]
- [email protected]
The issue was not caused, directly or indirectly, by a cyber attack or malicious activity of any kind. Instead, it was triggered by a change to one of our database systems’ permissions which caused the database to output multiple entries into a “feature file” used by our Bot Management system. That feature file, in turn, doubled in size. The larger-than-expected feature file was then propagated to all the machines that make up our network.
The software running on these machines to route traffic across our network reads this feature file to keep our Bot Management system up to date with ever-changing threats. The software had a limit on the size of the feature file that was below its doubled size. That caused the software to fail.
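The failure mode described above can be sketched in a few lines: a consumer with a hard-coded entry limit crashes outright when its input file grows past that limit, while a guarded loader rejects the oversized file and keeps serving the last known-good configuration. This is a minimal illustration, not Cloudflare's actual code; the names and the limit are hypothetical.

```python
MAX_FEATURES = 200  # hypothetical preallocated limit in the consuming software

def load_features_unguarded(entries):
    """Mimics the reported behavior: assumes the file never exceeds the limit,
    and fails hard when that assumption breaks."""
    if len(entries) > MAX_FEATURES:
        raise RuntimeError(
            f"feature file has {len(entries)} entries, limit is {MAX_FEATURES}"
        )
    return list(entries)

def load_features_guarded(entries):
    """A safer variant: reject the oversized file so the caller can fall back
    to the previously loaded feature set instead of crashing."""
    if len(entries) > MAX_FEATURES:
        return None  # signal "keep last good config"
    return list(entries)

normal_file = [f"feature_{i}" for i in range(150)]
doubled_file = normal_file * 2  # duplicated entries, as in the incident

assert load_features_guarded(normal_file) is not None
assert load_features_guarded(doubled_file) is None  # rejected, but no crash
```

The point of the guarded variant is that a bad input file becomes a logged, recoverable event rather than a fleet-wide failure when the same file is propagated to every machine.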


What are the chances they started using AI to automate some of this, and that's the real reason? It sounds like no human was involved in breaking this.
Zero for the triggering action. A human rolled out a permissions change in a database, which led to an unexpected failure in a different system because that other system was missing safety checks when loading the data (non-zero chance that code was authored in some way by AI).