Elon Musk’s xAI has lost its bid for a preliminary injunction that would have temporarily blocked California from enforcing a law that requires AI firms to publicly share information about their training data.
xAI had tried to argue that California’s Assembly Bill 2013 (AB 2013) forced AI firms to disclose carefully guarded trade secrets.
The law requires AI developers whose models are accessible in the state to clearly explain which dataset sources were used to train their models, when the data was collected, whether collection is ongoing, and whether the datasets include any data protected by copyright, trademark, or patent. The disclosures would also clarify whether companies licensed or purchased training data and whether it included any personal information. They would additionally help consumers assess how much synthetic data was used to train a model, which could serve as a measure of quality.


How do you actually enforce this? What’s stopping these companies from just lying about what training data they use?
what’s stopping these companies from lying about their financial data to tax authorities?
there are lots of self-report mechanisms that we use… it’s just not worth the blowback of non-disclosure to lie about it. some people do, and sometimes they get caught; not always, but overall it’s a net benefit to transparency
I don’t know anything about accounting, but at first blush it seems like tax evasion would be easier to detect, because the government can look at a company’s bank activity, perform random audits, and so on. In contrast, I don’t really know what tools we’d use to catch people lying about their training data.