Earlier this week, PCWorld published a roundup of Windows 12 rumors translated from PCWelt that does not meet our editorial standards. We’re deeply embarrassed by it, and I personally apologize that the article was published. It should not have been, but we’re keeping the article live (with an editor’s note at the top) so it remains in the public record.
Windows Central published a response detailing its errors. Thanks for keeping us accountable, guys — genuinely. In the same spirit of accountability, I want to explain how this happened, and what we’re doing to ensure a mistake like this never occurs again.
Let’s start by discussing how PCWorld handles translated articles, and then I’ll dive into the issues with the article itself.


I thought this was a very well-written, transparent article that took accountability as seriously as it should. I am still not sure why people are using AI for translation when dedicated translation software already exists. People mention that AI is more context-aware, but I feel like the friction points in older translation software prompted you to look further into the context, whereas AI just makes an executive decision and people assume it must be right because it's AI. I guess it's possible older software, or even a human translator, would have made the same call, but I still think people would have less inherent trust in the old software alone. I do want to point out that the AI issue was just a small part of the problem; they addressed plenty of other issues and explained how they plan to remedy them.
This wasn’t even an AI issue, or even a translation issue. They published an article that lacked sources, and it still wasn’t good enough once sources were added.
Yeah, I mentioned in my comment that there was a confluence of issues, but the article does point out that the AI translation made the statement more definitive.
Edit to add:
Translation is what the transformer architecture was originally designed for. It is the state of the art, and translation software has been using machine learning for a long, long time.
This feels like an appropriate use of AI, but a failure of editing.
Not with general-purpose LLMs. They start off okay, but they become much more interested in continuing the text they’ve already produced than in looking back at what they’re meant to translate, so they drift off course as the translation gets longer.
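To make the drift point concrete, here’s a minimal sketch of one common mitigation, assuming a hypothetical `translate_chunk` callable that wraps whatever model or API you use (it is not any real library’s function): split the source into short chunks and translate each one on its own, so the model is always re-anchored to the original text rather than to its own growing output.

```python
from typing import Callable, List

def translate_document(
    paragraphs: List[str],
    translate_chunk: Callable[[str], str],  # hypothetical wrapper around your model/API
    max_chars: int = 2000,
) -> str:
    """Translate a long document chunk by chunk to limit drift.

    Each call to `translate_chunk` sees only a short piece of the source,
    so the model never has a long run of its own output to "continue"
    instead of translating.
    """
    chunks: List[str] = []
    current = ""
    for para in paragraphs:
        # Start a new chunk when adding this paragraph would exceed the limit.
        if current and len(current) + len(para) > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)

    # Translate each chunk independently and stitch the results back together.
    return "\n\n".join(translate_chunk(chunk) for chunk in chunks)
```

Usage would look like `translate_document(source_text.split("\n\n"), my_model_call)`; the trade-off is that per-chunk translation loses some cross-chunk context, which is exactly the tension being discussed here.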
General-purpose LLMs’ failure to do a task like translation must be very funny for their investors. Even the more translation-focused ones seem to have issues.
(ETA: I need to edit my comments to federate them?)
Pre-existing software was also never terribly accurate.