Clearly the whole drama with the Pentagon making a big deal of showing that they’re trying to force AI companies to build autonomous AI killing machines and spy on citizens is completely manufactured.
Anthropic was always going to comply, and the goal is just to create a marketing campaign portraying them as heroically resisting. All the media has been running the story of a plucky Anthropic defying the US military to defend ethical AI and protect humanity.


Well you see, the last reply didn’t respond germanely to what I had said. It instead made a baseless accusation. So I invited you to continue the conversation germanely. I can always just stop replying to you.
LLMs, but sure, close enough.
“Effective” is vague. I have made more specific criticisms. Slop vibe coders think their work is effective, don’t they? Even as they leave critical endpoints exposed, unauthenticated.
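To make that concrete, here’s a hypothetical sketch (Flask, with invented route names and a placeholder key) of the kind of mistake I mean, next to the minimum fix:

```python
from flask import Flask, jsonify, request, abort

app = Flask(__name__)
API_KEY = "example-key"  # placeholder; a real app loads this from a secret store

def get_all_users():
    # stub standing in for a real database query
    return [{"id": 1, "email": "admin@example.com"}]

@app.route("/admin/users")
def list_users_unprotected():
    # the slop version: sensitive data served to anyone who finds the URL
    return jsonify(get_all_users())

@app.route("/v2/admin/users")
def list_users_protected():
    # the minimum fix: reject callers that don't present a valid credential
    if request.headers.get("X-API-Key") != API_KEY:
        abort(401)
    return jsonify(get_all_users())
```

The first route looks like it works in a demo, which is exactly why it survives a scant review.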
As I said, using these “tools” puts someone in the position of doing code reviews for an incompetent junior dev that keeps making the same kinds of mistakes, some of which I listed. And yet those who seem most enamored with these tools can’t actually do that review, while those who can realize how little time is saved by reviewing the code of an incompetent junior dev that, for example, keeps doing rewrites rather than directly addressing the problem you pointed out.
I have already given examples and you have not replied to them.
There is no difference between those things. But sure, you are saying you are the only user of your software. Nobody else has to suffer if something goes wrong, you don’t get fired, and maybe it isn’t exposed to the internet or your LAN, so you don’t have to think about security (who knows?). At the same time, if the work is too trivial, the value of the tool itself also diminishes. Is it more than 500 lines of code? Does it need to be? How do you know it’s correct? Does it need to be correct?

Most code is read many more times than it is written, and the writing portion is more about thinking through and understanding the problem to solve. The time savings is not particularly high unless the tool is being used by someone who doesn’t understand these things to make something that looks correct after a scant once-over. It helps them because they couldn’t write the widget in the first place. The LLM might produce the widget in 10 seconds, and then you have to review it for 5-10 minutes. Writing the widget yourself might take 5-10 minutes and then need almost zero review, because you already did the review thinking as part of the process.
If my criticisms, which have not been addressed, apply, then it’s neither.
Google is ostensibly a search engine. It generates a list of results ordered by “relevance”, where relevance used to be PageRank: the idea was that it would crawl pages, index content, tie them together, and give you the results that were most-linked and most tied to your search terms. So Google theoretically finds you relevant web pages. Of course, it is highly limited by the terms you use, what is accessible on the internet, and what their censorship teams allow you to see. I would never tell someone who wanted to actually learn about a topic to just Google it. They’re just as likely to come across an accurate resource as one that is wrong in a serious way, ranging from subtly (but importantly) wrong to blatantly ridiculous yet entirely believable by someone who just starts typing terms into Google to learn something.
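For reference, the core PageRank idea is just iterating link weight to a fixed point. Here’s a minimal sketch on a toy link graph (damping value and graph are illustrative only, not Google’s actual implementation):

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        # every page keeps a small base rank, plus shares passed along inbound links
        new_rank = {page: (1 - damping) / len(pages) for page in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return sorted(rank.items(), key=lambda item: -item[1])

# "b" is the most linked-to page, so it comes out ranked highest
print(pagerank({"a": ["b"], "b": ["c"], "c": ["b"], "d": ["b", "c"]}))
```

Note that the ordering comes entirely from link structure, not from any understanding of whether a page is accurate.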
Google does not provide you with options to learn from. It provides you its semi-curated list of websites in response to search terms, and every single one you see on the first page (the only one 99% of people see) may be bullshit.
Does that mean it can’t be used for research? No. It can help you locate websites that do have good information, obviously. But it’s not a particularly good tool and LLMs are even worse for the reasons I’ve already described.
Incorrect. An LLM constructs text streams based on its model(s) and inputs, the inputs being your prompt, generally the entire history of your “conversation” (which is why it gets stuck on things it previously said, even after you point out they are wrong), and whatever the devs decided to prepend to the inputs to make the LLM behave less poorly. They are not knowledge systems, they don’t think, and they don’t know things. Search results are indeed sometimes part of it, in that they too are just added to the inputs.
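A hypothetical sketch of what that assembly looks like (no specific vendor’s API implied; the point is that the model is stateless and sees one flat text stream rebuilt every turn):

```python
def build_prompt(system_text, history, user_message, search_results=None):
    """Everything below is just input text; the model predicts tokens from it."""
    parts = [system_text]   # prepended by the devs to shape behavior
    parts.extend(history)   # the entire prior "conversation", including the
                            # model's own earlier mistakes, resent verbatim
    if search_results:
        # "web search" is nothing more than extra text appended to the inputs
        parts.append("Search results:\n" + "\n".join(search_results))
    parts.append(user_message)
    return "\n\n".join(parts)
```

There is no memory or knowledge store anywhere in that loop, which is the point.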
It is not interesting or salient that you can decide to accept or reject what an LLM says. This applies regardless of whether my criticisms, which you have not addressed, are true.
What I see is people making papier-mâché houses and telling me how cool it is that papier-mâché can build a house. It’s so fast and easy! And look, it’s house-shaped! Apparently my fight is with papier-mâché and not, say, the people who are telling me how great it is to build houses out of it. “It’s just a tool! You can’t be dumb about using it! Use it to build houses! Obviously you’ve just never used papier-mâché.”
So, you didn’t respond germanely to what I said and are resorting to bad faith due to your perception of condescension.
I will likely not respond to you further. You don’t seem interested in engaging on this topic in good faith.
Yikes. Honestly, I’m not going to read this novel. You should probably put it through some LLM to make it more succinct. There is beauty and intelligence in brevity.
We’re not having a conversation worth having. You have absolutely made up your mind and are not open to changing it. At the end of the day, we each think we’re right. I think you have valid points, but you probably do not agree with anything I’m saying, so there is no point continuing.