This is pretty accurate. When I came up I worked at an MSP, so I had to deal with customers. It taught me a lot about being able to say anything to people. You can break any news to anyone; it’s all in how you present it. So I gained people skills.
After I passed through that gauntlet and gained a breadth of knowledge, I went internal and gained a depth of knowledge. At first I broke news the same way I’d break it to a customer.
Later, after I proved my depth of knowledge, I could be blunt with any CIO or CTO I came across. And most of the time they’d send me reqs, or tell me something was happening that required my skill set, then leave me the hell alone to handle it without kibitzing or bumping my fucking elbow.
When I started my own one-man consulting shop I stopped giving shits at all. I found a good client, we have a good contract, and most of the C-levels like having someone on staff who just says what they’re thinking instead of sanitizing it. The CIO doesn’t necessarily like it, but he’s outnumbered.
All this really only worked because I did go through a few years of soft-skill hell first, though. Price you pay and all that. Well, the price I paid for this path.

I don’t use them, but I loosely follow the news about them. The reason is epistemic humility. Claude has a pretty good idea of what its capabilities are and where the ceiling is. ChatGPT has no clue what its limits are, so it believes it can do everything. Basically, ChatGPT has a lot of info and no idea where the gaps live, while Claude has a fair idea of when to search or hand something off to an external function. Gemini has less of that than Claude but more than ChatGPT. Grok has little to no epistemic humility, but it did manage to accurately portray Musk as a world champion piss drinker, something none of the others were able to do.
I say that, but it’s been a few months since I looked, and that could have changed because shit moves fast. By the looks of what it’s trying to do with the timer, ChatGPT has less epistemic humility than it used to. Possibly because of the way the model is trained to be helpful and confident.