- cross-posted to:
- [email protected]
There is a machine learning bubble, but the technology is here to stay. Once the bubble pops, the world will be changed by machine learning. But it will probably be crappier, not better.
What will happen to AI is boring old capitalism. Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots.
AI is defined by aggressive capitalism. The hype bubble has been engineered by investors and capitalists dumping money into it, and the returns they expect on that investment are going to come out of your pocket. The singularity is not coming, but the most realistic promises of AI are going to make the world worse. The AI revolution is here, and I don’t really like it.
That split won’t work because the top 20% wouldn’t want their day job to be cleaning up AI-generated code. Time-wise, it’s a much better investment for them to write their own template-generation tool so the 80% can focus on the key part of their task, rather than taking AI templates that may or may not be wrong and then hunting all over the place to remove bugs.
Use the AI to fix the bugs.
A couple months ago, I tried it on ChatGPT: I had never ever written or seen a single line in COBOL… so I asked ChatGPT to write me a program to print the first 10 elements of the Fibonacci series. I copy+pasted it into a COBOL web emulator… and it failed, with some errors. Copy+pasted the errors back to ChatGPT, asked it to fix them, and at the second or third iteration, the program was working as intended.
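For reference, the logic of that program is trivial once it compiles; in Python terms (a sketch, since I can’t reproduce the exact COBOL), it boils down to:

```python
def fibonacci(n):
    """Return the first n elements of the Fibonacci series."""
    series = []
    a, b = 0, 1
    for _ in range(n):
        series.append(a)
        a, b = b, a + b
    return series

print(fibonacci(10))  # prints [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

The point of the anecdote isn’t the algorithm, though; it’s that the error-paste-fix loop converged in two or three iterations in a language I’d never seen.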
If an AI were to run with enough context to keep all the requirements for a module, then iterate with input from a test suite, all one would need to write would be the requirements. Use the AI to also write the tests for each requirement, maybe make a library of them, and the core development loop could be reduced to ticking boxes for the requirements you wanted for each module… but maybe an AI could do that too?
Weird times are coming. 😐
I’m a professional programmer and this is how I use ChatGPT. Instead of asking it “give me a script to do big complicated task” and then laughing at it when it fails, I tell it “give me a script to do <first step of the task>.” Then when I confirm that works, I say “okay, now add a function that takes the output of the first function and does <second step of the task>.” Repeat until done, correcting it when it makes mistakes. You still need to know how to spot problems, but it’s way faster than writing it myself, especially since I don’t have to go rummaging through API documentation and whatnot.
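To make that concrete, the end result of that back-and-forth is typically a pipeline of small functions, each requested and verified separately (the steps below are hypothetical, not from the thread):

```python
# Hypothetical outcome of step-by-step prompting: each function was
# requested on its own, checked, then fed into the next request.

def load_lines(text):
    """Step 1: split raw input into non-empty, stripped lines."""
    return [line.strip() for line in text.splitlines() if line.strip()]

def parse_records(lines):
    """Step 2: take the output of step 1 and split each line into fields."""
    return [line.split(",") for line in lines]

def summarize(records):
    """Step 3: take the output of step 2 and count records per first field."""
    counts = {}
    for record in records:
        counts[record[0]] = counts.get(record[0], 0) + 1
    return counts

print(summarize(parse_records(load_lines("a,1\nb,2\na,3\n"))))
# prints {'a': 2, 'b': 1}
```

Because each stage is confirmed before the next is requested, mistakes surface one function at a time instead of buried in one big script.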
I mean that is exactly what programming is except you type to an AI and have it type the script. What is that good for?
Could have just typed the script in the first place.
If ChatGPT can use the API, the API can’t be too complex; otherwise you’re in for a surprise once you find out what ChatGPT didn’t care about (caching, usage limits, pricing, usage contracts).
Sure - but ChatGPT can type faster than me. And for simple tasks, CoPilot is even faster.
Also - it doesn’t just speed up typing, it also speeds up basics like “what did bob name that function?”
And stuff like “I know there’s a library out there that does the thing I’m trying to do, what’s it named and how do I call it?”
I haven’t been using ChatGPT for the “meat” of my programming, but there are so many things that little one-off scrappy Python scripts make so much easier in my line of work.
I already explained.
I could write the scripts myself, sure. But can I write the scripts in a matter of minutes? Even with a bit of debugging time thrown in, and the time it takes to describe the problem to ChatGPT, it’s not even close. And those descriptions of the problem make for good documentation to boot.