- cross-posted to:
- [email protected]
There is a machine learning bubble, but the technology is here to stay. Once the bubble pops, the world will be changed by machine learning. But it will probably be crappier, not better.
What will happen to AI is boring old capitalism. Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots.
AI is defined by aggressive capitalism. The hype bubble has been engineered by investors and capitalists dumping money into it, and the returns they expect on that investment are going to come out of your pocket. The singularity is not coming, but the most realistic promises of AI are going to make the world worse. The AI revolution is here, and I don’t really like it.
Fashion designers are being replaced by AI.
Investment capitalists are starting to argue that C-Suite company officers are costing companies too much money.
Our Ouroboros economy hungers.
C-Suites can get replaced by AIs… controlled by a crypto DAO replacing the board. And while we're at it, replace all the workers with AIs, and the investors with AI trading bots.
Why have any humans at all, when you can put in some initial capital and have a bot invest it in a DAO that controls a fully-AI company? Bonus points if all the clients are also AIs.
The future is going to be weird AF. 😆😰🙈
If the AI is doing a better job at each of those things, why not let it?
That’s where we need to ask how we define “better”. Is better “when the number gets bigger”, or is better “when more people benefit”? If an AI can optimize to extract the maximum value from people’s work and then discard them, and then optimize how many ways it can monetize its product to maximize the profit from each customer, the result is a horrible company and a horrible society.
In theory, yes… but what do we call “doing a better job”? Is it just blindly extracting money? Or is it something more, and do we all agree on what that is? I think there could be a compounded problem of oversight.
Like, right now an employee pays into a retirement fund, whose managers invest in several mutual funds, whose managers invest in several companies, whose owners demand a certain performance from their C-suite, who through a chain of command tell that same employee what to do. Even though it’s partly the employee’s own capital that controls the company, if it takes an action that harms the employee, like fracking under their home or firing them, their investment gives them no power to do anything about it.
With AI replacing all those steps, it would all happen much quicker, and, since AIs are still basically a black box, with even less transparency than having corruptible humans at each step (at least we sort of know what tends to corrupt humans). Adding strict “code as contract” rules to try to keep them in check would at first sight look like an improvement, but in practice any unpredicted behavior could spread blindingly fast across the whole ecosystem, with nobody having a “stop” button anymore. And that’s even before considering coding errors and malicious actors.
I guess a possible solution would be requiring every AI to have an external stop trigger that a judicial system could pull, to… possibly paralyze the whole economy. But that would require new legislation to be passed (with AI lawyers), it would likely arrive late, and those trying to outsmart the system would never fully implement it. Replace the judges with AIs too, then the politicians, then the talking heads on TV… and it becomes an AI world where humans have little to nothing to say. Are humans even of any use in such a world?
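For what it’s worth, the “external stop trigger” idea can be sketched as a shared kill switch that every automated agent must consult before acting. This is purely illustrative (the class names, the `TradingAgent` example, and the overall design are made up for this comment, not any real system), and the whole point of the thread stands: nothing forces anyone to build agents this way.

```python
# Hypothetical sketch of an external "stop trigger": every agent checks a
# shared kill switch before acting, and an outside authority (a court, a
# regulator) can flip it. All names here are illustrative, not a real API.
import threading


class KillSwitch:
    """A one-way flag an external authority can set to halt all agents."""

    def __init__(self) -> None:
        self._stopped = threading.Event()

    def trigger(self) -> None:
        # Called by the external authority; cannot be unset by the agents.
        self._stopped.set()

    def is_stopped(self) -> bool:
        return self._stopped.is_set()


class TradingAgent:
    """A toy agent that refuses to act once the switch is thrown."""

    def __init__(self, switch: KillSwitch) -> None:
        self.switch = switch
        self.actions: list[str] = []

    def act(self, order: str) -> None:
        if self.switch.is_stopped():
            return  # halted: ignore all further orders
        self.actions.append(order)


switch = KillSwitch()
agent = TradingAgent(switch)
agent.act("buy")
switch.trigger()        # the external authority pulls the plug
agent.act("sell")       # ignored: the switch is already thrown
print(agent.actions)    # ['buy']
```

The catch, as noted above, is that this only works if every agent is legally required to honor the switch, and anyone trying to outsmart the system simply won’t wire it in.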
None of those AIs need to be an AGI, so we could run ourselves into a corner with nobody and nothing having a global plan or oversight. Kind of like right now, but worse for the people.
Alternatively, all those AIs could be eco-friendly humans-first compassionate black boxes… but I kind of doubt those are the kind of AIs that current businesses are trying to build.
Thing is, nobody will do that, because once an AI finds a way to spazz out that is totally unpredictable (black box), everything might just be gone.
It’s a totally unrealistic scenario.
People are already doing it, piece by piece, in all areas. As more AIs get input from other AIs, the chance of a cascading failure increases… but it will seem to be working “good enough” up until then, so more people will keep jumping on the bandwagon.
The question is: can we prepare for the eventual cascading spazz out, or have we no option other than letting it catch us by surprise?
They are working on mitigating the unpredictable “black box”.
For example, by making the AI explain its working method step by step. Not only does that make the AI more transparent, it also increases the correctness of whatever it outputs.
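The technique described here is usually called chain-of-thought prompting: instead of asking for a bare answer, you ask the model to show its reasoning first. A minimal sketch of the idea (the function and prompt wording are made up for illustration, and `chain_of_thought_prompt` only builds the prompt text; plugging it into an actual LLM API is left out):

```python
# Minimal sketch of chain-of-thought prompting. Asking the model to lay out
# its reasoning step by step both exposes its "working method" and tends to
# improve answer accuracy. This function only constructs the prompt string.

def chain_of_thought_prompt(question: str) -> str:
    """Wrap a question in an instruction to reason step by step."""
    return (
        "Answer the following question. First explain your reasoning "
        "step by step, then state the final answer on its own line, "
        "prefixed with 'Answer:'.\n\n"
        f"Question: {question}"
    )


prompt = chain_of_thought_prompt(
    "If a train leaves at 9:00 and the trip takes 2.5 hours, when does it arrive?"
)
print(prompt)
```

It’s mitigation rather than a fix, though: the stated reasoning is still model output, so it can read plausibly while the black box underneath did something else entirely.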
AI is still in development. It is good to list the problems you see, but don’t assume those problems won’t be solved in the future.