Their charter: https://openai.com/charter

OpenAI is the company behind ChatGPT, among other AI products. I try to keep myself out of the loop when it comes to AI because I end up hearing about it anyway, so I wasn’t aware of this charter.

For the unaware, AGI stands for Artificial General Intelligence. It basically means a form of AI that is extremely advanced and general-purpose, like human intelligence. For contrast, ChatGPT and Stable Diffusion (for example) are highly specialised: the former generates text responses to text input and the latter generates images in response to text input.

Both of these AI technologies are very impressive (even if their proprietors try to obscure the training and energy costs), but the path to achieving AGI is pretty much inconceivable at present. Current AI technologies may have exploratory value on the road to AGI in some far future, but AGI is most likely not going to be built on currently existing technologies; it will be a different beast altogether, provided it ever exists in the first place.

Given this, I find it absolutely baffling that OpenAI talks about AGI the way it does. This is the same level of delusion as Elon Musk talking about Mars colonisation. But given that techbros see themselves as the stewards of the next step in civilisational evolution, I guess it should come as no surprise that they eat this shit up uncritically.

I’m not sure what role these generative AIs will play in the near future. I am trying to figure out whether they will primarily be sold to corporations to cut labour costs or to end users to boost productivity. But talk of AGI, the AI singularity, and far-fetched shit like that is a pure marketing stunt.

  • 中国共产党万岁@lemmygrad.ml · 1 year ago

    I disagree. It is able to come up with something that “sounds right” based on what it’s been trained on, and it can be transferred to specific domains. LLMs are mimicking “fast” human thinking. It’s like the world’s smartest BS-ing alien trying to say things at a cocktail party to convince the guests that it is human despite not knowing anything and just repeating sounds that seem to please the audience. Humans have similar “off-the-cuff” automatic behaviours. However, LLMs seem to have no capability to mimic human “slow” critical thinking, and there is no real internal representation of the world to speak of. There is nothing I know about on the AI research roadmap to overcome these limitations, because right now we’re taking neural networks and throwing them at GPUs until investor money comes raining from the sky.

    Even if these technical limitations were miraculously overcome within 10 years, the biggest problem by far is that the energy consumption is basically uneconomical. A full-fledged human intelligence runs on tens of watts, whereas a silicon AGI would take orders and orders of magnitude more.