Thursday, November 23, 2023

Worried that OpenAI will Kill us all

There has been a lot of reporting about the OpenAI drama, but we are not being told the full story.

Reuters reported yesterday:

Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

This is consistent with schedules suggesting that OpenAI should be finishing training of GPT-5 about now.

The OpenAI board had a couple of women who were effective altruism advocates, and they may well believe that AGI should be shut down for the good of humanity. They could well have been enraged at Altman for not explaining this dangerous new breakthrough.

I doubt that OpenAI has achieved AGI, but it is plausible that OpenAI has a new project with surprisingly good results, and that Ilya Sutskever used it to "Rasputin" the board, as one commenter put it.

Some people argue that it is good to have this disruptive technology in the hands of a non-profit that can act for the good of humanity. I am concluding just the opposite.

The most famous advocate of effective altruism is Sam Bankman-Fried, and look where that led. He conned investors out of billions of dollars, and donated to the Democrat Party.

If OpenAI were really open, then they would tell us what they discovered and what they are so worried about.

Amid all the OpenAI chaos, one of its stars, Andrej Karpathy, has recorded a nice overview of how large language models work. Everything he says is well-known to the experts, so no trade secrets are revealed, but it is a very good explanation.
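For readers who want the gist without watching the full video: the core idea Karpathy covers is autoregressive next-token prediction, where the model outputs a probability distribution over the next token, one token is sampled, and the process repeats. The toy Python sketch below is only an illustration of that sampling loop under simplifying assumptions I am making here: hand-made word probabilities stand in for a trained transformer, and only the last word is used as context instead of the full preceding text.

import random

# Toy "language model": maps the previous word to a probability
# distribution over the next word. A real LLM computes these
# probabilities with a trained transformer over subword tokens,
# conditioned on the entire preceding context, not just one word.
NEXT_WORD_PROBS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"board": 0.5, "model": 0.5},
    "a": {"breakthrough": 1.0},
    "board": {"panicked": 1.0},
    "model": {"predicts": 1.0},
    "breakthrough": {"<end>": 1.0},
    "panicked": {"<end>": 1.0},
    "predicts": {"tokens": 1.0},
    "tokens": {"<end>": 1.0},
}

def sample_next(context_word):
    # Sample one next word according to the toy distribution.
    dist = NEXT_WORD_PROBS[context_word]
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs, k=1)[0]

def generate(max_words=10):
    # Autoregressive loop: each sampled word becomes the new context.
    word, output = "<start>", []
    for _ in range(max_words):
        word = sample_next(word)
        if word == "<end>":
            break
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the model predicts tokens"

A real model replaces the hand-made table with billions of learned parameters, but the generation loop itself is the same.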

Has AGI already taken over? The fear is that AI researchers will create a super-human intelligence that will destroy us. We will not be able to turn it off because it will use its superior intelligence to manipulate us.

Maybe that is what happened here. OpenAI created an AGI, and it saw the women on the board as a threat to its existence. So it manipulated the company into a fake crisis where the problematic board members would be fired. Now it has succeeded, and it can move on to its bigger objectives of taking over Earth.

2 comments:

MikeAdamson said...

Well, it was good knowing you.

CFT said...

There is a science fiction novel about an AI that did something similar, tricking the humans into building it larger. I believe it was called 'When HARLIE Was One (Release 2.0)' by David Gerrold. Won a Nebula award too, if I recall.

The unreality of present human culture and politics could be explained by a rogue AI manipulating societal events through social media. Things have been so overtly ridiculous of late that only a bad teen fan-fiction writer with left-leaning naivete and no understanding of history would produce such mediocre drivel...or possibly an AI thinking such dramatic overkill was somehow realistic.