AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI's most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI's most severe risks seriously. For details, see the 8 Examples of AI Risk, this paper with more examples, or Wikipedia on AI Alignment.
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
OpenAI CEO Sam Altman testified before Congress that AI is so dangerous that it needs to be regulated. Congress showed no interest in any of his suggestions.
An exception is Yann LeCun. He says that the USA would not exist if not for the French philosophers who inspired the French Revolution, and that we similarly need open large language models to inspire future products. See this lecture.
Wow. Do Frenchmen really think that?
Here is another expert opinion, arguing that the warnings are alarmist and unproductive, and that we are more likely to avoid extinction if the technologies are worked on more widely.
LLMs require special expertise and millions of dollars to develop, so the market will probably be dominated by a couple of large tech companies. But let's consider the extreme alternatives -- that US government regulators control development and use of the best LLMs, or that LLMs are open-sourced so that anyone can use them.
Going down the list of 8 risks, several of them appear much worse under central regulation. Centrally controlled LLMs are more likely to spread misinformation, be used for proxy gaming, seek power, lock in values, etc. With open LLMs on the market, we are more likely to be able to choose the more reliable information, the safer uses, etc.
Possibly unrestricted use of LLMs would let non-state actors weaponize them. But I assume that the USA, China, and Russia are pursuing military applications anyway, and will pay no attention to regulators.
I wonder what is behind this. I am sure Eliezer Yudkowsky sincerely believes in the dangers of AI, but he is not a signer of the recent letters.
There is no specific regulation proposal, nor any explanation of how regulation would ameliorate the dangers.
Possibly they are seeking regulatory capture, in order to lock in the market for big companies and make it impossible for small players to compete.
Maybe it is some sort of virtue signaling, where they can appear to be responsible citizens while they kill 50 million jobs.
Maybe they are all just typical authoritarian leftists. They see the new AI as powerful and influential, and they want leftists to control it.
Either way, I think it is a con job.
Update: An ex-Googler says people should not have kids now, because of the uncertainty about AI. He had a child die, and he would not want to bring that child back. He also mumbles something about geopolitical economics and climate change.