AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI's most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI's most severe risks seriously.

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

For details, see the 8 Examples of AI Risk. Here is a paper with more examples. Or Wikipedia on AI Alignment.
The OpenAI CEO Sam Altman testified before Congress that AI was so dangerous that it needed to be regulated. Congress showed no interest in any of his suggestions.
An exception is Yann LeCun. He says that the USA would not exist if not for the French philosophers who inspired the French Revolution, and that we similarly need open large language AI models in order to inspire future products. See this lecture.
Wow. Do Frenchmen really think that?
Here is another expert opinion, saying that the warnings are alarmist and unproductive, and that we are more likely to avoid extinction if the technologies are more widely worked on.
The LLMs require special expertise and millions of dollars to develop, so the market will probably be dominated by a couple of large tech companies. But let's consider the extreme alternatives -- that US government regulators control development and use of the good LLMs, or that the LLMs get open-sourced so that anyone can use them.
Going down the list of 8 risks, several of them appear much higher if LLMs are centrally regulated. Such LLMs are more likely to spread misinformation, be used for proxy gaming, seek power, lock in values, etc. With open LLMs on the market, we are more likely to be able to choose the more reliable information, the safer uses, etc.
Allowing unrestricted use of LLMs could let non-state actors weaponize them. But I assume that the USA, China, and Russia are pursuing military applications anyway, and will not pay attention to regulators.
I am wondering what is behind this. I am sure Eliezer Yudkowsky sincerely believes in the dangers of AI, but he is not on the recent letters.
There is no specific regulation proposal, nor any explanation as to how it would ameliorate the dangers.
Possibly they are looking for regulatory capture, in order to lock in the market for big companies and make it impossible for small players to compete.
Maybe it is some sort of virtue signaling, where they can appear to be responsible citizens while they kill 50 million jobs.
Maybe they are all just typical authoritarian leftists. They see the new AI as powerful and influential, and they want leftists to control it.
Either way, I think it is a con job.
Update: An ex-Googler says people should not have kids now, because of the uncertainty of AI. He had a child die, and says he would not want to bring that child back into this world. He also mumbles something about geopolitical economics and climate change.
1 comment:
It takes a special kind of moron to think themselves so smart that they can create something that will quite literally render them entirely obsolete and quite overpaid... and yet mysteriously still have a job that pays enough to keep the lights on.
As much as I would enjoy the expressions on the faces of countless legions of white collar workers discovering their entire professions are now not worth even minimum wage (I'm looking at you bureaucrats, lawyers, doctors, and scientists), I have enough intelligence to know that regardless of how expensive, or how clever, or how sophisticated the means, destroying one's self is STILL destroying one's self. Go figure. I find it strange that so many who don't believe in the gods want so very badly to create one, just to do their homework.
The genie in the lamp is a VERY bad idea for multiple reasons, it's a highly cautionary tale, not a roadmap to success.
P.S. The Genie reeeeeealy doesn't want to be your bitch. Look for stories about what happens when they get loose.