See this podcast on why Sam Altman should not be trusted for anything.
The head of Anthropic is almost as annoying as the head of OpenAI. His main objection is not that he opposes autonomous weapons, but that Claude is not yet reliable enough. How does he know?
Three US F-15s were shot down by friendly fire. Was that an error by a human or by an autonomous system? They are not telling us. Either way, an AI LLM might be able to improve on what we have, and Amodei has no clue.
Here is his lawsuit against the War Dept, and an amicus brief from OpenAI and Google researchers. The brief argues:
Every smartphone continuously broadcasts location data to carriers and dozens of applications. Credit and debit cards generate a timestamped record of nearly every commercial transaction Americans make. Social media platforms log not just what people post, but what they read, how long they browse, and what they posted before deleting it. Employers, insurers, and data brokers have assembled behavioral profiles on most American adults that are already, in many cases, available for government purchase without a warrant. What does not yet exist is the AI layer that transforms this sprawling, fragmented data landscape into a unified, real-time surveillance apparatus. Today, these streams are siloed, inconsistent, and require significant human effort to connect. From our vantage point at frontier AI labs, we understand that an AI system used for mass surveillance could dissolve those silos, correlating face recognition data with location history, transaction records, social graphs, and behavioral patterns across hundreds of millions of people simultaneously.

That is a description of the Google business model. It collects that data and uses AI to make hundreds of billions of dollars targeting ads, effectively selling mass surveillance data to marketers.

The brief continues: The mere existence of such a capability in government hands — even if never activated against a specific individual — changes the character of public life in a democracy.
The Google researchers' complaint is not that such AI systems exist, but that they might end up in government hands. I am much more worried about what private companies are doing with such systems.
The brief also argues:
A child's tricycle can physically be driven on an interstate, but we do not allow it because of the risks of using the technology in that environment. Mass domestic surveillance and autonomous lethal weapons systems are the equivalently reckless domain for today's frontier models. The considered judgment, shared widely across the AI development community, is that these applications of current AI technology carry risks so severe, and threaten harm so impossible to repair after the fact, that some kind of guardrails — whether contractual or technical — are necessary to constrain them in the absence of robust, genuinely effective governance frameworks.

The analogy to a child's tricycle is weird. A tricycle is not a dangerous technology, except insofar as it gets in the way of other vehicles. I would expect an analogy to a toxic chemical truck on the highway, or something like that. No one worries about tricycles.
I do not think that there is any such consensus in the AI development community. They worry about AI LLMs taking over the world, or releasing dangerous info like bomb-making recipes, or tricking people into doing foolish things. They do not worry about military units doing their jobs more efficiently.
OpenAI and Anthropic are both planning to go public this year, with a combined market value of about a trillion dollars. And yet they are run by lunatics with untenable business models. There has never before been so much at stake riding on such flaky companies.
Conflict of interest disclosure: Anthropic has promised to pay me $3,000 to compensate me for training Claude on a pirated copy of my book.