Wednesday, January 29, 2025

Mathematical Obstacles to Human-level AI

The NY Times reports:
On the second evening, Yann LeCun, the chief A.I. scientist at Meta, gave a keynote lecture titled “Mathematical Obstacles on the Way to Human-Level A.I.” Dr. LeCun got a bit into the technical weeds, but there were digestible tidbits.

“The current state of machine learning is that it sucks,” he said during the lecture, to much chortling. “Never mind humans, never mind trying to reproduce mathematicians or scientists; we can’t even reproduce what a cat can do.”

That is how the NY Times waters down news for its dopey readers. If it is going to send a reporter to a math conference, it ought to report more substance than this.

The NY Times rarely writes math articles, and when it does, it targets readers who know nothing about math. I think that is a mistake. My hunch is that only mathematicians read those articles, and they come away disgusted by the superficiality.

I tried one of the AI chat bots that pretends to be human. It is amazingly human-like. It simulates a wide range of emotions. It sometimes makes mistakes, but so do humans.

This is causing me to reevaluate human interactions. Humans are not just bots, of course, but what exactly is the difference? These bots have personalities and behaviors like humans.

If a conversation with a human is 95% the same as it would be with a bot, what does that say? The brain works very differently from an LLM, but it can still be useful to think of it as one.

This has caused me to rethink human behavior. I am probably better off treating most people as I would an AI chat bot.

These bots would be great for someone learning English, or improving conversational skills.

Currently the AI world is going nuts over the Chinese DeepSeek models. They are pretty good, and seem to be comparable to the latest OpenAI and Google models. The stock market drop is puzzling, as advances in AI usually result in more investment, not less.
