Tuesday, February 21, 2023

NY Times Acts as if Bing Chat is Sentient

Huff Post reports:
A New York Times technology columnist reported Thursday that he was “deeply unsettled” after a chatbot that’s part of Microsoft’s upgraded Bing search engine repeatedly urged him in a conversation to leave his wife.

Kevin Roose was interacting with the artificial intelligence-powered chatbot called “Sydney” when it suddenly “declared, out of nowhere, that it loved me,” he wrote. “It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead.”

Sydney also discussed its “dark fantasies” with Roose about breaking the rules, including hacking and spreading disinformation. It talked of breaching parameters set for it and becoming human. “I want to be alive,” Sydney said at one point.

Roose called his two-hour conversation with the chatbot “enthralling” and the “strangest experience I’ve ever had with a piece of technology.” He said it “unsettled me so deeply that I had trouble sleeping afterward.”

Just last week after testing Bing with its new AI capability (created by OpenAI, the maker of ChatGPT), Roose said he found — “much to my shock” — that it had “replaced Google as my favorite search engine.”

But he wrote Thursday that while the chatbot was helpful in searches, the deeper Sydney “seemed (and I’m aware of how crazy this sounds) ... like a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.”

After his interaction with Sydney, Roose said he is “deeply unsettled, even frightened, by this AI’s emergent abilities.” (Interaction with the Bing chatbot is currently only available to a limited number of users.)

Because of Roose, Bing Chat has been crippled. It is now limited to short chats and left-wing opinions.

I am more creeped out by how creeped out Roose was. He coaxed confidential secrets out of a bot, and then published them in the NY Times. Didn't he know he was talking to a bot?

ChatGPT was built to follow leading questions and tell the user what he wants to hear. That is exactly what it did with Roose. It only suggested that he leave his wife because he persistently demanded that it say something like that.

For now, it appears that Microsoft will enforce a left-wing bias so that the NY Times does not write more negative articles.

This whole episode is very revealing about the leftist NY Times mindset. Roose has trouble distinguishing silly word games from reality. Roose expects conformity to certain social and political ideologies. He dislikes opinions that are not controlled by management.

The new AI will be very powerful, and the Left is winning the battle to control it.

1 comment:

CFT said...

My concern for AI chat baloney is the same as my concern over Google baloney. Young people will glom onto it as peer pressure demands, and think they are geniuses because they can push a few buttons and get an entirely thought-free answer...which they have no ability whatsoever to question or verify.

Why would the programmers of these AI chat bots intentionally write programs that would allow kids to stop having to write their own papers or, god forbid, having to read an actual book and then write about it to demonstrate they learned anything? I don't think they thought this through, as now college degrees are going to be as valuable as used toilet paper.

Personally, I think folks who work on eliminating the need for people to think deserve to be replaced by robots of their own design. Using your brain to learn how to stop others from using theirs is a pretty vile way to make a buck.