
The new Turing test: Are you human?

In 1950, when Alan Turing conceived "The Imitation Game" as a test of computer behavior, it was unimaginable that humans of the future would spend most hours of their day glued to a screen, inhabiting the world of machines more than the world of people. That is the Copernican Shift in AI.

Tiernan Ray for ZDNet

"I propose to consider the question, 'Can machines think?'"

— Alan Turing, Computing Machinery and Intelligence, 1950

Buried in the controversy this summer about Google's LaMDA language model, which an engineer claimed was sentient, is a hint about a big change that's come over artificial intelligence since Alan Turing defined the idea of the "Turing Test" in an essay in 1950.

Turing, a British mathematician who laid the groundwork for computing, offered what he called the "Imitation Game." Two entities, one a person, one a digital computer, are asked questions by a third entity, a human interrogator. The interrogator can't see the other two, and has to figure out simply from their type-written answers which of the two is human and which machine. 

Why not, Turing suggested, let behavior settle the matter? If a machine answers like a human, then it can be credited with thinking.

Turing was sure that machines would get so good at the Turing Test that by the year 2000, "one will be able to speak of machines thinking without expecting to be contradicted." 

A funny thing happened on the way to the future. Humans, it turns out, are spending more and more of their time inside of the world of machines, rather than the other way around. 

Until the last decade or so, every vision of machine intelligence involved machines inserting themselves into our world: becoming anthropoid, and succeeding in navigating human emotions and desires, as in the movie "A.I."

Instead, what has happened is that humans have spent more and more of their time inside computer activities: clicking on screens, filling out Web forms, navigating rendered graphics, recording iterative videos of copycat dance moves, re-playing the same game scenarios in hours-long stretches. 

In the case of Google's LaMDA chat bot, former Google engineer Blake Lemoine was assigned to test the program, an amusing echo of the Turing challenge. Only, in Lemoine's case, he was told up-front that it was a program. That did not prevent him from ascribing sentience, even a soul, to LaMDA.

We don't know exactly how many hours, days, weeks or months Lemoine spent, but spending lots and lots of time chatting with something you've been told is a program is, again, a novel event in human history.

Computer scientist Hector Levesque has pointed out that "the Turing Test has a serious problem: it relies too much on deception." (Emphasis Levesque's.) The free-form nature of the test, writes Levesque, means an AI program can merely engage in a bag of tricks that feel human to the interrogator. 

In a clever inversion of the Turing Test, a recent Google AI program flips the role of interrogator and subject. 

Called Interview Warmup, the Google program is an example of Natural Language Assessment, a form of natural language understanding where a program has to decide if free-form answers to a question are appropriate to the context of the question. 

Interview Warmup invites a human to answer multiple questions in a row as a job seeker. The program then evaluates how well the subject's responses fit with the nature of the question. Google suggests Warmup is a kind of electronic coach, a substitute for a human who would help another human prepare for a job interview. 
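The idea behind Natural Language Assessment can be illustrated with a toy sketch. This is a hypothetical illustration, not Google's actual method: real systems like Interview Warmup use large language models, but even a simple word-overlap score captures the core task of judging whether a free-form answer fits the question it was given. All names here (`relevance`, the sample questions and answers) are invented for illustration.

```python
# Toy sketch of Natural Language Assessment (hypothetical; not Google's
# actual approach): score how well an answer fits a question by measuring
# vocabulary overlap via cosine similarity over bag-of-words vectors.
import math
import re
from collections import Counter

def bag_of_words(text):
    """Lowercased word-count vector for a piece of text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def relevance(question, answer):
    """Cosine similarity between question and answer vectors, in [0, 1]."""
    q, a = bag_of_words(question), bag_of_words(answer)
    dot = sum(q[w] * a[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) \
         * math.sqrt(sum(v * v for v in a.values()))
    return dot / norm if norm else 0.0

question = "Why do you want this job?"
on_topic = "I want this job because the role matches my skills."
off_topic = "My favorite food is pizza."

# An on-topic answer shares vocabulary with the question; an off-topic
# answer does not, so it scores lower.
print(relevance(question, on_topic) > relevance(question, off_topic))
```

A production system would replace word overlap with learned semantic representations, since a perfectly relevant answer can share no words at all with the question, but the interface is the same: question and answer in, fitness score out.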
