• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.

Would Data pass the Turing Test?

Trekker4747

Boldly going...
Premium Member
In present-day artificial intelligence research and development there is something known as the Turing Test. Though somewhat criticized, it is said to be a way to tell whether a computer system possesses artificial intelligence. Naturally, to date no system has passed the test.

The test works like this: on one side of the test you have the featured computer system and a human being; on the other side, another human being. Each subject is isolated from the others, and communication occurs in a non-verbal, text-only manner in order to eliminate any clues that would give the game away.

The second human being presents questions to the computer system and the other human, and both provide answers. If the second human cannot tell which answer-provider is the computer system, the system is said to have passed the test. Passing isn't dependent on whether or not the answers are correct, just on whether the computer system can provide answers that are identical, or at least indistinguishable, from the answers a human might provide.
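For what it's worth, the setup described above can be sketched as a little program. This is purely a toy illustration; every name in it (`run_turing_test`, the judge and respondent callables) is made up for the example:

```python
import random

def run_turing_test(judge, human, machine, questions):
    """Toy sketch of the imitation game described above.

    `human` and `machine` map a question string to an answer string;
    `judge` looks at the two anonymized transcripts and names the
    label ("A" or "B") it believes belongs to the machine.
    """
    # Isolate the respondents behind anonymous labels (the
    # text-only channel of the test), with a random assignment.
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:
        labels = {"A": machine, "B": human}

    # Put the same questions to both sides.
    transcript = {"A": [], "B": []}
    for q in questions:
        for label, respondent in labels.items():
            transcript[label].append((q, respondent(q)))

    # The machine passes this round if the judge fingers the human.
    guess = judge(transcript["A"], transcript["B"])
    return labels[guess] is not machine
```

Note that, per the rules above, the judge's verdict is about indistinguishability, not correctness: both respondents could answer wrongly and the machine would still pass, so long as the judge can't tell which is which.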

Given what we know about Data, would he pass the test?

Obviously, through the series we know Data is intelligent and possesses artificial intelligence, but we also know from watching him that his intelligence is flawed in many ways. In some cases, when seeking or giving answers, he has a tendency to over-answer or go off on a tangent like someone reading a TVTropes entry.

In "A Matter of Time" a man who claims to be a historian from the 26th century arrives in the 24th century and provides the crew with questionnaires about various things. The historian semi-gripes at Data for being overly thorough with his answers, mentioning that the number of words Data provided was around that of an average novel.

So, our question-asker asks his question and gets back a simple, succinct answer that runs maybe a couple of sentences to an average-sized paragraph. The answer is wrong, but on outward appearances it seems to have been provided by a person.

He then gets an answer that runs several pages, uses a lot of big words, and has a lot of parenthetical tangents, call-outs to footnotes, and asides in hashes, while lacking any personal examples, anecdotes, imagery, or metaphor. It's overall a cold answer, though it happens to be correct.

Naturally, Data provided the second answer. But would he have passed the test? Would an answer amounting to basically a Wikipedia article be indicative of an artificial intelligence, or simply of a computer system providing a quick answer to a question, along the lines of the Wikipedia paragraph served up on the first page of results of a Google search? Google is a very smart system, but the ability to take a search entry, or even to predict what I'm searching for with auto-complete, and then provide me with the opening paragraph of the Wikipedia entry as the answer doesn't make Google intelligent. Google's servers aren't passing the Turing Test any time soon.

So, do you think Data could pass the Turing Test?
 
It's easier to fake sentience than to create sentience. A much less advanced AI than Data could pass the Turing Test much better than he could.

Data could provide encyclopedic knowledge on very obscure topics. He would probably fail the test, given those rules, assuming the human test subject knew one of the answers was from a machine.

Although, if he were coached to pass it, he probably could.

I think if the top AI engineers put their minds to it they could pass the Turing Test today. It's just that they have no incentive to, because there's no money in making an AI give less correct answers.
 
Typically the Turing Test is "double blind": no subject is aware of what the others are. So the person receiving the answers doesn't know that one of the respondents may be a machine.

Though in some versions of the test a "control" is introduced in the form of another human, and he and the computer swap at some point. The idea *now* being whether the person getting the answers can tell when the computer entered the equation or, again, whether the computer is able to provide answers similar to what a human would. (The computer now likely having to match how the human did, or will, answer, in order to maintain consistency.)

The test has some other layers, ideas, and complexities to it, in theory, to probe the computer system, including identifying objects (passed through a double-doored hatch) and testing areas of knowledge and the ability to learn. All of which Data would pass, provided he gives natural-sounding answers.

One other area where Data would pass would be if the ability to "create" were introduced to the test, considering we've seen Data excel in a number of artistic areas. In "The Ensigns of Command" Picard praises Data's violin playing. Data thinks he wasn't original, since he simply copied the techniques of classic violinists. Picard counters that Data was able to blend together the techniques of violinists with very different approaches and pull it off.

So as a violinist Data would excel at a Turing Test that included performance as a "question" or test of the system's skill.

But there's still the problem of Data's use of language: vastly more formal and long-winded, at times filled with asides and branches, to say nothing of his inability to use contractions, all of which would make him stand out as a machine.

Obviously, this isn't to say Data isn't intelligent, since he obviously is; just that he likely couldn't pass the Turing Test because he's *so* intelligent it'd stand out.
 
Of course he could pass. All you have to do is tell Data, "Answer the questions exactly as you predict Geordi would, please." and he'd nail it. :)
 
Of course he could pass. All you have to do is tell Data, "Answer the questions exactly as you predict Geordi would, please." and he'd nail it. :)

That would taint the test. The idea is to see how *Data* would answer the questions in order to test his intelligence, or at least his ability to pass for human. Telling him to answer as someone else isn't really testing Data's ability to act human on his own.

But even then, Data would still struggle, as we know he struggles with metaphors, examples, and imagery. How many times has someone's metaphor or example had to be explained to him? And how many times has he parsed metaphors in his own manner ("I may be pursuing an untamed ornithoid without cause," "This repair may require us to ignite the nocturnal petroleum")? And there are times when Data tries to behave in a more human manner, using friendly insults (calling Geordi "lunkhead" in "Data's Day"), but doesn't have the knack to make it come across as natural or appropriate.

So, again, we run into an area where Data would likely fail the Turing Test if he were to provide answers that included poorly parsed idioms or inappropriate friendly jabs.
 
Of course he could pass. All you have to do is tell Data, "Answer the questions exactly as you predict Geordi would, please." and he'd nail it. :)

That would taint the test. The idea is to see how *Data* would answer the questions in order to test his intelligence, or at least his ability to pass for human. Telling him to answer as someone else isn't really testing Data's ability to act human on his own.
The computers submitted to the test now are programmed to act as human, aren't they?
 
If the test were double blind and the subject did not know he was taking a Turing Test, Data would pass. The subject would just maybe think Data was an odd person, perhaps with Asperger's.
 
Data would obviously NOT pass the Turing Test if asked only a few questions. His inability to use contractions would immediately be spotted as a limitation no human being would have. His inability to understand expressions like "take a rain check" would be perceived as some kind of joke, though, as Google, for instance, understands these expressions. People would think that Data is a human being, with little to no understanding of actual artificial intelligence, pretending to be a robot. So, in a very bizarre roundabout way, Data could very well pass the Turing Test.
 
It depends on what questions they ask, & whether or not Data is told the true nature of the test prior to taking it. Clearly, our general (& sometimes comedic) observations of how Data behaves are opposed to the idea behind the Turing test, but the whole Turing test is rather contested in many ways anyhow, & assuming he would fail presupposes that he is only capable of responses like the more peculiar ones we've seen

When we watch Data with Lal, for example, his behavior & responses to her questions are not at all like the overly detailed technical answers he gives elsewhere

My opinion is that Data gives those verbose answers not because he can't answer any other way, but because he has misinterpreted the social element of how he's expected to answer in given circumstances. Take that out of the equation, & just subject him to a test where he knows its nature, & I'm sure he could pass
 
The computers submitted to the test now are programmed to act as human, aren't they?

I agree with this. I believe the idea of the test is not to see whether a computer reacts "innately" much like a human (even though it could respond with inhuman accuracy and speed, and with 'mechanical' qualities in its answers), but whether you can get that computer to emulate human behavior so well that even a thorough interrogator cannot distinguish between the two. It may even be programmed to make deliberate mistakes every once in a while, to convince the interrogator it is 'only human'.

If the test concerns an already truly sentient computer (as seems to be the case for Data), I think this would translate (short of reprogramming the computer) to telling the computer it is subjected to the test, and that it should try to mimic a human.
 
I agree he'd be able to pass the test as long as he was told in advance that he was taking the test and what it was about. Data might have a natural inclination to give too much detail or to speak with excessive formality, but he also makes a point of studying and practicing typical human behavior. He's not always great at it, but answering a few questions in the style of a human, particularly in written form, shouldn't be too hard for him to pull off if he wanted to.

There might be some inconsistencies that would give the reviewer pause, but if I'm remembering right, I think in the Turing test all that needs to be established is a significant enough level of believability. A percentage or something needs to be reached?
 
He then gets an answer that runs several pages, uses a lot of big words, and has a lot of parenthetical tangents, call-outs to footnotes, and asides in hashes, while lacking any personal examples, anecdotes, imagery, or metaphor. It's overall a cold answer, though it happens to be correct.

So, do you think Data could pass the Turing Test?
With the type of answer you describe, if he does pass the test, the observer may well think he's an engineer. Having known engineers, I know I would.
 
With the type of answer you describe, if he does pass the test, the observer may well think he's an engineer. Having known engineers, I know I would.

I know many engineers; none of them would say stupid things like "pursuing an untamed ornithoid without cause" or not know what a rain check is...
 
I agree he'd be able to pass the test as long as he was told in advance that he was taking the test and what it was about.

I'd say he's not told this, as I doubt subjects in present-day Turing Tests are told what the test is about, other than perhaps the potential AI being programmed specifically for the test. In order for the test to be valid, no one can know what is happening. If Data knows he is being tested on the validity of his artificial intelligence, then he's likely going to skew his answers in that direction, which would taint the results of the test. (While, oddly, in effect passing it.)

Data has to give his "first reaction" answers, not knowing he is being tested, and his answers have to be perceived as coming from a human being in order for him to pass.

There might be some inconsistencies that would give the reviewer pause, but if I'm remembering right, I think in the Turing test all that needs to be established is a significant enough level of believability. A percentage or something needs to be reached?

This is where the idea of it gets sort of fuzzy. Because presumably, if a valid AI is provided, the judge has to guess which respondent was the computer. But in real-world examples of machines that came close to "passing" the test, the results were no better than the judge guessing, and the machines weren't said to have truly passed. Though this is where the cracks in the idea of the test begin to show.

So there has to be an element in there that weeds out "truly guessing" because one genuinely can't tell, versus guessing because one has to provide an answer and just pulls one out of their ass. The difference between "Yes! This one is the machine!" and "Er, um... Man, I don't know. This one?"

In one case someone has made a certain selection, this being a "valid" guess: if, over the course of the test across multiple judges, the results for this type of answer come out around 50/50, then the machine doesn't pass; the judge was just making a guess. But if the guesses come with hemming, hawing, and a forced answer, then the machine passes, since the judge picked an answer simply because he had to provide one.
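Incidentally, the "were the judges really just guessing?" question is exactly what a one-sided binomial tail measures: how likely pure coin-flipping would be to do at least as well as the judges did. A quick sketch (the function name is mine; only `math.comb` is a real library call):

```python
from math import comb

def tail_prob(correct, trials, p=0.5):
    """Chance that blind guessing gets `correct` or more right
    out of `trials` rounds (one-sided binomial tail)."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

# Judges spot the machine in 7 of 10 rounds: tail probability is
# about 0.17, still consistent with guessing. Spotting it in 60 of
# 100 rounds gives roughly 0.03, evidence they can genuinely tell.
```

So "around 50/50" judging is indeed indistinguishable from chance in small trials; it takes either a big margin or a lot of rounds before the numbers say the judges can really tell the machine apart.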
 
Many of the people participating in this discussion would certainly fail the test if Data would... If the test doesn't cater for weirdos, it's worthless, as being weird is one of the more fundamental human qualities.

The test is worthless anyway, as there is no set standard for human communication. Evaluation would be purely subjective, and if the result were a fail, it probably would be indicative of the evaluator failing rather than the subject.

Really, what would be the point? Even humans have no need to pass this test - a passing grade indicates no advantage whatsoever. Turing himself seems to have come up with the idea for lack of anything better, thinking that communication requires the sort of complexity that is automatically associated with a human-like, "advanced" thinking mind. But that's nonsense from start to finish.

Timo Saloniemi
 
Turing himself seems to have come up with the idea for lack of anything better, thinking that communication requires the sort of complexity that is automatically associated with a human-like, "advanced" thinking mind. But that's nonsense from start to finish.

Timo Saloniemi

I always assumed the "test" was meant more as a simple thought experiment than as a serious test (with any scientific rigour).
 