• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

Data, Lore, B-4, Lal...

Well, still, one would think even a limited automaton would have uses for uninteresting jobs that we see crewmen performing.

Then again, we don't really see people performing boring jobs... and one might argue that in the 'enlightened' age of Trek that no one finds their job boring.

Well, we know O'Brien found standing in a transporter room all day, every day boring as hell, and was happy to get the exercise of constantly patching DS9 up :)

At the same time though, if you were going to use an automaton to play transporter chief, why bother with the robot - why not go straight to runabout-style voice commands-only? Presumably there's something necessary - or at least highly desirable - about manning the transporter rooms of large ships with sentients.

We have the technology today to take off, fly and land an airplane without human assistance.

People are there in case the shit goes down.

As for why there aren't more positronic AIs....

I'm assuming that (in Trek at least), the only known method of achieving the kind of co-dependent, parallel, learning, "natural" intelligence is to use a positronic circuit, or later a holographic matrix. Probably something to do with complexity.

The positronic net, however, is like building a very flimsy house of cards. Only one in a hundred examples actually succeeds, even if you're doing everything the same way every time. Even Data was unable to replicate his own brain successfully; his attempt lasted only a few days. So while it's basically the only way to create a truly sentient being, it's also damned near impossible to do by its very nature. Like balancing knives end to end. Maybe as time goes on they'll create a technology to do it reliably, but obviously 30 or 40 years hasn't been enough.

There might be a morality issue with creating and inevitably having to dispose of dozens or hundreds of artificial intelligences in order to get a stable one, somewhat akin to the modern debate about creating embryos for in vitro fertilization.

In the novels the Federation has basically given up on positronic-based AI and instead employs holograms everywhere, to the point where yes, there is a "slavery" discussion as it becomes apparent that some of them are gaining natural sentience.

Why they don't make a rule that holograms must be turned off and rebooted X times a day to prevent sentience is beyond me. They do it to droids in Star Wars. Holograms certainly don't start out that way.
 
Ok, so what exactly makes the positronic brain so spiffy? And why can't Starfleet make more that don't lose their minds like Lal did? Was this ever explained on the show?
First off... the concept of the "positronic brain" wasn't invented in Trek. The term was coined, as far as I'm aware, by Isaac Asimov (one of the greats of sci-fi prose).

In the case of Asimov's artificial intelligences, very little emphasis was given to the hardware, though... it was really focused on software, taking the hardware entirely for granted.

Trek "borrowed" this concept from Asimov (I wonder if his estate gets royalties for this "cribbing" of his concept?)

So, the term itself doesn't carry any "real" meaning in Trek. It's just a "magic plot device" to explain how Data can be so human, when normal computers can't. (This fails to explain how holodeck characters can be, though, doesn't it? After all, they're ENTIRELY software!)

Why the difficulty with building others like him? I'm not aware of any "real" reason, other than "because it avoids the terrible pitfall that having Data being mass-producible would give the show!" Imagine... if it WERE easy to produce "Data-level" artificial intelligences, wouldn't that ultimately either result in enslavement of those AIs, or the replacement of humanity with these new "children of humanity?" Neither would be very palatable to the average modern audience, would it?

If it were up to me, I'd either (1) make it impossible for ANYONE to create a true "sentient AI" or (2) recognize that it IS possible, and rather than ignoring it, run with the concept... make it central to the storyline (can you imagine anything providing more potential conflict???)
 
It seems as though Dr. Soong was the only man able to build a stable positronic brain, and that even he never fully succeeded. The prototype, B-4, is literally rather simpleminded. Lore was an unstable sociopath. Data functioned, but only because Soong ditched those troublesome emotional subroutines. The nearly perfect simulacrum of his dead wife Juliana was another cheat, based on a scan of her living human brain. Data's daughter Lal simply fell apart, like most of Soong's early experiments.

Maybe "jumpstarting" an AI into self-awareness without it going completely bonkers is the tricky part. Daystrom's M-5 and Roger Korby's androids used human mental patterns as a template, which is a nice shortcut, but we know how they worked out. Mudd's androids were just sophisticated puppets with a central controller, existing only to serve. TNG-era AIs like Moriarty and the Doctor seem to be self-aware and relatively stable, but they're apparently happy accidents, not suitable for mass production.

Starfleet's experience with androids and sentient AI hasn't been very positive, and I can see why they'd be cautious. I can imagine that there's a pretty strong bias against such technology, though not as strong as the bias against genetic engineering. If there were any working, stable, and sane AIs out there, Kirk probably argued them into suicide before anyone had a chance to figure out how they worked.
 
My take on this still is that it's not all that difficult to build a sapient android. It's just not a worthwhile pursuit, according to UFP mainstream science.

What Soong mastered was not the creation of sapient androids per se. It was specifically the building of a positronic brain, as per the dialogue of "Datalore". If he had settled with mundane technologies like optronics or duotronics, he could probably have built functional androids blindfolded and with his right hand engaged in other, more entertaining activities. But he took on the challenge of building a positronic brain specifically - and chose a sapient android as his demonstration piece, because that's where positronic brains are especially useful.

The sapient android as such wasn't particularly interesting to UFP science, as witnessed by the fact that he was not studied much, wasn't any sort of a celebrity, and was allowed to drift to Starfleet along with other dregs of society - easily entering this Foreign Legion-style organization that didn't even sweat his sapience. Starfleet didn't seem to have much use for androids, but neither did it see a reason to discriminate against one if it really insisted on enrolling. So Data managed to spend a decade in the Fleet without attracting attention of any sort.

In the meantime, positronics as a science was forgotten; apparently, the nature of Data's brain wasn't public knowledge. Other sorts of AI hardware and software went rushing by, assuming they hadn't already been developed to perfection long before positronics. And holograms were found to be a much more useful application of AI than physical android bodies...

Timo Saloniemi
 
The show is suggesting that the hardware capabilities of the positronic brain are somewhat exceptional; I thought the strong implication of "The Measure of a Man," if it wasn't stated outright, is that they'd really have to take Data completely apart to know everything about how he works well enough to make more like him. Since his rights triumphed over the proposed advantages of being able to create Data-like beings, no more were made.

I wouldn't be too surprised if this were regarded as a precedent that discouraged a lot of AI research.
 
But as pondered earlier on, AIs of other kinds are ubiquitous and free of moral considerations - they are the staple of the 2360s-70s entertainment industry!

It should be noted that only Commander Maddox was ever interested in the inner workings of Data. The true experts in the field, like Ira Graves, declared Data a mere tinkertoy, unworthy of closer attention. Sure, Maddox was obsessed about certain aspects of the positronic marvels inside Data - but that need not indicate that said aspects would have had significance outside very small academic circles.

The argument that finally brings down Maddox is of the straw man variety: Maddox never wanted to replicate Data, least of all for nefarious purposes, and such replication wouldn't be necessary for said nefarious purposes anyway. I very much doubt the ruling on the Data case carried any weight in other cases or arguments in the AI or android field. Even the JAG gal Louvois freely admits that the ruling as such is irrelevant, and that Data can be a toaster for all she cares. The only true significance of the court decision is to preserve Data's personal right to self-determination and self-preservation, on no other grounds apart from "he was smart enough to ask for it". An AI who doesn't ask would not be subject to similar considerations, as the court verdict certainly didn't establish any sort of universal rights for artificial lifeforms.

Timo Saloniemi
 

I don't think I'm following you. Maddox clearly states he wants to replicate Data.

MADDOX
Ever since I first saw Data at
its entrance evaluation at
Starfleet Academy, I've wanted
to understand it. I became a
student of the works of Doctor
Noonien Soong -- Data's creator.
I've tried to continue his work,
and I believe I am very close to
the breakthrough which will enable
me to duplicate Soong's work and
replicate this.
(Maddox points to Data)
But as a first step I must
disassemble and study it. Data
is going to be my guide.

As for it being of very limited interest, he has Admiral Nakamura in his corner, who explains:

NAKAMURA
I'm sorry about that, but
Commander Maddox's work in
robotics is considered critical
by Starfleet Command. Think
what's at stake here. If
Commander Maddox can succeed in
duplicating Noonien Soong's work
other captains on other ships
would have the advantage you now
enjoy: a Data on every bridge.

Picard's opinion on the importance of the case:

PICARD
Precedent! This case will set
the precedent for all the future
Datas. It will determine their
status, and they'll all be
property.

These quotes are from a script which I don't think is quite final. I don't have access to the episode at the moment.

As for Graves viewing Data as a Tinkertoy...doesn't he merely make a crack about Data's *appearance* that he later retracts? This is the character who decides to place his intellect inside Data, intending at the time to live there forever. At no point in the episode did I get the impression he thought little of Data's design, as he gave endless speeches about how sleek, powerful and immortal he was now that he was driving Data's body.
 