Re: What REALLY happened to Dr. Daystrom after 'The Ultimate Computer'
Reminds me of why the Turing Agency was created in William Gibson's Sprawl Trilogy, and of the need to regulate AIs (with the option to erase them with an EMP if they get out of hand). Starfleet, the United Earth Republic, the Federation and its other member races may want to do something similar if AI research continues and AIs become commonplace.
But clearly M-5 knew it was working in a simulator.
The point seemed rather to be that M-5 never had a clear idea of where it was working... It treated a simulation (the wargames) as reality. This suggests a childishly narrow worldview, while testing should already have pitted the computer against a wide variety of situations. Did none of those test scenarios deal with concepts like "untruth", "bluff", "accounting for human error" and "erring on the side of caution"?
Wesley might simply be saying that M-5 had handled the mechanistic routines of starship command well enough, and the wargames (plus the lead-in planetary survey mission) were the first time the computer faced complications. But that doesn't make sense from today's point of view, because odd complications should be more easily tested virtually than physically, and it's those that M-5 would realistically have learned to handle before entering the wargames, rather than things like tactics or power distribution.
From today's vantage point, it looks as if M-5 really was a splendid success originally, meeting all the criteria in rigorous testing - and simply snapped later on. Unfortunately, the snapping happened at a rather crucial moment, but we don't need to assume that the circumstances of that moment had anything to do with the snapping. M-5 might simply have been doomed to remain sane for a limited period of time only, by design and default, what with being burdened with the memory engrams of a snapping-prone man.
Timo Saloniemi
Yes, and perhaps it never had so much power at its disposal in previous testing. Maybe its "brain" was overloaded and then the defect set in. It might have continued for years at that lower power with no noticeable problems, but when that extra power got into its system, it was like a drug: it couldn't get enough and wanted more.
Starfleet should have tested M-5 by linking it to another computer-- a virtual simulator that the M-5 thinks is the real deal.
James P. Hogan addressed this in THE TWO FACES OF TOMORROW, a novel about the creation of the first "true" artificial intelligence. The story begins with an accident caused by a semi-intelligent computer performing an action that seemed like a good idea. The act was highly creative, but demonstrated a lack of "common sense" and judgment.
The dilemma is that computers of the same sort run the rest of human civilization. Going back is out of the question—that would sacrifice the many advances, economy and very lives of far too many people. And continuing with the current generation of computers is untenable following the revealing accident.
While a solution is being worked out, the reader is shown researchers in the lab with a new-generation AI that learns to deal with the real world by working in a simulation driven by another computer. The problem is the same as the real-world problem with the existing computers: reality is just too complicated to plot out in every facet. And putting a human in the loop to provide "judgment" defeats the purpose of using computers to manage the volume of civilization's daily interactions.
So the next generation AI is placed in charge of a new O'Neill-style space colony, a smaller yet suitably detailed proxy of the world. This should protect Earth in case the experiment gets out of hand, but the AI evolves far faster than anyone had imagined possible...