OK, class. We have one more example of artificial intelligence in action.
This situation occurred on an actual Starfleet mission. The away team lead was resting in the back of a shuttle being used as a submarine. A Mudd android of the Joy class ended up in charge. Please turn to figure twenty one.
The Joy class design features a memory search. Various Joys have lived several centuries, cumulatively, so there is quite a lot of memory to search. In this case, the memory search unit was looking for other times the androids had worked under 100 meters of toxic fluid, attempted to communicate with aliens who speak sign language with 100-plus limbs, or encountered translator failures during first contacts. If a match had been found, the solutions to the previous problems would be fed into the extrapolator.
The extrapolator, meanwhile, attempted to predict the results of all possible actions. What happens if one manipulates an unknown alien control? How could one combine a Federation or Mudd standard network port with an unknown alien protocol? Any positive extrapolator results would be looped back into the memory search unit, to see how similar actions had worked in the past.
Both the memory search results and the extrapolation results were judged by the Asimov processor. Priorities Two and Three dominated during this problem. At Priority Two, the android was supposed to select a course of action that would protect the lives of everyone on the planet. At Priority Three, she was supposed to select an action that followed orders to fix an array of machines that stop earthquakes. As the primary threat on the planet was earthquakes, Priorities Two and Three were in agreement.
Oh, yes. At Priority Six she was supposed to stay functional and undamaged. This was not really relevant, however, as Priorities Two and Three were fully engaged.
Finally, the output of the Asimov processor was sent to the emotion display. In this case, neither memory search nor extrapolator proposed a course of action that could definitely prevent all possible loss of life, resulting in voice and facial displays corresponding to anxiety and fear. This can clearly be heard in the following recording, where the Joy class unit implements her solution to this rather interesting problem...
"Would you see if you can wake up Commander Bravo?"
OK, class. Stop laughing. So your professor is partial to Dr. Soong as an android designer, rather than to Milord Harcourt Fenton Mudd. I suppose we have to give the Joy class equal time. Here is another problem Unit Eleven had a little later in the same expedition.
The android was on a mission to retrieve graviton coils from an alien device that was supposed to keep continents from moving. The team had just picked up their two samples when the shuttle got caught in a mudslide and was deeply buried.
The warp core was intact, so the shuttle had plenty of power. SIF was holding, barely. Shields were down, and bringing them back up under that mass and pressure would be, shall we say, interesting. Most systems were intact, but most would be overloaded by the weight of the mud, or blocked by its oily composition. The team had already established that transporters just didn't work at any range under the oil. Using engines without moving the mud out of the way first would overload the SIF. Tractors, deflectors, shields, phasers, and various other built-in shuttle components were being considered for the mud-moving job, but the fact is none of them were designed to move mountains, let alone mountains made of as unstructural a material as mud.
OK. This is an engineering problem, not an AI problem, but Unit Eleven's first approach did differ from that of her 'sentient' counterparts. Again, we cut to a recording made on the scene...
"Which of the alien graviton coils is the good one?"
The Panting of the Ghosts - In which Joy dances the part of the Dark Angel...
Joy Eleven - USS Hawking