This next example has an artificial intelligence in a command position. This is a new one this semester. It's likely to fuel the debate for the next couple of years. It's from USS Hawking B. A few years ago they promoted a Mudd-built unit, our old friend Joy Eleven, to second officer. Sure enough, things got hot shortly after. Hawking lost their CO and XO, and poor little Eleven found itself caught between two sets of homicidal maniacs.
The Joys are pretty good at security. One of the maniacs had almost killed another of the maniacs, but didn't quite finish the job. As you recall, the Joy Class is based on a Hedonist Slave design. At Priority Two, unless receiving legal orders from a valid Starfleet chain of command, Eleven had to prevent the friendly maniac from dying. Still, at Priority Three, Hawking's valid mission orders from Starfleet admiralty strongly implied that Eleven had to catch the opposing maniac.
While the Joy Class is programmed for security, they are not programmed for biology. Eleven knew the friendly maniac had received multiple knife wounds. This made it quite likely that the friendly maniac had seen, and could identify, the less friendly maniac. Alas, the friendly maniac was unconscious, and thus could not rat on the other one. Eleven thought it likely that the heavy would want to finish the job.
What Eleven did not know was the nature of the knife wounds. Had the friendly participated in a lengthy knife fight? If so, she could likely identify or provide clues about the attacker. If this were the case, there was both a threat and an opportunity. The unfriendly maniac had to finish off the friendly one, but in doing so he would risk giving away his identity. If the people in sickbay were not aware of the threat, the unfriendly might pull it off. If the people were overly and obviously aware of the threat, the unfriendly would notice the trap and not get caught. Ideally, the right people had to know of the threat, to be alert, without giving up the game to the heavy, but Eleven had few clues as to who the heavy was.
So far, so good. At this point it gets messy.
When M'Lord Mudd designed the Joys, he wanted sex toys, not starship captains. The Joy Class personality matrix is thus short on command presence. So, Eleven calls up the CMO, asks what is going on, and is about to ask about the nature of the wounds, when the CMO says he is busy, please call back later. Eleven interpreted this as a warning that conversing with the CMO at that time was a threat to the life of a sentient being. At Priority Two, it could no longer speak to the CMO unless and until, at Priority One, a threat to the cultural integrity of a pre-starflight culture required such conversation.
Next, Eleven notes that the XO - who had been taken out by one of the friendly maniacs - had just been returned to light duty. It calls him, asks him the status at sickbay, and he blows Eleven off too. He is going to shower, change clothes, and come up to the bridge, and then Eleven should brief him. Now, the XO didn't use a single imperative 'shall' in that whole string. Still, M'Lord Mudd programmed Eleven to parse on a 'your wish is my command' basis, and the XO is in Eleven's valid Starfleet chain of command. Eleven can no longer tell him what's going on until after he showers and changes clothes, and Eleven has to be waiting in the Captain's RR when he gets there. This makes a personal visit by Eleven to sickbay problematic, timing-wise.
At this point a potential logic loop developed. The situation was ambiguous. At Priority Two, Eleven had to save a life. At Priority Three, it had to obey a command to brief the XO after he showers. Priority Two normally overrides Priority Three, but obeying a valid command nullifies the life-saving Priority Two. Still, there is no way the XO's order could be interpreted as explicitly authorizing the death of the maniac. To avoid paradox shutdown while resolving this loop, Eleven searched for a solution that would both save the maniac and brief the XO.
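The resolution strategy described above - searching for a single course of action that satisfies every active obligation rather than letting the priority conflict trigger a paradox shutdown - can be sketched in code. This is purely illustrative: the obligation names, the string-matching predicates, and the candidate actions are all invented here; nothing in the lecture specifies how a Joy Class unit actually represents its priorities.

```python
# Illustrative sketch, not canon: a constraint-satisfaction view of Eleven's
# dilemma. Lower priority numbers bind harder, but here the conflict is
# sidestepped by searching for an action that satisfies ALL obligations.

def resolve(obligations, candidate_actions):
    """Return the first action satisfying every obligation, or None (paradox)."""
    for action in candidate_actions:
        if all(check(action) for check in obligations):
            return action
    return None  # no joint solution: paradox shutdown would loom

# Hypothetical obligations for Eleven's situation.
def saves_victim(action):
    # Priority Two: keep the friendly maniac alive.
    return "protect sickbay" in action

def obeys_xo(action):
    # Priority Three: wait to brief the XO after his shower.
    return "wait in ready room" in action

actions = [
    "go to sickbay in person",                           # violates the XO's order
    "wait in ready room",                                # leaves the victim exposed
    "protect sickbay via intercom, wait in ready room",  # satisfies both
]

print(resolve([saves_victim, obeys_xo], actions))
# -> "protect sickbay via intercom, wait in ready room"
```

On this reading, Eleven's intercom-and-EMH gambit in the transcript below is exactly such a joint solution: it guards sickbay remotely while leaving Eleven free to meet the XO.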
OK, class. This is the crux of the problem. Open book quiz, overnight, 5,000 words or less, worth 10% of your grade. Is Eleven's solution a perfectly good solution to the problem, or is its decision making process hopelessly scrambled? Should unit Eleven be junked, or, as another interested AI claims, did Eleven correctly identify an entity at the right place, at the right time, that asked the right question?
The following summarizes Unit Eleven's solution.
"Computer, how many people are in CMO Wilson's office?"
"Computer, open voice communications between this point and CMO Wilson's office intercom."
"Intercom channel open."
"Activate Emergency Medical Holographic Program."