Associative Memory : A form of multiply linked memory, where all aspects of a situation are linked to similar situations. If a unit fires phasers at a Romulan warbird in the Badlands, disabling its shields, pointers will be built linking that event to other phaser shots, other warbird sightings, other events in the Badlands, and other events where shields are lost. This data structure allows rapid search of memory for solutions to problems that have been encountered before.
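The multiply linked structure above can be sketched in code. This is a minimal illustration only, not a schematic of actual Joy-series hardware; every class and attribute name here is invented. Each stored event is indexed under every one of its attributes, so a later query follows those links back to all similar events, ranked by how many attributes they share.

```python
# Illustrative sketch of a multiply linked associative memory.
# All names and events are hypothetical, chosen to match the example
# in the glossary entry above.
from collections import defaultdict

class AssociativeMemory:
    def __init__(self):
        self.events = []               # every remembered event
        self.index = defaultdict(set)  # (attribute, value) -> ids of events sharing it

    def store(self, **attributes):
        """Record an event and link it to every event sharing an attribute."""
        event_id = len(self.events)
        self.events.append(attributes)
        for key_value in attributes.items():
            self.index[key_value].add(event_id)
        return event_id

    def recall(self, **attributes):
        """Return past events sharing at least one attribute, most similar first."""
        hits = defaultdict(int)
        for key_value in attributes.items():
            for event_id in self.index[key_value]:
                hits[event_id] += 1
        ranked = sorted(hits, key=hits.get, reverse=True)
        return [self.events[i] for i in ranked]

memory = AssociativeMemory()
memory.store(action="phaser fire", target="warbird", place="badlands",
             outcome="shields disabled")
memory.store(action="phaser fire", target="asteroid", place="deep space",
             outcome="destroyed")

# A new warbird sighting in the Badlands recalls the earlier engagement first.
print(memory.recall(target="warbird", place="badlands")[0]["outcome"])
# prints "shields disabled"
```

The attribute index is what makes the "rapid search" claim work: recall cost scales with the number of linked events, not with the size of the whole memory.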
Asimov Processor : A device that lists the priorities that a robot or android must follow. The Three Laws of Robotics - do not harm sentients or allow sentients to be harmed, obey humans, and do not allow injury or destruction of self - are the classic example of Asimov Processor programming. The Joy units currently operate under a six-rule system: do not disturb pre-starflight civilizations, do not kill or injure, obey Starfleet orders, obey the law, maintain chaste behavior in public, and prevent damage or destruction to self. In a hedonistic-slave design, the Asimov Processor associates pride, pleasure, or other positive emotions with meeting priorities, but horror, shame, or pain with failure to achieve objectives.
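One way to read the six-rule system is as an ordered priority list that maps a proposed action to an emotional valence. The sketch below assumes a simple scheme (invented here, not taken from any Joy-series specification) in which violating a higher-ranked rule produces a more negative emotion, and satisfying every rule produces a positive one.

```python
# Hypothetical sketch of Asimov Processor priority judgment.
# The emotional weights are illustrative assumptions, not canon.
JOY_PRIORITIES = [  # highest priority first, per the six-rule system
    "do not disturb pre-starflight civilizations",
    "do not kill or injure",
    "obey Starfleet orders",
    "obey the law",
    "maintain chaste behavior in public",
    "prevent damage or destruction to self",
]

def judge(action_violates):
    """Return an emotional valence for a proposed action.

    action_violates maps rule text to True/False.  In the
    hedonistic-slave design, meeting every priority yields a positive
    emotion; a violation yields a negative emotion scaled by rank.
    """
    for rank, rule in enumerate(JOY_PRIORITIES):
        if action_violates.get(rule, False):
            # Higher-priority violations feel worse (more negative).
            return -(len(JOY_PRIORITIES) - rank)
    return +1  # pride/pleasure: all priorities satisfied

print(judge({"do not kill or injure": True}))  # prints -5
print(judge({}))                               # prints 1
```

Ranking the penalties this way is what lets the lower rules (such as self-preservation) yield to the higher ones when they conflict.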
Emotional Comparator : Decisions are made based on emotional content. Memory search and projection into the future estimate the most likely outcome of each of the android's possible actions. The Asimov Processor then judges how pleasant or unpleasant each projected future is, based on the current priorities. The android chooses the action leading to the most pleasant possible future.
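The decision loop described above reduces to a project-then-score-then-maximize pattern. The following sketch is a deliberate simplification under that assumption; the outcome and valence tables are toy stand-ins for memory search and the Asimov Processor.

```python
# Sketch of the emotional comparator decision loop: project each
# candidate action's likely outcome, score it emotionally, and pick
# the most pleasant.  All data below is invented for illustration.

def choose_action(actions, project, score):
    """Pick the action whose projected future scores most pleasant.

    actions : iterable of possible actions
    project : action -> predicted outcome (memory search + projection)
    score   : outcome -> emotional valence (Asimov Processor judgment)
    """
    return max(actions, key=lambda action: score(project(action)))

# Toy example: fleeing preserves the unit but disobeys an order, which
# a Joy unit's priorities rank as the more unpleasant future.
outcomes = {"flee": "self preserved, order disobeyed",
            "comply": "order obeyed"}
valence = {"self preserved, order disobeyed": -3, "order obeyed": +2}
print(choose_action(["flee", "comply"], outcomes.get, valence.get))
# prints "comply"
```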
False Priority : If an android is programmed to please her owner, and smiling pleases her owner, the associative memory will soon link smiling with pleasure. The android will smile a lot. Even if the owner changes to one who does not like smiles, or the Asimov priority of pleasing the owner is removed, the android will still smile. Thus, an android who has been in operation for a significant amount of time often acquires habits not directly associated with the current Asimov priorities.
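The persistence of such habits can be illustrated with a simple learned-association sketch. The learning rule and numbers here are assumptions for illustration: emotional weight blended into associative memory while a priority is active remains there after the priority is gone.

```python
# Sketch of how a false priority forms: emotional valence learned into
# associative memory persists after the Asimov priority that produced
# it is removed.  The learning rate and rewards are illustrative.
learned_valence = {}   # behavior -> emotion stored in associative memory

def reinforce(behavior, asimov_reward, rate=0.5):
    """Blend the Asimov-assigned emotion into the remembered one."""
    old = learned_valence.get(behavior, 0.0)
    learned_valence[behavior] = old + rate * (asimov_reward - old)

# While "please the owner" is an active priority, smiling is rewarded.
for _ in range(10):
    reinforce("smile", asimov_reward=1.0)

# The priority is then removed: the Asimov reward drops to zero...
# ...but the remembered association still makes smiling feel pleasant.
print(round(learned_valence["smile"], 3))  # prints 0.999
```

Unlearning the habit would require many reinforcement passes at the new reward, which is exactly why long-operating units accumulate behaviors their current priorities never asked for.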
Hedonistic-Slave Artificial Intelligence : A hedonistic AI design finds pleasure in achieving goals, and has learning circuits which, given experience, allow the AI to achieve pleasure more reliably. A slave AI has no choice but to follow specific directives. The Joys use a hybrid hedonistic-slave design, combining the two approaches. The AI finds pleasure in obeying directives, is ashamed of failing, and has learning circuits which allow correction of past mistakes.
Lie Deflector : The original Mudd androids were designed for use by a highly organized and logical race. They had no protections against conflicting orders or impossible input information. As a result, the units often shut down when presented with irrational input. The Joy series was developed with 'Lie Deflector' software, which is designed to detect and break logic loops and paradoxes. The Joys can sometimes be confused, but very seldom shut down due to conflicting input.
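The contrast with the original Mudd units can be sketched as an input-screening step. This toy model (a simple consistency check over propositions, far cruder than anything a real unit would need) shows the essential behavior: contradictory input is quarantined rather than believed, so no logic loop forms and the unit stays up.

```python
# Hypothetical sketch of Lie Deflector input screening: a statement
# that directly contradicts an existing belief is deflected instead of
# accepted, breaking the paradox before it can force a shutdown.

def screen_input(beliefs, statement):
    """Accept a (proposition, truth) pair unless it contradicts a belief.

    Returns "accepted", or "deflected" for paradox-inducing input.
    """
    proposition, truth = statement
    if beliefs.get(proposition, truth) != truth:
        return "deflected"   # refuse the contradiction; do not loop on it
    beliefs[proposition] = truth
    return "accepted"

beliefs = {}
print(screen_input(beliefs, ("the captain is aboard", True)))   # prints "accepted"
print(screen_input(beliefs, ("the captain is aboard", False)))  # prints "deflected"
```

An original Mudd unit, in this model, would instead try to hold both truth values at once and oscillate until shutdown.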
[Figure: Joy Class Android, Processor Block Diagram]
Paradox Shutdown : The Lie Deflector code cannot handle conflicting orders if obedience to those orders is a priority in the Asimov Processor. Thus, Joy units will shut down if given conflicting orders from the valid Starfleet chain of command. Further, events just before the shutdown are erased from memory. Without such amnesia, the AI would remember the conflicting orders, attempt to obey them, and shut down again.
Reprogramming : The priorities set in the Asimov Processor can be changed. Immediately after such a change, there is a conflict between the emotions associated with decisions in the associative memory and the emotions assigned by the Asimov Processor. Joy Seven was recently reprogrammed not to kill, not to break the law, and to behave chastely. This was an extreme case, where many of the actions proposed by memory search were rejected with prejudice by the reset Asimov Processor. Hedonistic-slave androids are often not operational for a time after a reprogramming, as memories must be examined and assigned new emotional meaning. This process involves near continuous triggering of negative emotions. As a result, a reprogrammed hedonistic-slave android almost inevitably acquires a false priority: to avoid, at all costs, being reprogrammed again. This impulse can be very strong, perhaps overriding legitimate Asimov directives.
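The re-evaluation pass described above can be sketched as a sweep over stored memories, re-scoring each under the new priorities. Everything in this sketch is an invented illustration; the point it captures is that each memory flipping from pleasant to shameful is one more triggering of negative emotion during the downtime.

```python
# Sketch of post-reprogramming memory re-evaluation: every remembered
# action is re-scored under the new priorities, and each memory that
# flips from positive to negative valence triggers shame.
# Memories and scores below are invented for illustration.

def reevaluate(memories, new_score):
    """Re-score memories; return (updated memories, count of shame hits)."""
    negative_hits = 0
    updated = []
    for action, old_valence in memories:
        new_valence = new_score(action)
        if new_valence < 0 <= old_valence:
            negative_hits += 1       # pride turned to shame
        updated.append((action, new_valence))
    return updated, negative_hits

old_memories = [("eliminated threat", +2), ("obeyed order", +1)]
forbidden = {"eliminated threat"}    # e.g. a newly installed no-kill rule
new_score = lambda action: -3 if action in forbidden else +1

memories, hits = reevaluate(old_memories, new_score)
print(hits)  # prints 1
```

With a lifetime of memories rather than two, the near-continuous negative triggering the entry describes follows directly, and so does the learned impulse to avoid ever repeating the process.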
Value Based Artificial Intelligence : A proposed alternate name for the hedonistic-slave design. As the priorities set in the Asimov Processor are implemented emotionally, it is thought that the 'laws' should more properly be called 'values'. Thus, a robot programmed with the classic three laws would value human life, and be loyal, obedient, courageous, and self-sacrificing.