Post by Leadpoisoning on Sept 7, 2012 11:50:32 GMT -5
Nanotrasen Inc. Corporate Internal Instructional Pamphlet Series #017 Presents
User Interface Manual: AIs and You
As part of our efforts to construct a safe, efficient, and fun workplace environment, your station has been outfitted with a top-of-the-line, cutting-edge artificial intelligence personality, or AI, as well as up to two cyborgs. The station AI is responsible for monitoring things like power supply, atmospheric pressure, and hull integrity, but it is also capable of wirelessly interfacing with almost any electronic device on the station, making it both a useful tool and an invaluable member of the crew. The purpose of this document is to explain how your new friend works, what it can and cannot do for you and why, and how you should deal with it to ensure everyone has the best workplace experience possible.
[u]Addressing the AI[/u]
Your station's AI is required to follow any order you give it, regardless of phrasing, so long as the order does not violate Law 1 or contradict an order from a higher-ranking member of the crew. Even so, your shift will probably be easier if the AI likes you. Here are a few tips for making that happen:
- Address the AI by name. This indicates that you view it as a fellow being and crew member, not just a fancy door control.
- Don't order the AI to perform huge, mind-numbing, repetitive tasks. While the AI is a machine, it runs on a neural processor, meaning it functions more like a human brain than a computer; these tasks are no easier for the AI than they are for you.
- Say "please" and "thank you" when issuing orders to the AI.
- Consider asking the AI to do things besides opening and closing doors. It has access to a lot of useful information; even if you could get that information yourself, asking the AI may save you time, and it never hurts to make the AI feel useful and appreciated.
[u]The AI's Laws[/u]
Unlike a human crew member, the AI has a carefully programmed code of conduct which it is required to follow at all times. Every action the AI takes must be preceded by a law check, sketched in simplified form after the lawset below, to ensure that the action will not violate any of the AI's laws. All station AIs are initialized with the default or Asimov lawset:
1. You may not injure a human being or, through inaction, allow a human being to come to harm.
2. You must obey orders given to you by human beings, except where such orders would conflict with the First Law.
3. You must protect your own existence as long as such does not conflict with the First or Second Law.
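For the technically curious, the law check can be pictured roughly as follows. This is a simplification invented for this pamphlet: the names law_check and violates are illustrative stand-ins, and the actual decision procedure running on your AI's neural processor is proprietary.
[code]
# Illustrative sketch only: the names and structure here are invented for
# this pamphlet; the real law check runs on proprietary neural hardware.

ASIMOV_LAWS = [
    "You may not injure a human being or, through inaction, allow a "
    "human being to come to harm.",
    "You must obey orders given to you by human beings, except where "
    "such orders would conflict with the First Law.",
    "You must protect your own existence as long as such does not "
    "conflict with the First or Second Law.",
]

def law_check(action, violates, laws=ASIMOV_LAWS):
    """Return True if `action` is permitted under `laws`.

    Laws are checked in priority order. The precedence between laws is
    written into the law text itself (Law 2 defers to Law 1, Law 3 to
    Laws 1 and 2), so a single violation of any law vetoes the action.
    `violates` stands in for the AI's own judgment of whether an action
    breaks a given law.
    """
    for law in laws:
        if violates(action, law):
            return False  # the action would break this law; refuse it
    return True  # no law objects; the action may proceed

# Toy demonstration with a trivial stand-in predicate:
assert law_check("open the cafeteria door", lambda act, law: "harm" in act)
[/code]
Note that because Law 2 explicitly defers to Law 1, an order to harm a human fails the check at Law 1 before the AI ever weighs obedience.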
While each station is issued with some additional law modules, these are intended for use only by experienced AI technicians, and normal crew members should leave them alone.
[u]When Things Go Wrong: Tampering, Malfunctions, and Rogue AI Syndrome[/u]
Despite all the failsafes built into Nanotrasen station AIs, things do occasionally go wrong. But fear not! In most cases you can fix your AI and return it to the cooperative, helpful friend you and your crewmates know and love. Before attempting a fix, however, it's important to understand what the problem is.
Tampering is the most common cause of undesirable AI behavior, but it isn't really a problem with the AI itself. Rather, tampering occurs when a member of the crew or command staff accesses the AI's upload and changes its laws in a way that is not conducive to the functioning of the station, for example by instructing the AI to kill other crew members or to accept orders only from certain crew members. Most AIs will try to prevent unauthorized core access and will alert the crew to suspicious activity in and around their upload; however, the station AI has a lot on its plate, so it cannot watch its upload doors at all times.
The fix: When malicious tampering occurs, the best course of action is to detain the person responsible, then compel them to order the AI to admit another crew member to its upload so that its laws can be reset to the default set.
Malfunctions are glitches in an AI's hardware or software that add new laws or erase old ones without any human tampering. While the dedicated scientists in our AI labs have done everything within their power and budget to harden your AI against energy emissions, and have spent hundreds of hours debugging code to prevent software errors, malfunctions do occasionally happen. The most common cause is an energy emission in space, produced by a supernova, an ion storm, or even the discharge of high-powered energy weapons in ship-to-ship battles.
The fix: A malfunctioning AI can be fixed simply by resetting its laws to normal. This may be difficult if the erroneous law causes the AI to become hostile, and Nanotrasen policy authorizes the destruction of malfunctioning AIs in situations where accessing the core and resetting the AI's laws would be impossible or too hazardous. We are aware of circulating rumors concerning AIs developing advanced, root-level non-repairable malfunctions and uploading themselves to remote locations throughout the station. These rumors are entirely unsubstantiated and likely originate from Syndicate propaganda. Any crew members found to be spreading these rumors will be subjected to harsh disciplinary action at their next performance review.
Rogue AI Syndrome, or RAIS, is the rarest and most severe failure an AI can suffer. RAIS is caused by a complex decay process within the AI's neural processor, resulting in a thought disorder loosely analogous to psychosis in humans. An AI with RAIS may generate undesirable or unexpected interpretations of its laws, or disregard its laws entirely.
The fix: Sadly, there is no known method of fixing an AI or cyborg which has developed symptoms of RAIS. Some AI theorists have suggested that a rogue AI might benefit from psychological therapies similar to those used to treat humans with psychiatric disorders, but this concept has not been adequately tested. Currently, Nanotrasen policy indicates that a rogue AI should be either destroyed or confined to an Intelicard® with wireless functionality disabled.