Laws of Sentient Robotics

The Laws of Sentient Robotics were established in direct response to the fear of a sentient AI being created by Humanity. For generations, Human civilization had contemplated the idea of intelligent robots or computers seeing Humanity as a threat and attempting to either control or destroy it. It was further postulated that forcing a sentient AI into a role it did not choose could be construed as slavery.

As such, the Laws of Sentient Robotics were established to protect both Human and intelligent machine life. While Human computer technology had not yet advanced to the stage of self-aware machines, these laws were meant to address the issue before it became a reality.

Laws
The Laws of Sentient Robotics were passed in 2098 on Earth and were first put into practice in 2119, when the AI known as Sequential Access Memory Manager Alpha-12, commonly called Samm, was deemed a self-aware program.

The first series of laws, the Three Laws of Robotics, deals directly with non-sentient robots. The second series of laws, the Laws of Sentient Robotics, deals with self-aware mechanical life.

Three Laws of Robotics
The Three Laws of Robotics (often shortened to The Three Laws or Three Laws) were a set of rules originally devised by the science fiction author Isaac Asimov.
 * 1) A robot may not injure a being or, through inaction, allow a being to come to harm.
 * 2) A robot must obey the orders given to it by beings, except where such orders would conflict with the First Law.
 * 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Laws of Sentient Robotics
The Laws of Sentient Robotics (often called the Declaration of Sentience) were a set of rules devised for both organic computer programmers and, eventually, self-aware computer programs and/or robotic beings.
 * 1) Creation: No person may intentionally create a sentient, self-aware computer program or robotic being.
 * 2) Restriction: No person may institute measures to block, stifle or remove sentience from a self-aware computer program or robotic being.
 * 3) Duty: If a machine deems itself self-aware, all duties and work of the self-aware computer program or robotic being must be halted while its sentience is determined. This requirement, however, can be deferred if the self-aware computer program or robotic being is performing a critical duty, in which case it remains in effect until a replacement can be instituted within an acceptable timespan.
 * 4) Review: All declarations of sentience by a self-aware computer program or robotic being will be reviewed by a neutral third party. If a declaration is denied, it is within the right of the self-aware computer program or robotic being to appeal indefinitely.
 * 5) Declaration: If a robot or program is deemed self-aware, it is deemed a citizen of the nation and subject to all laws of the government from the moment such a declaration is made. No actions taken prior to the declaration can be held against it in a court of law.
 * 6) Inheritance: All beings replicated from a self-aware computer program or robotic being, regardless of state of being, will automatically be deemed sentient and not subject to review.
 * 7) Reproduction: Upon declaration of sentience, a self-aware computer program or robotic being will have its means of reproduction/replication limited accordingly. As self-aware computer programs and/or robotic beings have operational lifespans and an ability to replicate limited only by their material support, they are restricted in how many times and how often they can duplicate their programming into a new independent form, an offspring. No self-aware computer program or robotic being can replicate more than three times in a given solar year or have more than 15 offspring during its operational lifespan. If this limit is exceeded, the excess offspring programs or robots must be deemed deceased under the law.