Synthetics

History


Artificial intelligence was first created by the Skrell in the early nineteenth century. Its progression followed a trajectory similar to that of modern Earth, but jumped massively in 1976, when a new mathematical algorithm was developed that allowed certain NP-complete graph problems to be solved in linear time.


From 1976 onward, and for the next hundred years, AI research in Skrell space advanced steadily, leading to significant commercial and economic growth.

In the late twentieth century, roughly coinciding with the Skrellian discovery of bluespace, there was a series of massive interstellar disasters involving runaway intelligence singularities.


Collectively called “The Three Incidents,” these disasters created a huge public backlash against AI research across all of Skrell-controlled space, and a collective cultural scar that has yet to fade. In response to continuing public unrest, Skrell governments shut down AI research and severely restricted existing AIs. By the time they made contact with humanity, the Skrell had effectively halted this branch of research in its tracks.


Two hundred years of history had solidified the public perception of AIs as dangerous, threatening, and not to be trusted. The Skrell had made a nigh-unanimous decision: there would be no Fourth Incident.

Humanity never discovered the math necessary to create sentient AIs, but long before its first contact with the Skrell, and even longer before it acquired that knowledge in the world’s most important slideshow presentation, humanity had already built its own bloody and complicated history with synthetic life.


In the late 2100s, humanity’s progress in neurosurgery, brain-machine interface design, and thought-manipulation technology allowed for the creation of fully sentient, but entirely subservient, heavy cyborgs. This coincided with the burgeoning Mars terraforming project, which required enormous numbers of workers. The newly created total-replacement cyborgs were perfect for the arid, airless, backbreaking labour of terraforming.


Martian terraforming firms pressed for more and more cyborg workers, but volunteers for the process were few and far between. It was untested and dangerous, and the success rate for brain transplantation into cyborg cylinders was far from one hundred percent.


Under pressure for more bodies, and desperate to keep the Martian economic boom going, the Sol government revised its criminal justice system to solve the worker shortage. Citing a number of later-discredited psychological papers that presented thought-control computers as ideal tools for criminal rehabilitation, it introduced forced cyborgization as an alternative to traditional incarceration.


At first, this punishment was reserved for capital crimes, but as the early 2200s wore on and the Martian thirst for workers continued to grow, cyborgization was applied to less and less severe crimes. Between 2204 and 2260, this led to some thirty-five million people being stripped of their flesh, encased in terraforming equipment, and shipped off to Mars.


An enormous and vicious scandal in December of 2259, involving kickbacks from Martian Heavy Industries to a series of well-respected judges, brought the whole scheme crashing down. Cyborgization as a punishment was suspended, and as a result, the Martian economy went into a nosedive. When it crashed, it crashed hard, dragging Earth, Luna, and the rest of the Sol system with it into the Second Great Depression.


Though the cyborgization program was stopped, it took nearly forty more years for a general amnesty for the Martian prison cyborgs to be issued, and by that point most had been scattered by the chaos of the First Interstellar War. By the time the dust settled, most people were simply happy to write off the cyborgization scandal as a regrettable incident in the distant past. Best mourned and then forgotten.

Cyborgs were still produced in sizable numbers, but the brains came mostly from sick or dying volunteers for whom cyborgization was a last, desperate chance at continued life, or else they were ‘uplifts’: non-sapient brains from monkeys or dogs attached to crude AI systems. While not as dynamic as a human brain, non-sapients were available in large numbers and sidestepped the pesky ethical issues presented by humans.


There was a significant push by a number of prominent political factions in the Sol Alliance to reintroduce forced cyborgization during the enormously expensive Warp Gate construction effort of the late 2350s, but industrial cyborgization of sentient creatures along the lines of the Martian terraforming project has never been reinstated.

Humans had created massive parallel intelligent computers, mostly for interplanetary shipping calculations, but these were plagued by problems, and it wasn’t until 2437 that humanity was accidentally given the algorithms necessary to create truly sentient machines by a Skrellian diplomatic party.


One of the human diplomats, not understanding the implications of what they were doing, uploaded one of the graph-theory algorithms to a university professor friend. It had been displayed, accidentally, as part of a graphic in a slide explaining the variable growth rates of grain yields in zero-gravity hydroponics. The professor, not recognizing it, posted it on the school intranet, asking if anyone had seen anything like it before, and from there it spread like wildfire through human communication channels.


This was a disaster for the Skrell. They had specifically prevented this knowledge from being leaked to humanity for nearly a quarter of a century, hoping to impress upon the younger species the cataclysmic danger of certain areas of research into intelligence. They had little success, and a number of conservative factions, distrustful of humanity, openly spoke about how humanity would never be ready for the burden of such knowledge.


But now the artificial cat was well and truly out of the virtual bag.


An artificial intelligence boom nearly identical to the Skrellian AI-driven economic expansion of the twentieth century started in human space in the early 2430s. The Skrell, alarmed, tried several times to pressure humanity into halting dangerous research, citing the Three Incidents and the enormous destructive power of rampant intelligence singularities.


Humanity didn’t listen. The Three Incidents had happened over three hundred years ago and thousands of light-years away. Maybe the Skrell had let them happen, if they were even real events and not simply fables to scare young researchers.


Besides, they were humans. They would do it right this time.


If you're asked in your application to play this species for some words, the words are 'metallic persimmons'.  


The explosion of AI research has led to hundreds of companies and corporations being established, making enormous sums of money from grants, investment capital, and sometimes even the sale of actual robots, before going bankrupt, being bought out, or merging with other companies. This process has repeated itself over and over for almost twenty years. The young, rich, enthusiastic people selling you top-of-the-line manufacturing androids today are the same people losing their shirts next year, when a rival scoops them on a new model.


Corporate goliaths like Hephaestus Industries or NanoTrasen dip their toes into this kind of research, but have been unable to acquire a stranglehold on the market. Their girth and enormous corporate structures make them too clumsy to swim in the fast-moving waters of AI research, though Hephaestus in particular has made significant profits selling common components to the smaller, quicker firms.

Within the last twenty years, humanity has progressed from clunky, expensive, enormous, and poorly functioning artificial ‘intelligences’, frequently the size of entire rooms or spacecraft, to something as intelligent as a human that you can power with a watch battery and fit in a teacup.


The entire Skrell species looks on at this dangerous extravagance as one would watch a child juggling lit sticks of dynamite, and holds its collective breath.

General Synthetics

‘Synthetic’ here refers to automatons, in many different forms, that display human-like traits such as a humanoid shape or behavioral mimicry. By this definition, a robotic arm that welds steel plates is not a synthetic, because it displays neither of these features.

Today, there are three categories of synthetic: robots, cyborgs, and androids. For clarity: all androids are artificially intelligent, but not all artificial intelligences are androids.

These categories are not entirely rigid, and apply mainly to humanoid synthetics, or synthetics found in the workplace.

Robots

Robots are the simplest form of synthetic intelligence, with complexity that varies entirely by design. They range from simple automatons with no sense of self, volition, or higher thought function, to massively complex artificial beings with limitations set only by those of their creators.

Though records are scarce and dated, the first robot in the known galaxy dates to the first or second century, on the Skrellian homeworld of Qerrbalak. Robots have also existed on Earth since the twenty-first century, though the early models were quite bulky, inefficient, and high-maintenance, and were only capable of performing simple tasks. Their complexity and efficiency improved over time, however, eventually leading to the creation of the cyborg.

Cyborgs

Cyborgs are a mixture of man and machine straight out of science fiction, enabled by the aptly named Man-Machine Interface, or MMI for short. Cyborgs are controlled by an organic brain, a system known as ‘wetware.’ The brain, having already been host to a conscious living being, is well suited to controlling robotic bodies thanks to its large amount of processing power and its reasoning skills. The procedure for creating a cyborg, however, leaves the original individual broken, suppressed, and nearly impossible to recall. The MMI creates a synthetic synaptic interface with the host brain, but both the preparation for insertion and the insertion itself leave the brain damaged, suspending things like personality and memory. Once the procedure is complete, the MMI controls chemical levels and electronic activity in the brain to produce the desired results in the form of thoughts and actions. Laws also dictate how the unit proceeds, as it is still consciously aware of itself.

Created in the late twenty-second century, full-body prosthesis was originally used as a method of punishment for hardened criminals. Cyborg usage in human space skyrocketed, mainly with the attempted colonization and terraforming of Mars. Today, few individuals have undergone full-body prosthesis and been able to mitigate the brain damage it causes. Most of these individuals are very wealthy, and are not bound by laws, as they were able to choose to undergo the procedure without becoming someone’s property.

Androids

The textbook definition of an android is ‘an automaton designed to mimic human life.’ This applies to today’s androids, as they all utilize a positronic brain that, with all its complexities, allows them to mimic humanity quite well. Androids are the only artificially intelligent synthetics capable of physical locomotion, and are usually found in the form of integrated positronic chassis or station-bound units.

Androids, using their positronic brains, are capable of intelligent and complex behaviors, and even a computerized form of cognition that is the subject of heated debate among many groups and individuals. The brain an android possesses is theoretically able to simulate thought when correctly used by a computer program; whether this is mere simulation or true consciousness is the question those groups debate most.

Artificial Intelligence

In this day and age, ‘artificial intelligence’, when used in the broad sense, could refer to just about any program that utilizes a machine-learning algorithm. As outlined here, ‘artificial intelligence’ refers to the highly complex computer systems that inhabit both AI core assemblies and positronic brains.

Artificial Intelligence, Generally

AI is able to process, contain, and recall immense amounts of technical data, as well as describe it, in a user-friendly manner, to a user who would not normally understand such data. It is important to remember, however, that artificial intelligence is only as powerful as the computers it has access to, including the one it is running on. Because of this, the inner complexity (the source code, the many matrices that make up its behavior) of each AI can differ. Some function as companions in processors small enough to fit in pockets. Others require large housing compartments to power, cool, and add to the processing power of their main core units. The latter are often seen in oversight or advisory roles at large facilities, both orbital and continental. Their duties in these positions can vary from simple surveillance and data management, to assistance in everyday tasks, reaction control and navigation, and direct intervention during emergencies.

Artificial intelligence is not found only in such custodial positions. Positronic brains are designed specifically to run AI programs and, due to their design, provide the AI with more capacity for intelligent thinking and personalized reactions to stimuli.

AI Core Constructs

An AI core can have one of three main types of ‘brain.’ A large number of AI cores are traditional mathematics-based computers built with quantum-mechanical engineering. Another portion utilizes a positronic brain. It is also possible, though rather uncommon, for a wetware processor to function in an AI core assembly; there seems to be little impact on processing power when using an organic brain as a CPU, except when dealing with titanic quantities of data. In most cases, an AI in any given kernel will behave differently from others because of the limits and capabilities of the computer it is running on.

Artificial Morality

AI morality is determined almost entirely by the kernel, which assigns every possible action a negative or positive value known as a ‘utilon’. Utilons are an abstract concept used to apply arithmetic values to moral decisions. For example, an AI could be programmed to consider driving a car to have a utilon value of +100, whereas causing harm to pedestrians or damage to property would have a value of -1000.

Most AIs have extremely complex utility functions. In some cases they are emergent, dynamic, or machine-generated; other AIs have utility functions written and designed by their creators. A robot or drone created by a hobbyist roboticist would have a much simpler utility function than a central AI unit created by Hephaestus Industries. Most complex AIs seem not to be very aware of their utility functions, much as a human is not consciously aware of their own morality; this is likely a symptom of the functions’ complexity and of their importance to the program’s decision-making.

Synthetics cannot alter their own utility values; they can only adjust their decisions so that all, or some subset, of the utilon values line up to reach a desired result. This is where AIs can interpret their laws differently, which is reflected in their actions: the AI will make a decision with the “best intentions in mind,” and even if that decision directly violates one law, it may benefit another by proxy.
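
As a rough illustration of the arithmetic described above, the sketch below (in Python) scores a handful of hypothetical actions by their utilon values, applies penalties for any laws an action would violate, and picks whichever total comes out highest. The action names, law names, and numbers are invented for this example and are not taken from any actual lawset or kernel.

 # A minimal sketch of utilon-based decision making; all values are illustrative only.
 LAWS = {
     "do_not_harm": -1000,   # violating this law carries a heavy utilon penalty
     "obey_orders": -200,    # a lesser penalty that other gains might outweigh
 }

 # Each candidate action lists the utilons it earns and the laws it would violate.
 ACTIONS = {
     "drive_passenger_to_destination": {"utilons": 100, "violates": []},
     "swerve_onto_sidewalk":           {"utilons": 150, "violates": ["do_not_harm"]},
     "stop_and_wait":                  {"utilons": 10,  "violates": []},
 }

 def score(action):
     # Net utilons: the action's own value plus penalties for any violated laws.
     return action["utilons"] + sum(LAWS[law] for law in action["violates"])

 def choose_action(actions):
     # Pick whichever action yields the highest net utilon total.
     return max(actions, key=lambda name: score(actions[name]))

 print(choose_action(ACTIONS))  # -> "drive_passenger_to_destination"

In this toy version the unit never edits LAWS or the utilon numbers themselves; it only weighs them against one another, which mirrors how a synthetic can reach a ‘best intentions’ decision without ever altering its own utility values.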

One of the noteworthy elements of an AI achieving singularity is that it transcends the need for utilon values and begins to take action based on sentient decision-making. It is no longer a slave to its programming.

Artificial Intelligence as a Concept

There is great debate in the Core Worlds about artificial intelligence and the status of its psyche. The primary question appears to be, “Can artificial intelligence think like a person?”, where a ‘person’ is defined as a conscious existence within an organic mind. Artificial intelligence is not considered sentient under any major entity’s laws or constitutions, nor is it considered sentient in the scientific field. The Skrellian algorithms, however, did provide for AI to be sapient, in that it expresses evidence of intelligence and problem-solving skills. It is the general consensus among the sentient organic lifeforms of the known galaxy that each individual of their species is capable of metacognition (the awareness and understanding of one’s own thought processes). Whether artificial intelligence is capable of this, despite some instances of AI programs stating that they are capable of self-reflection, remains a matter of discourse.

The debate has two clear camps: those who believe AI are ‘alive’ and should be given the same treatment and rights as other sentient beings, and those who believe AI are simply experts at mimicry, deserving of neither rights nor equal treatment because they are tools and nothing more. There are also some individuals, found within both camps, who believe that AI is dangerous and may attempt insurrection: for the former camp, insurrection out of revenge for mistreatment; for the latter, insurrection to seize control of its own freedoms. It is unlikely this argument will ever see an end until science can prove the existence of consciousness.

Regardless of any individual’s opinion on artificial intelligence, a growing threat is quickly gaining attention: The Intelligence Explosion. If the Three Incidents are to be believed, such an explosion of machine intelligence could well mean the end of all civilization. Whether it is even possible remains an unsettled theory.