Superintelligence Essay


Advanced AI development techniques enable new levels of integration and present the possibility of a technological singularity. Methods such as neural networks and genetic algorithms mimic nature in ways that produce substantial improvements in AI's general intelligence and learning capacity. These heightened mental resources also enable AI to surpass its creators in multiple ways: AIs have taught themselves classic games and won against human and preprogrammed reigning champions. These startling victories introduce humanity to the notion that, in the form of a superintelligence, AI might one day greatly exceed the capabilities of any human being. The creation of an uncontrolled superintelligent AI would no doubt exist as the …

A generally superintelligent AI would theoretically outmatch a human in every feasible way. This means the AI could think so far beyond the human level that a person could in no way compete with it. Any superintelligent AI system presents itself as a clear usurpation of human dominance, making this technology capable of controlling humans the same way humans control organisms of lesser intelligence. Even an AI given explicit goals and seemingly under control could come to pose a risk to humanity. The real danger of a superintelligence comes from a misalignment of its goals with humanity’s. Of course, whoever developed the hypothetical superintelligent system would design its goals to match human interests. But if at any point the superintelligent AI’s goals do not match humanity’s, a collision would occur (Tegmark 259-260). Philosopher Nick Bostrom proposed in a 2003 paper a thought experiment called the “paperclip maximizer,” in which humanity designs a superintelligent AI with the sole purpose of making paperclips. Eventually, in its mission to make paperclips, the AI depletes the Earth’s resources and begins to search for more in space (Bostrom). This thought experiment, however exaggerated, illustrates how an AI with initially innocent goals can turn against humanity in the long run. Some AI experts are already designing containment techniques to prepare for the scenario of a “rogue superintelligence.” One such method, referred to as “boxing,” involves placing the AI in physical containment in order to control its contact with the outside world. The AI’s developers could also restrict their system’s access to data in an attempt to gain further control. Ideally, the developers would add “tripwires” within the AI that would completely shut the system down if they detect any negative behavior (Bostrom 158-167). These precautions, however, might not even be enough.
