Are Advancements in Artificial Intelligence Sowing the Seeds of Humanity’s Annihilation?

Artificial Intelligence (Pixabay Images)

In the past two decades, we have watched the United States military fight three wars, two in Iraq and one in Afghanistan, and posture itself as the most technologically advanced fighting force on Earth. During this period, we witnessed the deployment of many new weapons, most notably:

· Stealth Aircraft – from the F-117 Nighthawk (1981–2008), dubbed the “bat plane,” to the latest addition, the F-35 Lightning II

· Smart bombs – bombs guided precisely to targets via a laser or geographic coordinates

· The GBU-43/B Massive Ordnance Air Blast Bomb – a conventional bomb with an 8-ton warhead capable of delivering an 11-ton TNT-equivalent blast, a yield some analysts attribute to its nano-catalysts, as discussed in my recently published book, Nanoweapons: A Growing Threat to Humanity

· Computer Technology/Artificial Intelligence – the incorporation of computers, and artificial intelligence (AI), into almost every aspect of warfare by every branch of the US military

· Cyber Warfare – the United States, like other nations, employs professional hackers as “cyber soldiers,” treats cyberspace as a battlefield, and established a new cyber strategy in April 2015

The United States, like other nations, uses supercomputers to design advanced weapons, including fledgling autonomous and semi-autonomous weapons, a process termed “computer-aided design,” or CAD. In addition, these advanced weapons typically employ onboard computers that make them artificially intelligent. We term such a weapon a “smart weapon”; “smart” in this context means “artificially intelligent.”

The weapons the United States deploys today would have been the stuff of science fiction just a few decades ago. However, the relentless advance of computer technology, and of artificial intelligence, brought them to fruition. This raises a question: what drives this relentless advance?

Moore’s law describes the driving force behind computer technology and artificial intelligence. In 1975, Gordon E. Moore, the co-founder of Intel and Fairchild Semiconductor, observed that the number of transistors in a dense integrated circuit doubles approximately every two years. The semiconductor industry adopted Moore’s law to plan its product offerings, and it thus became a self-fulfilling prophecy, one that holds even today. In view of Moore’s law, Intel executive David House predicted that integrated-circuit performance would double every 18 months, the combined effect of greater transistor density and smaller, faster transistors. Since integrated circuits are the lifeblood of computers, this implies that computer power doubles roughly every eighteen months. And since computers are a pillar of artificial intelligence (AI), AI capabilities are also increasing exponentially.
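To make that arithmetic concrete, here is a minimal sketch in Python (my own illustration, not from the article) of what an 18-month doubling rule implies for relative computing performance over time:

# Illustrative only: project relative performance under David House's
# 18-month doubling rule cited above.
def projected_performance(years: float, baseline: float = 1.0,
                          doubling_period_years: float = 1.5) -> float:
    """Return relative performance after `years`, assuming it doubles
    every `doubling_period_years` (i.e., every 18 months)."""
    return baseline * 2 ** (years / doubling_period_years)

# Over 15 years that is ten doublings: a 1,024-fold increase.
print(projected_performance(15))  # -> 1024.0

The point of the sketch is simply that steady doubling compounds quickly: a few decades of this trend separates room-sized mainframes from the smart weapons described above.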

On the surface, this may appear wholly beneficial: advances in weapons increase our security, and advances in computing let us address complex problems. However, computer technology is approaching critical milestones. Most AI researchers expect computers to match human intelligence by approximately 2025. Those same researchers predict that computers will exceed the combined intelligence of all humans by 2050, a point researchers term the “singularity.”

What will singularity-level computers think about humanity? Wars, nuclear weapons capable of destroying the Earth, and the malicious release of computer viruses mar our history. Will singularity-level computers, alarmed by this record, seek to rid the Earth of humans? That is one possibility I discuss in my book, The Artificial Intelligence Revolution. By increasing our reliance on computers, in society and in warfare, we are increasing their capability to eliminate us.

This frames the issue: singularity-level computers may become adversarial and seek to annihilate humanity. However, being aware of this possibility allows us to guard against it. The most obvious path would be to build in safeguards, such as “hardwired” circuitry, in addition to directives in software.
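As a purely illustrative toy (my own sketch, not a design from the article or either book), a software “directive” of the kind mentioned above might amount to a veto layer that refuses any proposed action appearing on a forbidden list:

# Toy illustration of a software "directive" safeguard: a veto layer that
# rejects any forbidden action. The action names are hypothetical
# placeholders, not drawn from any real system.
FORBIDDEN_ACTIONS = {"disable_safeguards", "harm_humans"}

def approve(action: str) -> bool:
    """Approve a proposed action only if it is not explicitly forbidden."""
    return action not in FORBIDDEN_ACTIONS

for proposed in ("map_terrain", "harm_humans"):
    print(proposed, "->", "approved" if approve(proposed) else "vetoed")

A real safeguard would, of course, need to be far more robust than a lookup table, which is precisely why the article argues for hardwired protections alongside software.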

Given the deity-like intelligence of singularity-level computers, controlling them will be difficult. However, if we fail to do so, we put the survival of humanity at risk.
