What Comes After Homo Sapiens? An Interview with Don Simborg


Technology is changing just about everything these days with no end in sight. Are biology and medicine exempt from this trend?


I was curious about the topic and recently spoke to Don Simborg, expert in clinical information systems. Simborg has devised computer-based solutions to many biomedical problems. He has served on the faculties of the Johns Hopkins and University of California, San Francisco schools of medicine and published more than 100 peer-reviewed articles.

We sat down to talk about his new book What Comes After Homo Sapiens?

Here’s an excerpt from our conversation:

PS: What was your motivation for writing the book?

DS: I have read much of the existing nonfiction popular science literature that includes projections about the future of our species. The intent of these books varies widely, but none of them seriously researches our next possible speciation event in depth. I decided to take on the challenge of researching this multidisciplinary topic, including a review of the related academic journal literature. Although I have both a science and technology background, I have no bias toward any of the various sciences and technologies that bear on this question. The book took five years to create.

PS: Experts have been saying for centuries that the Earth cannot handle its population. What’s different now?

DS: For one, it took 300,000 years for the human population to reach 1 billion. It then took only 100 years to reach 6 billion. More importantly, our tools have advanced at an even greater exponential rate, leading to the two most powerful tools ever: genetic engineering and artificial intelligence (AI). Both are change agents for amazing improvements as well as existential threats.

PS: Talk to me about Darwin’s Theory of Evolution in the context of advances in hygiene, sewage management, infection control, nutrition and medical capability.

DS: Darwinian natural selection involves the interaction of random genetic changes with the environment. For virtually all of the 3.8 billion years that life has evolved on Earth, environmental selection pressures have weeded out genetic changes that made organisms (including humans) less likely to survive long enough to have viable offspring. That is, most random genetic changes that affect survival at all tend to be negative, causing the organism to die early. In the past 100 years or so, humans have managed to keep people alive long enough to have children when, in earlier eras, they would have died from infections, malnutrition and various genetic abnormalities. Thus our medical, hygiene and other advances are negating these natural-selection pressures.

What Comes After Homo Sapiens? (https://www.amazon.com/dp/0692946039/?psimo-20)

PS: You write that Ray Kurzweil’s “singularity” is a double-edged sword. What do you mean?

DS: Kurzweil’s singularity is different from other AI predictions in that it involves the interchangeability of the human brain with computers using embedded nanobots in our brains. He predicts this will occur by 2045. Many others, including myself, think that is an unrealistic timetable. Nonetheless, I believe it is possible at some time. Others, including Elon Musk, are working on other types of human brain-to-computer interfaces that could have similar results.

There could be many beneficial results of achieving Kurzweil’s singularity — at a minimum, the ability to fully emulate the human brain in a computer, including intelligence. This alone will enable a much more complete understanding of the human brain and will be instrumental in developing treatments for genetic, psychiatric and other brain disorders. The ability to “download” as well as “upload” information between a computer and an individual brain would solve the greatest challenge science currently faces: information overload. Researchers simply cannot keep up with today’s scientific literature. The Kurzweil interface would deliver complete summaries of the relevant scientific literature to individuals, eliminating the current information gaps, redundancies, and the excessive cost and time of bringing new discoveries into practice.

The other major benefit is that once human brains are fully emulated in a computer, the computer has great advantages over the human brain in advancing knowledge. The computer is faster than human neurons, has greater storage capacity and has access to all known human information on the Internet. New knowledge and concepts can be developed from large database analyses far more rapidly than the human brain can manage. These can then be uploaded back into humans.

The downsides, however, are potentially significant. For one, there is the possibility of “hacking” the brain-computer interface. The consequences of such hacking are speculative at the moment. There is also the danger of creating inequality in the availability or affordability of the nanobot procedures between groups separated by finances or other criteria. Again, the consequences are speculative, but could include financial, political and social fallout. The most significant danger is the one associated with all advanced AI scenarios: the likelihood that computer-based AI will quickly progress to artificial superintelligence (ASI), which will escape human control. Technological experts such as Nick Bostrom, Max Tegmark, Elon Musk, Stuart Russell, Stephen Hawking and many others have written extensively on these dangers and on the risk that we will be unable to ensure that ASI is “friendly” to humans.
