Are you in charge of your own future? Maybe not, a new book argues, but there's still an opportunity to change that.
In Heartificial Intelligence, author John C. Havens introduces a realm of algorithms and smart machines well beyond anything you might know about. How do we navigate a world in which software is intelligent enough to learn from our behaviors and manipulate us to click and consume? What happens when that software is the soul of a robot caring for the elderly? Figure this stuff out now or pay the price later when every bit of your life is a datapoint owned by tech giants, Havens suggests.
Thankfully, the book is geared toward presenting solutions that will help humankind control its destiny. Havens guides the way with sometimes-funny, sometimes-touching hypotheticals.
"Richard was a good-looking man, with hazel eyes like his mother's and a mop of yellowish hair that always reminded me of spun gold... God, how I loved my boy," Havens writes in chapter three. "Which is why I was so pissed he'd sent his cyberconscious android mindclone to see me versus coming to visit himself."
The Huffington Post spoke to Havens about his book, the future and why these topics are important for everyone.
Tell me a little bit about your approach in Heartificial Intelligence.
John Havens: I’m sure you’ve seen a lot in the media that polarizes the debate around AI. Either it’s going to fix the world, or it’s the Terminator and it’s going to kill us. So, my main goal of the book was to figure out how to approach the future as an individual.
In my case, I’m a dad and someone who loves tech and someone who understands that AI is here. Sentient beings and all that, that’s coming down the road, but the beginnings of AI -- algorithms, personalization stuff -- are already here. So, I wanted to create a book that could almost be a manual, so that people could work through these issues. I also present solutions. So many great people around the world are thinking about solutions beyond the polarizing debate, and so I thought, there’s a good book idea.
Who's this book for?
Anyone. I say that because there's a glorious opportunity for us as humans to have a kind of introspection in our own lives. We have a five- maybe 10-year window where, if we as individuals don’t say “This is what I believe; these are my values” and codify them and project that to the world as our subjective truth and identity, then algorithms and other organizations tracking us will make those decisions for us. They’ve already started.
If nothing else, what do you hope a reader of this book would ultimately walk away with?
I think the big thing for me is the word "values." There’s a lot of research showing that if you don’t live by your values, your wellbeing or happiness decreases. Obviously there are so many ways to say "these are my values" -- whether it’s religion, church, temple, whatever -- without using technology first.
A lot of fields in AI are doing things like having machines watch hours of YouTube videos to start painting a picture of what human values are, so they can reverse-engineer them into programming for things like companion robots, which are very much here already. Those robots are going to be watching us and our kids and, because of their programming, making decisions about our emotional lives and our values.
A friend of mine named Jarno Koponen wrote some great pieces in TechCrunch -- he calls this an algorithmic angel, and I love that term. That’s the real hope for the book: for people to understand how they can track their values to improve their wellbeing without the tech, and then understand why that is going to be so critical.
This interview has been edited and condensed for clarity.