When I was a 25-year-old Ph.D. candidate at the University of Toronto, I led a team in developing a medical biometric technology that can authenticate a person by their heartbeat with the accuracy of a fingerprint. Our discovery resulted in a consumer product called the Nymi, a wearable device that leverages a user's unique heartbeat as a secure biometric identifier, allowing them to do everything from making a payment to unlocking a password-protected mobile device.
I've since left that project and shifted back into research at Architech Labs, where I now lead a team building innovative future technologies that use deep learning. Deep learning is the "next generation" of machine learning: it trains layered neural networks to learn from data in a way loosely inspired by how humans learn. Yes, in the past few years we've arrived at the moment of breakthrough in artificial intelligence that had for decades eluded some of the most brilliant minds in computer science. It's thanks to their perseverance and refusal to give up on their research that we are now at a place that will forever change our relationship to technology. This is not too bold a statement.
But because deep learning is a new subfield of artificial intelligence -- and one that its pioneers are only starting to understand themselves as they go -- there are a number of legitimate concerns about what this means for the rest of us. For example, if I can invent a technology that can read someone's emotional state through nothing more than a camera and an algorithm, what does this mean for the future of privacy?
As it stands, there are already concerns about who can see our photographs or home address on social media, or record our behavioural analytics based on how we interact with a website. So imagine the legitimate fear of no longer being able to hide our real feelings and thoughts. Perhaps we don't even want to know our own minds.
These are technologies we are building now thanks to deep learning, and it's imperative that we not only address these concerns, but also consider as many potential outcomes as we can anticipate. More to the point, we need to take extreme precautions: apply controls, and focus on uses for this technology that benefit rather than exploit.
That's why one of the issues closest to my heart is privacy by design. Privacy by design is an international security approach that was spearheaded in Ontario in 1995 by then Information and Privacy Commissioner Ann Cavoukian, in partnership with the Dutch Data Protection Authority and the Netherlands Organisation for Applied Scientific Research. Under privacy by design, technology companies must account for human values when creating their systems and ensure they have engineered for maximum individual privacy at every step of their process. It's a costly and time-consuming measure, but it's one of the only measures standing in the way of a digital Wild West.
Even if this weren't an international standard, however, I would still invest major time and resources embedding privacy by design measures into everything we build at Architech Labs. It's imperative to me that people have the choice to opt out of our biometric and facial recognition programs and that we don't store sensitive user information that can be accessed by those with ill intentions.
But I also invite you to think about this: 20 years ago, many large corporations resisted the implementation of email in their internal communications, feeling it would be too distracting and disruptive. Twelve years ago, Facebook normalized the process of sharing personal photos on a social media platform where they could easily be captured and placed forever on a public interface. We knew these things and we resisted them, and many still resist, but email is now ubiquitous, and as of the first quarter of 2015, Facebook had 1.44 billion monthly active users sharing photos and personal information. Nothing shapes the acceptance of new behaviours faster than technology. Our children's generation will not even understand the concept of privacy the way previous generations have. They will read Orwell's 1984 as a curious anachronism of a past that no longer seems relevant.
There will also be those who abuse good technology for evil ends. That's human nature. Does that mean we shouldn't push forward? Should we refuse to innovate because something that could improve the lives of 99 per cent of the population might be exploited by the other one per cent? My feeling is that there are measures we can take to mitigate the risks. If this technology is going to be invented, may it be invented by companies that have an investment in ethical practices like privacy by design. Restrict the level at which machines can probe the mind. Direct their capabilities toward identifying the positives and minimizing the negatives. If you knew a computer could identify sentiments of loneliness or distress in elderly residents of a nursing home, and you could send a message to family to call, or dispatch a social worker to sit with them for an hour, would you not consider the enormous benefit to the patient and to the healthcare system as a result? This is the same emotion detection technology I described earlier, seen through a different filter.
The conversations around this topic have only begun. Part of the reason is that we don't know what the capabilities we're building now will look like a few decades down the road. By establishing an imperative around protecting individual privacy to the best of our ability and defining the parameters around what this even means, we can shape the direction in which artificial intelligence is headed. At the same time, we need to keep in mind that our idea of privacy as it is today will have little to do with the way we conceive of it in every subsequent decade to come.
I am fascinated by these conversations and invite you to weigh in with your thoughts as we continue to explore the exciting topics of deep learning, artificial intelligence, privacy, emotion detection, and women in tech.