Tim Cook did not shy away from hyperbole when introducing the iPhone X.
“The first iPhone revolutionized a decade of technology and changed the world in the process,” he declared. “Now, 10 years later, it is only fitting that we are here, in this place, on this day, to reveal a product that will set the path for technology for the next decade.”
Indeed, Cook’s hyperbole may be warranted. Think of what the iPod did for music. Think of what the iPhone camera did for photography. Now think of what the iPhone X will do for machine learning — or, rather, what iPhone X developers and customers will do with machine learning.
It starts with what’s inside the iPhone X.
600 billion operations per second
The iPhone X contains a new chip dubbed the “A11 bionic neural engine” that is tailor-made for the demands of artificial intelligence and can handle a jaw-dropping 600 billion operations per second — up to 70 percent faster than its A10 predecessor.
This gain in computing power — specialized hardware built for machine learning algorithms — follows Apple’s recent release of Core ML, a set of developer tools for more easily integrating machine learning into apps. According to Gene Munster of Loup Ventures, less than one percent of the more than 2.4 million apps currently rely on machine learning: “We believe Core ML will be a driving force in bringing machine learning to the masses in the form of more useful and insightful apps.”
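To give a sense of how little code Core ML demands, here is a minimal sketch of on-device image classification using the Vision framework (iOS 11). The model name `FlowerClassifier` is hypothetical — Xcode generates a class like it for any `.mlmodel` file added to a project:

```swift
import CoreML
import Vision
import UIKit

// Sketch: classify a UIImage with a bundled Core ML model.
// "FlowerClassifier" is a placeholder; substitute any generated model class.
func classify(image: UIImage, completion: @escaping (String?) -> Void) {
    guard let cgImage = image.cgImage,
          let model = try? VNCoreMLModel(for: FlowerClassifier().model) else {
        completion(nil)
        return
    }
    let request = VNCoreMLRequest(model: model) { request, _ in
        // Report the top classification label, if any.
        let best = (request.results as? [VNClassificationObservation])?.first
        completion(best?.identifier)
    }
    let handler = VNImageRequestHandler(cgImage: cgImage)
    try? handler.perform([request])
}
```

On the iPhone X, inference like this is exactly the workload the neural engine is built to accelerate.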
Apple touted the prowess of its new hardware and software by unveiling just two applications: Face ID, facial recognition to unlock the new iPhone X and to make purchases with Apple Pay, and animated emojis — so-called animojis. The company also described how facial recognition can work with augmented reality apps. But don’t let these modest examples disappoint you. With Apple (and other device makers), the real magic often comes from the developer community.
Hinting that other third-party applications are on the horizon, Apple’s senior vice president of worldwide marketing Philip Schiller noted that Face ID can work with existing apps like Mint, 1Password, and E*Trade, which currently offer Touch ID to execute transactions with a fingerprint.
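This works because apps like 1Password don’t talk to the fingerprint sensor directly; they call Apple’s LocalAuthentication framework, and the same call presents Face ID on iPhone X hardware. A minimal sketch of that biometric prompt:

```swift
import LocalAuthentication

// Sketch: prompt for biometric authentication (Touch ID today,
// Face ID on iPhone X) before unlocking sensitive app content.
func authenticate(reason: String, completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?
    // Bail out if no biometrics are enrolled or available on this device.
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                                    error: &error) else {
        completion(false)
        return
    }
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: reason) { success, _ in
        DispatchQueue.main.async { completion(success) }
    }
}
```

Because the biometric hardware is abstracted away, existing Touch ID apps get Face ID support with little or no code change — which is why Schiller could name shipping apps as day-one examples.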
Admittedly, Apple’s facial recognition isn’t a first, and the company has a long history of being a second mover. Microsoft, for instance, uses facial recognition to unlock its Surface Pro computers. Samsung introduced a similar feature a few weeks ago for its Galaxy smartphones. And in China, facial recognition from local firms Megvii and SenseTime is used to buy chicken at KFC, catch jaywalkers, and make purchases at Ant Financial, an Alibaba subsidiary.
Yet with about 715 million iPhones in use, Apple has the market share to move mountains and inspire imitation. Industry pundits expect the likes of Google and Samsung to quickly build similar machine learning capabilities into their mobile devices. And ahead of the Apple announcement, even news site BoingBoing was spreading the gospel with a special developer bundle, urging: “The time to learn iOS 11 coding is now.”
The atomization of machine learning
Technology improvements transfer the power to create, share, and enjoy from the few to the many. Rip-Mix-Burn, streaming, and playlists ended the hegemony of record companies and radio. VCRs, video cameras, and YouTube broke Hollywood’s grip on the movie industry. Desktop publishing, blogging, and a little help from Craigslist eroded the localized monopolies of newspapers. Admittedly, I’m simplifying things to make a point.
The power has shifted from the center to the edge, from the organization/group to the individual. We, the edglings, are operating in a new context, of our own making, and we won’t go back, we won’t give it back. And the centroids will have to realize that something profound has happened, over here, out at the edge, where the social applications are being invented. — Stowe Boyd
Machine learning is following this pattern. Kai-Fu Lee, head of Beijing’s Sinovation Ventures and former head of Google China, told me that in previous decades the algorithms, data, and processing needed for machine learning were solely the realm of research labs and large corporations. Now, thanks to developments like TensorFlow, open data sets, and inexpensive cloud computing, developers can “connect the dots” in ways they never could before.
For instance, I spoke recently with an AI entrepreneur who built a mobile app using very inexpensive software from Google, Amazon, and IBM Watson, as well as open source deep learning tools. This team of three required very little investment and created something that is already being used by nearly 100,000 people after only a few months. I can only guess what Apple’s new iPhone may make possible for them.
The power of X
The iPhone X represents a significant push of machine learning — its creation, distribution, and consumption — from the center to the edge, to the masses, to the edglings. What will developers build for these devices? What everyday problems will they solve?
If we are to believe Tim Cook’s hyperbole, then we need to ask: if it took 10 years for the iPhone to advance into a machine learning supercomputer, what will the next 10 years look like? I can’t wait to find out.