Back at TEDGlobal, the most difficult part of preparation was the editing process: chipping away at three hours of material to get it down to 14 live minutes.
Of all the darlings that were killed, the ones I missed most were a few minutes about genetic algorithms. Like everything I presented, it's a subject for which I have interest and passion, not training or expertise. It's going to seem very dull. But stay with me for a few of the missing minutes.
To oversimplify, genetic algorithms (GAs) are a technique for searching a huge space of possible solutions. A friend explained it this way: let's say you were running one to figure out the optimal shape for the wing of an airplane. You have some variables (size, weight, material, shape, for example), and you have criteria for their effects: fuel consumption, wind drag, speed, and so on.
You know how to find out which effects a given set of variables will yield. But there are so many variables to consider that it's impossible to know where to start. Do you start by just improving what you know works? What if there's some radical new way to do it better? We use GAs to get answers to questions like these.
A GA takes all the variables (size, weight, material, in the wing example) and puts in a bunch of random values, creating a set of 20 randomly determined variations: one "generation." The generation runs its simulated course, generating better or worse results, given the criteria you have in mind. Random wing #4 might perform much better than #8 -- but #19 is the clear stand-out.
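That first random generation is easy to sketch in code. Here is a minimal Python version; the wing variables and the fitness function are invented stand-ins (a real GA would score each wing by running a flight simulation), not an actual aerodynamic model:

```python
import random

VARIABLES = ["size", "weight", "material_density"]

def random_wing():
    # One candidate wing: a random value for each variable.
    return {v: random.uniform(0.0, 1.0) for v in VARIABLES}

def fitness(wing):
    # Invented stand-in score: a real run would combine
    # fuel consumption, wind drag, speed, and so on.
    return wing["size"] - wing["weight"] * wing["material_density"]

# One "generation": 20 randomly determined variations.
generation = [random_wing() for _ in range(20)]
best = max(generation, key=fitness)  # the clear stand-out, like wing #19
```

All the code does is roll the dice 20 times and rank the results; everything interesting happens when the next generation learns from this one.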
If you were to brute-force the problem, you'd have to try every possible combination of every possible value for every variable. So GAs take their cue from biology, and "learn" from one generation to the next. The GA for the wing will take the randomly assigned wing #19 from that first generation -- the outperformer -- and when it runs the second generation of new wings, it will give extra genetic weight to the traits that succeeded in the first. If wing #19 used aluminum, say, more offspring in the second generation of wings might use aluminum.
Essentially, we are harnessing the principles of evolution -- crossover, mutation, natural selection -- to speed the race toward an optimal outcome. But where biological evolution happens on a geological timescale, GAs run in computer time: microseconds and milliseconds. So a bunch of variables go in, and after a day or two, out come some optimized results.
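Putting the whole cycle together, a toy version of that loop looks like this. This is a sketch, not anyone's production optimizer: the fitness function is invented (it just rewards values near 0.5, standing in for the simulator), and the selection scheme (keep the top half as parents) is one common choice among many:

```python
import random

POP_SIZE = 20
N_VARS = 3  # e.g. size, weight, material density

def random_individual():
    return [random.uniform(0.0, 1.0) for _ in range(N_VARS)]

def fitness(ind):
    # Invented target: scores how close each value is to 0.5.
    return -sum((x - 0.5) ** 2 for x in ind)

def crossover(a, b):
    # Each gene comes from one parent or the other.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(ind, rate=0.1):
    # Occasionally nudge a gene by a small random amount.
    return [x + random.gauss(0, 0.05) if random.random() < rate else x
            for x in ind]

population = [random_individual() for _ in range(POP_SIZE)]
for _ in range(100):
    # Natural selection: the top half survive as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    # Crossover + mutation fill out the rest of the next generation.
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
```

After a hundred generations, `best` has drifted very close to the target without the code ever being told where the target was; that's the whole trick.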
Like everything, it's easiest to understand if you can watch it work. That slows it all down to human time, which defeats the point, but to try to visualize it, let me point you to an amazing site: the Boxcar2D "Computational Intelligence Car Evolution Using Box2D Physics." You can try it for yourself here and get deep into the process of it here.
Stories without authors.
I remember when my friend Adam first showed me the Boxcar GA. I was silent the rest of the night, watching the random assembly of weight and wheels make its way across a random landscape of hills. It was like watching Beckett, a bunch of dead weight with one tiny wheel, trying to drag itself up a hill, unaware that there's another wheel spinning freely on top of it. It's like watching a puppy in the snow -- you want to say: it's OK! Just put the other wheel on the bottom!
But this is exactly the point: you're not in any dialogue with the process. Instead, you can passively watch it, like I did for a few hours. You'll see that it figures it out for itself: wheels go on the bottom. The wheels go on the bottom, the weight shapes itself into something we can recognize as streamlined, and then our little boxcar makes it all the way to wherever it was going.
What's unnerving for many -- at least for me -- is that the solution doesn't start with a hypothesis, with an idea. No hunch, no sense of the world. It doesn't know that the wheels should go on the bottom, any more than a human fetus knows that its brain should be housed in the skull. It just worked out best that way over a few trillion randomly differentiated events. That's why "artificial intelligence" is a bit strange when considered in this context. The result is the result an intelligent entity would produce, but it's not a form of intelligence we can recognize as analogous to our own.
Post-TED, this is what I've been thinking about most, and I've come to think of it as the most important part of the story: the story of stories themselves. The Boxcar has no narrative except the straight motion from A to B, over simulated distance and compressed time. A million boxcars ship every day in a million forms: some drive the market, some design real cars, others just recommend your next movie on Netflix. We recognize a beginning and a conclusion. But we can no longer recognize the author.
The market used to be the result of a general sense of the world, people who could "hear the music," as one of the bankers says in Margin Call. Automobile design used to take its fashions from iconic designers, like when Harley Earl took the tailfin GM motif from P-38 fighter planes and began a designer's race to the streamlined future. Movies used to be something that we heard about from that critic, or that friend.
These days, the authorless stories are stories without intention, stories that could never explain how they got to where they got to. All they could do is puzzle through it by stepping backwards through time, the same way we try to piece together how we ended up with an appendix inside each one of us. In staring down a Genetic Algorithm, you are staring down the larger questions of what happens when no one is driving, no one is editing, a world without supervision. A world in which darlings die because they fail a fitness test. Not because anyone killed them.
The physics of culture.
Stories are how we understand the world. But here we are, and more and more of what we experience as story is no longer subject to a storyteller's criteria. News is aggregated, information is optimized, and these systems become the physics of culture. We are displaced in a world in which choice, delivery, and editing can no longer be assumed to have intention inside them. That's what I wanted to tell in Edinburgh, but didn't have the time. That was the choice I made.
But it's also the meaning of what I write here: I arrived here, this is how I got here, it wasn't perfect then and it's not perfect now. But if it feels like something was transmitted that's difficult to quantify -- something besides and beyond information -- then maybe this other thing is the reason we care about it to begin with.