Mathematicians have always been fascinated by the great problems of their subject, the deep and difficult questions that they would give their right arm to answer. Some have been puzzling us for thousands of years; others have been around for only a few decades. Some are questions that anyone can understand, such as "is there a really efficient way to find the prime factors of a large number?" (Right now, we have no idea.) Others are esoteric issues requiring a lot of specialist background knowledge, such as the mass gap hypothesis in quantum field theory: Is there a lower limit to the mass of a fundamental particle? In between are practical problems, such as the existence of solutions of the standard equations for fluid flow, valid for all future times. Or the Kepler conjecture, now proved: the closest way to pack spheres is to stack them the way a supermarket stacks oranges. It sounds obvious, but it's not.
Most great problems become great when something that looks as if it ought to be simple turns out to be much harder than anyone expected. Fermat's last theorem started as an offhand remark, which one of the leading mathematicians of the 17th century wrote in the margin of a classic textbook. The problem became notorious because no one could decide whether Fermat was right or wrong. His simple statement about whole numbers (two nth powers can't add up to another nth power if n is three or more) remained an enigma for 350 years, until Andrew Wiles dispatched it after seven years of toil. The Poincaré conjecture appeared in 1904, when Henri Poincaré belatedly realised that he had tacitly made a harmless-looking assumption in earlier work, which he couldn't prove. "This question would lead us too far astray," he wrote, but actually he had no idea how to solve it. It remained unanswered for more than a century until the eccentric genius Grigori Perelman proved Poincaré was right. He then declined all academic honours and a million-dollar prize. The Riemann Hypothesis, about a fundamental issue in complex analysis related to prime numbers, continues to baffle the world's mathematicians, and remains as impenetrable as ever after 150 years. If there is a Holy Grail of mathematics, this is it.
Great mathematical problems sometimes arise from questions about the natural world, but more often they emerge because gaps appear in our mathematical knowledge. Often mathematicians don't really care about the answer as such; what worries them is that they don't know what it is. Problems that we struggle to solve are like a canary in a coalmine: a warning that there's something vital going on, which we don't know about.
New scientific discoveries often lead to new mathematics. Isaac Newton's laws of motion and gravity didn't provide an immediate understanding of the solar system. Instead, mathematicians had to grapple with a whole new range of questions: now that we know the laws, what do they tell us? Newton invented calculus to answer that question, but his new method often rephrases the question instead of providing the answer. It turns the problem into a formula, a differential equation. To get the answer, you have to solve the equation. Which can be difficult.
Over the ages, as humanity's mathematical knowledge has grown, a second source of inspiration has played an ever-increasing role in the creation of even more: the internal demands of mathematics itself. If a particular piece of mathematics keeps appearing in, say, questions about the physics of waves -- ocean waves, vibrations, sound, light -- then it makes sense to investigate that idea in its own right. You don't need to know exactly how it will be used: waves arise in so many important areas that significant new insights are bound to be useful for something. In this case, the applications included radio, television, and radar.
If somebody thinks up a new way to understand heat flow, and invents a brilliant but controversial new technique to do the sums, it clearly makes sense to sort the whole thing out as mathematics. Even if you don't give a fig about how heat flows, the results are likely to be applicable elsewhere. Fourier analysis, which emerged from this particular line of investigation, underpins modern telecommunications, makes digital cameras possible, and helps clean up old movies and recordings. The FBI uses a modified version to store fingerprint records.
After a few thousand years of this kind of interchange between the external uses of mathematics and its internal structure, these two aspects of the subject have become so densely interwoven that picking them apart is pointless. At the research frontiers, the boundaries between the traditional areas of mathematics are not just blurred; they don't exist. Mathematics is not like a political map of the world, with each specialty neatly surrounded by a clear boundary, each country tidily distinguished from its neighbors by being colored pink, green, or pale blue. It is more like a natural landscape, where you can never really say where the valley ends and the foothills begin, where the forest merges into woodland, scrub, and grassy plains, where lakes insert regions of water into every other kind of terrain, where rivers link the snow-clad slopes of the mountains to the distant, low-lying oceans.
This ever-changing mathematical landscape consists not of rocks, water, and plants, but of ideas. It is tied together not by geography, but by logic. And it is a dynamic landscape, which changes as new ideas and methods are discovered or invented. Important concepts with extensive implications are like mountain peaks; techniques with lots of uses are like broad rivers that carry travelers across the fertile plains. The more clearly defined the landscape becomes, the easier it is to spot unconquered peaks, and unexplored terrain that creates unwanted obstacles. Over time, some of the peaks and obstacles acquire iconic status. These are the great problems.
Ian Stewart is Emeritus Professor of Mathematics at the University of Warwick in England, and writes frequently about mathematics for the general public. His new book Visions of Infinity: The Great Mathematical Problems is published by Basic Books.