It’s been a long time since I posted anything here… and that doesn’t change now, except in a technical sense. My writing activities are currently split between my work for the European Mathematical Society (see here for why you should join) and writing about the current political situation, with which I am grievously displeased, on social media. If you are interested in the latter, see my Twitter account on your right, and I have also started a blog on Medium. The first post is: Remaining Angry.
Imagine the benefits that could be reaped if economic activity could be organised in a rational and scientific way, instead of abandoned to the chaos of the marketplace! Imagine the efficiency gains there would be, with workers, managers, farms, and factories all pulling together instead of wastefully competing against each other!
For a period, in the Soviet Union of the 1950s and 60s, there was a genuine and exhilarating belief not just that communism was morally preferable to capitalism, but that it could actually beat capitalism at its own game. There was even a moment, at least for those with the eyes to see it, when it looked as if that might just be beginning to happen.
It is this era which is so brilliantly captured in Francis Spufford’s fictionalised account, Red Plenty. I was recommended the book by the estimable Miranda Mowbray, when we were both speakers at a maths outreach day in London. Her talk was on “Drinking from the fire hose – data science”. Mine was on Linear Programming, and afterwards Miranda remarked that she’d read a book in which Linear Programming was the main character. And so it is.
For the question arises: in the absence of a market to balance supply and demand, how should the central planners set about their work? How much viscose should they instruct a particular factory to produce, given the number and locations of other factories, the availability of sulphur, salt and coal, and the requirements of the fabric, cellophane, and tyre manufacturers?
Astonishingly, the mathematician Leonid Vitalevich Kantorovich was able to devise a tool to answer this sort of conundrum, in his seminal 1939 work on optimal resource allocation. (It would earn him a Nobel Memorial Prize in Economics in 1975.) The consequence of this breakthrough was spectacular: the political apparatus of central planning could be armed with linear programming, the technical means to accomplish that task, ushering in a new era of Soviet abundance.
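To give a concrete (and entirely invented) flavour of the kind of problem linear programming handles, here is a toy allocation sketched in Python using SciPy's off-the-shelf solver. All the numbers and the scenario are made up for illustration; they have nothing to do with Kantorovich's actual examples.

```python
# A toy central-planning problem solved by linear programming.
# All figures are invented for illustration.
from scipy.optimize import linprog

# Two factories can produce viscose, at costs of 3 and 4 (roubles per tonne, say).
costs = [3.0, 4.0]

# Constraints, in the form A_ub @ x <= b_ub:
#   meet total demand:  x1 + x2 >= 100   (written as -x1 - x2 <= -100)
#   factory 1's coal ration caps it at 70 tonnes:  x1 <= 70
A_ub = [[-1.0, -1.0],
        [1.0, 0.0]]
b_ub = [-100.0, 70.0]

res = linprog(costs, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)    # optimal tonnages: 70 from the cheap factory, 30 from the other
print(res.fun)  # minimal total cost: 330.0
```

The solver runs the cheap factory flat out against its coal ration, and tops up demand from the dearer one, exactly the balancing act the planners faced at vastly greater scale.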
Well, it’s hardly a spoiler to say that it didn’t work out quite like that. Red Plenty recounts the rise and fall of that tide: from the elation of discovery and the hope of a better world, to frustration, cynicism, and the ultimate tragedy of failure.
Now, a book about a doomed political philosophy and a technical mathematical procedure may be admirable, but is it entertaining? Reader, it is rip-roaringly so. The story is told episodically, each chapter built around one character, sometimes real, sometimes fictional, each passage invested with the significance that its inhabitants feel. Some are hilarious, some horrifying.
There is Kantorovich, of course, the prodigy and professor. There is the ambitious but sincere (fictional) young economist Emil Shaidullin, trudging through fields in his best city suit, determined to improve the lot of the rural poor. Sasha Galich is a (real) flamboyant song-writer and playwright, becoming uneasy with the ends to which his art is put. Zoya Vaynshteyn is a (fictional) scientist enjoying a mad midsummer’s night, but quietly pitied by her colleagues for the unsayable truth: that her subject, genetics, is afflicted with the plague of Lysenkoism. Sergei Lebedev is a (real) computer pioneer, toiling away in his Institute’s basement to build the machines that will perform the enormous economic calculations far faster than any capitalist market. We meet Mr Chairman, Nikita Sergeyevich Khrushchev himself, travelling to the USA to strike a deal and issue oafish challenges. A (fictional) central planner Maksim Maksimovich Mokhov juggles the balances for 373 commodities in the chemical and rubber goods sector.
What’s so compelling is the colour and humanity of all these people as they live their lives entangled in the Soviet system. Some embrace the socialist dream, some resist, many simply try to organise their affairs around it. There are a few striking characters we meet only once, such as the (fictional) wheeler-dealer Chekuskin, frantically digging his clients (and himself) out of political holes in the Urals. But several we revisit at later stages of their careers, when dreams have died (or been revised downwards), consciences have been pricked, or lines have finally been crossed. Whilst an idea, that of Linear Programming, may indeed be the story’s main character, it is the human supporting cast that makes it so engrossing.
As a postscript, it is worth stressing that Linear Programming really did change the world, and in an altogether more desirable fashion than can be said for the command economy. As so often during the Cold War, very similar work was carried out independently and in parallel on opposite sides of the Atlantic. Linear Programming arrived in the USA with George Dantzig’s 1947 discovery of the Simplex Algorithm. Nowadays, these techniques are employed daily by countless organisations around the world to solve otherwise intractable optimisation problems.
Barry Cooper, who very sadly died on Monday, had been a central member of the Leeds logic group since the 1960s. I joined that group as a graduate student in 2001, and since then have had the pleasure of getting to know him. He always took an active interest in his younger colleagues, myself included, and was enthusiastic about mathematical outreach. Of all the senior mathematicians at Leeds, I would say Barry was the most vocally supportive of my early efforts in that area, and I remain grateful for his support.
Barry’s research interests were in the field of computability (or more accurately incomputability) and in particular the structure of the Turing degrees. Roughly speaking, a set of whole numbers X has at least as high a Turing degree as another set Y if a computer with access to X has the power to tell which numbers are and are not in Y. Thus, in a very natural sense, X contains all the information that Y does (and possibly more). It may be that Y can do the same for X, in which case the two sets have the same Turing degree.
This simple idea produces a fascinating and fundamental structure, known to the experts as the upper-semi-lattice of Turing degrees. There are all kinds of weird and wonderful configurations hiding within it: two degrees where neither is higher than the other, individual degrees which are minimal (in that there is nothing below them besides the zero degree of computable sets), two degrees which have no greatest lower bound (this is what makes it a semi-lattice rather than a full lattice), and a great deal else besides. This structure (and assorted close relatives) has been the subject of a huge amount of investigation. Barry has played a leading role in this programme over many years.
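To gesture at the underlying idea in code (a drastically simplified illustration of my own — real Turing reductions concern incomputable sets, which no program can exhibit): a “machine with access to X” is just a procedure allowed to query X’s membership function.

```python
# A toy illustration of relative computability. Everything here is computable,
# so this only gestures at the idea: a decider for Y that is allowed to
# consult an "oracle" for X. The sets chosen are stand-ins, not from the text.

def oracle_X(n):
    """Membership test for X: here, the even numbers."""
    return n % 2 == 0

def decide_Y(n, oracle):
    """Decide membership in Y (the odd numbers) using one query to the oracle."""
    return not oracle(n)

print([n for n in range(10) if decide_Y(n, oracle_X)])  # [1, 3, 5, 7, 9]
```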
Outside research mathematics, Barry was popular, active, and successful in a frankly alarming number of different arenas. He was an excellent and well-liked teacher, and will surely be missed by Leeds undergraduate mathematicians as well as by his colleagues and numerous current and former graduate students.
In sport, he was a keen long-distance runner, with a personal best marathon time of 2hrs 48mins. His most recent outing was the 2010 London marathon. One common interest he and I shared was jazz, with Barry having a particular taste for its wilder and more avant-garde varieties. He was a founder of Leeds Jazz, and helped attract numerous top artists to the city, including Art Blakey, Courtney Pine, Paul Motian, and Loose Tubes (to pick four examples just from 1986). One regret I have is not going to more gigs with him.
A committed political activist and unapologetic left-winger, Barry was involved in various political campaigns, including the Chile Solidarity Campaign that was set up following the military coup of 1973.
In recent years, Barry devoted a huge amount of energy to the Alan Turing centenary events of 2012. An utter triumph, this anniversary had an astonishing global impact (and overflowed enormously beyond its allotted twelve months), and made great progress in bringing Turing the long overdue recognition he deserves. One outcome of the year was the book Alan Turing: His Work and Impact, edited by Barry and Jan van Leeuwen, a hefty and definitive volume which scooped several prizes, including the Association of American Publishers’ R.R. Hawkins Award. Another result of the increased publicity was a Royal pardon for Turing in 2013; another was the Imitation Game film of 2014. The success of the whole project was in large part due to Barry’s leadership, and the mathematical and computer science communities surely owe him a large debt of gratitude.
Barry announced just two weeks ago that he had been diagnosed with untreatable cancer, a development he met with a characteristic selflessness and equanimity. He died on Monday, surrounded by his family. Over the course of his life, Barry touched many people in many ways, and just as many will now miss him.
Yesterday, my twin sons turned one. I have spent an amazing number of hours over the last year watching them. I wondered if this experience might teach me something too, about how to learn. After all, babies are the grandmasters on this subject. In the same time that it has taken me to incrementally advance my knowledge of some tiny corner of mathematics, my children have moved from a total inability to do anything besides scream, crap themselves, and scream again, to being able to feed themselves (messily, so messily, but still), crawl, clap, grab, wave, recognise people, stand, and so on. And these are just the most visible manifestations of a deep mental transformation during which their brains have learnt huge amounts about processing sensory data and coordinating muscle movement.
If that rate of learning was to continue through their lives, they would grow into geniuses far surpassing anything humanity has seen so far. So how do they do it? Of course a large part of the answer is physiological: babies’ brains are a lot more plastic than adults’, highly efficient sponges for the absorption of new skills. There’s not a great deal we can do about that (although there may be something). All the same, I think we might be able to see some other, useful principles in action too:
- Play. We don’t usually talk about babies “working” – but they are, just as assuredly as a student revising or a scientist researching. The difference is that babies are also undeniably playing – and we wouldn’t usually describe either of the other two in that way. Babies are not motivated by exam grades or pressure to publish. The more fun you find your work, the better you do it.
- Be curious. The babies immediately home in on any new item which appears in their playing area, and start investigating. They are always exploring the room’s boundaries, and grabbing at anything unfamiliar or interesting (my laptop, mugs of hot tea, etc.). But they are not searching for anything in particular. Set aside some time for open-ended exploration and experimentation.
- First, learn one thing well. The boys learned to clap quite early on. With this under their belt, other manual skills such as waving and pointing were comparatively easy to pick up. Likewise, an experienced mathematician will find it easier than a novice to master an unfamiliar mathematical topic. Even if neither has any directly relevant knowledge, the fact that one is practised in the art of learning mathematics should carry them a long way. Building skills can be worthwhile, even when the skills themselves are not.
- A change is as good as a rest. In the opposite direction, the twins do not spend hours at a stretch practising one thing, such as walking. Instead they do it for a little bit, then get distracted by a toy, move onto another toy, have a crawling race, try holding a conversation with their mother, then they do some more walking, and so it goes on. Have more than one project on the go.
- Don’t be scared. Most of the time, my sons appear completely fearless. They happily crawl into perilous situations, pull over heavy objects, and invite disaster in any number of imaginative ways. This is despite the fact that they regularly do fall down and otherwise upset themselves. Take risks. Even if they don’t immediately pay off, continue to take risks.
- Don’t be embarrassed. Babies are not only unworried about taking a tumble, they’re also unafraid of looking like fools. The more I ponder this, the more important I think it is. In my efforts to learn Japanese, for example, I am hindered (perhaps more than I have realised) by the fear of making embarrassing mistakes in conversation with my in-laws. Likewise, mathematicians do not enjoy admitting errors, or gaps in their knowledge, in front of their colleagues (let alone their students). I think this is a bad habit. In order to learn from your mistakes, you must first allow yourself to make some.
- Accept help. The boys are utterly dependent on my wife, me, and the other generous people who help us look after them (thank you!). Obviously, adults shouldn’t be that reliant on others, except in extremis. Nevertheless, there may be people in your life who would like to help you succeed. Let them.
- “Good enough” isn’t good enough. My children have reached the point where crawling is a highly efficient form of travel – they can zoom around the house to wherever they want to be. Walking, meanwhile, is a faltering, risky business. It would be a perfectly rational short-term decision if they opted not to bother with it. Of course, babies don’t reason that way, which is just as well. Invest in your long-term skills, even at short-term cost.
- Don’t focus on the scale of the challenge. My children’s vocabulary currently consists of little more than “dadadada”, “mamamama”, “aaaaghh”, and “pmmpphh”. It will be quite a journey from these noises to mastery of the language of Shakespeare (and indeed that of Chikamatsu). Of course they have no idea about that. The journey matters more than the destination.
I have recently been going through my book Maths 1001 making updates for a forthcoming foreign edition (of which more in future). So I have been looking over mathematical developments since approximately 2009. Thus, I present ten major developments in the subject since around then, arranged arbitrarily in ascending order of top-ness.
10. Mochizuki’s claimed proof of the abc conjecture. The countdown kicks off on an awkward note. If Shinichi Mochizuki’s 2012 claimed proof of the abc conjecture had gained widespread acceptance, it would definitely top this list. As it is, it remains in limbo, to the enormous frustration of everyone involved.
9. The weak Goldbach conjecture. “From 7 onwards, every odd number is the sum of three primes.” We have known since 1937 that this holds for all large enough odd numbers, but in 2013 Harald Helfgott brought the threshold down to 10^30, and separately with David Platt checked odd numbers up to that limit by computer.
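The statement itself is easy to spot-check by machine. Here is a small Python sketch of my own (nothing to do with Helfgott’s methods, which run far deeper) verifying it for odd numbers below 1000:

```python
# Spot-check the weak Goldbach conjecture for small odd numbers:
# every odd number from 7 upwards should be a sum of three primes.

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, limit + 1, p):
                sieve[m] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

def three_prime_sum(n, primes, prime_set):
    """Return one way to write n as a sum of three primes, or None."""
    for p in primes:
        if p > n:
            break
        for q in primes:
            if p + q >= n:
                break
            if (n - p - q) in prime_set:
                return (p, q, n - p - q)
    return None

primes = primes_up_to(1000)
prime_set = set(primes)
decompositions = {n: three_prime_sum(n, primes, prime_set) for n in range(7, 1000, 2)}
print(all(decompositions.values()))  # True: every odd n in range decomposes
print(decompositions[999])           # (3, 5, 991)
```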
8. Ngô Bảo Châu’s proof of the Fundamental Lemma. Bending the rules to scrape in (time-wise) is this 2009 proof of a terrifyingly technical but highly important plank of the Langlands Program.
7. Seventeen Sudoku Clues. In 2012, McGuire, Tugemann, and Civario proved that the smallest number of clues which uniquely determine a Sudoku puzzle is 17. (Although not every collection of 17 clues yields a unique solution, their theorem establishes that there can never be a valid Sudoku puzzle with only 16 clues.)
6. The Growth of Univalent Foundations / Homotopy Type Theory. This new approach to the foundations of mathematics, led by Vladimir Voevodsky, is attracting huge attention. Apart from its inherent mathematical appeal, it promises to recast swathes of higher mathematics in a language more accessible to computerised proof-assistants.
5. Untriangulatable spaces. In fifth position is the stunning discovery, by Ciprian Manolescu, of untriangulatable manifolds in all dimensions from 5 upwards.
4. The Socolar–Taylor tile. Penrose tiles, famously, are sets of tiles which can tile the plane, but only aperiodically. It was an open question, for many years, whether it is possible to achieve the same effect with just one tile. Then Joan Taylor and Joshua Socolar found one (pictured above).
3. Completion of the Flyspeck project. In 1998, Thomas Hales announced a proof of the classic Kepler conjecture on the most efficient way to stack cannon-balls. Unfortunately, his proof was so long and computationally involved that the referees assigned to verify it couldn’t complete the task. So Hales and his team set about it themselves, using the Isabelle and HOL Light computational proof assistants. The result is not only a milestone in discrete geometry, but also in automated reasoning.
2. Partition numbers. In how many ways can a positive integer be written as a sum of smaller integers? In 2011, Ken Ono and Jan Bruinier provided the long-sought answer.
1. Bounded gaps between primes. It’s no real surprise to find that the top spot is taken by Yitang Zhang’s wonderful 2013 result that there is some number n, below 70 million, such that there are infinitely many pairs of consecutive primes exactly n apart. The subsequent flurry of activity saw James Maynard, and a Polymath Project organised by Terence Tao, bring the bound down to 246.
Where’s Hairer’s work on the KPZ equation? What about Friedman’s new examples of concrete incompleteness?! What can I say? It’s just for fun, folks. If you think I’ve got it horribly wrong, then feel free to compile your own lists. (The real answer for such things being left out is that I couldn’t easily update my book to include them.) And now…
Bonus feature! Progress in computational verifications and searches
In no particular order:
- The simple continued fraction of π has now been computed to the first 15 billion terms by Eric Weisstein, up from 100 million.
- The decimal expansion of π has been computed to 13.3 trillion digits, up from 2.69999999 trillion.
- The search for the perfect cuboid: if one exists, one of its sides must be at least 3 trillion units long, up from 9 billion.
- Goldbach’s conjecture has been verified for even numbers up to 4 × 10^18 by Oliveira e Silva, up from 10^18.
- The largest known twin primes are the pair either side of 3756801695685 × 2^666669, up from 2003663613 × 2^195000.
- The largest known prime, and the 48th known Mersenne prime (up from 47), is 2^57885161 − 1, up from 2^43112609 − 1.
- The Encyclopedia of Triangle Centres contains 7719 entries (up from 3587).
- The longest known Optimal Golomb Ruler is now 27 notches long (up from 26):
(0, 3, 15, 41, 66, 95, 97, 106, 142, 152, 220, 221, 225, 242, 295, 330, 338, 354, 382, 388, 402, 415, 486, 504, 523, 546, 553)
- The most impressive feat of integer-factorisation using classical computers is that of the 232-digit number RSA-768:
1230186684530117755130494958384962720772853569595334792197322452 1517264005072636575187452021997864693899564749427740638459251925 5732630345373154826850791702612214291346167042921431160222124047 9274737794080665351419597459856902143413
into two 116-digit primes.
(The previous record was the 200 digit semiprime RSA-200.)
- The most impressive feat of integer-factorisation using a quantum computer is that of 56,153 = 233 × 241. The previous record was 15.
- The Collatz Conjecture has been verified for numbers beyond 2 × 10^21. The previous record was 5.76 × 10^18. (However, this has happened via a patchwork of distributed computing projects, and I have not been able to establish with any certainty that every number up to the new higher limit has been checked. I encourage someone in this community to organise all the results in a single location.)
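For a sense of what such a verification involves (the real distributed searches are vastly larger and use shortcuts I omit, such as stopping as soon as the trajectory drops below its starting point), a bare-bones Python check might look like this; the limit here is tiny and chosen purely for illustration:

```python
# Verify that every n below a (very modest) limit reaches 1 under the Collatz map.
def reaches_one(n, max_steps=100_000):
    """Iterate the Collatz map; return True if we hit 1 within max_steps."""
    while n != 1 and max_steps > 0:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        max_steps -= 1
    return n == 1

LIMIT = 100_000  # the real projects are beyond 2 × 10^21
print(all(reaches_one(n) for n in range(1, LIMIT)))  # True
```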
I have a new post at The De Morgan Forum.
I mean the one about partitions.
What is a partition? It’s simply a way of splitting up a number (a positive integer) into smaller pieces. For instance 2+2+1 is a partition of 5. The question is: how many ways of doing this are there?
Well there’s just one partition of one, namely 1.
There are 2 partitions of two: 2 and 1+1.
There are 3 partitions of three: 3, 2+1, 1+1+1. (Notice that for these purposes, we count 2+1 and 1+2 as the same partition.)
So far the pattern looks rather easy. But there are 5 partitions of four:
4, 3+1, 2+2, 2+1+1, 1+1+1+1
And then 7 partitions of five:
5, 4+1, 3+2, 3+1+1, 2+2+1, 2+1+1+1, 1+1+1+1+1
Continuing the sequence, we find 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, 56, 77, 101, 135, 176, 231, 297, 385, 490, 627, 792, 1002, 1255, 1575, 1958, 2436, 3010,…
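Those numbers are easy to reproduce by machine. Here is a minimal Python sketch of my own, counting partitions recursively by their largest allowed part:

```python
# Count partitions of n recursively: p(n, k) is the number of partitions
# of n into parts of size at most k, so p(n, n) is the nth partition number.
from functools import lru_cache

@lru_cache(maxsize=None)
def p(n, k):
    if n == 0:
        return 1          # the empty partition
    if k == 0:
        return 0          # no parts available, and n > 0 remains
    if k > n:
        k = n             # parts larger than n are useless
    return p(n - k, k) + p(n, k - 1)   # either use a part of size k, or don't

print([p(n, n) for n in range(1, 13)])
# [1, 2, 3, 5, 7, 11, 15, 22, 30, 42, 56, 77]
```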
What’s the pattern now? Finding an explicit formula for the number of partitions of n is seriously hard. Even Euler couldn’t manage it. But he was able to make some progress: he found a generating function.
Next question: what is a generating function? To answer with an example, the generating function for the sequence 1, 1, 1, 1, … is the power series 1 + x + x^2 + x^3 + …
More generally, the generating function for the sequence a_0, a_1, a_2, a_3, … is the series a_0 + a_1 x + a_2 x^2 + a_3 x^3 + …
Next question: what’s the point? Often the resulting series can be radically compressed. For instance, a bit of mathematical magic tells us that the series 1 + x + x^2 + x^3 + … can be rewritten as 1/(1 − x). Now the whole infinite sequence has been encapsulated by a short, simple algebraic expression. So, if you want to know the nth element of the sequence, you can extract it (if you know how) fairly easily from the generating function.
Well, there is not a lot of mystery in the sequence 1, 1, 1, 1, … – certainly there are no prizes for guessing its nth term. But what of the partition sequence: 1, 2, 3, 5, 7, 11, 15, …? Euler’s great insight was that there is a generating function for this too:

1/((1 − x)(1 − x^2)(1 − x^3)(1 − x^4)⋯)
You can see that this is not as nice as the function 1/(1 − x) above – and it’s certainly not nice enough to close the book on the whole problem of computing partition numbers. All the same, this function is hugely easier to work with than scrabbling about with partitions with bare hands. Euler’s generating function can also be written as

(1 + x + x^2 + x^3 + ⋯)(1 + x^2 + x^4 + x^6 + ⋯)(1 + x^3 + x^6 + x^9 + ⋯)⋯
On first sight, this looks like an algebraic nightmare: you have to open up an infinite number of brackets, each of which contains an infinite series! However, if you want to compute the nth partition number, you only care about the first n brackets, and only the first few terms in each: from the first bracket you care about the first n terms, from the second bracket approximately the first n/2, from the third approximately the first n/3, and so on. In fact the total number of terms you need to worry about is approximately n × (1 + 1/2 + 1/3 + ⋯ + 1/n) ≈ n ln n, which is a manageable number when n is not too big.
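This truncation procedure translates directly into code. A short Python sketch of my own: fold the brackets in one at a time, discarding all powers beyond x^N, and the coefficient of x^n that survives is the nth partition number.

```python
# Compute partition numbers by truncating Euler's product: multiply the
# brackets (1 + x^k + x^(2k) + ...) together, keeping only powers up to x^N.
N = 20
coeffs = [0] * (N + 1)
coeffs[0] = 1  # start from the constant polynomial 1

for k in range(1, N + 1):
    # Fold in the kth bracket: this in-place pass is equivalent to
    # multiplying by 1 + x^k + x^(2k) + ... and truncating at degree N.
    for n in range(k, N + 1):
        coeffs[n] += coeffs[n - k]

print(coeffs[1:])
# [1, 2, 3, 5, 7, 11, 15, 22, 30, 42, 56, 77, 101, 135, 176, 231, 297, 385, 490, 627]
```

Reassuringly, this reproduces the sequence listed earlier in the post.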
So although Euler didn’t find a formula for partition numbers, his generating function does provide a manageable procedure for computing these numbers. But what of an actual formula?
Further progress came courtesy of a pair of mathematical superstars to rival Euler: Hardy and Ramanujan. The nth partition number is given approximately by the formula

e^(π√(2n/3)) / (4n√3)
As n grows bigger and bigger, this number gets proportionally closer to the right answer. But what about an exact formula? That’s something there has been huge progress on in recent years… but perhaps that’s a story for another day.
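To see how good the Hardy–Ramanujan estimate e^(π√(2n/3)) / (4n√3) is in practice, here is a quick numerical check in Python at n = 100, where the exact value is known to be p(100) = 190,569,292:

```python
# Compare the Hardy–Ramanujan estimate with the exact partition number p(100).
from math import exp, pi, sqrt

def hardy_ramanujan(n):
    """The asymptotic estimate exp(pi * sqrt(2n/3)) / (4 * n * sqrt(3))."""
    return exp(pi * sqrt(2 * n / 3)) / (4 * n * sqrt(3))

exact_p100 = 190569292  # the exact value of p(100)
estimate = hardy_ramanujan(100)
print(estimate / exact_p100)  # roughly 1.05: within about 5% already at n = 100
```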
 There are issues of convergence here, which I will ignore for now.