I have a new post at The De Morgan Forum.

I’ve recently been revisiting Euler’s Theorem. Not that one. No, not the one on this commemorative stamp either. No, no, not the totient function one. (Good guess though.)

I mean the one about *partitions*.

What is a partition? It’s simply a way of splitting up a number (a positive integer) into smaller pieces. For instance 2+2+1 is a partition of 5. The question is: how many ways of doing this are there?

Well there’s just one partition of one, namely 1.

There are 2 partitions of two: 2 and 1+1.

There are 3 partitions of three: 3, 2+1, 1+1+1. (Notice that for these purposes, we count 2+1 and 1+2 as the same partition.)

So far the pattern looks rather easy. But there are 5 partitions of four:

4, 3+1, 2+2, 2+1+1, 1+1+1+1

And then 7 partitions of five:

5, 4+1, 3+2, 3+1+1, 2+2+1, 2+1+1+1, 1+1+1+1+1

Continuing the sequence, we find 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, 56, 77, 101, 135, 176, 231, 297, 385, 490, 627, 792, 1002, 1255, 1575, 1958, 2436, 3010,…

What’s the pattern now? Finding an explicit formula for the number of partitions of \(n\) is *seriously hard*. Even Euler couldn’t manage it. But he was able to make some progress: he found a *generating function*.

Next question: what is a generating function? To answer with an example, the generating function for the sequence \(1, 1, 1, 1, \ldots\) is the power series \(1 + x + x^2 + x^3 + \ldots\).

More generally, the generating function for the sequence \(a_0, a_1, a_2, a_3, \ldots\) is a series: \(a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \ldots\).

Next question: what’s the point? Often the resulting series can be radically compressed. For instance, a bit of mathematical magic tells us that the series \(1 + x + x^2 + x^3 + \ldots\) can be rewritten^{[1]} as \(\frac{1}{1-x}\). Now the whole infinite sequence has been encapsulated by a short, simple algebraic expression. So, if you want to know the \(n\)th element of the sequence, you can extract it (if you know how) fairly easily from the generating function.

Well, there is not a lot of mystery in the sequence \(1, 1, 1, 1, \ldots\) – certainly there are no prizes for guessing its \(n\)th term. But what of the partition sequence: \(1, 2, 3, 5, 7, 11, 15, \ldots\)? Euler’s great insight was that there is a generating function for this too:

\[1 + p(1)x + p(2)x^2 + p(3)x^3 + \ldots = \frac{1}{(1-x)(1-x^2)(1-x^3)\cdots}\]

where \(p(n)\) is the number of partitions of \(n\).

You can see that this is not as nice as the function above – and it’s certainly not nice enough to close the book on the whole problem of computing partition numbers. All the same, this function is hugely easier to work with than scrabbling about with partitions with bare hands. Euler’s generating function can also be written as

\[(1 + x + x^2 + \ldots)(1 + x^2 + x^4 + \ldots)(1 + x^3 + x^6 + \ldots)\cdots\]

On first sight, this looks like an algebraic nightmare: you have to open up an infinite number of brackets, each of which contains an infinite series! However, if you want to compute the \(n\)th partition number, you only care about the first \(n\) brackets, and only the first few terms in each: from the first bracket you care about the first \(n\) terms, from the second bracket approximately the first \(\frac{n}{2}\), from the third approximately the first \(\frac{n}{3}\), and so on. In fact the total number of terms you need to worry about is approximately \(n \ln n\), which is a manageable number when \(n\) is not too big.

So although Euler didn’t find a formula for partition numbers, his generating function does provide a manageable procedure for computing these numbers. But what of an actual formula?
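In fact that procedure is easy to mechanise. Here is one way it might be sketched in Python (the code and the function name are my own illustration, not Euler’s notation): it expands Euler’s product one bracket at a time, keeping only the coefficients up to \(x^n\). Multiplying by the bracket \(1 + x^k + x^{2k} + \ldots\) amounts to adding the coefficient of \(x^{i-k}\) onto the coefficient of \(x^i\), working upwards through \(i\).

```python
def partition_numbers(n):
    """Return [p(0), p(1), ..., p(n)], the partition numbers,
    by expanding Euler's product up to the x^n term."""
    coeffs = [1] + [0] * n              # the series "1": coefficients up to x^n
    for k in range(1, n + 1):           # multiply by the bracket 1 + x^k + x^{2k} + ...
        for i in range(k, n + 1):
            coeffs[i] += coeffs[i - k]  # add on the coefficient of x^(i-k)
    return coeffs

print(partition_numbers(10)[1:])  # [1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```

Running it reproduces the start of the sequence above, confirming for instance that \(p(10) = 42\).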

Further progress came courtesy of a pair of mathematical superstars to rival Euler: Hardy and Ramanujan. The \(n\)th partition number is given approximately by the formula

\[p(n) \approx \frac{1}{4n\sqrt{3}}\, e^{\pi\sqrt{2n/3}}\]

As \(n\) grows bigger and bigger, this number gets proportionally closer to the right answer. But what about an exact formula? That’s something there has been huge progress on in recent years… but perhaps that’s a story for another day.

[1] There are issues of convergence here, which I will ignore for now.

*As an ambitious young researcher, you can miss the wood for the trees. When you’re presented with a theorem, your inclination is to dive straight into the proof, and start grappling with the toughest ideas in there. But when you’re addressing a general audience, you have to step back, pause, and ask “What is the point of this theorem? Where did it come from? Why should we care?” You’re forced to adopt a different perspective on the subject, and that’s refreshing.*

Read more of my interview over at Math Frolic.

I have a guest post at The Aperiodical reporting on the London Mathematical Society’s birthday party last week, where Doctor Who was a recurring theme.

There are many songs in the world about love and loss, heartbreak and heart-ache. There are altogether fewer about algebraic geometry in the style of Alexander Grothendieck. Here is my attempt to fill that gap:

**Disclaimer:** I hope none of this needs saying, but just in case of misinterpretation:

1. No views expressed therein are attributable to any organisation to which I am affiliated.

2. Most of the views expressed therein are not attributable to me either, but are a deliberate exaggeration, for comic effect, of a common initial reaction to one’s first meeting with Grothendieck-style algebraic geometry.

3. It is not intended as a serious critique of any mathematician or school of mathematics!

**Lyrics**

When I was a young boy doing maths in class

I thought I knew it all.

Every test that I took, I was sure to pass.

I felt pride, and there never came a fall.

Up at university, I found what life is for:

A world of mathematics, and all mine to explore.

Learning geometry and logic, I was having a ball.

Until I hit a wall…

For I adore Euler and Erdős,

Élie Cartan and Ramanujan

Newton and Noether. But not to sound churlish

There’s one man I cannot understand.

No, I can’t get to grips with Grothendieck,

My palms feel sweaty and my knees go weak.

I’m terrified that never will I master the technique

Of Les Éléments de Géométrie Algébrique.

He’s a thoughtful and a thorough theory-builder, sans pareil.

But can anybody help me find the secret, s’il vous plaît

Of this awe-inspiring generality and abstraction?

I have to say it’s driving me to total distraction.

For instance… A Euclidean point is a location in space, and that we can all comprehend.

René Descartes added coordinates for the power and the rigour they lend.

Later came Zariski topology, where a point’s a type of algebraic set

Of dimension nought. Well, that’s not what I thought. But it’s ok. There’s hope for me yet!

But now and contra all prior belief

We hear a point’s a prime ideal

In a locally ringed space, overlaid with a sheaf.

Professor G, is truly this for real?

No, I can’t make head nor tail of Grothendieck

Or Deligne, or Serre, or any of that clique.

I’ll have to learn not to care whenever people speak

Of Les Fondements de la Géométrie Algébrique.

But don’t take me for a geometrical fool.

I can do much more than merely prove the cosine rule.

I’ll calculate exotic spheres in dimension 29

And a variety of varieties, projective and affine.

I’m comfortable with categories (though not if they’re derived)

I’ll tile hyperbolic space in dimension 25

I can compute curvature with the Gauss-Bonnet law

And just love the Leech Lattice in dimension 24.

But algebro-geometric scheming

Leaves me spluttering and screaming.

And in logic too, you may call me absurd

But I wouldn’t know a topos, if trampled by a herd.

I’ve tried Pursuing Stacks but they vanished out of sight,

I’ve fought with étale cohomology with all my might.

And Les Dérivateurs. It’s 2000 pages long.

I reach halfway through line 3, before it all goes badly wrong.

No, I’ll never get to grips with Grothendieck

And I’m frightened that I’m failing as a mathematics geek.

All the same, I can’t deny the lure and the mystique

Of Le Séminaire de Géométrie Algébrique.

– Richard Elwes, 2015

Comments are closed on the Youtube page (because obviously) and here (because broken) but open on Google plus, Facebook, and Twitter.


Last week, we Leeds mathematicians were treated to a talk by ~~Professor Green~~ Professor Green with the delightfully elementary title of *Points and Lines*. It reported on joint work of Ben Green and Terence Tao. Here I’m going to talk about the background to their work. As always, any errors are mine.

The idea is that we will be presented with some *points*, specifically a finite collection of points in the plane. We want to predict the answer to questions of the form: how many *lines* are there through exactly \(n\) of these points? We want to do this while knowing as little as possible about the details of what we’ll be given. Actually, on a first pass, these questions are pretty trivial:

- There are guaranteed to be infinitely many lines through zero points since when you’re positioning a line, a finite set of points is very easy to miss.
- Similarly, there will be infinitely many lines which hit exactly one point, since once you’ve decided which to hit, you can easily tweak that line to miss the rest.
- Must there be any lines which hit exactly two points? Not necessarily. If all the points lie on a single straight line (and if there are more than two of them), then any line either goes through 0, 1, or all of them.
- Arranging the points around a circle illustrates that there also need not be any lines which hit exactly 3 points. The same argument works for \(n = 4, 5, 6, \ldots\)

So far, so straightforward: we can predict the answer exactly for \(n = 0, 1\) and not at all otherwise. But let’s think again: in the case of \(n = 3\), there’s obviously nothing special about a circle. There are numerous other curves which would do just as well: a parabola, hyperbola, squircle, catenary, … all of these and many others share the property that any straight line hits them at most twice.

However, for the case \(n = 2\), there is something undeniably special about demanding that all our points lie along a single straight line. So the question arises: is this configuration the only obstacle? Must it be the case that *either* all the points lie along a line, *or* we will be able to find a line hitting exactly two of them? This problem was first posed, so far as we know, by J.J. Sylvester in the back pages of the Educational Times in 1893, under the unassuming title of *mathematical question 11851*. (The Royal Society’s *Sylvester Medal* is depicted above, of which Ben Green is the most recent winner, for another famous result of his jointly with Tao.)

The question was later re-asked by Paul Erdős and then solved in 1940 by Tibor Gallai, who established that the answer was *yes*: if a set of points do not all lie on a line, then there must be some line through exactly two of them.

A later proof by Leroy Kelly is, as Ben put it, a ‘classic of mathematics’ and actually induced an audible gasp from the audience. So let’s see it:

We assume that our points don’t all lie along a single line. That means we can definitely find two of them which lie on a line **L**, along with some other point **X** lying off **L**, as shown. In fact, there are likely to be numerous such configurations; we choose the one where the perpendicular distance from **X** to **L** is the least possible.

The claim is then that **L** contains *only* those two points. Suppose not: then **L** contains at least three points, and so two of them, **A** and **B**, must lie on the same side of the foot of the perpendicular from **X** to **L** (one of them may be the foot itself). Say **B** is the nearer of the two to the foot. Now, let **M** be the line **XA**. This line also contains two points (obviously), but the distance from **B** to **M** is less than that from **X** to **L**, which contradicts our choice of configuration. **QED**

Clever, right? Anyway, for reasons unexplained, lines which pass through exactly two of our specified collection of points are known as *ordinary* lines. The Sylvester-Gallai theorem above therefore says that (so long as our points are not collinear) an ordinary line will always exist. But the next question is: how many must there be? In most cases there will be lots: if you try this for a random scattering of \(m\) points, then in all likelihood, whenever you draw a line through two points it will miss all the others. Thus there are around \(\frac{1}{2}m^2\) ordinary lines.

But it is not hard to come up with sets with fewer. For instance, take \(m-1\) points all lying along a line, and add in one extra point \(X\) off it. Then the only ordinary lines are those which connect \(X\) to another point, giving \(m-1\) many.
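Counting ordinary lines is easy to check by brute force for small examples. Here is a sketch in Python (the function and the test configuration are my own illustration): collinearity of integer-coordinate points is tested exactly with a cross product, and each line is identified by the full set of given points it contains.

```python
from itertools import combinations

def ordinary_lines(points):
    """Count the lines passing through exactly two of the given points.
    Points are (x, y) pairs with integer coordinates, so the
    cross-product collinearity test is exact."""
    def collinear(p, q, r):
        return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

    seen = set()
    count = 0
    for p, q in combinations(points, 2):
        # the full set of given points lying on the line through p and q
        on_line = tuple(sorted(r for r in points if collinear(p, q, r)))
        if on_line not in seen:
            seen.add(on_line)
            if len(on_line) == 2:
                count += 1
    return count

# m-1 = 7 points along a line, plus one extra point X off it:
pts = [(i, 0) for i in range(7)] + [(0, 1)]
print(ordinary_lines(pts))  # 7, i.e. m-1 ordinary lines
```

For the configuration above (with \(m = 8\)), it confirms there are exactly \(m-1 = 7\) ordinary lines.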

A cunning construction by Károly Böröczky brings this down even further: a set of \(m\) points with only \(\frac{1}{2}m\) many ordinary lines.[1]

Now we come to the theorem of Green and Tao. In their paper, they have shown that Böröczky’s examples are as good as it gets. For large enough values of \(m\), they prove that every non-collinear set of \(m\) points must have at least \(\frac{1}{2}m\) many ordinary lines.

How large is ‘large enough’? The bound they establish is in the vicinity of \(10^{10^{10}}\), but the true answer may be as low as 14. For smaller sets, it is possible to do better. The left-hand picture here shows a set of 7 points with only 3 ordinary lines.

And how is it proved? Well, I’m not going to go into details, but the key step is the unexpected (to me) connection of the property ‘has few ordinary lines’ with the property ‘can be covered with a single cubic curve’.

[1] For the cognoscenti: it achieves this by moving the action to the real projective plane, arranging \(m\) points symmetrically around a circle, and placing another \(m\) on the line at infinity, at positions corresponding to the directions determined by pairs of points from the first set. Then the only ordinary lines are the \(m\) many tangents to the circle at those points. Finally, since no-one mentioned anything about lines at infinity being permitted, the whole set-up needs to be transported back to the plain old Euclidean plane, which can be achieved via a projective transformation. This however will distort the circle into an ellipse. See page 11 of their paper for a picture.

If you’ve got a spare hour[1] at some stage here is a video of a talk I gave last month on Gödel, Incompleteness & Unprovable Theorems.

[1] The joy of the pause button is that the hour doesn’t have to form a single contiguous block.

If you shuffle a deck of cards perfectly eight times, you get back exactly to where you started. Here’s a video of the magician Adam West doing it:

In this post, we’ll see why this works. First let’s tighten up our terminology: a “perfect shuffle” here is really what magicians call a *Faro Out-Shuffle:* you split the deck in half, and then interleave the two halves. To make things easier, let’s work with a deck of ten cards, initially in numerical order.

Step one: split 1,2,3,4,5 from 6,7,8,9,10.

Step two, interleave the two: 1,6,2,7,3,8,4,9,5,10.

Notice that the outermost cards (1 and 10) remain in place. (That’s why this is an *out-shuffle* — interleaving the other way produces an *in-shuffle*.)

At the same time, notice that the card initially in 4th position is now in 7th, and vice versa. We can indicate this in *cycle notation* by putting the two together in brackets: (4 7).

What about the rest? The card in second position to start with is now in third place, while the card from third position is now in fifth, the card from fifth is now in ninth,… Continuing this line of thought produces the cycle (2 3 5 9 8 6).

So the shuffle is totally described by the collection of cycles (1) (10) (4 7) (2 3 5 9 8 6). (I’ve included the two trivial cycles of length one.)

Now, notice that performing any cycle of length two twice brings us back to where we started: in this case 4 goes to 7 and then back to 4. Similarly, performing a cycle of length 6 six times takes us back to the starting position. Since six is a multiple of both cycle lengths (in general, we need the least common multiple of all the cycle lengths), performing this ten card shuffle six times brings everything back to where it started.

Ok. What about a full 52 card deck? Well, for cards initially in the top half of the deck (i.e. numbered 1-26), the card in \(n\)th position is moved to \((2n-1)\)th position after one shuffle (meaning \(1 \to 1,\ 2 \to 3,\ 3 \to 5,\ \ldots,\ 26 \to 51\)).

For cards in the second half (numbered 27-52), the card in \(n\)th position is moved to \((2n-52)\)th position (meaning \(27 \to 2,\ 28 \to 4,\ \ldots,\ 51 \to 50,\ 52 \to 52\)).

Repeatedly applying the first rule, we can see that 2 goes to 3 which goes to 5 which goes to 9 which goes to 17 which goes to 33. Now applying the second rule, 33 goes to 14 which goes to 27 (by the first rule) which goes back to 2 (by the second rule). Thus we have the cycle:

(2 3 5 9 17 33 14 27).

The same reasoning tells us the other cycles which make up the shuffle, and altogether they turn out to be

(2 3 5 9 17 33 14 27) (4 7 13 25 49 46 40 28) (6 11 21 41 30 8 15 29)

(10 19 37 22 43 34 16 31) (12 23 45 38 24 47 42 32) (20 39 26 51 50 48 44 36)

(18 35) (1) (52)

which is to say six cycles of length eight, one of length two, and the two trivial cycles of length 1. Each of these cycles will take us back to the starting position when applied eight times… as the video above shows!
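If you’d like to check all this without wearing out a deck of cards, the shuffle is easy to simulate. Here is a sketch in Python (the function names are my own): it performs out-shuffles until the deck returns to its starting order, confirming that a 10-card deck takes 6 shuffles and a 52-card deck takes 8.

```python
def out_shuffle(deck):
    """One Faro out-shuffle: split the deck in half and interleave,
    with the top card of the top half staying on top."""
    half = len(deck) // 2
    shuffled = []
    for top_card, bottom_card in zip(deck[:half], deck[half:]):
        shuffled += [top_card, bottom_card]
    return shuffled

def shuffles_to_restore(size):
    """How many out-shuffles return a deck of the given (even) size
    to its original order?"""
    start = list(range(1, size + 1))
    deck, count = out_shuffle(start), 1
    while deck != start:
        deck = out_shuffle(deck)
        count += 1
    return count

print(out_shuffle([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))  # [1, 6, 2, 7, 3, 8, 4, 9, 5, 10]
print(shuffles_to_restore(10), shuffles_to_restore(52))  # 6 8
```

Changing `out_shuffle` to interleave the other way would let you answer the in-shuffle question below for yourself.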

**Question:** what if we do a *Faro in-shuffle* instead? This means interleaving the other way. On ten cards, after one shuffle the deck reads 6,1,7,2,8,3,9,4,10,5. How many applications of this rule are needed to bring us back to the starting position in this case?

I’m teaching a third year Combinatorics module this term, which has led me to revisit some old friends: permutations and combinations. I thought I knew them well, but have already learned something new!

First a quick refresher course:

**Combinations:** say \(C(n,k)\) is the number of subsets of size \(k\) from a collection of size \(n\). So if you select 3 pieces of fruit from a total of 6 (all different), and don’t care about the order, then the number of selections you might make is \(C(6,3)\). The formula^{[1]} here is \(C(n,k) = \frac{n!}{k!(n-k)!}\), and \(C(6,3)=20\).

**Permutations:** If you do care about the order of selection (maybe you want to arrange your fruit into military columns so that “strawberry, apple, pear” is different from “apple, pear, strawberry”), then the number you need is \(P(n,k)\), the number of permutations of \(k\) objects from \(n\). Its formula is \(P(n,k) = \frac{n!}{(n-k)!}\), and \(P(6,3)=120\).

**Total numbers of combinations:** A set of size \(n\) has \(2^n\) subsets in total. To say the same thing another way, \(C(n,0)+C(n,1)+\ldots+C(n,n) = 2^n\). So the total number of collections of fruit you could take from the box of 6 is \(2^6=64\) (this includes both the empty-plate option and the take-everything option).

All of the above I have known for many years. But I recently discovered something new (to me):

**Total numbers of permutations:** if you pick between \(0\) and \(n\) objects, in order, from a set of size \(n\), how many choices can you make? To ask the same thing another way, what is \(P(n,0)+P(n,1)+\ldots+P(n,n)\)? The answer turns out to be \(\lfloor n! \cdot e \rfloor\), that is, the number you get if you round \(n! \cdot e\) down to a whole number. So the number of military columns of fruit (including the empty column) you could make from your six is \(\lfloor 6! \cdot e \rfloor = 1957\).

I think this is very cool, and the proof of this is not hard. It’s been written out very nicely here by Michael Lugo. (When I first saw this result I assumed that it would be a consequence of Stirling’s approximation – but that’s not required.)
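If you’d like to see the pattern for yourself before reading the proof, it is easy to test numerically. A sketch in Python (the function name is mine), comparing the directly-counted sum of permutations with the floor of \(n! \cdot e\):

```python
import math

def total_permutations(n):
    """Sum of P(n, k) = n!/(n-k)! over k = 0, 1, ..., n,
    counted directly from the factorial formula."""
    return sum(math.factorial(n) // math.factorial(n - k) for k in range(n + 1))

print(total_permutations(6))                   # 1957
print(math.floor(math.factorial(6) * math.e))  # also 1957

# The two agree for small n (floating point would eventually
# spoil the right-hand side for very large n):
print(all(total_permutations(n) == math.floor(math.factorial(n) * math.e)
          for n in range(1, 12)))              # True
```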

[1] These formulae all use the factorial \((!)\) notation, where \(7! = 7\times 6\times 5\times 4\times 3\times 2\times 1\).

There’s been a bit of a rumpus recently about this video from *Numberphile* which purports to show that \[1+2+3+\ldots = -\frac{1}{12}\]

It got posted on Slate with an accompanying blogpost by Phil Plait, which then got shot to pieces by various incensed bloggers, and Phil now has a much better follow-up post.

I’m not going to rehearse the story again – the short version is that the equation above is a really bad and misleading way of communicating a really interesting and surprising result. But I thought I’d share another titbit which has a similar moral, namely:

Infinite series often behave in very weird ways.

This in turn has two corollaries, both amply illustrated in the story above:

Unexpected and cool things can happen.

but at the same time

It’s very easy to start talking nonsense if you’re not *extremely* careful.

The video is about the series \(1+2+3+4+5+\ldots\). This is what we call a *divergent* series, meaning that as you add up more and more terms, the series grows without limit. An example of a *convergent* series is \(0.9+0.09+0.009+0.0009+\ldots\). Here, as you add up more and more terms, you get ever closer to the fixed value 1. This is the *limit* of the series. Convergent series are generally friendlier beasts than divergent series, but the purpose of this post is to illustrate that even convergent series can do shocking things.

Humans are not used to infinite series. But we are used to adding up finite collections of numbers. If we have three numbers \(A, B, C\), we know that the order in which we add them up does not matter: \(A+B+C=B+A+C=A+C+B\), etc. This sort of thinking is so natural and automatic that it is all too easy to transfer it to the realm of infinite series, where such rules may not hold – not even (necessarily) for convergent series.

This post is about the following series: \[S:\ 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \frac{1}{7} - \frac{1}{8} + \frac{1}{9} - \ldots\]

The first obvious question is: does \(S\) converge? And the answer is yes. It turns out to converge to a number near 0.69. (More precisely, but unexpectedly and coolly, the limit is^{[1]} the natural logarithm of 2, or \(\ln 2\). But you don’t have to know what that means to follow the rest of the post.)

Now, let’s reorder the elements of \(S\). Notice that the numbers on the bottom alternate between being odd and even. Let’s mix things up so that they go ‘odd even even odd even even’ instead, being careful to keep the sign of every element the same as it was before:

\[1 - \frac{1}{2} - \frac{1}{4} + \frac{1}{3} - \frac{1}{6} - \frac{1}{8} + \frac{1}{5} - \frac{1}{10} - \frac{1}{12} + \ldots\]

To reiterate, we have not added or removed any elements here, just reordered the elements of \(S\). But if you look carefully at this new series, you’ll see that the positive terms are all followed by the negative of their halves – I’ll group these together in brackets to make it clearer:

\[\left(1 - \frac{1}{2}\right) - \frac{1}{4} + \left(\frac{1}{3} - \frac{1}{6}\right) - \frac{1}{8} + \left(\frac{1}{5} - \frac{1}{10}\right) - \frac{1}{12} + \ldots\]

Now let’s compute the bits in the brackets and give a name to the new series:

\[T:\ \frac{1}{2} - \frac{1}{4} + \frac{1}{6} - \frac{1}{8} + \frac{1}{10} - \frac{1}{12} + \ldots\]

But what’s this? If you compare a term of \(T\) to the term in the corresponding position in the original series \(S\), you’ll see that it’s exactly half. And indeed, the limit of \(T\) is exactly half the limit of \(S\) – around 0.34 (more precisely, \(\frac{1}{2}\ln 2\)).
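You can watch this happen numerically. Here is a sketch in Python (the generator names are my own): it adds up a large number of terms of \(S\) and of the rearranged series, and the two partial sums head for different limits.

```python
import math
from itertools import islice

def original_series():
    """Terms of S: 1 - 1/2 + 1/3 - 1/4 + ..."""
    n = 1
    while True:
        yield (-1) ** (n + 1) / n
        n += 1

def rearranged_series():
    """The same terms in 'odd, even, even' order:
    1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + 1/5 - ..."""
    k = 1
    while True:
        yield 1 / (2 * k - 1)   # k-th positive (odd-denominator) term
        yield -1 / (4 * k - 2)  # ...followed by the negative of its half
        yield -1 / (4 * k)      # ...and the next even-denominator term
        k += 1

s = sum(islice(original_series(), 100000))
t = sum(islice(rearranged_series(), 99999))  # a whole number of 3-term groups
print(round(s, 3), round(t, 3))              # 0.693 0.347
print(round(math.log(2), 3), round(math.log(2) / 2, 3))  # the same two numbers
```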

So just by rearranging the terms of \(S\) we have completely changed its limit! That might seem surprising, but Riemann’s rearrangement theorem tells us that this is the tip of the iceberg. It is actually possible to rearrange the terms of \(S\) so that it converges to any number you care to name – or so that the series diverges!

So, be very careful when handling infinite series! Here endeth the lesson. But to continue with the mathematics a little: this happens because \(S\) is what is termed *conditionally convergent*, meaning that if you replace all its negative terms with their positives:

\[H:\ 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} + \frac{1}{9} + \ldots\]

you get a series which doesn’t converge. \(H\) is called the harmonic series; that it diverges was one of the first truly surprising facts that mathematicians discovered about infinite series. *Absolutely convergent series* (ones where the corresponding entirely positive series converges, as \(0.9 + 0.09 + 0.009 + \ldots\) does) are much better behaved, and Riemann’s theorem doesn’t apply there.

**Finally, a shameless plug:** All this material is covered in my book Maths 1001.

[1] This is the case \(x=1\) of the series expansion for \(\ln(1+x)\), which is given by the *Mercator series*: \(x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \ldots\)