If you’ve got a spare hour[1] at some stage here is a video of a talk I gave last month on Gödel, Incompleteness & Unprovable Theorems.

[1] The joy of the pause button is that the hour doesn’t have to form a single contiguous block.

If you shuffle a deck of cards perfectly eight times, you get back exactly to where you started. Here’s a video of the magician Adam West doing it:

In this post, we’ll see why this works. First let’s tighten up our terminology: a “perfect shuffle” here is really what magicians call a *Faro Out-Shuffle:* you split the deck in half, and then interleave the two halves. To make things easier, let’s work with a deck of ten cards, initially in numerical order.

Step one: split 1,2,3,4,5 from 6,7,8,9,10.

Step two: interleave the two halves: 1,6,2,7,3,8,4,9,5,10.

Notice that the outermost cards (1 and 10) remain in place. (That’s why this is an *out-shuffle* — interleaving the other way produces an *in-shuffle*.)

At the same time, notice that the card initially in 4th position is now in 7th, and vice versa. We can indicate this in *cycle notation* by putting the two together in brackets: (4 7).

What about the rest? The card in second position to start with is now in third place, while the card from third position is now in fifth, the card from fifth is now in ninth,… Continuing this line of thought produces the cycle (2 3 5 9 8 6).

So the shuffle is totally described by the collection of cycles (1) (10) (4 7) (2 3 5 9 8 6). (I’ve included the two trivial cycles of length one.)

Now, notice that performing any cycle of length two twice brings us back to where we started: in this case 4 goes to 7 and then back to 4. Similarly, performing a cycle of length six, six times takes us back to the starting position. Since six is a multiple of two, performing this ten-card shuffle six times therefore brings everything back to where it started.
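If you'd like to check this yourself, here is a quick Python sketch (the function names `out_shuffle` and `cycles` are my own) that performs the out-shuffle and reads off the cycle decomposition:

```python
from math import lcm

def out_shuffle(deck):
    """One Faro out-shuffle: split the deck in half, then interleave
    starting with the top half, so the outermost cards stay put."""
    n = len(deck) // 2
    top, bottom = deck[:n], deck[n:]
    return [card for pair in zip(top, bottom) for card in pair]

def cycles(num_cards):
    """Cycle decomposition of the out-shuffle on positions 1..num_cards."""
    shuffled = out_shuffle(list(range(1, num_cards + 1)))
    # new_pos[p] = position that the card starting at position p moves to
    new_pos = {card: i + 1 for i, card in enumerate(shuffled)}
    seen, result = set(), []
    for start in range(1, num_cards + 1):
        if start in seen:
            continue
        cycle, current = [], start
        while current not in seen:
            seen.add(current)
            cycle.append(current)
            current = new_pos[current]
        result.append(tuple(cycle))
    return result

print(cycles(10))  # [(1,), (2, 3, 5, 9, 8, 6), (4, 7), (10,)]
print(lcm(*(len(c) for c in cycles(10))))  # 6 shuffles restore the deck
```

The number of shuffles needed is the lowest common multiple of the cycle lengths, which is why six does the job here.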

Ok. What about a full 52-card deck? Well, for cards initially in the top half of the deck (i.e. numbered 1-26), the card in \(n\)th position is moved to \((2n-1)\)th position after one shuffle (meaning \(1 \to 1,\ 2 \to 3,\ 3 \to 5,\ \ldots,\ 26 \to 51\)).

For cards in the second half (numbered 27-52), the card in \(n\)th position is moved to \((2n-52)\)th position (meaning \(27 \to 2,\ 28 \to 4,\ \ldots,\ 51 \to 50,\ 52 \to 52\)).

Repeatedly applying the first rule, we can see that 2 goes to 3 which goes to 5 which goes to 9 which goes to 17 which goes to 33. Now applying the second rule, 33 goes to 14 which goes to 27 (by the first rule) which goes back to 2 (by the second rule). Thus we have the cycle:

(2 3 5 9 17 33 14 27).

The same reasoning tells us the other cycles which make up the shuffle, and altogether they turn out to be

(2 3 5 9 17 33 14 27) (4 7 13 25 49 46 40 28) (6 11 21 41 30 8 15 29)

(10 19 37 22 43 34 16 31) (12 23 45 38 24 47 42 32) (20 39 26 51 50 48 44 36)

(18 35) (1) (52)

which is to say six cycles of length eight, one of length two, and the two trivial cycles of length one. Each of these cycles will take us back to the starting position when applied eight times… as the video above shows!
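As a sanity check, a few lines of Python (a sketch of my own, not anything from the video) can simply count shuffles until the deck comes back:

```python
def out_shuffle(deck):
    """One Faro out-shuffle: split in half, interleave from the top."""
    n = len(deck) // 2
    return [card for pair in zip(deck[:n], deck[n:]) for card in pair]

def shuffles_to_restore(num_cards):
    """Count how many out-shuffles return a deck to its starting order."""
    start = list(range(num_cards))
    deck, count = out_shuffle(start), 1
    while deck != start:
        deck, count = out_shuffle(deck), count + 1
    return count

print(shuffles_to_restore(10))  # 6
print(shuffles_to_restore(52))  # 8
```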

**Question:** what if we do a *Faro in-shuffle* instead? This means interleaving the other way. On ten cards, after one shuffle the deck reads 6,1,7,2,8,3,9,4,10,5. How many applications of this rule are needed to bring us back to the starting position in this case?

I’m teaching a third year Combinatorics module this term, which has led me to revisit some old friends: permutations and combinations. I thought I knew them well, but have already learned something new!

First a quick refresher course:

**Combinations:** say \(C(n,k)\) is the number of subsets of size \(k\) from a collection of size \(n\). So if you select 3 pieces of fruit from a total of 6 (all different), and don’t care about the order, then the number of selections you might make is \(C(6,3)\). The formula[1] here is \(C(n,k) = \frac{n!}{k!(n-k)!}\), and \(C(6,3)=20\).

**Permutations:** If you do care about the order of selection (maybe you want to arrange your fruit into military columns so that “strawberry, apple, pear” is different from “apple, pear, strawberry”), then the number you need is \(P(n,k)\), the number of permutations of \(k\) objects from \(n\). Its formula is \(P(n,k) = \frac{n!}{(n-k)!}\), and \(P(6,3)=120\).

**Total numbers of combinations:** A set of size \(n\) has \(2^n\) subsets in total. To say the same thing another way, \(C(n,0)+C(n,1)+\ldots+C(n,n) = 2^n\). So the total number of collections of fruit you could take from the box of 6 is \(2^6=64\) (this includes both the empty-plate option and the take-everything option).

All of the above I have known for many years. But I recently discovered something new (to me):

**Total numbers of permutations:** if you pick between \(0\) and \(n\) objects, in order, from a set of size \(n\), how many choices can you make? To ask the same thing another way, what is \(P(n,0)+P(n,1)+\ldots+P(n,n)\)? The answer turns out to be \(\lfloor n! \cdot e \rfloor\), that is, the number you get if you round \(n! \cdot e\) down to a whole number. So the number of military columns of fruit (including the empty column) you could make from your six is \(\lfloor 6! \cdot e \rfloor = 1957\).

I think this is very cool, and the proof of this is not hard. It’s been written out very nicely here by Michael Lugo. (When I first saw this result I assumed that it would be a consequence of Stirling’s approximation – but that’s not required.)
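Both identities are easy to check computationally. Here is a sketch using Python's standard `math.comb` and `math.perm` (available from Python 3.8; the helper names are my own):

```python
import math

def total_subsets(n):
    # C(n,0) + C(n,1) + ... + C(n,n)
    return sum(math.comb(n, k) for k in range(n + 1))

def total_arrangements(n):
    # P(n,0) + P(n,1) + ... + P(n,n)
    return sum(math.perm(n, k) for k in range(n + 1))

for n in range(1, 10):
    assert total_subsets(n) == 2**n
    assert total_arrangements(n) == math.floor(math.factorial(n) * math.e)

print(total_subsets(6))        # 64
print(total_arrangements(6))   # 1957
```

(For very large \(n\) the floating-point product \(n! \cdot e\) would need more care, but in this small range the check is safe.)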

[1] These formulae all use factorial (\(!\)) notation, where \(7! = 7 \times 6 \times 5 \times 4 \times 3 \times 2 \times 1\).

There’s been a bit of a rumpus recently about this video from *Numberphile* which purports to show that \[1+2+3+\ldots = -\frac{1}{12}\]

It got posted on Slate with an accompanying blogpost by Phil Plait, which then got shot to pieces by various incensed bloggers, and Phil now has a much better follow-up post.

I’m not going to rehearse the story again – the short version is that the equation above is a really bad and misleading way of communicating a really interesting and surprising result. But I thought I’d share another titbit which has a similar moral, namely:

Infinite series often behave in very weird ways.

This in turn has two corollaries, both amply illustrated in the story above:

Unexpected and cool things can happen.

but at the same time

It’s very easy to start talking nonsense if you’re not *extremely* careful.

The video is about the series \(1+2+3+4+5+\ldots\) This is what we call a *divergent* series, meaning that as you add up more and more terms, the series grows without limit. An example of a *convergent* series is \(0.9+0.09+0.009+0.0009+\ldots\) Here, as you add up more and more terms, you get ever closer to the fixed value 1. This is the *limit* of the series. Convergent series are generally friendlier beasts than divergent series, but the purpose of this post is to illustrate that even convergent series can do shocking things.

Humans are not used to infinite series. But we are used to adding up finite collections of numbers. If we have three numbers \(A,B,C\) we know that the order in which we add them up does not matter: \(A+B+C=B+A+C=A+C+B\) etc. This sort of thinking is so natural and automatic that it is all too easy to transfer it to the realm of infinite series, where such rules may not hold – not even (necessarily) for convergent series.

This post is about the following series: \[S:\ 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \frac{1}{7} - \frac{1}{8} + \frac{1}{9} - \ldots\]

The first obvious question is: does \(S\) converge? And the answer is yes. It turns out to converge to a number near 0.69. (More precisely, but unexpectedly and coolly, the limit is[1] the natural logarithm of 2, or \(\ln 2\). But you don’t have to know what that means to follow the rest of the post.)

Now, let’s reorder the elements of \(S\). Notice that the numbers on the bottom alternate between being odd and even. Let’s mix things up so that they go ‘odd even even odd even even’ instead, but being careful to keep the sign of every element the same as it was before:

\[1 - \frac{1}{2} - \frac{1}{4} + \frac{1}{3} - \frac{1}{6} - \frac{1}{8} + \frac{1}{5} - \frac{1}{10} - \frac{1}{12} + \ldots\]

To reiterate, we have not added or removed any elements here, just reordered the elements of \(S\). But if you look carefully at this new series, you’ll see that the positive terms are all followed by the negatives of their halves – I’ll group these together in brackets to make it clearer:

\[\left(1 - \frac{1}{2}\right) - \frac{1}{4} + \left(\frac{1}{3} - \frac{1}{6}\right) - \frac{1}{8} + \left(\frac{1}{5} - \frac{1}{10}\right) - \frac{1}{12} + \ldots\]

Now let’s compute the bits in the brackets and give a name to the new series:

\[T:\ \frac{1}{2} - \frac{1}{4} + \frac{1}{6} - \frac{1}{8} + \frac{1}{10} - \frac{1}{12} + \ldots\]

But what’s this? If you compare a term of \(T\) to the term in the corresponding position in the original series \(S\), you’ll see that it’s exactly half. And indeed, the limit of \(T\) is exactly half the limit of \(S\) – around 0.35 (more precisely, \(\frac{1}{2}\ln 2\)).

So just by rearranging the terms of \(S\) we have completely changed its limit! That might seem surprising, but Riemann’s rearrangement theorem tells us that this is the tip of the iceberg. It is actually possible to rearrange the terms of \(S\) so that it converges to any number you care to name – or so that the series diverges!
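If you'd like to watch the two different limits emerge numerically, here is a short Python sketch (function names are my own) comparing partial sums of the original series with the rearranged one, taken in its natural groups of three:

```python
import math

def alternating_harmonic(n_terms):
    """Partial sum of S = 1 - 1/2 + 1/3 - 1/4 + ..."""
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_groups):
    """Partial sum of the 'odd even even' rearrangement, taken in
    groups of three: 1/(2k-1) - 1/(4k-2) - 1/(4k) for k = 1, 2, ..."""
    return sum(1 / (2*k - 1) - 1 / (4*k - 2) - 1 / (4*k)
               for k in range(1, n_groups + 1))

print(round(alternating_harmonic(10**6), 4))  # 0.6931, approaching ln 2
print(round(rearranged(10**6), 4))            # 0.3466, approaching (ln 2)/2
```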

So, be very careful when handling infinite series! Here endeth the lesson. But to continue with the mathematics a little: this happens because \(S\) is what is termed *conditionally convergent*, which means that if you replace all its negative terms with their positives:

\[H:\ 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} + \frac{1}{9} + \ldots\]

you get a series which doesn’t converge. \(H\) is called the harmonic series; that it diverges was one of the first truly surprising facts that mathematicians discovered about infinite series. *Absolutely convergent series* (ones where the corresponding entirely positive series converges, as \(0.9+0.09+0.009+\ldots\) does) are much better behaved, and Riemann’s theorem doesn’t apply there.

**Finally, a shameless plug:** All this material is covered in my book Maths 1001.

[1] This is the case \(x=1\) of the series expansion for \(\ln(1+x)\), which is given by the *Mercator series:* \(x - \frac{x^2}{2} + \frac{x^3}{3} - \ldots\)

For many years, maths lessons have run in roughly the same way: the teacher stands at the blackboard, giving a mini-lecture on some mathematical topic or technique, introducing the idea, outlining the theory, and then running through an example or two. The students would sit patiently (or not), taking notes (or not), so that at the end of the day they would be able to tackle some exercises on the same subject as homework… or not.

As the years have rolled by, the blackboard may have been replaced by a whiteboard, and then by a smartboard, but otherwise the formula has remained by and large the same: theory in the classroom followed by exercises for homework.

The idea of “flipping” the classroom is to reverse this process. The insight is that in the age of YouTube, today’s students can perfectly well take in the lectury bit at home — with the added advantage of being able to pause, rewind, and rewatch in their own time. This leaves lesson-time free for practice, providing the teacher with more time to go around talking to students individually or in groups, meaning more opportunities to help those who are struggling or to supply extra challenges to those who need stretching.

I have no experience of this method myself — all the same it immediately appeals to me as a way in which technology may really be able to improve the teaching and learning experience, rather than just adding bells and whistles. One person who is convinced by this new approach is Colin Hegarty, an old friend of mine from university, who I’m delighted to say has returned to the world of maths after a few years in finance, and is now expertly flipping classrooms in North London.

Even if you don’t have the good fortune to be one of Mr Hegarty’s students, you can still peruse the 611 (!) videos that he and his colleague Brian Arnold have created for this purpose, all of which are freely available on Hegartymaths, or on YouTube. Judging by the 800,000-odd views their videos have gathered, it’s not only their own pupils who are benefiting from these experiments in flipping.

As a sample I’m embedding one in which Colin talks through the Chinese postman problem:

You can read my interview with Kevin Houston (or should that be Kevin’s interview with me?) on his blog.

**A sure route to mathematical fame is to resolve a problem that has stood open for centuries, defying the greatest minds of previous generations. In 1837, Pierre Wantzel’s seminal analysis of constructible numbers was enough to settle not just one, but an entire slew of the most famous problems in the subject, namely those relating to ruler-and-compass constructions.**

As with so much in the history of mathematics, the topic had its origins in the world of ancient Greece. The geometers of that period were interested not only in contemplating shapes in the abstract, but also in creating them physically. Initially, this was for artistic and architectural purposes, but later for the sheer challenge it posed. In time, mathematicians came to understand that the obstacles they encountered in these ruler-and-compass constructions brought with them a great deal of mathematical insight. Nowhere was this more true than in the ancient enigma of squaring the circle, and what that revealed about the number \(\pi\).

**Classical problems**

Greek geometers decided on a set of simple rules for building shapes, using only the simplest possible tools: a ruler and pair of compasses. The ruler is unmarked, so it can only be used for drawing straight lines, not for measuring length (therefore these are sometimes called straight-edge-and-compass constructions). The compass is used to draw circles, but it may only be set to a length that has already been constructed.

Today’s schoolchildren still learn how to use these devices to divide a segment of straight line into two equal halves and to bisect a given angle. These were two of the very first ruler-and-compass constructions. A more sophisticated technique allows a line to be trisected, that is, divided into three equal parts. What of trisecting an angle, though? Various approximate methods were discovered, which were accurate enough for most practical purposes, but no one could find a method which worked exactly. This proved a mystery, and gave the first hint that there was real depth beneath this question. But what does it mean if one task can be carried out by ruler and compass and another cannot?

The most famous of the ruler-and-compass problems, and indeed one of the most celebrated questions in mathematics, is that of squaring the circle. The question is this: given a circle, is it possible to create, by ruler and compass, a square which has exactly the same area? At the heart of this question lies the number \(\pi\) (see page 54). The problem ultimately reduces to this: given a line 1 unit long, is it possible to construct by ruler and compass another line exactly \(\pi\) units long?

Another classical problem was that of doubling the cube. This problem had its origins in a legend from around 430 BC. To overcome a terrible plague, the citizens of the island of Delos sought help from the Oracle of Apollo. They were instructed to build a new altar exactly twice the size of the original. At first they thought it should be easy: it could be done by doubling the length of each side. But that process leads to the volume of the altar increasing by a factor of 8 (since that is the number of smaller cubes that can fit inside the new one). To produce a cube whose volume is double that of the original, the sides need to be increased by a factor of \(\sqrt[3]{2}\) (that is, the cube root of 2, just as 2 is itself the cube root of 8). The question of doubling the cube therefore reduces to this: given a line segment 1 unit long, is it possible to construct another exactly \(\sqrt[3]{2}\) units long?

**Wantzel’s deconstruction**

Working in the turbulent setting of France in the early 19th century, Pierre Wantzel turned these ancient questions over in his mind. He recognized that the form of many ruler-and-compass questions is the same. The key to them was this: given a line 1 unit long, which other lengths can be constructed? And which cannot? If a line of length \(x\) can be constructed, then Wantzel deemed \(x\) a *constructible* number. Setting aside the geometrical origins of these problems, he devoted himself to studying the algebra of constructible numbers. Some things were obvious: for example, if \(a\) and \(b\) are constructible, then so must be \(a+b\), \(a-b\), \(a \times b\), and \(a \div b\). But these operations do not exhaust the range of constructible numbers; Wantzel realized that it is also possible to construct square roots, such as \(\sqrt{a}\).

His great triumph came in 1837, when he showed that everything constructible by ruler and compass must boil down to some combination of addition, subtraction, multiplication, division and square roots. Since \(\sqrt[3]{2}\) is a cube root, and cannot be obtained via these algebraic operations, it followed immediately that the Delians’ ambition to double the cube was unattainable. A similar line of thought revealed the impossibility of trisecting an angle.

As for the greatest problem of all, squaring the circle, the final piece didn’t fall into place until 1882, when Ferdinand von Lindemann proved that \(\pi\) is a transcendental number (see page 197). Then Wantzel’s work immediately implied the non-constructibility of \(\pi\), and the impossibility of squaring the circle was finally established.