Category Archives: Mathematics

1200-year-old problem ‘easy’

Oh, for once this is the BBC at its virtual best. Lots and lots of nonsense published for reasons no human will ever understand. The article tells us about Dr. James Anderson, from the University of Reading’s computer science department, and is subtitled “Schoolchildren in Caversham have become the first in the country to learn about a new number – ‘nullity’ – which solves maths problems neither Newton nor Pythagoras could conquer.” Now if that isn’t something.

So here are some quotes from the article.

The theory of nullity is set to make all kinds of sums possible that, previously, scientists and computers couldn’t work around.

“We’ve just solved a problem that hasn’t been solved for twelve hundred years – and it’s that easy,” proclaims Dr Anderson having demonstrated his solution on a whiteboard at Highdown School, in Emmer Green.
– Quote 1 from the article

Computers simply cannot divide by zero. Try it on your calculator and you’ll get an error message.

But Dr Anderson has come up with a theory that proposes a new number – ‘nullity’ – which sits outside the conventional number line (stretching from negative infinity, through zero, to positive infinity).
– Quote 2 from the article
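For the record, floating-point hardware has had a value that “sits outside the conventional number line” for decades: IEEE 754 gives you infinities and NaN (“not a number”) for exactly these cases. Here is a quick C sketch of my own (assuming an IEEE 754 platform, which is essentially every desktop CPU these days) showing what really happens when you divide by zero in floating point:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double zero = 0.0, one = 1.0;

        printf("1.0/0.0  = %f\n",  one / zero);   /* +inf under IEEE 754 */
        printf("-1.0/0.0 = %f\n", -one / zero);   /* -inf */
        printf("0.0/0.0  = %f\n", zero / zero);   /* nan: the indeterminate case */

        double n = zero / zero;
        printf("n == n   : %d\n", n == n);        /* 0: NaN is not equal even to itself */
        printf("isnan(n) : %d\n", isnan(n));      /* 1 */

        return 0;
    }

So the error message on your calculator is a design decision (integer division and calculators deliberately complain instead of inventing a result), not evidence that mathematics has been missing a number for twelve hundred years.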

The only valuable thing to come out of all this – in my opinion, at least – is one comment by Kurt Fitzner, although it arrived poorly formatted.

The “problem” of a computer with divide-by-zero errors is not a problem, it’s a feature. It’s not something you need to or even want to fix. You could easily design a computer that doesn’t have an error in that situation if that’s what you want. Replacing the error condition with a new symbol accomplishes nothing. The program still has to deal with the issue in order to present a real-world result to the user. A divide-by-zero error is the way programs do that.

It’s easy to solve a “problem” when you’re the architect of the definition of the problem in the first place. Dr. Anderson first defines a problem: calculators and computers throw an error when you try to divide by zero, and then defines an artificial solution – but the problem was artificial in the first place. We’ve all run into poorly designed programs that don’t handle divide-by-zero errors properly and crash. This isn’t a problem of dividing by zero, this is a problem of a computer program not handling its data properly.

We’ve also all run into programs that attempt to reference a null pointer. By the same reasoning, we could define the memory that a “null pointer” points to as some new type of virtual space called “nullspace” (Trekkies should appreciate my resistance to the temptation to call it “subspace”), and call it valid. Make the computer such that reading from “nullspace” always returns a null. Suddenly no programs crash from dereferencing a null pointer any more. It doesn’t mean that the program is going to now do something useful. It probably means it will end up displaying garbage to the user, hanging in an infinite loop, or branching off to never never land.

As far as it goes mathematically, there’s nothing you can do with nullity on paper that you can’t do by simply leaving it as (0/0) in the equation. So from either approach (mathematically or from a computer science perspective), it’s nonsense.

The author’s own response to some of the critics (or, I should say, alleged response) doesn’t help my opinion. Tossing out the names of two other Ph.D.s and offering vague references to undescribed “axioms” built around this new symbol all reinforce my opinion that Doctor Anderson sounds precisely like the character Robert from the movie “Proof”.
– Kurt Fitzner
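To put Kurt’s point that “the program still has to deal with the issue” in concrete terms, here is a small sketch of my own (the average-of-zero-samples function is a hypothetical example, nothing from the article): whether the machine throws an error or quietly hands back NaN, call it ‘nullity’ if you like, the code that presents a result to the user still has to handle the empty case itself.

    #include <stdio.h>
    #include <math.h>

    /* Hypothetical example: the mean of n samples. With n == 0 the division
     * becomes 0.0/0.0, which IEEE 754 already maps to NaN, precisely the
     * kind of "number outside the number line" nullity is supposed to be. */
    static double average(const double *samples, int n)
    {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += samples[i];
        return sum / n;                 /* n == 0: silently yields NaN */
    }

    int main(void)
    {
        double avg = average(NULL, 0);  /* no samples at all */

        /* Having a symbol for 0/0 has solved nothing: before showing a
         * real-world result we still have to deal with the empty case. */
        if (isnan(avg))
            printf("no samples, no average\n");
        else
            printf("average = %f\n", avg);

        return 0;
    }

Renaming NaN to nullity would change nothing about that last if statement.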

N.B. I noticed too late that no matter what you do to the formatting of a comment on the BBC site, they cut the newlines out of it. In that regard – sorry, Kurt.