Thursday, 3 February 2011

"Almost none of the information we think we possess is true."

Title quote taken from Wikipedia, which took it from the QI website.

So, this post was prompted by the pet peeves thread on the Playground, in which I today mentioned my pet peeve of arbitrary contrariness/pedantry. I'll explain what I mean by that in a minute, but suffice it to say my primary examples of it are things I learned from QI. So the subject of this blog post is basically 'Things I learned from/saw on QI with which I disagree'.

So, arbitrary contrariness/pedantry. It may seem strange that this annoys me, because I can be rather contrary and pedantic myself. But in a way, that's why. They are things which I do, and I hate to see them abused. The arbitrariness is my issue with it. Actually, 'arbitrary' may not be the right word. It's not just that there is no clear reason for it, but also that there seems to me to be an obvious reason not to do it (To wit, that it's annoying).
This is best illustrated through examples. So, examples you shall have!

1. Berries.
As I'm sure many if not all of you are aware, a significant number of fruits with names containing the word 'berry' are not, strictly speaking, berries by the botanical definition. Of course, one might dispute the point on QI (Though I'm pretty sure no-one did), on the grounds that a dictionary will give you definitions other than the botanical one, which do encompass said fruits. But I digress. The point is the botanical definition.
Which do you think came first? The names of the fruits, or the botanical definition?
I would guess the fruit names came first. In which case, why would some botanist come up with this definition which does not include many of said fruits? If they needed a name for that particular subset of fruits that are considered 'true berries', why couldn't they choose or invent one which would be less confusing?
It appears to be simple contrariness, for no apparent reason, while the confusion caused is an obvious reason to do otherwise. Or, alternatively, one possible reason for it is simply to facilitate arbitrary pedantry, in the form of "Oh no, strawberries aren't berries, whyever would you think that? Utterly ridiculous! Ha ha ha!" Whereupon the layman, having been unfairly tricked into ignorance, through a betrayal of his language, experiences considerable embarrassment, and a not insignificant desire to punch the offending botanist in the nose.

Now this one, I'll grant you, is based on an assumption on my part. An assumption which could be at least partly wrong. It's possible the botanical definition of 'berry' predates the naming of at least some of the fruits (Though I'm nigh-certain it can't have predated all of them), in which case the blame goes to whoever named the fruits instead; but a failure in nomenclature has still taken place. Most generously, I could assume that at the time of the naming in question, whichever one it was, the fruits were believed to fit the definition, but the more advanced science of today allows us to determine that they don't. But even if that's the case, it's still annoying.

2. Henges
This one is much like the 'berries' example, but worse. Stonehenge, it turns out, as revealed on QI, is not a henge.
Do you know where the word 'henge' comes from? This was also revealed on QI.
Yes, the word henge was taken from the name Stonehenge, and then given a definition which excludes Stonehenge itself. No assumption here.
And my generous possible assumption for the previous example doesn't really apply here. The definition of a berry has something to do with which part of the flower the edible bit develops from; it's a matter of plant biology, so you can see why one might make a mistake about that in less scientifically advanced times. But the definition of a henge has something to do with the placement of a ditch near a stone circle. The reason Stonehenge is not technically a henge is because the ditch is in the wrong place. And I'm pretty sure we haven't made any staggering scientific advances in the field of looking at where a ditch is in relation to some stones in the ground.
So, one concludes contrariness. And gets annoyed by it.

There may be other such examples. In fact I consider it pretty likely. But I don't know what they are.

I do, however, have what may be considered an example specifically for the pedantry rather than the contrariness, and this is the one case where I genuinely disagree with QI. I mean, with the previous two, should I become famous, go on QI, and get asked the question, I intend to give the obvious 'wrong' answers, in protest against contrary nomenclature, and argue my case (I don't expect it to work, but I'll do it). But they're not responsible for it - they're just pointing it out to the nation. QI is a wonderful means of education on the subject of popular misconceptions. But on this one I disagree with them, because I don't think it's quite so objective.

3. Henry VIII's wives.
Of course, everyone (At least everyone from Britland who paid any attention in history lessons) knows that Henry VIII had 6 wives. And QI has told us this is wrong. I disagree.
The argument that Henry VIII had fewer than 6 wives is, as I recall, that several of his marriages were annulled, and therefore considered never to have happened.
But they did happen. He hasn't gone Big Brother on us and erased all records of them after the marriages were annulled. So, regardless of what he may have later declared, the marriages did happen, with the arguable exception of Anne of Cleves, since consummation was considered a crucial element of a marriage in those days, I believe. So by the standards of the time, you can argue that she didn't count.
The others though, he did marry. He declared after the fact that he didn't and they weren't his wives, but while he was married to them he would have said that they were. As I see it, it's a matter of when you count. Because of course, if you count at just one point in his life, then he either had no wives or one wife. He didn't have 6 (Or 5) wives all at the same time, certainly. So this suggests that we should look at multiple points in his life. And for each of his 6 wives, I believe there was some point in history when she was officially considered to be his wife. Add them up, and voilà! Six wives.

Now, one can argue the other side, of course. But that does seem a bit like arbitrary pedantry to me.
In any case, the simple fact that one can debate the point suggests to me that QI was wrong to declare outright that it was untrue to say Henry VIII had 6 wives. You can contend that he didn't, using certain arguments, but you can also contend that he did, using different arguments. And in the end, it all really just comes down to arguing over things which really don't matter. We know, to some level of accuracy, the historical facts. We don't need to keep score.


So, changing the subject. There's one other thing which I saw on QI with which I vehemently disagree. I was already aware of it before seeing it mentioned on QI, but because it was mentioned on QI and it annoys me, I do associate it with the other three I mentioned above.

1+1=2

I have no problem with the above equation. I do, however, take issue with the fact that apparently it had to be proved by a really good pure mathematician. I was first told about this in a set theory lecture when we came to the mention of Russell's paradox, and my lecturer mentioned that of course, Bertrand Russell, who first found this paradox, then went on to (if I recall correctly) come up with axiomatic set theory, which sorts out the paradox, and that he basically built it from the ground up starting with 1+1=2, and it was so amazingly insightful of him to realise that this had to be done, and blah blah blah blah blah.
And of course, on QI, the guests joked about it, like "Surely that's a bit late? Surely if it turned out that one plus one didn't equal two, the whole of maths would be screwed?" (I'm paraphrasing, but that was the gist)
Both these things bother me because, well, taking the second point first: maths is not screwed, and you cannot destroy the foundations of maths. It's true that if one plus one didn't equal two, maths would be screwed, but it does. Of course it does. And it doesn't take any great insight or genius to prove it. I can prove it. Watch:

1+1=2.
By definition.
QED.

And there we go. That's it, that's all you need. You cannot make one plus one not equal two unless you change the definition of 'one', 'plus', 'equals', or 'two'. This is a concept which predates maths. Because at some point in the extremely distant past, some early human decided that when you had a thing, that was one thing (1). And when you got more things, that was adding things (+). And when you'd changed how many things you had, you got an answer, and if it was the same as what someone else had, they were equal (=). And finally, they decided that when you had one thing and added one more thing, rather than call that one and one more one, which was a bit too long-winded, they would call that having two things (2).
OK, so actually this early human probably wasn't speaking English, but you know what I mean.
You don't need to prove that 1+1=2, because the definition of 2 is 1+1. You don't need to prove it again to build from the ground up, because it is the ground which you've already built up from.
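If you want to see that spelled out formally, here's a minimal sketch in Lean (a proof assistant; the toy type `N` and the names `add`, `one` and `two` are all my own inventions, and this is not a reconstruction of what Russell actually did):

```lean
-- A toy version of the natural numbers: a number is either zero
-- or the successor ("one more than") of another number.
inductive N where
  | zero : N
  | succ : N → N

open N

-- Addition, defined by recursion on the second argument.
def add : N → N → N
  | n, zero   => n
  | n, succ m => succ (add n m)

def one := succ zero
def two := succ one  -- two is *defined* as one more than one

-- Both sides unfold to the same thing, so `rfl`
-- ("true by definition") is the entire argument.
example : add one one = two := rfl
```

Once the definitions are written down, `rfl` - 'true by definition' - is the whole proof. The hard part, I gather, is justifying the definitional machinery itself.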
Now, the actual circumstances of this proof were more complicated. There were apparently paradoxes shaking the very foundations of maths, or something (Though of course I've already indicated my scepticism as regards the feasibility of doing that). It may be that the point was not to prove 1+1=2, but rather to prove that you could prove 1+1=2 without recourse to real-world examples. I'm still uncertain about it, though, because it just sounds like rationalism taken way too far. I may be a proponent of avoiding contact with the real world, but a little bit of empirical data can be a very good thing under certain circumstances.

And, in the course of ranting about maths it appears I have forfeited my opportunity to go to the library and get out some books on it. Oh well, there's always tomorrow (And tomorrow, and tomorrow...)

9 comments:

  1. The berries thing (and even more so with the henges) is kinda ridiculous, I completely agree; there's no point redefining an existing word to mean something that excludes the origin of the word.

    The Henry VIII rant pretty much sums up what I thought to myself first time I saw that episode of QI, and I remember Jonty, Jonty's mum and myself having a conversation about it too.

    I have to admit the maths issue I never really understood, so thank you for explaining that it isn't actually an issue at all :)

    Mostly just commenting here because I'm watching QI right now (instead of revising, oops) and there are occasional bits which I disagree with and it's a fun show to rant at whilst you watch it :P

    ReplyDelete
  2. "I'm pretty sure we haven't made any staggering scientific advances is the field of looking at where a ditch is in relation to some stones in the ground."

    I lol'd.

    I'll try to comment on the 1+1=2 issue tomorrow after I've thought about it on the bus. Cos I do have opinions on the matter.

    ReplyDelete
  3. OK, so here’s my understanding of it, presently bolstered by a chapter of Gödel, Escher, Bach:

    Sure, you can say that by definition 1+1=2, but that doesn’t tell you anything about the properties of this number “2” that you’ve created, other than that it’s some kind of calligraphic or lexical shorthand for “1+1”. To have a mathematically useful definition of “2”, you need to be able to relate it to the set of natural numbers in which every number above 0 has a successor that’s also a member of the set of natural numbers. To equate “the following member of the set of natural numbers” with “this number plus one” is introducing a new axiom that can’t be derived from formal logic. (To go off on a tangent: to define the successor of *any particular* natural number as that number plus one would be an individual axiom, so in this axiomatic number theory you’d need as many axioms defining numbers as you had numbers. And then who’s to say that the successor of the successor of a natural number equals that number plus “2”? “2” might object very strongly to being used in such a manner. You’d need a whole nother set of axioms to define what happens when you add “2” to something… 3… etc… you’d need infinitely many axioms.)
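    (To make the successor business concrete: the standard dodge, in a modern mechanised setting at least, is to define addition itself by recursion, so that the infinite family of successor axioms collapses into a single theorem. A rough Lean sketch - the notation and names are mine, and this is emphatically not what Principia actually does:

    ```lean
    inductive N where
      | zero : N
      | succ : N → N

    open N

    -- One recursive definition of addition...
    def add : N → N → N
      | n, zero   => n
      | n, succ m => succ (add n m)

    def one := succ zero

    -- ...and "the successor of n is n plus one" becomes one theorem
    -- covering every n, rather than one axiom per number.
    theorem succ_eq_add_one (n : N) : succ n = add n one := rfl
    ```

    The price is that recursion itself is now taken as primitive, which is exactly the kind of thing you'd have to earn rather than assume if you were building from pure logic.)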

    It may not be particularly interesting whether or not we can prove 1+1=2 without recourse to real world examples, but if we can’t prove 1+1=2 using formal logic, or rather, if we have to resort to definitions that don’t proceed from formal logic to prove 1+1=2, then we are admitting that the formal system we use to do maths is not necessarily logically consistent. This implies that if a paradox arises, it might be a fundamental feature of the system and not able to be resolved. This is bad if you’ve come up with a proof of a theorem and want to know if your proof is valid, because proof requires absolute truth, and paradoxes deny absolute truth. So Russell and A. N. Whitehead wrote Principia Mathematica as an attempt to derive the whole of mathematics from formal logic, assuming nothing and defining nothing, and thus to eliminate paradox (or more importantly, establish which methods of mathematical reasoning were valid within formal logic and which were invalid, which would provide the acid test for any proof of any theorem.) Alas, Gödel showed up and nuked Principia with his Incompleteness Theorem, but shit happens.

    With regard to set theory in particular, Russell and Whitehead tried to come up with an axiomatic set theory – type theory – in which it was impossible to construct a paradox, by arranging sets and statements about sets into a variety of levels consisting of elements from lower levels, and declaring that a statement that couldn’t be placed in any of the levels, or had to be placed in more than one level at the same time, was an invalid statement. But there’s a reason we use Zermelo-Fraenkel set theory and not type theory: the scope of any sort of statement *about* type theory as a whole (like “it’s internally consistent”) would need to encompass the highest possible level, so the statement would therefore need to take place on a level that wouldn’t exist, and would be disallowed. When the very act of discussing a theory is the most blatant possible violation of that theory, you know something’s gone wrong.

    Chi aka Sushi Monster
    Professor of Looking At Where Ditches Are In Relation To Stones In The Ground
    Queen Edward's University

    ReplyDelete
  4. OK, I've NO idea why Blogger thought that was spam... nor can I find a means of turning off the spam filter (Since I'm not expecting many people to try and spam my blog) or to create any sort of 'trusted list' whose comments would be automatically assumed not to be spam. Hm.

    My issue with that explanation is that you're considering the set of natural numbers to be something separate to the number system we have created, whereas I understand the set of natural numbers as being the numbers 1,2,3,... which were already defined. For that matter, what defines a number as being 'the successor of the previous one in the set of natural numbers'? Because I'm pretty sure that is just 'the previous number plus one', and that's how you define the set of natural numbers. It seems ridiculously esoteric to consider that set as having some sort of even more abstract meaning separate to that by which it was originally conceived.
    I'd be interested to read more about this. But as it stands, I don't particularly expect my opinions to change if/when I do.

    ReplyDelete
  5. So your early human sees one buffalo, and then another buffalo comes and joins it, and the early human thereby discovers the concept of "2 buffalo" in which the concept "2" is defined as the cardinality of the set of buffalo, and as such does not exist independently of the buffalo. Now if she goes on a peyote bender, and ends up seeing four buffalo, she can interpret this one of two ways: either "2 buffalo" has been added to the existing "2 buffalo", or the "2 buffalo" has been doubled, reduplicated, multiplied by a different kind of 2, an abstract form of 2, not 2 of anything. To call those two different 2s the same *concept* is a leap of logic. (This early human hasn't encountered the idea of a number, and of course has no numerals to represent these 2s, so to say that the 2 of "2 buffalo", and the 2 by which the set of buffalo has been multiplied, are the same *number*, means nothing.) In buffalo-free terms: to declare that natural numbers may be generated by abstraction from cardinal numbers is axiomatic, not logical (and wrong, come to think of it, when you start running into infinities).

    OK, there's nothing intrinsically wrong with assuming things to be true if the real world seems to indicate that they’re true, but if that’s your golden rule, why not just assume the Axiom of Choice too? Why not just assume Euclid’s Parallel Postulate? Why not just assume that negative numbers have no square roots of any kind? Why not just study physics?

    ReplyDelete
  6. I'm pretty sure people do just assume the Axiom of Choice, so that may not be the best example...
    Furthermore, you're making a strawman argument. Nowhere did I say that "assuming things to be true if the world seems to indicate that they're true" is my 'golden rule'. But assuming things to be true if they are self-evident seems to me to be a fairly sensible rule (Note, still not my 'golden rule')
    I don't see your point about "2 buffalo" not existing independently of the buffalo. There's a reason I didn't mention any specific item but simply said 'thing(s)'. You claim that an abstract 2 and '2 buffalo' are different kinds of 2. I disagree. There is one kind of 2, which in one instance has been attached to an item, to wit, buffalo. You're working in particular units, but the units are not an inherent part of the numbers, nor vice versa.
    As far as the question of natural and cardinal numbers goes, to quote wikipedia: "The cardinality of a finite set is a natural number – the number of elements in the set." Now, infinite sets would not logically follow from anything I've said, since what I said deals with real things and you can't see an infinite set of anything, so what I say still applies. I'm pretty sure the natural numbers are just the finite cardinal numbers, aren't they?

    ReplyDelete
  7. The fact that we can talk about natural numbers and finite cardinal numbers, the fact that they have two different names, suggests they are in some way two different things, no? Sure, for any finite cardinal number you can find a natural number going by the same name, but they are defined differently, and as such are different until you either prove otherwise, or accept their equivalence as an axiom of your system. The fact that it may be self-evident (a subjective measure) when expressed in natural language, which contains this convenient, catch-all concept "number", is neither here nor there. Define self-evidence! In what sense is it not self-evident that, if there is no number larger than infinity, then the set of cardinal numbers must stop at infinity? In what sense was it not self-evident to the inventor of "i" that there was no number that could be squared to give a negative number, since the operation of squaring was *defined* in such a way as to always produce a positive result?

    It doesn't matter if the units are buffalo or just things, the point is that your early human has defined 2 as the cardinality of a set consisting of one thing and one other thing. That you have now invented a particular "number", whatever such a thing is, that can be applied to other operations unconnected to the cardinality of sets, is the step in your reasoning I object to. Even if the abstraction from cardinal number to natural number were self-evident, this doesn't give you a schema for generating all natural numbers; you can only generate them one at a time from examining the cardinality of sets of various sizes. Also, who's to say what order you're supposed to put these natural numbers in, once you've generated them? How do you know, when you're combining sets together to allow you to generate more natural numbers, that the addition you're doing is commutative? So many questions...
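    (To be concrete about one of them: in a modern formal development, commutativity of addition is a theorem proved by induction, not something that falls out of the definition. A rough Lean sketch, names all mine:

    ```lean
    inductive N where
      | zero : N
      | succ : N → N

    open N

    -- Addition by recursion on the second argument.
    def add : N → N → N
      | n, zero   => n
      | n, succ m => succ (add n m)

    -- Neither of these helpers comes for free: each needs its own induction.
    theorem zero_add (n : N) : add zero n = n := by
      induction n with
      | zero => rfl
      | succ m ih => simp [add, ih]

    theorem succ_add (m n : N) : add (succ m) n = succ (add m n) := by
      induction n with
      | zero => rfl
      | succ k ih => simp [add, ih]

    -- Only with both lemmas in hand does commutativity go through.
    theorem add_comm (m n : N) : add m n = add n m := by
      induction n with
      | zero => simp [add, zero_add]
      | succ k ih => simp [add, succ_add, ih]
    ```

    Two auxiliary inductions just to swap the arguments of a plus sign.)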

    ReplyDelete
  8. Of course, because we could never have two different names for the same thing, that would be absolutely unheard of. The word 'synonymous' doesn't actually have a purpose, someone just invented it for no reason. Perhaps they were drunk or intoxicated at the time.
    Why is it neither here nor there? Why should natural language be entirely inapplicable to mathematics?
    Besides which, this is still not what I said. I didn't say "These two things are equivalent." I said "There is only one thing here."

    I dispute that the 2 has been defined in that instance as the cardinality of a set, actually, because that excludes the possibility of fractions. It is possible to have half a buffalo. So we're actually dealing with the real numbers, or at least the positive rational numbers, of which the natural numbers are merely a subset. So we're not actually dealing with cardinality, rather with measurements (Though the subtleties were likely lost on our early human).
    My schema for generating all natural numbers, incidentally, is very simple. It requires the existence of the number 1, and addition. 1+1=2. 2+1=3... (n-1)+1=n. n+1=n+1...
    Nothing to do with examining sets of various sizes, or indeed of any size other than 1, so long as you can conceive the possibility that such a thing might exist.
    Also incidentally, addition is commutative because it is. It's defined that way, same as 2 is defined as 1+1. This is part of my original point.
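    (Spelled out in the toy notation from my earlier sketch, that schema is just a chain of definitions, each new number being the previous one plus one:

    ```lean
    inductive N where
      | zero : N
      | succ : N → N

    open N

    def add : N → N → N
      | n, zero   => n
      | n, succ m => succ (add n m)

    def one   := succ zero
    def two   := add one one    -- 1+1=2
    def three := add two one    -- 2+1=3
    def four  := add three one  -- 3+1=4
    -- ...and so on: each new number is the previous one plus one.
    ```

    Nothing about sets of any size other than 1 in sight.)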

    ReplyDelete
  9. Because natural language is imprecise, and leads to false statements seeming self-evident, like the one I made about cardinal numbers "larger" than infinity not existing.

    Surely if you're going to assume that finite cardinal numbers are synonymous with natural numbers, such that the two kinds of 2 get to be the same by definition, then the act of restricting your definition to the *finite* cardinal numbers means you're defining numbers with reference to infinity, which is itself defined with reference to numbers? If you're building maths from logic with no assumptions, which comes first, infinity or the finite cardinals?

    In the particular case of buffalo, yes, to avoid fractions, you need to count the cardinality of the smallest set of whole buffalo of which any set of whole and part buffalo is a subset. How about lakes? You can't have half a lake, only a lake that's half as big as another lake.

    Your schema is circular. (n-1)+1=n is a circular equation. You need to already know your n before you can generate it. Or you need as many definitions as numbers.

    ReplyDelete