17 May 2009
Computer Math

2+2=5

In early April I said “The real problem was that new users either believed that computers understood and used ideal mathematics, or had come from ‘mainframe’ backgrounds where all math was done with decimal data types rather than defaulting to low-precision floating point.”

So I was tickled to see Jeff Atwood’s blog post on 5/13, “Why Do Computers Suck at Math?”, in which he mentions several infamous glitches (mostly fixed now) in the Google and MS Windows calculators, MS Excel 2007, and the ill-fated Ariane 5 rocket. He also gives a summary of floating point and recommends David Goldberg’s “What Every Computer Scientist Should Know About Floating-Point Arithmetic”. You may want to see Wikipedia’s “floating point” entry for a less technical overview.
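If you want to watch a computer “suck at math” yourself, here’s a minimal C sketch (my example, not from Atwood’s post) of the representation problem Goldberg covers: 0.1 and 0.2 have no exact binary representation, so their sum isn’t exactly 0.3.

    #include <stdio.h>

    int main(void)
    {
        double a = 0.1, b = 0.2;

        /* Neither 0.1 nor 0.2 is exactly representable in binary
           floating point, so a rounding error shows up in the sum. */
        printf("a + b = %.17g\n", a + b);          /* 0.30000000000000004 */
        printf("a + b == 0.3? %s\n",
               (a + b == 0.3) ? "yes" : "no");     /* no */
        return 0;
    }

Which is why the standard advice is to compare floats against a tolerance rather than with ==.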

As usual, the many comments expand on and correct his main article. I encourage you to at least skim through them. However, there were a couple of things I found more amusing than edifying.

First, there was a comment that said floating point math was developed for the x87 coprocessors to compensate for low power and scarce memory. Ahem. That’s right up there with Al Gore inventing the Internet and Bill Gates inventing UNIX. Digging through my own junk closet I found a 1975 printing of “Introduction to Programming” which includes assembly-level programming of the PDP-8/e’s floating point unit. A little more digging turned up a 1970 copy of Schaum’s Outline Series “Introduction to Computer Science” which not only discusses floating point, but also has you work out fp logic circuits (yes, really: and, or, and not gates)! Wheee, I don’t remember doing those exercises. Oh, to close the circle from the comments, there’s BCD — my 1981 MC6809 Microprocessor Programming Manual shows the MC6809 had instructions for BCD arithmetic. These were mini- and microprocessors; the actual history of these concepts is much older, of course.

More entertaining, however, is the lengthy discussion of whether “0.999… == 1”. Really. Now, to be right up front about it, this is true. 0.999… is 1. None of this “approaching” or “almost but not quite” stuff. Those are two ways of representing a single ideal number: 1. Apparently this is a big deal, kinda like the thermal properties of blankets I also mentioned in the April post. For the short answer, see “Why does 0.9999… = 1?”; for a longer discussion, see “0.999…”.
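For the impatient, the geometric-series version fits on a few lines (my notation, not from either link):

    0.999\ldots \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}}
                \;=\; \frac{9/10}{1 - 1/10}
                \;=\; 1

using the closed form a/(1 − r) for a geometric series with first term a = 9/10 and ratio r = 1/10.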

Q. How many mathematicians does it take to screw in a light bulb?
 
A. 0.999…

I guess I have the sort of twisted mind in which all of this makes perfect sense. Everything is just layer upon layer of convention, filtering, abstraction, and indirection.

I just recalled a discussion of how computers “really” store numbers that came up while going over converting among decimal, binary, octal, and hex representations. It was along the lines of: how can “it” tell whether it’s storing a hex or octal number, or maybe even a character, or something else? Hmm, kinda like the “how does a thermos ‘know’ whether to keep stuff hot or cold” question: it’s the wrong question; you’re using the wrong filter.
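The answer, of course, is that it can’t tell and doesn’t need to: the bits are just bits, and the base (or character-ness) is a convention applied when you read them back out. A little C sketch of the idea (my example, not from the original discussion):

    #include <stdio.h>

    int main(void)
    {
        unsigned char byte = 0x41;  /* the bit pattern 01000001, nothing more */

        /* The stored bits never change; only our interpretation does. */
        printf("decimal: %d\n", byte);   /* 65  */
        printf("hex:     %x\n", byte);   /* 41  */
        printf("octal:   %o\n", byte);   /* 101 */
        printf("char:    %c\n", byte);   /* A   */
        return 0;
    }

Same byte, four answers; the “knowing” lives entirely in the format string.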

I once had someone dabbling with programming ask me what arrays were good for, since they didn’t understand them at all; I wasn’t sharp enough to give them a good answer. But for “real” programming, it seemed that the big scythe-level filter was pointers, or more generally, indirection. It either clicked or they were lost. Nightmare: an array of pointers to functions returning pointers to structures containing pointers to ….
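For the morbidly curious, here’s roughly what that nightmare looks like spelled out in C, a contrived sketch with made-up names (node, make_node, handlers) just to show the layers:

    #include <stdio.h>

    /* a structure containing a pointer ... */
    struct node {
        int value;
        struct node *next;
    };

    /* ... a function returning a pointer to such a structure ... */
    struct node *make_node(void)
    {
        static struct node n = { 42, NULL };
        return &n;
    }

    /* ... and an array of pointers to functions returning pointers
       to structures containing pointers. */
    struct node *(*handlers[2])(void) = { make_node, make_node };

    int main(void)
    {
        /* index the array, call through the function pointer,
           then follow the returned structure pointer */
        printf("%d\n", handlers[0]()->value);   /* prints 42 */
        return 0;
    }

Read the declaration inside-out and it almost makes sense; that’s the click.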

But I love that sorta stuff; you shouldn’t be surprised that Gödel, Escher, Bach: An Eternal Golden Braid is one of my favorite books.

As a friend of mine says, “Well, that explains a lot.”

Category: programming