Screen shot of my galaxy generator in action

I’ve been developing stuff with Unity in my spare time for something like eight years (I started at around v1.5). I was initially drawn in by its alleged Javascript support. Indeed, I liked Unityscript so much that I defended it vocally against charges that C# is better, and wrote this article to help others avoid some of my early stumbles. I also contributed significant improvements to JSONParse — although the fact that you need a JSON module for Unityscript tells you something about just how unlike Javascript it really is.

I’m a pretty hardcore Javascript coder these days. I’ve learned several frameworks, written ad unit code that runs pretty much flawlessly on billions of computers every day (or used to — I don’t know what’s going on with my former employers), created a simple framework from scratch, helped develop a second framework from scratch (sorry, can’t link to it yet), built services using node.js and phantom.js, built workflow automation tools in javascript for Adobe Creative Suite and Cheetah 3D, and even written a desktop application with node-webkit.

The problem with Unityscript is that it’s not really Javascript, and the more I use it, the more the differences irk me.

Anyway, one evening in 2012 I wrote a procedural galaxy generator using barebones Javascript. When I say barebones, I mean that I didn’t use any third-party libraries (I’d had a conversation with my esteemed colleague Josiah Ulfers about the awfulness of jQuery’s iterators some time earlier and so I went off on a tangent and implemented my own iteration library for fun that same evening).
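
To give a flavor of what I mean, helpers along those lines only take a few minutes to write. This is a sketch, not the actual iter.js (the names and API here are mine):

```javascript
// A minimal pair of iteration helpers in the spirit described above.
// each() walks arrays by index and objects by own keys; map() builds on it.
function each(obj, fn) {
    if (obj instanceof Array) {
        for (var i = 0; i < obj.length; i++) fn(obj[i], i);
    } else {
        for (var key in obj) {
            if (obj.hasOwnProperty(key)) fn(obj[key], key);
        }
    }
}

function map(obj, fn) {
    var result = [];
    each(obj, function (value, key) { result.push(fn(value, key)); });
    return result;
}
```

Nothing clever — but unlike jQuery’s $.each of that era, the callback gets (value, key) rather than (index, value).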

Now, this isn’t about the content or knowledge that goes into the star system generator itself. It’s basic physics and astrophysics, a bit of googling for things like the mathematics of log spirals, and finding Knuth’s algorithm for generating Gaussian random distributions. Bear in mind that some of this stuff I know by heart, some of it I had googled in an idle moment some time earlier, and some of it I simply looked up on the spot. I’m talking about the time it takes to turn an algorithm into working code.
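
For reference, the Gaussian piece is tiny — here’s a sketch of the polar method Knuth describes (my names, not the original code):

```javascript
// Marsaglia's polar method (the algorithm Knuth gives in TAOCP vol. 2):
// sample uniformly inside the unit disc, then transform to a normal deviate.
function gaussRandom(mean, stdev) {
    var u, v, s;
    do {
        u = Math.random() * 2 - 1;
        v = Math.random() * 2 - 1;
        s = u * u + v * v;
    } while (s >= 1 || s === 0);
    // the transform actually yields a second independent deviate (from v),
    // which a production version would cache rather than throw away
    return mean + stdev * u * Math.sqrt(-2 * Math.log(s) / s);
}
```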

So the benchmark is: coding the whole thing, from scratch, in one long evening, using Javascript.

Now, the time it took to port it into Unityscript — NaN. Abandoned after two evenings.

I’m about halfway through porting this stuff to C# (in Unity), and so far I’ve devoted part of an afternoon and part of an evening. Now bear in mind that with C# I am using the Mono project’s buggy auto-completing editor, which is probably a slight productivity win versus using a solid text editor with no autocomplete for Javascript (and Unityscript). Also note that I am far from fluent as a C# programmer.

So far here are my impressions of C# versus Javascript.

C#’s data structures and types are a huge pain. Consider this method in my PNRG class (which wraps a MersenneTwister implementation I found somewhere in a far more convenient API):

// return a value in [min,max]
public float RealRange( double min, double max ){
    return (float)(mt.genrand_real1() * (max - min) + min);
}

I need to cast the double (the result of mt.genrand_real1()) to a float. What I’d really like to do is pick one floating point format and just use it everywhere, but that’s impossible. Some things talk floats, others talk doubles, and of course there are uints and ints, which must also be cast to and fro. Now I’m sure there are bugs caused by, for example, passing signed integers into code that expects unsigned, but seriously. It doesn’t help that the Mono compiler generates cryptic error messages (not even telling you, for example, which thing it is that’s not the right type).

How about some simple data declarations:


var stellarTypes = {
    "O": {
        luminosity: 5E+5,
        color: 'rgb(192,128,255)',
        planets: [0,3]
    },
    // … and so on for the remaining stellar classes
};

versus the C# equivalent:

public static Dictionary<string, StellarType> stellarTypes = new Dictionary<string, StellarType> {
    {"O", new StellarType(){
        luminosity = 50000F,
        color = new Color(0.75F,0.5F,1.0F),
        minPlanets = 0,
        maxPlanets = 3
    }},
    // … and so on for the remaining stellar classes
};

Off-topic, here’s a handy mnemonic — Oh Be A Fine Girl Kiss Me (Right Now Smack). Although I think that R and N are now referred to as C-R and C-N and have been joined by C-H and C-J so we probably need a replacement.

Note that the C# version requires the StellarType class to be defined appropriately. (I could have simply used a dictionary of dictionaries or something, but the declaration gets uglier fast, and it’s pretty damn ugly as it is.) I also needed to use the System.Collections.Generic namespace (that took me a while to figure out — I thought that by using System.Collections I would get System.Collections.Generic for free).

Now I don’t want to pile on C#. I actually like it a lot as a language (although I prefer Objective-C so far). It’s a shame it doesn’t have some obvious syntax sugar (e.g. public static auto or something to avoid typing the same damn type twice) and that its literal notation is so damn ugly.

Another especially annoying declaration pattern is public int foo { get; private set; } — note the lack of terminal semicolon, and the fact that it’s public/private. And note that this should probably be the single most common declaration pattern in C#, so it really should be the easiest one to write. Why not public int foo { get; }? (You shouldn’t need set at all — you have direct internal access to the member.)

I’m also a tad puzzled as to why I can’t declare static variables inside methods (I thought I might be doing it wrong, but this explanation argues it’s a design choice) — but I don’t see how a static method variable would or should be different from an instance variable, only scoped to the method. So instead I’m using private member variables which need to be carefully commented. How is this better?

So in a nutshell, I need to port the following code from Javascript to C#:

  • astrophysics.js — done
  • badwords.js — done; simple code to identify randomly generated names containing bad words and eliminate them
  • iter.js — C# has pretty good built-in iterators (and I don’t need most of the iterators I wrote) so I can likely skip this
  • mersenne_twister — done; replaced this with a different MT implementation in C#; tests written
  • planet.js — I’ve refactored part of this into the Astrophysics module; the rest will be in the Star System generator
  • pnrg.js — done; tests written; actually works better and simpler in C# than Javascript (aside from an hour spent banging my head against weird casting issues)
  • star.js — this is the galaxy generator (it’s actually quite simple) — it basically produces a random collection of stars offset from a log spiral using a Gaussian distribution.
  • utils.js — random stuff like a string capitalizer, roman numeral generator, and object-to-HTML renderer; will probably go into Astrophysics or be skipped
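
To make the star.js bullet concrete, the whole idea fits on a screen. This is an illustrative sketch (constants and names invented for the example), not the code being ported:

```javascript
// Box-Muller helper for the Gaussian scatter around each arm
function gauss(mean, stdev) {
    var u = 1 - Math.random(); // avoid Math.log(0)
    var v = Math.random();
    return mean + stdev * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// Scatter stars around a logarithmic spiral r = scale * e^(winding * theta),
// with two opposing arms and Gaussian noise on each position.
function generateGalaxy(starCount, scale, winding, spread) {
    var stars = [];
    for (var i = 0; i < starCount; i++) {
        var theta = Math.random() * 4 * Math.PI;      // up to two full turns
        var r = scale * Math.exp(winding * theta);    // log-spiral radius
        var arm = Math.random() < 0.5 ? 0 : Math.PI;  // pick one of two arms
        stars.push({
            x: r * Math.cos(theta + arm) + gauss(0, spread),
            y: r * Math.sin(theta + arm) + gauss(0, spread)
        });
    }
    return stars;
}
```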

Once I’ve gotten the darn thing working, I’ll package up a web demo. (When Unity 5 Pro ships I should be able to put up a pure HTML version, which will bring us full circle.) Eventually it will serve as the content foundation for Project Weasel and possibly a new version of Manta.

Killer vs. Clone

This looks to me like the best iPhone on the market right now -- if only it ran iPhone apps

I remember an old interview with one of the original Mac team members (Andy Hertzfeld or Bill Atkinson, I think) in which the interviewer asked the person whether he was sad that, in the end, the Mac had “lost” and he replied — I can only assume with some indignation — that on the contrary the Macintosh had won. After all, pretty much all computers were now — for all intents and purposes — Macs.

In this — rather important — sense, there are no iPhone-killers, actual or planned, around right now. What we have is a bunch of imitations. Just as Apple redefined what a computer was and could be with the Mac, they’ve redefined what a phone (or phone-sized device) can and should be with the iPhone, and everyone else is simply copying them. Even RIM — the only iPhone competitor not losing market share — is making its phones as iPhone-like as possible within the constraints of not pissing off its existing customer base. It’s barely possible that in a few years Apple will hold a minuscule share of the cellphone market (or even none at all), but it’s just as improbable that anyone will have changed cellphones the way Apple has, unless it’s Apple.

Apple’s 100,000 apps and 3,000,000,000 app downloads probably mean that Apple’s staying power in the cellphone market will be considerably greater than its staying power in the desktop computer market was.

Back in 1984, Apple shipped a computer with perhaps 20 applications available (most of them rubbish), and pretty much everyone developing apps for the Mac was also shipping apps for rival platforms. This was in an environment where developers expected to have to learn new tools and rewrite apps from scratch for every new platform (and backwards compatibility was almost non-existent). On top of this, Apple didn’t make it easy to become a Mac developer (initially you had to buy a Lisa/Macintosh XL just to run the development tools; when an officially sanctioned Mac-hosted SDK finally came out, it required you to code in 68k assembler). Given all this, it’s pretty astonishing Apple managed to cling to around 10% market share until the late 80s.

(Aside: it’s not like Windows was the only Mac-clone that came out. Pretty much everyone who tried to break into the personal computer market in the 80s offered some form of Mac desktop environment clone, usually GEM.)

So, chances are, Apple’s iPhone will be around for quite some time. And, chances are, within a few years pretty much everyone will be using something that is as close an imitation of an iPhone as the vendor can manage. Just as Apple has become better at figuring out how to hang on to customers, the rest of the world has gotten better at imitating Apple.

Go (The Programming Language)


There’s a companion post to this piece in which I pretty much lay out my experience of programming languages. If you’d prefer to avoid reading through all that crap, I can summarize by repurposing one of my favorite lines from Barry Humphries (“I’m no orator, far from it, I’m an Australian politician.”) — I’m no systems programmer, far from it, I’m a web developer. Go is explicitly a systems programming language — a tool for implementing web servers, not web sites. That said, like most programmers who live mainly in the world of high level / low performance languages, I would love to program in a lower level / higher performance language, if I didn’t feel it would get in the way. At first blush, Go looks like it might be the “Holy Grail” of computer languages — easy on the eye and wrist, readable, safe, and fast. Fast to compile, and fast at runtime.

What’s my taste in languages? Something like JavaScript that compiles to native binary and has easy access to system APIs. Unity’s JavaScript is pretty damn close — except for the system APIs part (it’s also missing many of my favorite bits of JavaScript, and not just the stuff that’s hard to write compilers for). Actually, if I really had my druthers it would be JavaScript with strict declarations (var foo), optional strong typing (var foo: int), case-insensitive symbols, Pascal’s := assignment, = as a binary comparison operator, syntactic sugar for classic single inheritance, and no freaking semicolons.

Of course, I am not — as I have already said — a systems programmer, so things like the value of concurrency haven’t really been driven home to me, and this is — apparently — one of Go’s key features.


Pluses

  • Clean syntax — it looks easy to learn, not too bad to read, and certainly easy to type. They seem to have managed to climb about as far out of the C swamp as is possible without alienating too many people (but I’m a lot less allergic to non-C syntaxes than most, so maybe not).
  • Explicit and implicit enforcement of Good Programming Style (via Gofmt in the former case, and clever syntactic tricks such as Uppercase identifiers are public, lowercase are private in the latter).
  • Compiler is smart enough to make some obvious ways of doing things that turn out to be serious bugs in C into correct idioms.
  • Safety is a design goal. (E.g. where in C you might pass a pointer to a buffer, in Go you pass an array “slice” of specified size. Loops use safely generated iterators, as in shiny but slow languages like Ruby.)
  • Very expressive (code is clear and concise, not much typing, but not too obscure — not as self-documenting as Obj-C though).
  • Orthogonality is a design goal.
  • Seems like a fairly small number of very powerful and general new “first class” concepts (e.g. the way interfaces work generalizes out both inheritance and (Java-style) interfaces, channels as a first class citizen are beautiful).
  • It really looks like fun — not hard work. Uncomplicated declarations mean less staring at a bunch of ampersands and asterisks to figure out WTF you’re looking at. Fast (and, I believe, simple) builds. It makes me want to start working on low level crap, like implementing decent regex… Makes me wonder how well Acumen would work if it were implemented as a custom web server, how much effort this would be, and how hard it would be to deploy…
  • What look like very fast compiles (in his demo video (above) Pike recompiled a substantial amount of code in very little time on a MacBook Air) — but then I’ve seen insanely fast C++ compiler demos that didn’t pan out to much in the Real World.
  • Really nice take on OO, especially interfaces (very elegant and powerful).
  • Almost all the things you want in a dynamic language — closures, reflection — in a compiled language.
  • Really, really nice support for concurrency and inter-process communication. Simple examples — that I could understand immediately — showed how you could break up tasks into huge numbers of goroutines, perform them out of order, and then obtain the results in the order you need to, without needing to perform mental gymnastics. (Lisp, etc., coders will of course point out that functional programming gives you these benefits as well in a simpler, cleaner way.) The 100k goroutine demo was pretty damn neat.


Minuses

  • To quote a friend, “The syntax is crawling out of the C swamp”. So expect the usual nonsense — impenetrable code that people are proud of.
  • Concurrency is much better supported in the 6g/8g/5g compilers than in the gcc version (the former muxes goroutines onto threads; the latter uses one thread per goroutine).
  • They promise radical GC, but right now it’s a promise (and no GC in the gcc version yet).
  • Pointers are still around (although, to be fair, largely avoidable, and reasonably clearly declared).
  • No debugger (yet), obviously very immature in many ways (weak regex, and see the point about GC, above).
  • The syntax seems a bit uncomfortably “open” (nothing but spaces where you are accustomed to seeing a colon, or something), but it’s a matter of familiarity I suppose.

Unknowns (at least to me)

  • How well does it compare to evolutionary improvements, such as Obj-C 2.0 (which is garbage-collected) combined with Grand Central Dispatch (for concurrency support and hardware abstraction, but at library rather than language level) and LLVM?
  • Will GC actually work as advertised?
  • Is someone going to step up and write a good cross-platform GUI library? (It doesn’t need to be native — it just needs to not suck — something that has conspicuously escaped both Java and Adobe.)
  • Will we have a reasonable deployment situation within — say — six months? E.g. will I be able to ship a binary that runs on Mac / Linux / Windows from a single source code base?
  • Will it get traction? There’s a lot of other interesting stuff around, albeit not so much targeting this specific audience (and it seems to me that if Go delivers it will appeal to a far broader audience than they’re targeting, since it’s essentially very JavaScript / Python / PHP like with — what I think is — obviously bad stuff removed and better performance).


I’m not quite ready to jump on the Go bandwagon — I have some actual projects that have to ship to worry about — but it does look very, very attractive. More than any other “low level” language, it actually looks like fun to work with. The other newish language I’m attracted to — for quite different reasons — is Clojure. Because it lives on the Java VM it’s a lot more practical now than Go is likely to be for some time, and because it’s essentially “yet another Lisp” it affords a chance for me to really build some competence with a fundamentally novel (to me) programming paradigm. In a sense, Go is more attractive because it’s closer to my comfort zone — which speaks well for its chance of success. Clojure, because of its conceptual novelty, is almost guaranteed to remain obscure — the latest Lisp variant that Lisp adherents point to when complaining about <insert name of language people actually use here>. Time will tell.

Programming Languages I’ve Learned

Sinclair ZX-80: Image courtesy of Wikipedia

Aside: I apologize for the ridiculous length of this post. Please feel free to skip over it or ignore it (obviously you are exactly this free). It’s something of a brain-dump on my part, and related to my consideration of the new Go programming language, about which I will shortly pontificate.

I’m a lazy person. I even have a degree in laziness (i.e. Pure Math). (And Comp Sci is Applied Laziness.) As a result, I’ve made my career in large part as a programmer, and I’ve always been fascinated by computers. Well, ever since I found out that they exist, anyway. I’ve also tried to learn and acquire expertise in as few programming languages as possible over the years, as a result of which I’m only proficient in twenty or so.

The first computer I ever used was some kind of minicomputer at a CSIRO research facility we visited on a school outing.

HP-65: my first hands-on programming experience was with a friend of the family’s HP-65 calculator. This device — which cost around $800 in 1977 — was a programmable calculator with something like seventy steps of programming memory (programs were essentially macros comprising recorded keystrokes, branches, and pauses for user input), a magnetic card storage device, and it could run a version of Lunar Lander — the hottest game of its era. (I managed to win the HP-65’s Lunar Lander game after a few attempts, which positively boggled the mind of the calculator’s owner. /flex)

Canon Canola: it’s hard to imagine Canon being so daft as to name a product “Canola” today, but the desktop programmable calculator from Canon was in fact named the Canola, and was programmed using punchcards — you punched the cards manually using a pencil (and corrected mistakes by painstakingly taping “chads” back in place). The Canon Canola had an immense amount of memory (hundreds of steps) and many nice features (you could insert “named” flags and branch to them instead of jumping to fixed addresses, for example) but wasn’t “scientific”, so if you wanted a sin function — well, have fun implementing a Taylor series (or whatever).
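
If “have fun implementing a Taylor series” sounds abstract, here’s the gist in Javascript rather than calculator keystrokes (range reduction plus a handful of terms):

```javascript
// sin(x) = x - x^3/3! + x^5/5! - ... ; each term is derived from the last,
// so no factorials or powers need to be computed directly.
function taylorSin(x, terms) {
    // reduce x into [-pi, pi] so the series converges quickly
    x = x % (2 * Math.PI);
    if (x > Math.PI) x -= 2 * Math.PI;
    if (x < -Math.PI) x += 2 * Math.PI;
    var term = x, sum = x;
    for (var n = 1; n < (terms || 10); n++) {
        term *= -x * x / ((2 * n) * (2 * n + 1));
        sum += term;
    }
    return sum;
}
```

Ten terms is good to roughly nine decimal places even at the edges of the reduced range — now imagine doing that with pencil-punched cards and no trig keys.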

Sinclair Cambridge Programmable: having had a taste of programming, I was obsessed. Soon, I started seeing ads in the Scientific American (which was my equivalent of the Sears catalog) for the unbelievably cheap Sinclair Cambridge Programmable calculator. Eventually I got a Radio Shack branded version ($50 in Australia) and started learning to master it. This was a huge challenge after the comparative luxury of the HP-65. To begin with, there was no storage — once you turned off the calculator, your program was gone. So you needed to carefully write down each program to be able to use it later. Second, it had much less memory — 36 steps — and unlike the HP-65 each keystroke counted as a step (on the HP a number of any supported level of precision counted as a step), so the numeric constant 2 was two steps (start numeric constant, 2).

This barely useful product introduced me to the concept of re-entrant code — many sample programs used the trick of jumping into a program at different points to reuse it for different purposes (although each branch was three steps). The lunar lander program provided was so short of memory that it required the user to write down their altitude and velocity and re-enter them at appropriate junctures. I shaved steps from it by reducing the precision of values like gravity (1.6 m/s² became 2 m/s², saving two memory steps, for example) and using X-Y register swaps to use the Y register as an extra memory (the Sinclair had one memory compared to the HP’s 6 + stack), and eliminated everything except having to initially enter your altitude.

Apple II BASIC: in 1980 my High School ran a fund-raising drive to buy the school an Apple II. Yes, that’s how expensive they were in Australia (about $3000 for a complete system including floppy drive, printer, and color monitor — a bit less than the price of a cheap new car). In 1981 the school offered its first Computer Studies course, and I spent every spare minute at school (and quite a bit of time after school) messing with it. I learned both variants of BASIC (the MS-derived floating point BASIC, and Woz’s integer BASIC — although I had no idea of the origins of either) and wrote a lot of code, some of it pretty impressive for the time. (E.g. I wrote my own variant of “Adventure” because I found the Adventure distributed for the Apple II to have a lame parser and no combat/damage model — so, without knowing what a parser was, I wrote a parser and implemented rudimentary combat mechanics.) I also implemented a good portion of a tabletop naval simulation I owned and loved (Sea Strike), although I never figured out how to implement decent AI.

With BASIC on the Apple II you pretty much couldn’t write a decent hi-res graphics game — it was just too slow. To do so you needed to code in 6502 assembler and here I ran into a brick wall. The “best” book on the subject took me two months to get by mail order, and when I read it I found it incomprehensible — I just didn’t have the background knowledge to use it, and as by far the most advanced programmer I knew, I had nowhere to go for advice. (It didn’t occur to me to go to the University and find someone to ask. I was so stupid in some ways…)

Sinclair ZX-80: having been burned by Sinclair once, I persuaded my parents to get me their next incredibly cheap and borderline useless product — the ZX-80 (with 1kB of RAM). The best thing about the ZX-80 (and ZX-81 — which my machine ended up being kind-of upgraded to) was its editor/language combination. It had an incredibly horrible membrane keyboard which made typing horrific. To help make up for this it figured out which commands you were trying to type from context, making most commands a single keystroke. This also let it make efficient use of RAM (commands were stored internally as tokens), and the editor would not let you enter a syntactically incorrect line of code (you could still blow up via logical errors, e.g. not terminating loops or dividing by zero), and the cursor would change to tell you what was expected next in the current line. The Adventure game I mention above was written in an afternoon on the ZX-80 (which, by then, had a whopping 16kB of RAM) and then took me a month to port to the Apple II — that’s how much better the dev environment on the ZX-80 was. It sucked in every other conceivable way, though.

Pascal: with College came Pascal. For me, the great things about Pascal were how it handled type structures (BASIC didn’t have those), encapsulation (named functions, procedures, and parameters), and scope (!). But, even though it did so relatively cleanly, Pascal still forced you to deal directly with pointers — something I simply loathe and despise at a visceral level — and had incredibly poor string handling compared to BASIC.

MACRO-10: my academic introduction to assembler was in second semester of Comp Sci. MACRO-10 was PDP-10 assembler (we ran it on a virtual machine on a DEC-10 — presumably to avoid having the system hard crash every five minutes when assignments were close to due). The principal lesson of programming in assembler — for me — is that it’s not so much hard as tedious, and it’s very worthwhile to understand exactly what you’re asking the machine to do in a high level language (e.g. when you call a function you’re probably pushing variables and pointers onto a stack and then executing a jump to some new — random — spot in memory and eventually jumping back and taking the return result off that stack), but I wasn’t interested in doing it myself. I’ve essentially avoided assembler ever since.

The first year of Computer Science essentially drove me away from Computer Science. For about six months I went Cold Turkey on computers.

One of Apple's sample images shown in MacPaint. Within an hour of using a Mac, I was able to draw pretty decent pictures with a Mouse.

Then, in the middle of 1984, Apple presented the Macintosh to a packed room. While everyone else went in to watch the presenters, I took the opportunity to use the demo machine in the lobby for the duration (missing the presentation). Oh. My. God. The first time I closed a file and the computer said to me “There are unsaved changes, do you wish to continue?” I was gob-smacked. I then went back and tried opening a file, NOT making changes, and closing it. Nothing. Then I tried going in — making changes — saving, making more changes. It knew! What about opening a file, making changes, hitting the — miraculous — undo, then closing? It freaking knew. (Only a couple of years later would I find out just how horrible it was to implement this wizardry.)

I don’t know if HP demoed the HP-65 on college campuses back in 1974. If so, it may have had a similar impact. The HP-65 was, in many ways, a similarly revolutionary device. But the Mac was a staggering achievement — the kind of achievement that impressed you more the more you knew. The Apple II first of all wasn’t particularly unique, and second it basically gave you an inferior command line, an inferior computing experience, not very much horsepower — in essence something less capable in every way than terminal access to a DEC-10. The Lisa — like the Xerox Star — was an impressive demo with a stupefying price tag — although, unlike the Xerox Star the Lisa was actually usable. But the Mac was simply better in every way than anything else out there. (Purportedly, one of the professors in the Computer Science department took a Mac into his classroom later that year and put it on the bench in front of the students and said “every other computer is obsolete, this box has 75% of the computing power of a Vax”.)

So I fell in love with computers again, but this time for what I could do with them (e.g. draw, write, desktop publish) instead of what I could program them to do. In fact, I lost all interest in programming, and became interested in using other people’s programs. (Also, once you’ve used MacPaint, the bar for writing your own programs is raised rather ridiculously high. If you’ve never used MacPaint, let’s just put it this way: Photoshop is merely an obvious extension of MacPaint taking advantage of newer hardware, but with less elegance and usability.)

Modula-2: Alan Kay famously said, “People who are really serious about software should make their own hardware.” Almost by extension, if you’re really interested in using software you should make your own. My love of computers reborn, I started doing Comp Sci again. This time, with Modula-2. But Modula-2 was just Pascal with some minor improvements, and Sun Workstations were just like the old mainframes, but faster (and, I’d have to say, with better editors). OK, I loved computers, but I didn’t love Comp Sci.

What I chiefly did with computers was design games and write articles about games. After I’d graduated, published my pen-and-paper RPG, and failed to find a job doing anything remotely interesting, I posted an ad on the noticeboard at the Math department where I still hung out. The thrust of it: “anyone interested in developing next generation computer games, meet me at X”. Two guys showed up, both comp sci students a little younger than me. I gave an outline of what I was thinking about — one of them immediately decided this sounded insanely complex, the other became a lifelong (thus far) friend. We eventually published a ground-breaking and commercially unsuccessful game. Andrew went on to write one of the top low-level debuggers for the Mac (Classic) platform, RealBasic, and a bunch of obscure but highly profitable text retrieval and display code (he also wrote, in an evening, a PDF viewer that ran in 1MB of RAM and ran hundreds of times faster than Acrobat — this was back in about 1995 — he was forbidden by the company he worked for from ever releasing it or continuing to work on it). Andrew’s company was recently acquired by Oracle and he’s some kind of bigwig there.

HyperTalk: when I first got my hands on HyperCard (I was playing with some brand new Mac SEs at ADFA) it was almost as revelatory an experience as the Mac. HyperCard was, in essence, the first real IDE. It had some quirks that differentiate it from modern IDEs, but most of these actually weigh strongly in HyperCard’s favor. E.g. in HyperCard your document (“stack”) was everything. You simultaneously modified your programming environment, and your document, and your program — all in an environment that was fully integrated, and a single — extremely capable — programming language, and everything was persistent. In fact, if you wanted to reset state to a fixed starting point you had to program that — by default state was persistent.

The big things missing from HyperCard were language-level support for media types (it was a little too early for this to be an obvious feature when it was released, and like many revolutionary Apple products it conspicuously failed to get updated once the magnificent 1.0 was out — which didn’t prevent MYST from being developed in HyperCard), native look and feel, and associative arrays. Pretty much every serious HyperCard programmer had a library of homegrown routines to implement associative arrays via HyperCard’s incredibly good string support, but using strings as “data soups” is and always will be a kludge.
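
For anyone who never suffered it, the “data soup” kludge looks something like this, translated into Javascript (function names mine — the real HyperTalk versions leaned on chunk expressions like “line” and “item”):

```javascript
// Fake an associative array by keeping "key:value" lines in one string --
// roughly what HyperCard scripters did with fields and globals.
function getValue(soup, key) {
    var lines = soup.split("\n");
    for (var i = 0; i < lines.length; i++) {
        if (lines[i].indexOf(key + ":") === 0) {
            return lines[i].slice(key.length + 1);
        }
    }
    return "";
}

function setValue(soup, key, value) {
    var lines = soup ? soup.split("\n") : [];
    var kept = lines.filter(function (line) {
        return line.indexOf(key + ":") !== 0;
    });
    kept.push(key + ":" + value);
    return kept.join("\n");
}
```

It works right up until a value contains a newline — which is exactly why it is and always will be a kludge.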

To give you a rough idea of how productive a tool HyperCard was, I wrote a relational database program (from scratch — as in from not having any RDB functionality in HyperCard) for a small government office using HyperCard in a few evenings. (This act of wizardry actually changed my life — I moved from being a near suicidally depressed salaries clerk in a personnel department to an internal database consultant within a few weeks, then from that to a job in a multimedia startup four months later.)

4th Dimension: my three months as a CSO2 in the ACT Government comprised picking a multiuser-database package for a large Mac-only government department (I picked 4D) and then implementing three database solutions using it. 4D’s Pascal-like programming language is an absolute travesty — full of gratuitous differences from actual Pascal that are just infuriating, and full of French typing conventions (e.g. French quotation marks are — or were — syntactic elements). Ten years later I would be offered a very well paid lead programmer job at a tech startup using 4D and then quit the job after three days when it finally sunk in that 4D was just as retarded after ten years of development as it had been in 1991 (e.g. you still couldn’t implement abstract events — such as “record update” — but only form events, and nothing about the language’s syntax had been fixed).

Authorware: Authorware was a groundbreaking multimedia “authoring” tool (what a horrible overloading of the word “author”). Its internal “programming” language comprised a graphical flowchart combined with an Excel-like macro language that conspicuously contained no loops, branches, or control structures. (For those, you used the flowchart.) This made writing and debugging code an unholy mess (an example of the path to hell being paved with good intentions). The whole reason the startup I went to work for used Authorware was that it promised seamless cross-platform development — code on a Mac, publish to Mac and Windows. The original assumption of the developers, though, was that any heavy-duty coding would be implemented as plugins written in C or Pascal. Cross-platform C and Pascal remained a sore point, so the best way to avoid platform problems (and coding in C or Pascal) was to do the hardcore coding in Authorware itself. I ended up writing a relational database system in Authorware, along with numerous other things that would make the local Authorware rep’s jaw drop (completely custom navigation, global user notes, bookmarks). I despised Authorware, but I was damn good at it.

Object Pascal: while Andrew coded the game engine for Prince of Destruction in C and assembler, he knocked together a graphical content builder using Object Pascal and MacApp 2. In order to lighten Andrew’s burden, I ended up taking over development of Mapper, which involved learning Object Pascal and MacApp 2 — a very nice class library for writing GUI apps. Object Pascal’s chief difference from Pascal was support for class-oriented OOP (as distinct from the nearly classless OOP you see in languages like JavaScript). It also did a much better job of avoiding pointers, making Object Pascal the first “real” (as in “compiled”) programming language I actually liked.

AppleScript: this is one of the worst programming languages I’ve ever used. It’s been described as “read only” and this is pretty apt. Superficially, it looks like HyperTalk, but while HyperTalk is loosely typed (everything is internally a string or a number, and you don’t need to care which), AppleScript is strongly but implicitly typed, leading to endless frustration and incomprehensible errors. This was not helped by Apple’s atrociously lacking documentation (I only got anywhere with it by buying a very expensive hardcover book, whose title escapes me). Automator is a brilliant effort at exploiting the underlying system hooks that AppleScript uses, but AppleScript must rank as one of the most epic design failures of Apple’s history.

Visual Basic: a simply stunning achievement, Visual Basic was probably — in a sense — the best thing Microsoft has ever done. Just as Apple took the Xerox PARC GUI and made “obvious” changes to it to create something vastly more capable and better in every way, Microsoft took HyperCard and made “obvious” changes to it to create something better in many ways. It treated media types — well, images — as nearly native types. It let you build native UIs. OK, it ran way slower than HyperCard, and was far less productive to code in (because you couldn’t customize the dev environment for the project you were working on), but you could produce a professional-looking end result, and the internal programming language was far less horrific than Authorware’s. After three years of multimedia development with Authorware, I went to work for a large consulting company and started developing multimedia and “front office” apps using Visual Basic.

HTML: the chief reason I learned HTML was to create a website for Prince of Destruction (the link is to — essentially — the original HTML). HTML isn’t really a programming language, but it’s worth mentioning as my first foray into web development. My proudest achievement with early web development was figuring out how to produce really nice animated GIFs (versus the revolting ones you usually see) exemplified by the animation in this page here. OK, nothing much to boast about — but it was stolen quite a bit, and I got quite a few emails asking me how the frack I did it. (Hint: if you “render clouds” in Photoshop and your image has power-of-2 dimensions, the result will tile.)

Director/Lingo: developing multimedia with Visual Basic was a bit like teaching a dog to walk. Sure, you could do it, but… Once Director 4 came out, its superiority to Visual Basic (or anything else) for multimedia development was simply overwhelming. I ended up making an end-run around my boss (who was dedicated to Visual Basic development — not so much by temperament as company policy) and showed a mockup, built in Director, of a major product we were building for them in VB. Our next version was built in Director — but not by me! Instead we hired the local Macromedia distributor’s top instructor to write it, and I was only called in when performance and bugs were found to be so bad that the client was threatening to walk out. I ended up “rescuing” the first project and writing a new codebase for follow-up projects.

Toolbook: this was a simple Windows clone of HyperCard but — as with Visual Basic — with the “obvious” deficiencies fixed. It supported color natively, and its controls were native. You could produce decent-looking software with it. Like HyperCard it was astonishingly stable (it’s amazing how flaky tools were back then — HyperCard and Toolbook are probably the only 90s multimedia development tools that remotely deserve the adjective “stable”). We ended up developing an astonishingly rich proof-of-concept demo in Toolbook (integrated with the VB/VC client software) in a single week (albeit one of 18-hour days), won a huge contract, and then promptly failed to deliver by switching to VB and Robohelp for development. (Most of my “proof of concept” code tends to be robust and functional enough for deployment — I’m weird that way. I don’t think the “real” system ever got significantly more functional than our hack demo.)

Java: relatively early (as in 1996) I got interested in replacing Director et al with web technologies. I thought the web was The Future and every technology would eventually be subsumed by the Web. Ironically, I then — stupidly — avoided learning much about web technologies for some time — in large part because I got burned so badly by Java. Java did one thing Object Pascal nearly did but failed to do — it got rid of pointers while being a real language. In every other respect, Java is pretty much mediocre-at-best. From an end user’s point of view it was unbelievably slow (now it’s just slow to load) and ugly — but hey, it was neat that it “just worked” more-or-less everywhere. From a language point of view it was a clumsy design (OK, it was nicer than C++), but it was cool that it had a virtual machine and a lot of libraries. From a library viewpoint the libraries were terrible, but it was nice that the language and VM were there. And from a VM point of view it was a very poorly implemented VM. My first serious Java project involved implementing navigable 3D panoramas (kind of like QuickTime VR, which was our inspiration). At the time, QuickTime VR was really impressive for end-users and a royal pain in the ass to develop for (pretty much par for the course for Apple 1984–2003), so we decided to hack together an alternative. Essentially: render a 360-degree panorama, scroll it within a viewport, define “hotspots” with links in the web page, and have them implemented by the “plugin”. I developed two versions of this — one in Java and one in Director/Shockwave. The Director version was so stupendously superior in every respect that I’ve never used Java again. (The only “multimedia” features I was using were drawing a picture and drawing lines and rectangles.)

Games were always my first and best reason for using computers, and one day — out of the blue — a guy who I’d had passing contact with while trying to find a publisher for Prince of Destruction called me and asked if I was interested in looking at a game design project. I met with him and a colleague at a conference, he showed me the project, I — off the cuff — gave my opinion on how I’d redesign it — and I was hired. A few weeks later, I’m pitching my design (for which I was never paid, incidentally) to a boardroom full of strangers at a company named Garner MacLennan. It’s a really nice boardroom. There are state-of-the-art 3d graphics framed on the walls, and industry awards everywhere. I’ve just come from work, gotten lost, and run to the meeting in a suit. I stumble in, get asked to pitch my design to prospective developers and money-people for the project. I give a 30 minute pitch — completely unprepared (I’m working 16h days for the Big Consulting Company and I haven’t heard from anyone about the project for weeks or months) — answer a few questions, and am then, basically, dismissed.

A few days later, I receive a call from a gravelly-voiced fellow who would like me to come over to Garner MacLennan for a chat. I end up having a conversation with Stewart MacLennan and Jeff (I forget his surname, but it’s not Garner, who had been bought out some years earlier) — the two top guys in the company. Would I like to join them to run an interactive/games division?

C/C++: I’d dipped my toes in the C/C++ world many times before, but if I was going to do hardcore game development — or even just supervise coders — I needed to learn C and C++. So I did. I can’t say I was ever proficient or comfortable — C++ manages to implement all the obviously needed things from Object Pascal (et al) while achieving zero elegance. We licensed a 3D library (from Virtually Unlimited — a Swiss company that has since disappeared from the face of the Earth and even, it seems, the web) that offered both software and hardware-accelerated 3D, and which cost under $50,000 — making it very compelling.

We built a very flexible OO game engine, and started building our game — but we were crippled by several problems. First, it was very hard to hire decent game programmers in Australia at the time — almost impossible, in fact. We could find them in the US, but their salary expectations were freakishly high by our standards, and US coders tend to be ridiculously specialized by Australian standards. (A typical Australian developer is used to doing absolutely everything, Americans want to be told which position in the assembly line they’ll be sitting in. The US system scales better — well, it scales period — but it has a very high cost of entry.)

Chiefly, our problem was that we weren’t three talented college dropouts in a garage — that ship had sailed with Prince of Destruction (only we weren’t dropouts, which itself was a disadvantage because we’d wasted several years getting college degrees) — we had serious salaries, serious lifestyles, no desire to work 20h days, and we had something to lose — like the $250,000/year I could bring in from corporate multimedia development with virtually no effort. So as a AAA game developer, I failed miserably. But I did ship several kids’ titles — two very successful — and make a tidy amount of money on corporate multimedia.

Realbasic: despite continuing efforts to bloat it and turn it into C++ without pointers, Realbasic remains the most elegant real language I’ve ever used, and the most pleasant and productive programming environment for writing desktop apps — at least relatively small and simple ones — I’ve ever seen. I’ve written numerous programs in RB over the years. I was user #2 (Andrew Barry being user #1), was a technical referee for the first edition of O’Reilly’s REALbasic: The Definitive Guide, and ran the most popular Realbasic forum until Real Software finally created its own official forum.

Blitz3D: every time I find out about a new 3D game development tool (that doesn’t cost six figures) I’ll give it a shot. Blitz3D was the first such tool that just grabbed me and didn’t let go. It was Basic, but with the “obvious” problems fixed, and a first-class hardware-accelerated 3D engine just sitting there. I got as far as developing the basics of a space shooter and the basics of a dungeon crawler in it before Real Life took over (i.e. my corporate multimedia money tree died) and I had to get a Real Job. I never really got into BlitzMax, chiefly because proper 3D support has yet to appear, and meanwhile the stupendously superior Unity 3D came out.

PHP: my wife is a Social Psychologist, and a lot of her research involves questionnaires. I think using paper questionnaires is ridiculous, so I suggested she might be better off using electronic questionnaires — and of course it fell to me to implement them. I ended up writing RiddleMeThis for her doctoral dissertation (it’s now in use at quite a few universities, and earns me “Chinese Takeout Money”). To handle web deployment I wanted to use the most ubiquitous possible technology with the simplest possible end-user setup. In other words, I had to pick between perl and PHP. I implemented the basic runtime in both, and quickly picked PHP. I know there’s plenty of prejudice in both directions, but I think in the end that if you love the UNIX command line, you probably think perl is clearly superior. If you don’t, you don’t. I don’t. Back when I first worked on RiddleMeThis, PHP4 was current, PHP3 was still common, and OO programming in PHP was a bit of a nightmare. I’ve recently switched to near full-time PHP development (some of my time is spent in Realbasic and JavaScript) and am using PHP5’s OO extensively — like everything in PHP it’s a lot uglier and messier than it could be, but it’s not terrible. PHP is the Visual Basic of web development — I’ve yet to find the Realbasic.

Flash/ActionScript: I got a job in Web Advertising and naturally had to learn Flash and ActionScript. (Most of the ActionScript in the world is so remedial it makes you weep — kind of like how until about 2004 the chief purpose of JavaScript was to implement rollovers.) ActionScript 2 (and 3, but I’ve never had to do anything much with 3) is a nice language — very similar to JavaScript — and the class library is quite nice, but Flash itself is horrible. (CS4 supposedly — finally — addresses its major shortcomings, such as its astonishingly poor drawing and animation tools (recall that it is, at its heart, a drawing and animation program — the programming language and VM were added long afterwards), but I have, thankfully, not had to do much of anything with Flash for quite some time now.) I’ve wasted enough of your time if you’ve read this far for me to provide a litany of Flash’s lousiness, but a decent examination would be longer than this ridiculously long post.

Python: the reason I got into Python is that for several years my wife worked in a VR lab, and the tool they worked with was programmed in Python. As a result, the hairier programming problems always fell to me. I really like Python — it’s kind of close to my idea of a perfect language, except for indentation. I think that semantic indentation is perhaps the stupidest thing I’ve seen since C declarations. If ever there were a language feature guaranteed to hand you the ammunition to blow off your own foot, this is it. It’s said that in conventional C programming 50% of code (and hence bugs) is related to memory management. In my experience, the thorniest bugs in Python are almost always related to indentation — which is pretty sad, because Python has eliminated a lot of other common sources of bugs (including most memory management) through good language design. Oh well. (The other major failing of Python is that it’s too dynamic to be easy to compile — and, frankly, producing reasonably good interpreted languages isn’t particularly hard (I’ve written a few interpreters in my day — nothing major). Producing something like Python that compiled to small, fast binary code — now that would be cool.)

JavaScript: I avoided JavaScript for a very long time, owing almost entirely to prejudice. When you first work with JavaScript there are two things likely to infuriate you — or at least there were in 1998, say, when I first started messing with it. First, you have this huge, mostly/badly undocumented thing to code against — the Document Object Model. That’s not JavaScript’s fault, it’s the browser’s fault, but it gets blamed on JavaScript by people who don’t know better — i.e. most JavaScript coders. The other is the absolute lack of good debugging tools. (This is somewhat, but only somewhat, addressed by modern browser debuggers such as Firebug for Firefox, and Safari’s and IE8’s built-in debuggers.) JavaScript debuggers were still pretty much a pipedream when I was forced to get serious about the language — I implemented a bunch of ad unit types for Fastclick.com (which were never released) and then Valueclick Media (some of which are still in use). It was very educational. Today, I’d say JavaScript is my favorite language of all time — even better than Realbasic. Yes, it has lots of ugly bits, but you don’t need to use them. Yes, the DOM is revolting, but you don’t need to look at it too often. The chief problem with JavaScript — as with Python — is that it won’t compile to small, fast binary code. If it did, it would be The Holy Grail.

Perl: I first grappled with Perl when developing the online runtime for RiddleMeThis, but had to really learn it once I started writing server code at Valueclick Media. My first task was porting some PHP code I had written — it dynamically generated HTML/JavaScript ads from an XML feed — to Perl. Later I would write UI code for the admin system in Perl (which made for great demos but, for various reasons, I don’t think ever went live). Nothing I’ve seen of Perl has made me regret picking PHP for most of my web development. The things Perl programmers seem proudest of are things I think are basically shameful — writing code that makes no sense on first inspection. I also think that any language — including PHP and BASIC — that requires you to type some special symbol (like $) in front of variable names was badly designed (I realize UNIX shell scripts fall into this category, but they have the — partial — excuse of being an innovative hack from yesteryear). If you need different special symbols for different kinds of variables, that’s even worse.

Unity/JavaScript: Unity’s scripting system is based on Mono, which itself is a “clone” of .NET/CLR, itself a conceptual copy of Java (specifically the Java VM and runtime libraries — C# is the Java copy). One of the problems with Java was that the Java VM was kind of seen — it seems to me — as a necessary evil to make Java work rather than a fabulous product in its own right. If Sun had pushed the Java VM as the main product, and Java as merely one example of a language that ran on it among others (e.g. including a less C-like language that didn’t suck) and actually worked harder to make the Java VM something worth pushing, the world would be different. As it was, Sun released Java after years of internal development and quickly Microsoft was able to build a better, faster, stronger VM. Microsoft — the guys who normally take three versions to produce a halfway decent clone of a mediocre product. Anyway, Microsoft saw what Sun didn’t — that it was the idea of a decent “virtual machine” that was the valuable component of Java (not that Smalltalk et al hadn’t done this before).

Anyway, Unity uses Mono to support scripting, which means any Mono-compatible language can be used; what’s more, because the CLR lets such languages talk to each other, you can pretty painlessly mix languages in one project without even writing an API. In other words, I can write code in C#, Boo, and Unity’s JavaScript, and code in one language can call functions and instantiate objects implemented in another. Very, very cool. (See a lot of that in the Java world, folks?) It’s pretty clear that Unity made the early and wise decision to provide a relatively simple scripting-language alternative to C#, and to do this they implemented a subset of JavaScript they initially called UnityScript, but then renamed JavaScript (for marketing reasons, creating much confusion thereafter).

Unity JavaScript is — simply put — nearly as nice as regular JavaScript, with the Unity and Mono runtimes to talk to instead of the DOM, and compiled to bytecode — i.e. running at similar speeds to Java, C#, etc. — which is to say in the ballpark of C. This doesn’t make Unity JavaScript the Holy Grail, but it does show how nice a Holy Grail language could be.

Objective-C: everything I know about Objective-C and all my experience with it makes me like it. That said, I really see no reason to commit to it. Indeed, the first and only Objective-C program I’ve written from scratch was a mortgage calculator for the iPhone — so I can’t really claim to be either comfortable or proficient with it — but I mention it because I am pretty familiar with it, I think I get it, and I think that of all the major “real” languages out there (i.e. languages that compile to native binary and can achieve C-like performance when it matters) it comes closest to being The Holy Grail. Unlike C++ or Java, it is almost as dynamic as you want it to be without resorting to special libraries, more syntax, and so forth. And it’s possible to “harden” your dynamic code as much as you want or need to — so you can convert a dynamically linked function pointer into a hard address to eliminate the overhead for dynamic calls — in other words, you can be as C-like in performance as you want without massive rewriting or refactoring, or suffering the agony of strong typing everywhere. The language is also verbose — but in a good way (i.e. a way that doesn’t involve a huge amount of typing, and does tell you what’s going on). C++, Java, and C# are all verbose in ways that bury what’s going on in a bunch of uninformative type declarations and casting that convey no useful information (99% of the time).

Almost all of the problems with Obj-C are practical — you can’t write cross-platform software (even iPhone and Mac OS X software don’t mix). You can’t really write Windows or Linux software either (unless you target GNUStep or feel like writing your own class library from scratch — neither is terribly practical). So Obj-C remains a very attractive boutique language — more practical than the many even more attractive boutique languages (mostly Lisp variations) but not practical enough. If, ultimately, your software needs to run in lots of places, Obj-C isn’t your friend. (I currently write utilities used interchangeably on Mac, Windows, and Linux — I often don’t even look at the Linux versions before deploying them, but apparently they work Just Fine. Thank you Realbasic.)

So, by my count, that’s well over twenty languages I’ve achieved “sufficient proficiency to earn money” with since 1977, despite being lazy and avoiding any programming for about five years. And I’m not including things like Word and Excel macros, MaxScript, XSL, CSS, VRML, and various other pseudo-languages that are essentially trivial for anyone with programming experience and the necessary motivation to learn (OK, I did mention HTML, but I did say it’s not a real programming language). I’m also not including languages I spent enough time with to familiarize myself (Lisp, Ruby) but was never motivated to build anything real in.

OK, so what about Go?

Chrome Frame, IE Woes, Canvases

Google’s latest salvo in the War Against Terror Microsoft is Chrome Frame, a plugin for IE that replaces IE’s HTML and JavaScript engines with Chrome’s — i.e. WebKit with slightly inferior text rendering and slightly different JavaScript performance.

I’ve spent almost exactly the last six months working on a project called Acumen. My previous job was web advertising, and with web advertising the most important target is the lowest common denominator. In large part this is because no-one wants to see what you have to offer (no-one is going to download a plugin just to see your flashing banner ad). The other side of this is that not only does IE6 represent the lowest common denominator in terms of computer hardware and software, but it’s a pretty handy indicator of the lowest common denominator in pretty much everything else.

(Who is going to click on a “lose weight and get rich by clicking here” banner ad that is pretending to be a Windows error dialog that looks really convincing, except for the fact it jiggles and has a Windows ME frame?)

So it’s refreshing to work on a project that (a) people might deliberately want to use, that (b) is attempting in some small way to make the world a better place (by providing web access to the UA Library system’s special collections, and — ultimately we hope — special collections everywhere) and (c) values standards compliance over the lowest common denominator. The sad thing is that while the front end works beautifully in Safari and IE8, and quite well in Firefox (yes, as of writing Firefox has bugs that Safari and IE8 do not), it explodes in a screaming heap even on IE7.

Acumen's Asset Viewer is an unholy collection of nested divs implemented in JavaScript. It supports two views of a given image — this is the smaller size.

The main problem for me is the “asset viewer” component — about 400 lines of JavaScript that, when loaded on top of a metadata file (the digital equivalent of a catalog index card), performs a quick query of the server for assets belonging to the metadata file (in other words, if this is a metadata file for the papers of S. D. Cabanis, do we have any of the actual papers available online?) and then presents them in a vaguely useful way in the page.

The first cut of this component uses jQuery and CSS to assemble an unholy collection of nested divs to produce a fairly straightforward browser UI. You get a horizontal scrolling list of asset thumbnails and, if you click on one, you get to see it up close in a window below. This is very similar to the UI of any number of image browsers such as Picasa or iPhoto. As I said, it works quite nicely on any current reasonably standards-compliant browser.

Even IE8.

But there are all kinds of issues. To begin with, displaying images inside an <img> tag has all kinds of downsides, not least of which is that you can’t wrap a draggable interface around <img> tags because they don’t receive the events. (I blame Safari! Safari — back in version 1.0 — introduced the ability to drag images from Safari into any other desktop app rather than right-click to “Save Image As…”. This is incredibly useful when browsing typical web pages — e.g. it probably reduces the time it takes to download a whole bunch of porn significantly — but for web app developers it means that images don’t get left clicks and drag-type events because the browser is using them.)

Now, I’d love to use <img> tags for the simple reason that they afford you a way of scaling images. If you want to scale an image to 73% of its actual size, you (a) load it into an Image object in JavaScript, and (b) when it’s loaded and you know its size, create an <img> tag and set its height and width to 73% of its native dimensions. But then you can’t scroll the image around by dragging. Also, browsers tend to do an absolutely crappy job of scaling images this way. (If it weren’t for this last problem, I’d probably work around the <img>-not-receiving-events issue using a floating <div>.)
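For what it’s worth, that two-step load-then-size dance can be sketched like this (a hypothetical `showScaled` helper — the pure size arithmetic is pulled out into its own function so the DOM fiddling stays obvious):

```javascript
// Compute scaled dimensions for an image (pure math, no DOM involved).
function scaledSize(width, height, factor) {
  return {
    width: Math.round(width * factor),
    height: Math.round(height * factor)
  };
}

// Hypothetical browser-only helper: load the image first, then create
// an <img> tag sized to some fraction of its native dimensions once
// those dimensions are known.
function showScaled(src, factor, container) {
  var loader = new Image();
  loader.onload = function () {
    var size = scaledSize(loader.width, loader.height, factor);
    var img = document.createElement('img');
    img.src = src;            // browser serves this from cache by now
    img.width = size.width;
    img.height = size.height;
    container.appendChild(img);
  };
  loader.src = src;           // kicks off the load
}
```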

So what I originally did (and if you click the Acumen link you can see it in action) was store two sizes of image — 512×512 and 2048×2048 — and load the images into a <div> tag (which can receive all mouse events). It works pretty well (except in IE6 and IE7 which just can’t deal, and Firefox 3.5 which occasionally loses the plot and thinks you’re clicking on an <img> tag — I’ve reported the bug) but you can only scale to “too small” and “too big”, and generating huge numbers of differently scaled images server-side seems like a Bad Idea to me.

There’s another bad aspect to this asset viewer: the architecture is ugly. Unless I’m willing to build my own UI library or use an off-the-shelf one I actually like (when will Atlas finally ship?), there’s a huge disconnect between the way you build up a UI inside a browser via the DOM and the way it actually works. E.g. drawing a scrollbar requires the programmer to be acutely aware of the fact that it’s a div (the scrollbar) wrapping another div (the thumb) that has dimensions equal to the sum of its size and border widths. It’s very hard to simply write one routine that knows how to draw scrollbars and another that knows how to handle events relating to scrollbars. It’s just ugly, and it makes the logic that handles events and refreshes very unpleasant indeed.

Google to the Rescue

The ideal option would be to use the HTML5 canvas for everything. This means you can handle all your events in one place (the canvas, or the element containing the canvas, receives all the events) and you can handle all your drawing — more or less — in one place (everything gets drawn in the canvas). It also lets you do things like scale images to arbitrary sizes on the client (reducing the amount of crap the server needs to store and/or the amount of work the server needs to do — e.g. resizing images or generating giant lists of available images).
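As a sketch of what client-side scaling buys you (hypothetical helper names; the scale-to-fit arithmetic is the interesting bit — canvas’s `drawImage` with an explicit width and height does the actual resampling):

```javascript
// Scale factor that fits an image inside a viewport while preserving
// aspect ratio. Capped at 1 so we never upscale past 1:1 -- drop the
// cap if you want arbitrary zooming in as well as out.
function fitScale(imgW, imgH, viewW, viewH) {
  return Math.min(1, viewW / imgW, viewH / imgH);
}

// Hypothetical browser-side usage: draw the image into the canvas at
// whatever scale fits -- the client resizes, not the server.
function drawFitted(ctx, img, viewW, viewH) {
  var scale = fitScale(img.width, img.height, viewW, viewH);
  ctx.clearRect(0, 0, viewW, viewH);
  ctx.drawImage(img, 0, 0, img.width * scale, img.height * scale);
}
```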

But no version of IE, even IE8, supports the canvas.

Then Google did something very clever. It implemented a version of the HTML5 canvas in JavaScript by faking most of the functionality of a Canvas using an unholy combination of VML (Microsoft’s not-quite-equivalent of SVG, I think) and nested <div> tags. All you do is import excanvas.js and initialize any canvas objects you create on the fly with a one-line call and you’re golden.
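As I recall, the recipe for dynamically created canvases looks roughly like this (a hypothetical `makeCanvas` helper — `G_vmlCanvasManager.initElement` is the one-line call, and excanvas.js itself is typically pulled in via an IE conditional comment):

```javascript
// Assumes excanvas.js has already been loaded in IE, e.g. via
// <!--[if IE]><script src="excanvas.js"></script><![endif]-->
// Hypothetical helper: create a canvas on the fly that works under
// excanvas as well as real canvas implementations.
function makeCanvas(parent) {
  var canvas = document.createElement('canvas');
  parent.appendChild(canvas);
  // excanvas defines G_vmlCanvasManager; canvases created on the fly
  // need this one-line initialization. Real implementations don't
  // define the manager, so this is a no-op everywhere else.
  if (typeof G_vmlCanvasManager !== 'undefined') {
    canvas = G_vmlCanvasManager.initElement(canvas);
  }
  return canvas;
}
```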

Well, almost.

Acumen Asset Viewer 2 is implemented using canvas (excuse the debug crap in the top-left corner). Note that arbitrary zooming is supported, so the image has been "scaled to fit".

I’ve got everything working quite beautifully in IE8, Firefox, and Safari. Not quite so well in Opera (but, really, who the frack cares?). In IE7 it works just well enough to convince me that I can probably get it working either by (a) fooling IE7 into going into the right “mode” or (b) making specific allowances for IE7 (basically IE seems to be mismatching metrics for bitmaps and vectors so all the bitmaps end up in the wrong places). This falls into the “lots of extra work to support a broken platform” category, and I’d rather avoid it. Me and pretty much everyone who isn’t a Microtard.

Now, Google has done something both very clever and — perhaps more importantly — cunning. Chrome Frame promises to let us develop standards-compliant websites and just stop caring about IE, which is simply huge. And since it’s completely open source that pretty much means IE can be dead as far as anyone — except web advertisers — is concerned. If I want my site to render nicely in IE then I just add the Chrome Frame tags and forget about it.