As the Wwworm Turns

Microsoft’s recent announcement that it is, in effect, abandoning the unloved and unlamented Edge browser stack in favor of Chromium is, well, both hilarious and dripping in irony.

Consider at first blush the history of the web in the barest terms:

  • 1991 — HTTP, HTML, etc. invented using NeXT computers
  • 1993 — Early browsers (Mosaic, Netscape, etc.) implement and extend the standards; notably, Netscape adds JavaScript and tries to make frames and layers a thing. Also, the <blink> tag.
  • 1995 — Microsoft “embraces and extends” standards with Internet Explorer and eventually achieves a 95% stranglehold on the browser market.
  • 1997 — As Netscape self-destructs and Apple’s own OpenDoc-based browser “Cyberdog” fails to gain any users (mostly due to being OpenDoc-based), Apple begs Microsoft for a slightly-less-crummy Mac version of IE to remain even vaguely relevant/useful in an era where most web stuff is only developed for whatever version of IE (for Windows) the web developer happens to be using.
  • 2002 — Firefox rises from the ashes of Netscape. (It is essentially a cross-platform browser based on Camino, a similar Mac-only browser that was put together by developers frustrated by the lack of a decent Mac browser.)
  • 2003 — Stuck with an increasingly buggy and incompatible IE port, Apple develops its own browser based on KHTML after rejecting Mozilla’s Gecko engine. The new browser is called “Safari”, and Apple’s customized version of KHTML is open-sourced as WebKit.
  • As a scrappy underdog, Apple starts a bunch of small PR wars to show that its browser is more standards-compliant and runs JavaScript faster than its peers.
  • Owing to bugginess, neglect, and all-round arrogance, Microsoft gradually loses a significant portion of market share to Firefox (and, on the Mac, Safari — which is at least as IE-compatible as the aging version of IE that runs on Macs). Google quietly funds Firefox via ad-revenue sharing, since it is in Google’s interest to break Microsoft’s stranglehold on the web.
  • 2007 — Safari, having slowly become more relevant to consumers as the best browser on the Mac (at least competitive with Firefox functionally, and much faster and more power-efficient than any competitor), is suddenly the only browser on the iPhone. Suddenly, making your stuff run on Safari matters.
  • 2008 — Google starts experimenting with making its own web browser. It looks around for the best open-source web engine, rejects Gecko, and picks WebKit!
  • Flooded with ad revenue from Google, and divorced from any sense of user accountability, Firefox slowly becomes bloated and arrogant, developing an email client and new languages and mobile platforms rather than fixing or adding features to the only product it produces that anyone cares about. As Firefox grows bloated and WebKit improves, Google Chrome benefits as, essentially, Safari for Windows. (Especially since Apple’s official Safari for Windows is burdened with a faux-macOS “metal” UI, and users are tricked into installing it with QuickTime.) When Google decides to turn Android from a Sidekick clone into an iPhone clone, it uses its Safari clone as the standard browser. When Android becomes a success, suddenly WebKit compatibility matters a whole lot more.
  • 2013 — Google is frustrated by Apple’s focus on end-users (versus developers). E.g., for any proposed feature: is the increase in size and power consumption justified by some kind of end-user benefit? If “no”, then Apple simply won’t implement it. Since Google is trying to become the new Microsoft (“developers, developers, developers”), it forks WebKit so it can stop caring about users and just add features developers think they want at an insane pace. It also decides to completely undermine the decades-old conventions of software version numbering and ship new major releases at the same insane pace.
  • Developers LOOOOVE Chrome (for the same reason they loved IE). It lets them reach lots of devices, it has lots of nooks and crannies, it provides functionality that lets developers outsource wasteful tasks to clients, and if they whine about some bleeding-edge feature Google will add it, whether or not it makes sense for anyone. It also randomly changes APIs and adds bugs fast enough that you can earn a living by memorizing trivia (like the good old days of AUTOEXEC.BAT), allowing a whole new generation of mediocrities to find gainful employment. Chrome also overtakes Firefox as having the best debug tools (in large part because Firefox engages in a two-year masturbatory rewrite of all its debugging tools, which succeeds mainly in confusing developers and driving them away).
  • 2018 — Microsoft, having seen itself slide from utter domination (IE6) to laughingstock (IE11/Edge), does the thing-that-has-been-obvious-for-five-years and decides to embrace and extend Google’s WebKit fork (aptly named “Blink”).

Google and the <video> tag

Though H.264 plays an important role in video, as our goal is to enable open innovation, support for the codec will be removed and our resources directed towards completely open codec technologies.

From HTML Video Codec Support in Chrome

Well that sucks.

Gruber asks a few “simple questions” here. Aside from the question of hypocrisy w.r.t. Flash bundling, I think his points are more than neutralized by these ten questions:

You are a proponent of Apple using its influence to diminish the importance of Flash for the web. Yet, when Google makes similar moves to rid the web of a similarly closed and patented, albeit different type of technology, you do not support them. Why is Apple promoting an open web a good thing, but Google promoting an open web a bad thing?

I think that if Google pulled Flash support from Chrome there would be no question that Google was on the side of the angels (although it would still be a dumb thing to do), but since there’s no hint of this it seems purely like a cynical move to hurt Apple’s anti-Flash campaign, which will damage HTML5 <video> adoption. I think you can make the argument that HTML5 <video> adoption with H.264 as the de facto standard codec is a Bad Thing.

Anyway it’s a bigger mess now than it was before Google decided to do this. Ultimately it will come down to “what will let the most people see the most porn using the most devices?”

Postscript: “standards”

One of the arguments made in favor of WebM/VP8 is that it can be part of the W3C standard, unlike H.264, because it’s not encumbered by license fees. The problem here is that WebM/VP8 almost certainly is encumbered (as GIF turned out to be in its day); no one has been sued over it yet because no one uses it. But this is beside the point — the CSS font-family property supports any font, and almost all the fonts that anyone cares about are encumbered (i.e. subject to royalties, copyright, and so on). Just as CSS font-family can specify a non-free, non-open-source font, there’s no reason why a video tag can’t point to an arbitrarily encoded video.

To put it another way:

There’s no conflict between the HTML specification being open and royalty-free and H.264 video playback being supported in HTML5 video tags, as long as the codec doesn’t need to be implemented by the browser. Just as a slab of text with font-family “Verdana” won’t necessarily display correctly on every browser (if the font is not installed), it would follow that not every video will play back in every browser.
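To make the analogy concrete, here’s a minimal sketch (the video file name is just a placeholder): the markup itself is open and royalty-free, while whether either resource actually renders depends entirely on what the visitor’s browser and OS happen to supply.

    <!-- Open markup, possibly-encumbered resources. -->
    <p style="font-family: Verdana, sans-serif">
      Shows in Verdana only if the (non-free) font is installed;
      otherwise the browser falls back to some other sans-serif.
    </p>

    <!-- The tag is standard; whether the H.264 file plays depends on
         what decoder the browser (or the OS underneath it) provides. -->
    <video src="movie.mp4" controls></video>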

As a practical matter, it would be nice if serving a page with video were as simple an affair as possible: e.g. if figuring out which video to serve didn’t involve sniffing the browser, operating system, and so forth; better yet, if one video format worked everywhere. Right now H.264 is the best candidate. VP8/WebM will never be the best candidate, because by the time there’s a critical mass of hardware support out there it will be obsolete. This is a stupid, stupid fight.
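For what it’s worth, the <video> element already allows the no-sniffing approach in principle: list several encodings as <source> children and let each browser take the first one it can decode. A rough sketch (file names are placeholders):

    <!-- The browser walks the list top to bottom and plays the first
         source it knows how to decode; no user-agent sniffing needed. -->
    <video controls width="640">
      <source src="clip.mp4"  type="video/mp4">   <!-- H.264 -->
      <source src="clip.webm" type="video/webm">  <!-- VP8/WebM -->
      <source src="clip.ogv"  type="video/ogg">   <!-- Theora -->
      Sorry, your browser doesn’t support the video tag.
    </video>

The catch, of course, is that you still have to encode and host every clip two or three times, which is precisely the mess being complained about here.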

And yet one more thing:

It’s interesting that the companies still in favor of H.264 (Apple, Microsoft) are precisely the companies that do not implement the codec in the browser. Apple and Microsoft both implement H.264 at the OS level rather than as a browser-level plugin (browser plugins being a much worse thing — see this excellent piece that daringfireball brought to my attention).

Is it a font or an image file format?

The flipside of my argument that H.264 should be considered analogous to a font is that, generally speaking, text is still legible when presented in the wrong font. By that argument H.264 is more like an image format (JPEG, PNG, etc.). If we accept this argument — which I’d say is the most H.264-hostile stance (within reason) to take with respect to video codecs — then consider that most browsers simply let you display pretty much any image that’s convenient inside an <img> tag (sometimes badly, as per Internet Explorer’s notorious mishandling of PNG files over the years), generally by using the underlying OS’s APIs for handling images. That, I’d suggest, is exactly what the approach for video ought to be: idealistic and pragmatic at once.

Would it be great if there were one codec out there that worked everywhere, that web developers could target? Sure. But that’s no reason to refuse to support the video codecs that happen to be around anyway, just as I can click on a PSD or TIFF in Safari and see it in the browser.

Ultimately, Google’s stance would have web browsers simply refuse to play back non-standards-based content (unless it’s Flash). What kind of “principled” or “non-evil” position is that? Again, if Google were to drop Flash support and make the argument that HTML5 is “the platform”, then it could make some kind of argument about consistency, but that’s not what’s happening. Google is making Flash part of “the platform” but not H.264.

WebKit & Chrome UI Problem and Solution

Chrome's Tabs

I’m starting to like Chrome. I used it as my default browser for Windows for about a week before its various subtle bugs drove me back to Firefox (I got over my annoyance with Firefox…). Under Windows, Chrome is aesthetically challenged, but on the Mac it’s lovely. My one quibble is with the way tabs are positioned.

Chrome does put its tabs above the URL bar, which—conceptually—belongs to the web page you’re looking at. But the tab area also contains your bookmarks bar, which is global context.

WebKit's Tabs

WebKit/Safari has, without doubt, the most bizarre tab arrangement in the history of computing. It looks quite nice, but it makes no sense at all. It doesn’t work badly, and I guess if you’re going to give up on making sense conceptually, why not go for aesthetics?

I think there’s a very simple solution. Put the freaking tabs on the side of the window—like iTunes or Finder. There’s one browser that does this—OmniWeb.

I actually bought a copy of OmniWeb back when it was pretty much the best alternative to Internet Explorer 5.1—shortly afterwards Camino came out, followed by Firefox, followed by Safari. OmniWeb is now free and WebKit-based.

OmniWeb's Drawer

The only problem with OmniWeb is that the “tabs” are placed in a drawer, which is ugly, fails to visually associate the tabs with the browser view the way Finder’s sidebar does, and—worst of all—is a freaking drawer. Even so, OmniWeb probably has the best-thought-out UI of any current browser. (And not just because it is the only browser to deal with “tabbed” browser windows in a sensible way.)

Aside: drawers should simply not be used for UI elements you expect to use all the time. I’d go so far as to say that Mac OS X would be better off without drawers in the first place. I can’t think of an application that uses a drawer which wouldn’t be better served by simply integrating the same stuff into the parent window. (Path Finder’s pathological use of drawers is the main reason I can’t bear using it.)

Finder's Sidebar

Here’s Finder. This is how to do browser tabs. (It’s a shame Finder doesn’t actually dynamically insert open windows in its sidebar—it would fix about 75% of what’s wrong with it.)

Tabs are, in general, a stupid idea. They waste vertical real estate (which is more valuable than horizontal, especially on today’s widescreens), and they do so inefficiently (since titles are wide rather than squat), so you frequently can’t see all your tabs. They’re really only useful in cases where they let you pick between a small number of options (e.g. selecting views and palettes in Photoshop or Unity)—cases where a side panel would be massive overkill. This is why a lot of more complex programs have stopped using tabbed dialogs for their preferences.

Here endeth the lesson.