Farewell require, you’re no longer needed…

This is a really technical article about front-end javascript programming. You have been warned.

Background

For most of the last two years, I have written my front-end code using my own implementation of require. It was inspired by the mostly delightful implementation of require used at Facebook (in whose bowels I spent over a month of my life). Initially, I just wanted to use something off-the-shelf, but every implementation I found required a build phase during development (the Facebook version certainly did) and simultaneously tried to abstract away paths (because during the build phase, something would find all the libraries and map names to paths).

I wanted a require that did not depend on a build phase, did not abstract out paths (so if you knew a problem was in a required library, you also knew where that library was), was easy to use, and supported third-party libraries written to different variations of the commonjs “standard”.

What is require?

const foo = require('path/to/foo.js');

Require allows you to pull code from other files into a javascript file. Conceptually, it’s something like this (the first time a file is required):

  1. get text from ‘path/to/foo.js’
  2. insert it in the body of a function(module){ /* the code from foo.js */ }
  3. create an object, pass it to the function (as module)
  4. after the function executes, put whatever is in module.exports into a map from library paths to exports.

It follows that foo.js looks something like this:

module.exports = function() { console.log("I am foo"); }

The next time someone wants that file, pass them the same stuff you gave the previous caller.
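
Here’s a minimal sketch of that idea, assuming a synchronous XMLHttpRequest (deprecated, but it illustrates the mechanics; this is not the actual b8r implementation):

// naive require: synchronous fetch + wrapper function + memoization
// a sketch only; real implementations also handle relative paths,
// circular references, and the various commonjs wrapper conventions
const _modules = {};

function require(path) {
  if (!(path in _modules)) {
    const xhr = new XMLHttpRequest();
    xhr.open('GET', path, false); // false => synchronous (blocks the page!)
    xhr.send();
    const module = { exports: {} };
    // wrap the file's source in a function and pass it `module`
    new Function('module', xhr.responseText)(module);
    _modules[path] = module.exports; // memoize for subsequent callers
  }
  return _modules[path];
}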

Now, if that were all it needed to do, require would be pretty simple. Even so, there’s one nasty issue in the preceding when you want to allow for relative paths — if you require the same module from two different places, the paths need to be normalized so you don’t get back different instances of the same thing, both for efficiency and correctness.

But of course there are lots and lots of wrinkles:

  • Every javascript file can, of course, require other javascript files.
  • Circular references are easy to introduce accidentally, and can become unavoidable.
  • Each of these things is a synchronous call, so loading a page could take a very long time (if you put this naive implementation into production).
  • Javascript is a dynamic language. It’s already hard to statically analyze, require makes it harder. It’s particularly nasty because require is “just a function” which means it can be called at any time, and conditionally.
  • And, just to be annoying, the different implementations of commonjs make weird assumptions about the behavior of the wrapper function and the way it’s called (some expect module to be {}, some expect it to be {exports:{}}, some expect it to be something else), and because require is just javascript, modules can choose to behave differently based on what they see.

Under the hood, require ends up needing to support things like dependency-mapping, circular-reference detection, asynchronous require, inlining pre-built dependency trees, and pretending to be different versions of require for different modules. It’s quite a nasty mess, and it’s vital to the correct functioning of a web app and its performance.

So, good riddance!

ES6 Modules

So, all the major browsers (and nodejs — sort of) finally support ES6 modules and — perhaps more importantly — because this is a change to the language (rather than something implemented in multiple different ways using the language) various build tools support it and can transpile it for less-compatible target platforms.

So now, you can write:

import foo from 'path/to/foo.js';

And the module looks like:

export default function(){ console.log('I am foo'); }

A couple of months ago, I took a few hours and deleted require.js from the b8r source tree, and then kept working on the b8r source files until they passed b8r’s (rather meagre) unit tests. I then ran into a hard wall: I couldn’t get any components to load if they used require.

The underlying problem is that components are loaded asynchronously, while the import statement is static: you can’t call it at runtime the way you can call require. It turns out there’s an asynchronous, dynamic version of import I didn’t know about.

const foo = await import('path/to/foo.js'); // foo is the module namespace, not the default export!

To obtain a module’s default export via import() you need to access the namespace’s default property. So:

const foo = (await import('path/to/foo.js')).default;

In general, I’ve started avoiding export default for small libraries. In effect, default is just another named export as far as dynamic import() is concerned.
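
In other words, with import() you read default and named exports off the module namespace the same way (a sketch):

const module = await import('path/to/foo.js');
const foo = module.default; // the default export is just the property named 'default'
const {bar} = module;       // a named export; no special treatment either way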

import — again because it’s a language feature and not just some function someone wrote — is doing clever stuff to allow it to be non-blocking, e.g.

import {foo} from 'path/to/foo.js';

Will not work if foo is not explicitly exported from the module. E.g. exporting a default with a property named foo will not work. This isn’t a destructuring assignment. Under the hood this presumably means that a module can be completely “compiled” with placeholders for the imports, allowing its dependencies to be found and imported (etc.). Whereas require blocks compilation, import does not.
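
To make that concrete (a sketch: assume the importing file says import {foo} from './foo.js'):

// foo.js
export function foo() { console.log('I am foo'); } // this satisfies import {foo}

// whereas this alone would NOT, even though the default object
// has a property named foo; it isn't an explicit named export:
// export default { foo: () => console.log('I am foo') };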

This also means that import affords static analysis (unlike require, import statements must appear at the top level of a module). Also, even though dynamic import() looks just like a regular function call, it turns out it’s also a new language feature: it’s not a function, so you can’t assign it to a variable, pass it around, and so on.

An Interesting Wrinkle

One of the thorniest issues I had to deal with while maintaining require was that of third-party libraries which adhered to mystifyingly different commonjs variations. The funny thing is that all of them turn out to support the “bad old way” of handling javascript dependencies, which is <script> tags.

As a result of this, my require provided a solution for the worst-case scenario in the shape of a function — viaTag — that would insert a (memoized) script tag in the header and return a promise that resolved when the tag loaded. Yes, it’s icky (but heck, require is eval), but it works. I’ve essentially retained a version of this function to deal with third-party libraries that aren’t provided as ES modules.
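
The idea looks something like this (a sketch; the actual b8r helper may differ):

const _scripts = {};

const viaTag = (src) => {
  if (!_scripts[src]) {
    _scripts[src] = new Promise((resolve, reject) => {
      const script = document.createElement('script');
      script.src = src;
      script.onload = () => resolve(script); // the library has (hopefully) installed its global
      script.onerror = reject;
      document.head.appendChild(script);
    });
  }
  return _scripts[src]; // memoized: one tag per src
};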

b8r without require

Last night, I revisited the task of replacing require with import in b8r armed with new knowledge of dynamic import(). I did not go back to the branch which had a partial port (that passed tests but couldn’t handle components) because I have done a lot of work on b8r’s support for web-components in my spare time and didn’t want to handle a complicated merge that touched on some of the subtlest code in b8r.

In a nutshell, I got b8r passing tests in about three hours, and then worked through every page of the b8r demo site and had everything loading by midnight with zero console spam. (Some of the loading is less seamless than it used to be because b8r components are even more deeply asynchronous than they used to be, and I haven’t smoothed all of that out yet.)

This morning I went back through my work, and found an error I’d missed (an inline test was still using require) and just now I realized I hadn’t updated benchmark.html so I fixed that.

An Aside on TDD

My position on TDD is mostly negative (I think it’s tedious and boring) but I’m a big fan of API-driven design and “dogfooding”. Dogfooding is, in essence, continuous integration testing, and the entire b8r project is pure dogfooding. It’s gotten to the point where I prefer developing new libraries I’m writing for other purposes within b8r, because it’s such a nice environment to work with. I’ve been thinking about this a lot lately as my team develops front-end testing best practices.

Here’s the point — swapping out require for import across a pretty decent (and technically complex) project is beyond a “refactor”. This change took me two attempts, and I probably did a lot of thinking about it in the period between the attempts — so I’d call it a solid week of work.

At least when you’re waiting for a compile / render / dependency install you can do something fun. Writing tests — at least for UI components or layouts — isn’t fun to do and it’s not much fun to run the tests either.

“5 Story Points”

I’ve done a bunch of refactors of b8r-based code (in my last month at my previous job two of us completely reorganized most of the user interface in about a week, refactoring and repurposing a dozen significant components along the way). Refactoring b8r code is about as pleasant as I’ve ever found refactoring (despite the complete lack of custom tooling). I’d say it’s easier than fairly simple maintenance on typical “enterprisey” projects (e.g. usually React/Redux, these days) festooned with dedicated unit tests.

Anyway, even advocates of TDD agree that TDD works best for pure functions — things that have well-defined pre- and post- conditions. The problem with front-end programming is that if it touches the UI, it’s not a pure function. If you set the disabled attribute of a <button>, the result of that operation is (a) entirely composed of side-effects and (b) quite likely asynchronous. In TDD world you’d end up writing a test expecting the button with its disabled attribute set to be a button with a disabled attribute set. Yay, assignment statements work! We have test coverage. Woohoo!
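
In code, the test being lampooned amounts to this (a sketch, with console.assert standing in for a test framework):

const button = document.createElement('button');
button.disabled = true;
// assert that the assignment we just performed... happened
console.assert(button.disabled === true, 'disabled buttons are disabled');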

Anyway, b8r has a small set of “unit tests” (many of which are in fact pretty integrated) and a whole bunch of inline tests (part of the inline documentation) and dogfooding (the b8r project is built out of b8r). I also have a low tolerance for console spam (it can creep in…) but b8r in general says something if it sees something (bad) and shuts up otherwise.

Anyway, I think this is a pretty pragmatic and effective approach to testing. It works for me!

What’s left to do?

Well, I have to scan through the internal documentation and purge references to require in hundreds of places.

Also, there’s probably a bunch of code that still uses require that for some reason isn’t being exercised. This includes code that expects to run in Electron or nwjs. (One of the nice things about removing my own implementation of require is that it had to deal with environments that create their own global require.) This is an opportunity to deal with some files that need to be surfaced or removed.

At that point there should be no major obstacle to using rollup.js or similar to “build” b8r (versus using the bespoke toolchain I built around require, and can now also throw away). From there it should be straightforward to convert b8r into “just another package”.

Is it a win …yet?

My main goal for doing all this is to make b8r live nicely in the npm (and yarn) package ecosystem and, presumably, benefit from all the tooling that you get from being there. If we set that — very significant — benefit aside:

  • Does it load faster? No. The old require loads faster. But that’s to be expected — require did a lot of cute stuff to optimize loading and being able to use even cleverer tooling (that someone else has written) is one of the anticipated benefits of moving to import.
  • Does it run faster? No. The code runs at the same speed.
  • Is it easier to work with? export { .... } is no easier than module.exports = { .... }, and destructuring required stuff is actually easier. It will, however, be much easier to work with b8r in combination with random other stuff. I am looking forward to not having to write /* global module, require */ and 'use strict' all over the place to keep my linters happy.
  • Does it make code more reliable? I’m going to say yes! Even in the simplest case import {foo} from 'path/to/foo.js' is more rigorous than const {foo} = require('path/to/foo.js') because the latter code only fails if foo is called (assuming it’s expected to be a function) or using it causes a failure. With import, as soon as foo.js loads an error is thrown if foo isn’t an actual export. (Another reason not to use export default by the way.)
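
To spell out that last point (a sketch: assume foo.js does not actually export foo):

// commonjs: this "succeeds", quietly binding foo to undefined...
const {foo} = require('path/to/foo.js');
foo(); // ...and only blows up here, at call time

// ES module: this throws as soon as foo.js loads,
// before any of this module's code runs:
import {foo} from 'path/to/foo.js';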

More on web-components and perf

tl;dr web-components are only slow if they have a shadowRoot.

Here’s an updated version of the previous graph, showing two new data points — one where I replace two spans with a simple b8r component, and another where I modified makeWebComponent to allow the creation of custom-elements without a shadowRoot (and hence without styling).

My intention with the upcoming version of b8r is to replace b8r’s components with web-components, and my previous test showed that web-components were kind of expensive. But it occurred to me I hadn’t compared them to b8r components, so I did.

In a nutshell, a b8r component is about as expensive as a web-component with a shadowRoot (the b8r component in question was styled; removing the style didn’t improve performance, which isn’t surprising since b8r deals with component styles in a very efficient way), and a web-component without a shadowRoot is just plain fast. This is great news, since it means that switching from b8r components (which do not have a shadow DOM) to web-components is a perf win.

D&D5e, Web-components, & Performance

tl;dr — web-components (edit: with shadowRoots) are kind of slow.

The graph shows the average of five trials (post warmup) in Chrome on my Macbook Pro (it’s a maxed-out 2015 model). I saw nothing to suggest that a larger number of trials would significantly change the results.

Over the long weekend I was hoping to play some D&D with my kids (it never ended up happening). One of the things I find pretty frustrating with the new (well, 2014) Player’s Handbook is that the spells are all mixed together in strict alphabetical order. (It’s not like earlier editions were better organized. It’s also not clear how they could really fix this, since any arrangement will either require repeating spell descriptions or some kind of tedium.) So to figure out what spells are available to, say, a first level Druid, and what they do, you need to do a lot of page-flipping and cross-referencing.

As an aside, in the late-lamented DragonQuest (SPI, 1980), magic is divided up into “colleges” with a given character having access to the spells of only one college. Each college, along with its spells, is described in one place, with the only page-flipping being required for a relatively small number of spells that are common between colleges. A typical college has access to 20-30 different spells which are thematically unified (e.g. Adepts of the College of Fire have fire-related spells).

This kind of organization is not possible for D&D because the overlap between classes is so great. Sorcerers essentially get a subset of Wizard spells, as do Bards. And so on.

So it struck me that some kind of easily filterable list of spells which let you decide how you wanted them sorted and what you wanted shown would be handy, and this seemed like a nice test for b8r’s web-component system.

Now, b8r is the third rewrite of a library I originally developed with former colleagues for USPTO. Initially we had a highly convoluted “data-table” (not an original name, of course) that essentially let you point a descriptive data-structure at an array and get a table with all kinds of useful bells and whistles. This quickly became unmanageable, and was particularly horrible when people started using it to create quite simple tables (e.g. essentially short lists of key/value pairs) and then add fancy behaviors (such as drag-reordering) to them.

So, I developed a much simpler binding system (“bind-o-matic”) that allowed you to do fine-grained bindings but didn’t try to do bells and whistles, and it quickly proved itself better even for the fancy tables. b8r‘s data-binding is the direct descendant of bind-o-matic.

Anyway, all-purpose tables are hard. So I looked around for something off-the-shelf that would just work. (I’m trying to purge myself of “not invented here” syndrome, and one of the joys of web-components is that they’re supposed to be completely interoperable, right?) Anyway, the closest thing I could find (and damn if I could find a compelling online demo of any of them) had been abandoned, while the next closest thing was “inspired by” the abandoned thing. Not encouraging.

So I decided to write a minimal conceptual set of custom elements — in essence placeholder replacements for <table> <tr> <td> and <th> — which would have zero functionality (they’d just be divs or spans with no behavior) that I could later add CSS and other functionality to in order to get desired results (e.g. the ability to switch a view from a virtual grid of “cards” to a virtual table with a non-scrolling header and resizable columns — i.e. “the dream” of data-table implementations, right?).

Now, I have implemented a function named makeWebComponent that will make you a custom element with one line of code, e.g. makeWebComponent('foo-bar', {}). So I made a bunch of components exactly that way and was horrified to see that my test page, which had been loading in ~1000ms was suddenly taking a lot longer. (And, at the time it was loading b8r from github and loading the spell list from dnd5eapi.co and then making an extra call for each of the 300-odd spells in the list.)

So, I ended up implementing my interactive D&D spell list using old school b8r code and got on with my life.

Hunting down the problem

I’m pretty enthusiastic about web-components, so this performance issue was a bit of a shock. So I went to the bindinator benchmark page (which lets me compare the performance of b8r to special-purpose vanilla javascript code doing the same thing with foreknowledge of the task).

In other words, how fast is b8r compared with javascript hand-coded to do the exact same thing in a completely specific way? (An example of how unfair the comparison can be is swapping two items in a large array and then updating the DOM vs. simply moving the two items.)

So the graph at the top of the page compares creating a table of 10,000 rows with a bunch of cells in each row using vanilla js to b8r, and then the other three columns show b8r with two simple <a> tags replaced with <simple-span>, where <simple-span> is defined using makeWebComponent vs. hand-coded (but still with a style node) vs. hand-coded (with nothing except a slot to contain its children).

class SimpleSpan extends HTMLElement {
  constructor() {
    super();

    // const style = document.createElement('style');
    // style.textContent = ':host { font-style: italic; }';
    const shadow = this.attachShadow({mode: 'open'});
    const slot = document.createElement('slot');
    // shadow.appendChild(style);
    shadow.appendChild(slot);
  }
}
window.customElements.define('simple-span', SimpleSpan);

Above is the hand-coded <simple-span> so you can see exactly what I’m doing and second-guess me. The commented-out lines are the difference between the fourth and fifth columns.

I should add that I tried rewriting the above code to use cloneNode(true) instead of creating the new nodes in the constructor, but performance was no different in this case. If the nodes being created were more complex there would likely be an advantage. (I make extensive use of cloneNode(true) in b8r since past experimentation showed a benefit, and makeWebComponent makes use of it.)

I should add that even the simplest component made with makeWebComponent has a style node, because Google’s best practices suggest that all custom elements should support the hidden attribute, and doing this with styles is by far the simplest (and, I would hope, the most performant) way to do so.

It also occurred to me that having the <style> contain :host { font-style: italic; } might be more expensive than :host([hidden]) { display: none; } but, again, that was a wash. Similarly, it occurred to me that initializing the shadowRoot as {mode: 'closed'} might help. It didn’t.

So, this appears to show that the overhead for just two trivial custom elements in each of 10,000 rows is comparable to the overhead for b8r in its entirety for the entire table row. Now, b8r is creating a given table row by cloning, ad-hoc, the hierarchy it’s using as a template, and then querying that hierarchy to find bound elements and then inserting values as appropriate.

When you consider the kinds of uses to which browsers put custom elements internally (e.g. <video> and <input type="date">), these are generally not something you’re going to have 20,000 of in the DOM at once. It’s easy, however, to imagine wanting every item in a pretty huge list to include a custom checkbox or popup menu. It’s worth knowing that even simple custom elements have a non-trivial overhead (and we’re talking 1ms for 20 of them on a fast, high-end laptop).

Addendum

I didn’t realize that you could create a web-component with no shadowRoot. (This isn’t exactly something anyone makes obvious — I’ve found literally zero examples of this in tutorials, etc.)

If you do this, the performance issue mostly goes away.

Now, you do lose all the “benefits” of the shadow DOM, but you gain performance and can still attach style rules to custom (unique!) tagNames, which makes managing the resulting CSS easier. This is particularly nice given my desire to replace b8r components with custom elements, since there needn’t be any performance overhead (you can “seal” the styling in a container with just one shadowRoot rather than ten thousand instances).
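
For what it’s worth, the shadowRoot-less version of <simple-span> is even simpler than the hand-coded example above (a sketch):

class PlainSpan extends HTMLElement {
  // no attachShadow(), no <slot>; children simply stay in the light DOM
}
window.customElements.define('plain-span', PlainSpan);

// and instances can still be styled with ordinary CSS, e.g.
// plain-span { font-style: italic; }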

b8r v2

b8r v2 is going to adopt web-components over components, and import over require.

Having developed b8r in a period of about two weeks after leaving Facebook (it was essentially a distillation / brain-dump of all the opinions I had formed about writing front-end code while working there), I’ve just spent about two years building a pretty solid social media application using it.

But, times change. I’m done with social media, and the question is, whither b8r?

In the last few weeks of my previous job, one of my colleagues paid me a huge compliment: he said I’d “really hit it out of the park” with b8r. We’d just rebuilt half the user interface of our application, replacing some views completely and significantly redesigning others, in two weeks, with one of the three front-end coders on leave.

I designed b8r to afford writing complex applications quickly, turn hacky code into solid code without major disruption, make maintenance and debugging as easy as possible, and to allow new programmers to quickly grok existing code. As far as I can tell, it seems to do the trick. But it’s not perfect.

Two significant things have emerged in the front-end world in the last few years: import and web-components (custom DOM elements in particular).

import

When I wrote b8r, I found the various implementations of require to be so annoying (and mutually incompatible) that I wrote my own, and it represents quite an investment of effort since then (e.g. I ended up writing an entire packaging system because of it).

Switching to import seems like a no-brainer, even if it won’t be painless (for various reasons, import is pretty inimical to CommonJS and not many third-party libraries are import-friendly, and it appears to be impossible for a given javascript file to be compatible with both import and require).

I experimentally converted all of the b8r core over to using import in an afternoon — enough that it could pass all the unit tests, although it couldn’t load the documentation system because any component that uses require or require.lazy won’t work.

Which got me to thinking about…

web-components

I’ve been struggling with improving b8r’s component architecture. The most important thing I wanted was for components to naturally provide a controller (in effect, for components to be instances of a class), and to pave some good paths and wall up some bad ones. But, after several abortive attempts, and then thinking about the switch from require to import, I’ve decided to double-down on web-components. The great thing about web-components is that they have all the virtues I want from v2 components and absolutely no dependency on b8r.

I’ve already added a convenience library called web-components.js. You can check it out along with a few sample components. The library makes it relatively easy to implement custom DOM elements, and provides an economical javascript idiom for creating DOM elements that doesn’t involve JSX or other atrocities.

Using this library you can write code like this (this code generates the internals of one of the example components):

fragment(
  div({classes: ['selection']}),
  div({content: '▾', classes: ['indicator']}),
  div({classes: ['menu'], content: slot()}),
)

I think it’s competitive with JSX while not having any of the dependencies (such as requiring a transpile cycle, for starters). Here’s the JSX equivalent:

<div className={'selection'}></div>
<div className={'indicator'}>▾</div>
<div className={'menu'}>
  {...props.children}
</div>
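
For reference, minimal versions of those helpers might look like this (a sketch, not the actual web-components.js implementation):

const fragment = (...children) => {
  const frag = document.createDocumentFragment();
  children.forEach(child => frag.appendChild(child));
  return frag;
};

const div = ({classes = [], content = ''} = {}) => {
  const elt = document.createElement('div');
  elt.classList.add(...classes);
  // content may be a string or a DOM node (e.g. slot())
  if (typeof content === 'string') {
    elt.textContent = content;
  } else {
    elt.appendChild(content);
  }
  return elt;
};

const slot = () => document.createElement('slot');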

To see just how lean a component implemented using this library can be, you can compare the new switch component to the old CSS-based version.

Aside — An Interesting Advantage of Web Components

One of the interesting properties of web-components is that internally the only part of the DOM they need to care about is whether they have a child <slot>. Web-components don’t need to use the DOM at all except for purposes of managing hierarchical relationships. (Annoyingly, web-components cannot be self-closing tags. You can’t even explicitly self-close them.)

E.g. imagine a web-component that creates a WebGL context, and child components that render into the container’s scene description.

In several cases while writing b8r examples I really wanted to be able to have abstract components (e.g. the asteroids in the asteroids example or the character model in the threejs example). This is something that can easily be done with web-components but is impossible with b8r’s HTML-centric components. It would be perfectly viable to build a component library that renders itself as WebGL or SVG.

Styling Custom Elements for Fun and Profit

One of the open questions about custom DOM elements is how to allow them to be styled. So far, I’ve seen one article suggesting subclassing, which seems to me like a Bad Idea.

I’m currently leaning towards one or both of (a) making widgets as generic and granular as possible (e.g. implement a custom <select> and a custom <option> and let them be styled from “outside”) and (b) when necessary, driving styles via CSS variables (e.g. you might have a widget named foo that has a border, and give it a generic name (--widget-border-color), a specific name (--foo-border-color), and a default to fall back to).
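
For example, a widget’s internal styles might resolve its border color via nested var() fallbacks, something like this (the names here are hypothetical):

// hypothetical sketch: generic name, specific name, and a hard default
const style = document.createElement('style');
style.textContent = `
  foo-widget {
    border-color: var(--foo-border-color, var(--widget-border-color, #ccc));
  }
`;
document.head.appendChild(style);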

So, in essence, b8r v2 will be smaller and simpler — because it’s going to be b8r minus require and components. You won’t need components, because you’ll have web-components. You won’t need require because you’ll have import. I also plan one more significant change in b8r v2 — it will be a proper node package, so you can manage it with npm and yarn and so forth.

<b8r-bindery>

One idea that I’m toying with is to make b8r itself “just a component”. Basically, you’d get a b8r component that you simply stick anywhere in the DOM and you’d get b8r’s core functionality.

In essence the bindery’s value — presumably an object — becomes accessible (via paths) to all descendants, and the component handles all events the usual way.

I’m also toying with the idea of supporting Redux (rather than b8r’s finer-grained two-way bindings). There’s probably not much to do here — just get Redux to populate a bindery and then instead of the tedious passing of objects from parent-to-child-to-grandchild that characterizes React-Redux coding you can simply bind to paths and get on with your life.
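
The glue might be as simple as this (a sketch: bindery.set is a hypothetical API, while store is an ordinary Redux store):

const syncStoreToBindery = (store, bindery) => {
  const update = () => bindery.set('appState', store.getState());
  update();                // populate the bindery with the initial state
  store.subscribe(update); // ...and keep it in sync thereafter
};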

Summing Up

After two years, I’m still pretty happy with b8r. Over the next year or so I hope to make it more interoperable (“just another package”), and to migrate it from (its) require and HTML components to import and web-components. Presumably, we’ll have import working in node (and hence Electron and nwjs) by then.

Happy new year!

As the Wwworm Turns

Microsoft’s recent announcement that it is, in effect, abandoning the unloved and unlamented Edge browser stack in favor of Chromium is, well, both hilarious and dripping in irony.

Consider at first blush the history of the web in the barest terms:

  • 1991 — http, html, etc. invented using NeXT computers
  • 1992 — Early browsers (Mosaic, Netscape, etc.) implement and extend the standard; notably, Netscape adds Javascript and tries to make frames and layers a thing. Also, the <blink> tag.
  • 1995 — Microsoft “embraces and extends” standards with Internet Explorer and eventually achieves a 95% stranglehold on the browser market.
  • 1997 — As Netscape self-destructs and Apple’s own OpenDoc-based browser “Cyberdog” fails to gain any users (mostly due to being OpenDoc-based), Apple begs Microsoft for a slightly-less-crummy version of IE5 to remain even vaguely relevant/useful in an era where most web stuff is only developed for whatever version of IE (for Windows) the web developer is using.
  • 2002 — Firefox rises from the ashes of Netscape. (It is essentially a cross-platform browser based on Camino, a similar Mac-only browser that was put together by developers frustrated by the lack of a decent Mac browser.)
  • 2003 — Stuck with an increasingly buggy and incompatible IE port, Apple develops its own browser based on KHTML after rejecting Netscape’s Gecko engine. The new browser is called “Safari”, and Apple’s customized version of KHTML is open-sourced as Webkit.
  • As a scrappy underdog, Apple starts a bunch of small PR wars to show that its browser is more standards-compliant and runs javascript faster than its peers.
  • Owing to bugginess, neglect, and all-round arrogance, Microsoft gradually loses a significant portion of market share to Firefox (and, on the Mac, Safari — which is at least as IE-compatible as the aging version of IE that runs on Macs). Google quietly funds Firefox via ad-revenue-sharing, since it is in Google’s interest to break Microsoft’s stranglehold on the web.
  • 2007 — Safari, having slowly become more relevant to consumers as the best browser on the Mac (at least competitive with Firefox functionally, and much faster and more power-efficient than any competitor), is suddenly the only browser on the iPhone. Suddenly, making your stuff run on Safari matters.
  • 2008 — Google starts experimenting with making its own web browser. It looks around for the best open source web engine, rejects Gecko, and picks Webkit!
  • Flooded with ad revenue from Google, and divorced from any sense of user accountability, Firefox slowly becomes bloated and arrogant, developing an email client and new languages and mobile platforms rather than fixing or adding features to the only product it produces that anyone cares about. As Firefox grows bloated and Webkit improves, Google Chrome benefits as, essentially, Safari for Windows. (Especially since Apple’s official Safari for Windows is burdened with a faux-macOS “metal” UI, and users are tricked into installing it with QuickTime.) When Google decides to turn Android from a Sidekick clone into an iPhone clone, it uses its Safari clone as the standard browser. When Android becomes a success, suddenly Webkit compatibility matters a whole lot more.
  • 2013 — Google is frustrated by Apple’s focus on end-users (versus developers). E.g., for any proposed feature: is the increase in size and power consumption justified by some kind of end-user benefit? If “no”, then Apple simply won’t implement it. Since Google is trying to become the new Microsoft (“developers, developers, developers”), it forks Webkit so it can stop caring about users and just add features developers think they want at an insane pace. It also decides to completely undermine the decades-old conventions of software version numbering and make new major releases at an insane pace.
  • Developers LOOOOVE Chrome (for the same reason they loved IE). It lets them reach lots of devices, it has lots of nooks and crannies, it provides functionality that lets developers outsource wasteful tasks to clients, if they whine about some bleeding edge feature Google will add it, whether or not it makes sense for anyone. Also it randomly changes APIs and adds bugs fast enough that you can earn a living by memorizing trivia (like the good old days of AUTOEXEC.BAT) allowing a whole new generation of mediocrities to find gainful employment. Chrome also overtakes Firefox as having the best debug tools (in large part because Firefox engages in a two year masturbatory rewrite of all its debugging tools which succeeds mainly in confusing developers and driving them away).
  • 2018 — Microsoft, having seen itself slide from utter domination (IE6) to laughingstock (IE11/Edge), does the thing-that-has-been-obvious-for-five-years and decides to embrace and extend Google’s Webkit fork (aptly named “Blink”).