Usability on the Underside

A minimalist Cocoa app

I’ve always thought it ironic that Apple makes the most usable computers and the least usable APIs. I’m referring, of course, to AppleScript — just kidding. At first I thought it was a huge failing (and belies Steve Jobs’s claimed obsession with making even the stuff the user never sees beautiful; but then he likely never looked at APIs); then I excused it as Apple wanting to make dealing with its APIs a pons asinorum that less capable programmers wouldn’t be able to cross.

(Note: I’m not kidding that AppleScript is horrible. Just that it’s twenty years old and that horse has been beaten to death.)

I think I was right the first time.

But, it has gotten a lot better.

Perhaps the single most impressive piece of software Apple ever shipped was HyperCard. There was, basically, nothing wrong with HyperCard that couldn’t have been easily fixed by version 3. The chief problems with HyperCard were that:

  • Images were not a first class entity (in VB3, for example, you could load an image into an image variable much like you could load a text file into a string variable in almost any language with decent string manipulation — i.e. not C/C++ or Pascal, but every other popular language).
  • Binary blobs were not a first class entity (something that you’d probably implement for free while implementing the above).
  • The UI gadgets weren’t native UI gadgets.
  • There was no bridge to the native Toolbox short of writing a plugin in C/C++ or using the crazy-but-genius third party HyperTalk compiler.

That may seem like a lot of key flaws, but there’s nothing there that can’t be solved with a bunch of rudimentary programming. Consider what HyperCard got right:

  • First of all, it was shockingly stable — you could often work on HyperTalk projects for days with no serious crashes (let alone System reboots). This was in an era when computers (both Mac and PC) crashed like crazy.
  • You could quickly build user interfaces with drag-and-drop, including bitmap drawing tools
  • State persistent by default
  • Every application was its own database
  • By default you ran inside the development environment — you lived in the development environment
  • Shockingly fast — once HyperTalk 2 came out with its JIT compiler, performance was amazingly good (see note below).
  • Its language was amazingly accessible — both readable and writable — more readable than AppleScript and far, far more writable. Most people, including non-coders, could figure out how to do simple stuff within a few hours.
  • Awesome levels of introspection (which allowed metaprogramming)
  • First class debugging tools
  • Always-available REPL

Many of these virtues have yet to be matched by any programming language. VB3 essentially took a few of these virtues, replaced the incredibly easy to work with but hard-to-implement HyperTalk with the very easy to use and already-implemented BASIC, fixed all the obvious faults, and became the foundation of Microsoft’s entire approach to software development. And, say what you will about Microsoft, they do make good development tools.

Note on performance: we had an ambitious multimedia project written in VB3 that barely ran on maximally configured Windows PCs. (We were using then-bleeding-edge 486 DX2/66 desktops for development, and our target platform was the then-just-shipped IBM ThinkPad 750, which cost $11,000 in Australia and had a 486DX4/75 CPU by my recollection; it was also the first name-brand MS-DOS compatible laptop with both color graphics and sound.) I cloned the project on my aging Quadra 700 (68040 25MHz) using HyperCard and it ran circles around the PCs.

Usable Languages

It’s not easy to create a really approachable programming language.


It’s been done a few times — notably with BASIC. Javascript is a major achievement, but compared to HyperTalk it’s still very difficult to learn. Javascript essentially thrived by being ubiquitous, indispensable, and based on a simplified subset of the widely understood C-ish syntax:

if (x == y && z != q){ p = foo ? bar : baz; }

could be in any one of a dozen C-ish languages, including Javascript. HyperCard was based on a simplified subset of the even more widely understood English syntax:

get the width of button ok
get it * 2
set the width of button ok to it

Javascript was chiefly indispensable because browsers lacked obvious functionality and simply wouldn’t address it — so instead of simple ways to center stuff or make image-buttons change when the user clicked on them, we got a weird new programming language. (E.g. it took the addition of CSS for us to make it possible for something to change its appearance based on mouse events without programming, and having to code CSS is hardly an improvement over having to code in Javascript.) I’m not sure if this was a conspiracy by Netscape to make Javascript popular — never ascribe to any other cause that which is adequately explained by incompetence — but it really took a huge amount of community effort for programming in Javascript to become even vaguely tolerable.

When I was finally forced to get good at programming Javascript (around 2005), just figuring out how to get some kind of debugger working was a major pain in the neck that many developers never bothered to figure out, and writing browser-portable code was so difficult that most developers threw up their hands and just targeted IE6.

My point is that Javascript isn’t a usable language, it’s just more usable than C or Java (not saying much) and supported by a ubiquitous and indispensable environment (the browser) which has grown to have many of the virtues of HyperCard over time (e.g. browsers are pretty stable now, you get a REPL and a decent debugger) while retaining many of the faults (native UI elements are conspicuously missing, and would be amazingly useful given how much effort goes into making half-assed faux elements).

(It’s worth noting that the web was at least partially inspired by HyperCard. The syntax for comments:

<!-- comment -->

looks to me like a little ode to HyperCard, whose comment syntax was

-- comment

although I’m told they both got it from some precursor; still the influence of HyperCard on the web is pervasive.)

I’m not saying that we should have to write

set the borderLeft of the style of button "OK" to "2px solid black" 

instead of the far more economical (but hardly intuitive):

$("#ok").css({borderLeft: "2px solid black")}

but the fact that all web developers find the latter programming style tolerable is something of a miracle, and relies on the rise of jQuery — currently the preferred library for papering over the manifold stupidities of web browsers.

I think it’s safe to say that something like

find("button.ok").style.borderLeft = "2px solid black"

would be reasonably intuitive, C-ish, and decently economical, but despite 25 years of progress and a huge amount of effort by some of the best programmers working at some of the richest companies in the world, the best we can come up with right now out-of-the-box is something like:

document.querySelector("button.ok").style.borderLeft = "2px solid black"

which is both far less readable and less intuitive than the HyperTalk version and yet less concise.

A tangential rant on idiotic naming and the misuse of namespaces

How is it that despite everything being namespaced to hell and back these days, “querySelector” isn’t called — I don’t know — “find”? Why not have a global function named “find”, for fuck’s sake, or a global find object that supported find.byURL(), find.byCSSSelector(), and so on? What’s the point of sticking everything in its own private namespace if you can’t give things people use constantly a short, meaningful, intuitive name?

End rant.

Which gets us back to Apple

The first thing in the original Inside Macintosh (after introductory stuff) was a minimalist Macintosh application (I can’t remember whether it was in Pascal or C) that launched, opened up a Window with a text editor inside it, and had a menu and a main event loop. It was something like four pages of code. Note that this application did not handle undo, did not save or load files (maybe it did), didn’t support multiple documents, and didn’t behave nicely with other applications (e.g. redraw its Window after being sent to the background) because MacOS was single-tasking at the time. It was barely more than “hello world” in a Window.

This was because, in the original toolbox, when you created a window you had to do things like track mouse behavior in the window’s close box yourself. The fact you didn’t need to handle every keystroke inside the text editor was actually one of the miraculously cool things about the Mac toolbox in 1984 (but the Inside Mac documentation and natively-hosted compilers were still a couple of years away in 1984).

Thanks to over thirty years of progress, and the efforts of some of the greatest programmers and the most design-and-usability-focused (and now richest) company the world has ever known we can now do the equivalent of the above in — I have no idea how much code, so let’s find out. I tried googling for an example somewhere and the best I could come up with was this pure-code Obj-C “Hello World” for iOS.

This approach isn’t necessarily the first thing you should learn when learning to develop applications for a platform, but it should probably be the second or third. You should have a pretty good idea how absolutely everything works at some level.

#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>

@interface MyDelegate : UIResponder< UIApplicationDelegate >
@end

@implementation MyDelegate
- ( BOOL ) application: ( UIApplication * ) application
           didFinishLaunchingWithOptions: ( NSDictionary * ) launchOptions {
  UIWindow *window = [ [ [ UIWindow alloc ] initWithFrame:
    [ [ UIScreen mainScreen ] bounds ] ] autorelease ];
  window.backgroundColor = [ UIColor whiteColor ];
  UILabel *label = [ [ UILabel alloc ] init ];
  label.text = @"Hello, World!";
  label.center = CGPointMake( 100, 100 );
  [ label sizeToFit ];
  [ window addSubview: label ];
  [ window makeKeyAndVisible ];
  [ label release ];

  return YES;
}
@end

int main( int argc, char *argv[ ] ) {
  return UIApplicationMain( argc, argv, nil, NSStringFromClass( [ MyDelegate class ] ) );
}

That’s actually a lot better than I would have guessed, and a gigantic improvement over writing a minimal Mac application in 1986 (but about on par with writing a MacApp 2.x application in 1993). I can’t find a similar example for Swift, and I’d prefer to implement a desktop application (as its minimal functionality is less minimal than that of an iPhone app, which (for example) is killed by the OS rather than being quit). So I used this longer but similarly minimal Obj-C desktop example as a starting point. Note that this example doesn’t even display “Hello World”, so I added that.

One thing I really like about this second example is that it doesn’t require defining a new class or subclass — like the original Inside Mac example it’s just a function, so it doesn’t seem as much like magic. (With the usual approach you create an instance of the Application class, send it the run() method, and you’re done — but now WTF does that class do?) Sure, it’s dealing with objects, but it’s really clear what’s going on and where you need to drill down to understand a particular thing better. Similarly, it gives you a very good idea of (for example) what loading a XIB (or NIB) does for you and where it fits into the application’s life cycle.

import Cocoa

func main() -> Int {
    // give me an application instance
    let app = NSApplication.sharedApplication()
    app.setActivationPolicy(.Regular) // no clue if this is necessary or default behavior would be fine
    // build out our menu (iOS apps do not need to do any of this)
    let menubar = NSMenu()
    let appMenuItem = NSMenuItem()
    menubar.addItem(appMenuItem)
    app.mainMenu = menubar
    let appMenu = NSMenu()
    let appName = NSProcessInfo.processInfo().processName
    let quitTitle = "Quit " + appName
    // the only thing we're making that actually DOES something is implementing the Quit menu item
    let quitMenuItem = NSMenuItem(title: quitTitle, action: "stop:", keyEquivalent: "q")
    quitMenuItem.target = app
    appMenu.addItem(quitMenuItem)
    appMenuItem.submenu = appMenu
    // this is where we create our window and completely define how it looks -- it could be entirely replaced by loading a XIB
    let window = NSWindow(
        contentRect: NSRect(x: 0, y: 0, width: 400, height: 400),
        styleMask: NSTitledWindowMask,
        backing: .Buffered,
        `defer`: false // I assume defer is a keyword
    )
    window.cascadeTopLeftFromPoint(NSPoint(x: 20, y: 20)) // playing nice with other apps -- iOS apps don't do this either
    window.title = appName // iOS apps don't have a title, and we don't really need to set this
    // now we're putting "Hello, world" in nice big letters in the Window
    let label = NSText(frame: NSRect(x: 0, y: 250, width: 400, height: 40))
    label.editable = false
    label.selectable = false
    label.string = "Hello, world"
    label.font = NSFont(name: "Helvetica Neue", size: 30.0)
    label.alignment = .Center
    label.backgroundColor = NSColor.windowBackgroundColor()
    window.contentView!.addSubview(label)
    // Having defined the window we're good to go
    window.makeKeyAndOrderFront(nil)
    app.activateIgnoringOtherApps(true)
    app.run() // Magic happens here!
    return 0
}

I think this is really neat. You can copy and paste this code into an XCode Swift Playground and invoke main() and voila!

This is longer than the iPhone example, but a lot of the lines could be skipped if I went for brevity. If I didn’t prettify the text or define lots of constants (per the example I copied) then it would be just as short as the iOS example while, I think, using less magic and being clearer in what it does. I’m particularly proud of figuring out that I could get the Quit item to send a message (“stop:”) to NSApp without creating a selector referring to the local context (which would have forced me down the “define a subclass and yada yada magic happens” route… I think — I tried just defining a local function and passing its name as a selector but that did not work). That said, the keyboard shortcut for the quit menu item doesn’t work (as I note in the comments).

I actually don’t think the current Apple APIs are all that bad. They’re about on par with MacApp. There is a pretty big impedance mismatch between what the base initializers for the different UI elements do and what you would want them to do if you wanted to write applications this way (which I suspect is because almost no-one does), but while the defaults may not look good, they’re not dysfunctional. I could just create the label and set its string property and it would work fine — but why is the default background color not the window background color or — better — transparent? Why is the default font not the standard font and size for a label or a text field but something else entirely (the default text settings for a 1989 NeXT machine, perhaps)?

I think the problem is that this isn’t where learning app development starts (after perhaps a simpler, more engaging introduction — look how I can create a “Hello, world” app in one minute using XCode). OK, fine, but how the hell would I make an application from scratch if I needed to? How is all this being hooked together? How does the Window hook up to its view controller? How can I use the same view controller for different views? (By the way, this is not addressed by my little example.) Understanding how the basic wiring works also makes XCode’s ridiculously complicated UI more comprehensible — since you now know certain things exist, you know to look for them, and you also know how to simply make stuff work without guessing where it’s buried in the XCode UI.

That said, this is far better than I expected. It would be great if there were initializers that allowed a typical task to be completed in one line (e.g. create a label, position it, and set its content) and if there were better defaults (and the label’s default font settings corresponded to reasonable expectations). If you omit the call to NSWindow.cascadeTopLeftFromPoint then the window appears in a really stupid place. Similarly, why isn’t there a NSMenuItem initializer that lets you stick the new item in a menu? I’m clearly missing something with respect to event handling, and that’s because the relevant initializers aren’t making it easy to do the common, correct thing. I don’t think you can argue that the badness of Apple’s Cocoa APIs is acting as a pons asinorum. In fact it’s more like it’s creating extra work for mediocre programmers.
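
For example, nothing stops Apple (or you) from providing that kind of one-line convenience. Here is a hypothetical sketch; the initializer and its defaults are my invention, not an actual Cocoa API:

extension NSText {
    convenience init(frame: NSRect, string: String, fontName: String = "Helvetica Neue", size: CGFloat = 30.0) {
        self.init(frame: frame)
        self.string = string
        self.font = NSFont(name: fontName, size: size)
        // defaults that match what you'd expect of a label
        self.editable = false
        self.selectable = false
        self.alignment = .Center
        self.backgroundColor = NSColor.windowBackgroundColor()
    }
}

// the label setup from the example above collapses to one line:
let label = NSText(frame: NSRect(x: 0, y: 250, width: 400, height: 40), string: "Hello, world")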

(Correction: ignore the stuff I say about the menu item not working and the correct event wiring not being implemented by default — by putting a capital “Q” as the parameter for the keyboard equivalent I inadvertently made the shortcut command+shift+q which was why it appeared not to work when I didn’t bother to look at my menu. So my estimation of the APIs improves by a small notch — better default behavior but more magic going on: how does the window know it belongs to app?)

Learning to write Unity Shaders

Dynamically generated infinite scrolling terrain with a custom shader (inside the Unity editor)

Unity has replaced Photoshop in terms of adding features faster than I can assimilate them. It’s been possible to write custom shaders for years, but every time I tried to do something non-trivial I would give up in frustration. Recently, however, I finally wrote some pretty nice dynamic terrain and dynamic planet code and was very frustrated with shading options.

What I wanted to do, in essence, was embed multiple tiling textures inside a single texture, and then have a shader continuously interpolate between a pair of those textures based on altitude, biome, or whatever. This doesn’t appear to be something other people are doing, so I was not going to be able to take an existing shader and tweak a couple of things to make it work. I’d actually need to understand what I was doing.

If you look at the picture, you’ll see seamless transitions (based on altitude) between a sand dune texture (at sea level) and a forest texture (at middle levels) and further up the beginning of a transition to bare rock. I’ve got a darker blue-tinged rock below the sand, so the material I’m using looks like this:

A single texture that contains multiple tiling textures. Most of the texture is blank, but you get the idea.

Obviously there’s room for expansion. I could do some really interesting things with this technique (even more so if I can interpolate between three or four samples without choking the GPU). I haven’t figured out how to benchmark this stuff — I’m not seeing any hit on the GPU using Unity’s profiler, but I haven’t tried running this on a mobile device — so far, this shader seems to run just as fast as (say) a standard diffuse shader.

How to Start

Writing shaders is actually pretty simple. The big problems are finding useful documentation (I couldn’t find any useful documentation on ShaderLab, but it turns out that nVidia’s cg documentation appears to do the trick) and tutorials (I couldn’t find any). It doesn’t help that Unity has made radical changes to its Shader language over time (not really their fault, the underlying GPU architectures have been in flux) which makes a lot of the tutorials you do find worse than useless.

For the record, I’m still using Unity 4.6.x so this may all be obsolete in Unity 5.x. That said, I was working off the latest Unity 5.x online documentation.

The closest thing to useful tutorials I could find is in the Unity documentation — specifically Surface Shaders Examples. Sadly, you’re going to need to infer a great deal from these examples, because I couldn’t find explanations of the simplest things (e.g. how data gets from the shader’s UI to the actual pixel shader code — there’s a lot of automagical linking going on).

Shader "Custom/DynamicTerrainShader" {
	Properties {
		_MainTex ("Base (RGB)", 2D) = "white" {}
		_Color ("Main Color", Color) = (0,0,0,1)
		_WorldScale("World Scale", Float) = 0.25
		_AltitudeScale("Altitude Scale", float) = 0.25
		_TerrainBands("Terrain Bands", Int) = 4
	SubShader {
		Tags { "RenderType"="Opaque" }
		LOD 200
		#pragma surface surf Lambert

		sampler2D _MainTex;
		float _WorldScale;
		float _AltitudeScale;
		float4 _Color;
		int _TerrainBands;

		struct Input {
			float3 worldPos;
			float2 uv_MainTex;
            float2 uv_BumpMap;

		void surf (Input IN, inout SurfaceOutput o) {
			float y = IN.worldPos.y / _AltitudeScale + 0.5;
			float bandWidth = 1.0 / _TerrainBands;
			float s = clamp(y * (_TerrainBands + 2) - 2, 0, _TerrainBands);
			float t = frac(s);
			t = t < 0.25 ? t * 0.5 : ( t > 0.75 ? (t - 0.75) * 0.5 + 0.875 : t * 1.5 - 0.25);
			float band = floor(s)  * bandWidth;
			float2 uv = frac(IN.uv_MainTex * _WorldScale) * (bandWidth - 0.006, bandWidth - 0.006) + (0.003, 0.003);
			uv.y = uv.y + band;
			float2 uv2 = uv;
			uv2.y = uv2.y + bandWidth;
			half4 c = tex2D(_MainTex, uv) * (1 - t) + tex2D(_MainTex, uv2) * t;
			o.Albedo = c.rgb * 0.5;
			o.Alpha = c.a;
	FallBack "Diffuse"

So here’s my explanation for what it’s worth.

To refer to properties in your code, you need to declare them (again) inside the shader body. Look at occurrences of _MainTex as an instructional example. As far as I can tell you have to figure out which parameters are available where, and which type declarations in the shader body correspond with the (different) types in the properties declaration by osmosis.

The Input block is where you declare which bits of “ambient” information your shader uses from the rendering engine. Again, what you can declare in here — and what it means — is something you simply need to figure out from examples. I figured out how worldPos worked from the example which turns the soldier into strips.

Note that the Input block declaration determines what is passed as Input (referred to as IN in the body). The way the declaration works (you declare the type rather than the variable) is a bit puzzling but it kind of makes sense. The SurfaceOutput object is essentially a set of parameters for the pixel that is about to be rendered. So the simplest shader body would simply be something like o.Albedo = (1,0,0,1) which would be the constant color red (depending on the basic shader type, lighting would or wouldn’t be applied, etc.).

Variables and calculations are all about vectors. Basically everything is a vector. A 3D point is a float3, a 2D point is a float2. You can add and multiply vectors (component by component) so (1,0.5) + (-0.5, 0.25) -> (0.5,0.75). You can mix scalars and vectors in some obvious and not-so-obvious ways (hint: it usually pays to be explicit about components).

The naming conventions are interesting. For vectors, you can use x, y, and z as shorthand for accessing specific components. I’m not sure if the fourth coordinate is w or a or something else. I’m also pretty sure that spatial coordinates are not in the order I think they are (so I do OK with foo.x, but get into trouble if I try to handle specific components via (,,) expressions). Hence lines like uv.y = uv.y + band instead of uv = uv + (0, band) (which doesn’t work).

You may have noticed some handy functions such as floor and frac being used and wonder what else there is. I couldn’t find any list or references on the Unity website, but eventually found this cg standard library documentation on nVidia’s website (for its shader language). Everything I tried from this list seemed to work (on my nVidia-powered Macbook Pro).

If you’re looking for control structures and the like, I haven’t found any aside from the ternary operator — condition ? value-if-condition-true : value-if-condition-false — which is fully supported, can be nested, etc.. This alone would probably have driven me away just five years ago before I learned to stop worrying and love the ternary operator.

Why no switch statements, loops and such? I’m writing a pixel shader here and I suspect it relies on every program executing the same instruction at the same time, so conditional loops are out. (Actually I may be wrong about this — see the cg language documentation. I don’t know how closely ShaderLab corresponds to cg though.)

Once you see that list of functions you’ll understand why I am using piecewise linear interpolation between the materials (it looks just fine to me).

Some final points — the shader had terrible problems with the edges of the sub-textures until I changed the bitmap sampling to point (versus linear or trilinear interpolation). I suspect this may be a wrapping issue, but (as you may find) debugging these suckers is not easy.

One final comment — even though you’re essentially writing assembler for your GPU, shader programming is pretty forgiving — I haven’t crashed my laptop once (although Unity itself seems to periodically die during compiles).

What does a good API look like?

I’m temporarily unemployed — my longest period of not having to work every day since I was laid off by ValueClick in 2010 — so I’m catching up on my rants. This rant is shaped by experiences I had during my recent job interview process (I was interviewed by two Famous Silicon Valley Companies, one of which hired me and the other of which “didn’t think I was a good fit”). Oddly enough, the company that hired me interviewed me in an API-agnostic (indeed almost language-agnostic) way, while the company that didn’t interviewed me in terms of my familiarity with an API created by the other company (which they apparently use religiously; although maybe not so much following recent layoffs).

Anyway, the API in question is very much in vogue in the Javascript / Web Front End world, in much the same way as CoffeeScript and Angular were a couple of years ago. (Indeed, the same bunch of people who were switching over to Angular a few years back have recently been switching over to this API, which is funny to watch.) Both this API and the API it has replaced in terms of mindshare are in large part concerned with a very common problem in UI programming.

A Very Common Problem in UI Programming

Probably the single most common problem in UI programming is synchronizing data with the user interface, or “binding”. There are three fundamental things almost any user-facing program has to do:

  • Populate the UI with data you have when (or, better, before) the UI first appears on screen
  • Keep the UI updated with data changed elsewhere (e.g. data that changes over time)
  • Update the data with changes made by the user

This stuff needs to be done, it’s boring to do, it often gets churned a lot (there are constant changes made to a UI and the data it needs to sync with in the course of any real project). So it’s nice if this stuff isn’t a total pain to do. Even better if it’s pretty much automatic for simple cases.

The Javascript API in question addresses a conspicuous failing of Javascript given its role as “the language of the web”. In fact almost everything good and bad about it stems from its addressing this issue. What’s this issue? It’s the difficulty of dealing with HTML text. If you look at Perl, PHP, and JSP, the three big old server-side web-programming languages, each handles this particular issue very well. The way I used to look at it was:

  • A perl script tends to look like a bunch of code with snippets of HTML embedded in it.
  • A PHP script tends to look like a web page with snippets of code embedded in it.
  • A JSP script tends to look like a web page with horrible custom tags and/or snippets of code embedded in it.

If you’re trying to solve a simple problem like get data from your database and stick it in your dynamic web page, you end up writing that web page the way you normally would (as bog standard HTML) and just putting a little something where you want your data to be, and maybe some supporting code elsewhere. E.g. in PHP you might write "<p>{$myDate}</p>" while in JSP you’d write something like "<p><%= myDate %></p>". These all look similar, do similar things, and make sense.

It’s perfectly possible to defy these natural tendencies, e.g. write a page that has little or no HTML in it and just looks like a code file, but this is pretty much how many projects start out.

Javascript, in comparison, is horrible at dealing with HTML. You either end up building strings manually — "<p>" + myDate + "</p>" — which gets old fast for anything non-trivial, or you manipulate the DOM directly through the browser’s APIs, having first added metadata to your existing HTML, e.g. you’d change "<p></p>" to "<p id="myDate"></p>" and then write "document.getElementById('myDate').textContent = myDate;" in a script tag somewhere else.

The common solution to this issue is to use a template language implemented in Javascript (there are approximately 1.7 bazillion of them, as there are of anything involving Javascript) which allows you to write something like "<p>{{myDate}}</p>" and then do something like "Populate(slabOfHtml, {myDate: myDate});" in the simplest case (cue discussion about code injection). The net effect is you’re writing non-standard HTML and using a possibly obscure and flawed library to manipulate markup written in this non-standard HTML (…code injection). You may also be paying a huge performance penalty because, depending on how things work, updating the page may involve regenerating its HTML and getting the browser to parse it again, which can suck — especially with huge tables (or, worse, huge slabs of highly styled DIVs pretending to be tables). OTOH you can use lots of jQuery to populate your DOM-with-metadata fairly efficiently, but this tends to be even worse for large updates.

The API in question solves this problem by uniting non-standard HTML and non-standard Javascript in a single new language that’s essentially a mashup of XML and Javascript that compiles into pure Javascript and [re]builds the DOM from HTML efficiently and in a fine-grained manner. So now you kind of need to learn a new language and an unfamiliar API.

My final interview with the company that did not hire me involved doing a “take home exam” where I was asked to solve a fairly open-ended problem using this API, for which I had to actually learn this API. The problem essentially involved: getting data from a server, displaying a table of data, allowing the user to see detail on a row item, and allowing the user to page through the table.

Having written a solution using this unfamiliar API, it seemed very verbose and clumsy, so I tried to figure out what I’d done wrong. I tried to figure out what the idiomatic way to do things using this API was and refine them. Having spent a lot of spare time on this exercise (and I was more-than-fully-employed at the time) it struck me that the effort I was spending to learn the API, and to hone my implementation, were far greater than the effort required to implement the same solution using an API I had written myself. So, for fun, I did that too.

Obviously, I had much less trouble using my API. Obviously, I had fewer bugs. Obviously I had no issues writing idiomatic code.

But, here’s the thing. Writing idiomatic code wasn’t actually making my original code shorter or more obvious. It was just more idiomatic.

To bind an arbitrary data object to the DOM with my API, the code you write looks like this:

$(<some-selector>).bindomatic(<data-object>);

The complex case looks like this:

$(<some-selector>).bindomatic(<data-object>, <options-object>);

Assuming you’re familiar with the idioms of jQuery, there’s nothing new to learn here. The HTML you bind to needs to be marked up with metadata in a totally standard way (intended to make immediate sense even to people who’ve never seen my code before), e.g. to bind myDate to a particular paragraph you might write: <p data-source=".myDate"></p>. If you wanted to make the date editable by the user and synced to the original data object, you would write: <p data-bind=".myDate"></p>. The only complaints I’ve had about my API are about the “.” part (and I somewhat regret it). Actually the syntax is data-source="myData.myDate" where myData is simply an arbitrary word used to refer to the original bound object. I had some thoughts of actually directly binding to the object by name, somehow, when I wrote the API, but Javascript doesn’t make that easy.

In case you’re wondering, the metadata for binding tabular data looks like this: <tr data-repeat=".someTable"><td data-source=".someField"></td></tr>.

My code was leaner, far simpler, to my mind far more “obvious”, and ran faster than the code using this other, famous and voguish, API. There’s also no question my API is far simpler. Oh, and also, my library solves all three of the stated problems (without polluting the source object with new fields, methods, etc.) — you do have to tell it if you have changes in your object that need to be synced to the UI — while this other library, not so much.

So — having concluded that a programming job that entailed working every day with the second API would be very annoying — I submitted both my “correct” solution and the simpler, faster, leaner solution to the second company and there you go. I could have been laid off by now!

Here’s my idea of what a good API looks like

  • It should be focused on doing one thing and one thing well.
  • It should only require that the programmer tell it something it can’t figure out for itself and hasn’t been told before.
  • It should be obvious (or as obvious as possible) how it works.
  • It should have sensible defaults
  • It should make simple things ridiculously easy, and complex things possible (in other words, its simplicity shouldn’t handcuff a programmer who wants to fine-tune performance, UX, and so on).

XCode and Swift

I don’t know if the binding mechanisms in Interface Builder seemed awesome back in 1989, but today — with all the improvements in both Interface Builder and the replacement of Objective-C with the (potentially) far cleaner Swift — they seem positively medieval to me, combining the worst aspects of automatic-code-generation “magic” and telling the left hand what the left hand is doing.

Let’s go back to the example of sticking myDate into the UI somewhere. IB doesn’t really have “paragraphs” (unless you embed HTML) so let’s stick it in a label. Supposing you have a layout created in IB, the way you’re taught — as a newb — to do this is:

  1. In XCode, drag from your label in IB to the view controller source code (oh, step 0 is to make sure both relevant things are visible)
  2. You’ll be asked to name the “outlet”, and then XCode will automagically write this code: @IBOutlet weak var myDate: UILabel!
  3. Now, in — say — the already-written-for-you viewDidLoad method of the controller you can write something like: myDate.text = _myDate (it can’t be myDate because you’ve used myDate to hold the outlet reference).

Congratulations, you have now solved one of the three problems. That’s two lines of code, one generated by magic, the other containing no useful information, that you’ve written to get one piece of data from your controller to your view.

Incidentally, let’s suppose I wanted to change the outlet name from “myDate” to “dateLabel”. How do I do that? Well, you can delete the outlet and create a new outlet from scratch using the above process, and then change the code referencing the outlet. Is there another way? Not that I know of.

And how do we solve the other two problems?

Let’s suppose we’d in fact bound to an input field. So now my outlet looks like this: @IBOutlet weak var myDate: UITextField! (the “!” is semantically significant, not me getting excited).

  1. In XCode, drag from the field in IB to the view controller source code.
  2. Now, instead of creating an outlet, you select Action, and you make sure the type is UITextField, and change the event to ValueChanged.
  3. In the automatically-just-written-for-you Action code add the code _myDate = sender.text!

You’ve now solved the last of the three problems. You’ve had a function written for you automagically, and you’ve written one line of retarded code. That’s three more lines of code (and one new function) to support your single field. And that’s two different things that require messing with the UI during a refactor or if a property name gets changed.

OK, what about the middle problem? That’s essentially a case of refactoring the original code so that you can call it whenever you like. So, for example, you write a showData method, call it from viewDidLoad, and then call it when you have new data.
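
Sketched out (using the names from the earlier example), that refactor is trivial:

// move the UI-populating code into its own method...
func showData() {
    myDate.text = _myDate
}

// ...call it when the view loads...
override func viewDidLoad() {
    super.viewDidLoad()
    showData()
}

// ...and call showData() again whenever new data arrives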

Now, this is all pretty ugly in basic Javascript too. (And it was even uglier until browsers added document.querySelector.) The point is that it’s possible to make it very clean. How to do this in Swift / XCode?

Javascript may not have invented the hash as a fundamental data type, but it certainly popularized it. Swift, like most recent languages, provides dictionaries as a basic type. Dictionaries are God’s gift to people writing binding libraries. That said, Swift’s dictionaries are strongly typed which leads to a lot of teeth gnashing.
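
For instance (a sketch of the kind of teeth-gnashing I mean): every value in a heterogeneous dictionary comes back as an optional AnyObject that has to be cast before use.

import Foundation

var record: [String: AnyObject] = [
    "name": "Anne Example",
    "age": 33
]
// reading a value means unwrapping the optional and downcasting it
if let name = record["name"] as? String {
    print(name) // "Anne Example"
}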

Our goal is to be able to write something like:

self.bind(data)

It would be even cooler to be able to round-trip JSON (the way my Javascript binding library can). So if this works we can probably integrate a decent JSON library.

So the things we need are:

  • Key-value-pair data storage, i.e. dictionaries — check!
  • The ability to add some kind of metadata to the UI
  • The ability to find stuff in the UI using this metadata

This doesn’t seem too intimidating until you consider some of the difficulty involved in binding data to IB.


The way tables are implemented in Cocoa is actually pretty awesome. In essence, Cocoa tables (think lists, for now) are generated minimally and managed efficiently by the following mechanism:

The minimum number of rows is generated to fill the available space.

When the user scrolls the table, new rows are created as necessary, and old rows disposed of. But, to make it even more efficient rather than disposing of unused rows, they are kept in a pool and reallocated as needed — so the row that scrolls off the top as you scroll down is reused to draw the row that just scrolled into view. (It’s more complex and clever than this — e.g. rows can be of different types, and each type is pooled separately — but that’s the gist.) This may seem like overkill when you’re trying to stick ten things in a list, but it’s ridiculously awesome when you’re trying to display a list of 30,000 songs on your first generation iPhone.

In order for this to work, there’s a tableDelegate protocol. The minimal implementation of this is that you need to tell the table how many rows of data you have and populate a row when you’re asked to.
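
A minimal sketch of that mechanism (strictly speaking this is the data source half of the protocol pair, and the class and identifier names are mine):

import UIKit

class SongListDataSource: NSObject, UITableViewDataSource {
    var songs: [String] = []

    // tell the table how many rows of data we have
    func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return songs.count
    }

    // populate a row when asked; dequeueReusableCellWithIdentifier hands back
    // a pooled cell (e.g. one that just scrolled out of view) when it can
    func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCellWithIdentifier("SongCell", forIndexPath: indexPath)
        cell.textLabel?.text = songs[indexPath.row]
        return cell
    }
}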

So, for each table you’re dealing with you need to provide a delegate that knows what’s supposed to go in that specific table. Ideally, I just want to do something like self.bind(data) in the viewDidLoad method, how do I create and hook up the necessary delegates? It’s even worse if I want to use something like RootViewController (e.g. for a paged display) which is fiddly to set up even manually. But, given how horrible all this stuff is to deal with in vanilla Swift/Cocoa, that’s just how awesome it will be not to have to do any of it ever again if I can do this. Not only that, but to implement this I’m going to need to understand the ugly stuff really well.


Adding Metadata to IB Objects

The first problem is figuring out some convenient way of attaching metadata to IB elements (e.g. buttons, text fields, and so on). After a lot of spelunking, I concluded that my first thought (to use the accessibilityIdentifier field) turns out to be the most practical (even though, as we shall see, it has issues).

There are oodles of different, promising-looking fields associated with elements in IB, e.g. you can set a label (which appears in the view hierarchy, making the control easy to find). This would be perfect, but as far as I could tell it isn’t actually accessible at runtime. There’s also User Defined Runtime Attributes which are a bit fiddly to add and edit, but worse, as far as I’ve been able to tell, safely accessing them is a pain in the ass (i.e. if you simply ask for a property by name and it’s not there — crash!). So, unless I get a clue, no cigar for now.

The nice thing about the accessibilityIdentifier is that it looks like it’s OK for it to be non-unique (so you can bind the same value to more than one place) and it can be directly edited (you don’t need to create a property, and then set its name, set its type as you do for User Defined Runtime Attributes). The downside is that some things — UITableViews in particular — don’t have them. (Also, presumably, they have an impact on accessibility, but it seems to me like that shouldn’t be a problem if you use sensible names.)

So my first cut of automatic binding for Swift/Cocoa took a couple of hours and handled UITextField and UILabel.

class Bindery: NSObject {
    var view: UIView!
    var data: [String: AnyObject?]!

    init(view v: UIView, data dict: [String: AnyObject?]) {
        view = v
        data = dict
    }

    // all subviews whose accessibilityIdentifier matches name
    func subviews(name: String) -> [UIView] {
        var found: [UIView] = []
        for v in view!.subviews {
            if v.accessibilityIdentifier == name {
                found.append(v)
            }
        }
        return found
    }

    // target of bound controls; syncs an edited value back into data
    @IBAction func valueChanged(sender: AnyObject?) {
        var key: String? = nil
        if sender is UIView {
            key = sender!.accessibilityIdentifier
            if !data.keys.contains(key!) {
                return
            }
        }
        if sender is UITextField {
            let field = sender as? UITextField
            data[key!] = field!.text
        }
    }

    // pushes the value for one key out to every matching subview
    func updateKey(key: String) {
        let views = subviews(key)
        let value = data[key]
        for v in views {
            if v is UILabel {
                let label = v as? UILabel
                label!.text = value! is String ? value as! String : ""
            }
            else if v is UITextField {
                let field = v as? UITextField
                field!.text = value! is String ? value as! String : ""
                field!.addTarget(self, action: "valueChanged:", forControlEvents: .EditingDidEnd)
            }
        }
    }

    // pushes every key out to the view; returns self so it can be chained
    func update() -> Bindery {
        for key in (data?.keys)! {
            updateKey(key)
        }
        return self
    }
}

Usage is pretty close to my ideal with one complication (this code is inside the view controller):

    var binder: Bindery!
    var data: [String: AnyObject?] = [
        "name": "Anne Example",
        "sex": "female"
    ]

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        binder = Bindery(view: self.view, data: data).update()
    }

If you look closely, I have to call update() from the new Bindery instance to make things work. This is because Swift doesn’t let me refer to self inside an initializer (I assume this is designed to avoid possible issues with computed properties, or to encourage programmers to not put heavy lifting in the main thread… or something). Anyway it’s not exactly terrible (and I could paper over the issue by adding a class convenience method).
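
The convenience method would be a one-liner (a sketch, assuming the Bindery class above):

extension Bindery {
    class func bind(view view: UIView, data: [String: AnyObject?]) -> Bindery {
        return Bindery(view: view, data: data).update()
    }
}

// which lets the view controller write: binder = Bindery.bind(view: self.view, data: data)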

OK, so what about tables?

Well, I figure tables will need their own special binding class (which I shockingly call TableBindery) and implement it so that you need to use an outlet (or some other hard reference to the table); I then use Bindery to populate each cell (this lets you create a cell prototype visually and then bind to it with almost no work). This is how that ends up looking (I won’t bore you with the implementation, which is pretty straightforward once I worked out that a table cell has a child view that contains all of its contents, and how to convert a [String: String] into a [String: AnyObject?]):

    var tableBinder: TableBindery!
    var tableData: [[String: String]] = [
        [
            "name": "Anne Example",
            "sex": "female"
        ]
    ]

    override func viewDidLoad() {
        super.viewDidLoad()
        tableBinder = TableBindery(table: table, array: tableData).update()
    }

In the course of getting this working, I discover that the prototype cells do have an accessibilityIdentifier, so it might well be possible to spelunk the table at runtime and identify bindings by using the attributes of the table’s children. The fact is, though, that tables — especially the sophisticated multi-section tables that Cocoa allows — probably need to be handled a little more manually than HTML tables usually do, and having to write a line or two of code to populate a table is not too bad.

Now imagine if Bindery supported all the common controls, provided a protocol for allowing custom controls to be bound, and then imagine an analog of TableBindery for handling Root view controllers. This doesn’t actually look like a particularly huge undertaking, and I already feel much more confident dealing with Cocoa’s nasty underbelly than I was this morning.

And, finally, if I really wanted to provide a self.bindData convenience function — Swift and Cocoa make this very easy. I’d simply extend UIView.
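
Something like this sketch (again assuming the Bindery class above):

extension UIView {
    func bindData(data: [String: AnyObject?]) -> Bindery {
        return Bindery(view: self, data: data).update()
    }
}

// usage, from a view controller: view.bindData(data)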

Electronic vs. Printed Books, Gun Control, and Moral Panic

Future Shock (book) cover — cover with rather anti-prophetic quotation

Back in the early days of the Internet you’d frequently find sensationalist stories along the lines of “teenager learns how to make bomb from internet” in the media. Similarly, we’ve seen headlines along the lines of “mass murderer played violent video games”, or “rapist had huge collection of porn” (no links because my search results are dominated by stories about pedophiles, which is a whole different topic), or — one of my favorites — “teenager who played D&D commits suicide”. Another very common example these days is “woman causes fatal car accident while texting”. All of these headlines tend to have some things in common:

  • The basic facts are true, e.g. the teenager who committed suicide did in fact play D&D.
  • The implied causal factor is something novel to, or stigmatized by, society, or both.
  • There is no actual attempt to determine if there is even a correlation, let alone causation (e.g. are D&D players more or less likely to commit suicide than non-D&D players?) — usually the stories don’t bother to even cite relevant statistics.

It’s also worth noting that you almost never see the opposite kind of story. E.g. “teenager who plays D&D volunteers at suicide help line,” even though there are doubtless plenty of examples, because they don’t fit the desired narrative of “thing you already don’t like and are kind of suspicious of is actually morally bad and here’s why”.

Back in the 90s, whenever I heard relatives quoting insane alarmist stories about the internet to me (and I was “the guy” in my extended family who “knew about computers” so I got a lot of this) I would say “replace the word internet in the story with books or letters and see whether it is equally applicable”. After all, there are plenty of stories about women being seduced by serial killers by mail, or people learning to do terrible things in a library. In a recent discussion with my very smart wife and some of her very smart colleagues, the question of electronic vs. printed books came up. All of the colleagues were young (well, younger than us), technically savvy, and had a background in either psychology or communications or both. All claimed there was a body of research showing the retention or comprehension rates are lower when you read electronic vs. printed material.

(You know, I remember when some programmers liked to print hard copies of their code for debugging because reading code on crappy CRTs sucked. But you don’t see that much any more.)

Later, there was discussion of how, for example, “things that girls do” tend to be trivialized and stigmatized by society (e.g. “texting” is something generally ascribed more to girls than boys), and the term “moral panic” came up. This is the general term for when “society” (e.g. the mass media, or whatever) encounters some new or marginalized phenomenon and — finding cases where it appears to be linked to some other bad thing (e.g. a fatal car accident) — loses its shit. The current vogue example is selfies (it was hard to find the linked story since most such stories are about people accidentally or deliberately shooting themselves while taking selfies — hooray for gun owners — but let’s not get ahead of ourselves).

I pointed out that the anti-eBook stance they were all taking was an example of moral panic.

First, I tried to find the research that shows lower comprehension rates when reading eBooks. Here’s an example. Let me quote the article:

“When you read on paper you can sense with your fingers a pile of pages on the left growing, and shrinking on the right,” said Mangen. “You have the tactile sense of progress, in addition to the visual … [The differences for Kindle readers] might have something to do with the fact that the fixity of a text on paper, and this very gradual unfolding of paper as you progress through a story, is some kind of sensory offload, supporting the visual sense of progress when you’re reading. Perhaps this somehow aids the reader, providing more fixity and solidity to the reader’s sense of unfolding and progress of the text, and hence the story”

If you read the article, they have two studies, one of 50 people and another of 72. In the first case the researcher mostly got null results (i.e. the readers of both media had equal success in answering comprehension questions) but discovered that the readers of eBooks had more difficulty placing events in order. In the second case the researcher found “students who read texts in print scored significantly better on the reading comprehension test than students who read the texts digitally”. Of course both tests were badly flawed. In the first test only two subjects were familiar with the software and hardware being used (everyone, of course, is familiar with printed material). In the second case the “digital” version was a PDF not, say, an ePub which is designed to be read on screen. Imagine if I did such a test comparing reading a web page with a printed version of that web page in whatever revolting format the browser elected to print it in.

So you could equally draw the conclusion that: “For most intents and purposes, even inexperienced eBook readers using terrible hardware and software have the same comprehension rates as readers of paper books.” Let me also suggest that based on the elaborate and completely unfounded theory that one of the researchers has for why these rather shaky results were obtained, I suspect we’re looking at confirmation bias. She had a theory that eBooks were somehow worse and then went looking for data.

Mechanism of Action

My biggest methodological problem with this research is that it is horribly artificial. Let’s suppose you have an actual task — e.g. you watched Ken Burns’s The Civil War and were incredibly impressed by Shelby Foote and want to read his magnum opus The Civil War, A Narrative. Or you have to write a paper discussing a bunch of papers for your Art History class. How quickly and easily are you able to accomplish the task you set out to complete? How happy are you with the outcome? And how did this figure into a bigger picture? E.g. how much did you actually learn (not just about the text in question but in general)?

E.g. one of the things I treasure about reading electronically is how quickly I can skip over crap. E.g. if I read something interesting but wonder if it covers a specific point, I can quickly search for it and skip to the next article if not (or decide whether what the article has to say on that point prompts me to read the entire article more carefully). If you were testing my comprehension of the article, I would fare worse for an article that didn’t interest me — but I would have spent far less time deciding I didn’t need to read it; time I could spend reading something else.

I pick both of these examples from personal experience. In the former case I read The Civil War in paperback. I borrowed the three enormous books from a friend; it was absolutely riveting; and I enjoyed it very much. But, if I were to read it today I would far prefer to read it in electronic form and (a) I would be able to highlight or annotate anything I thought was interesting or which I didn’t understand without fear of damaging my friend’s books; (b) I would be able to instantly look up any word I didn’t understand or whose meaning I was unsure of without losing my spot; (c) I could jump to-and-from end-notes without losing my spot; and (d) I would be saved a great deal of back and neck pain caused by trying to read three trade-sized 1000+ page books in bed.

That is not to say that e-books are in all ways superior to printed books. E.g. the reproduction of images is often terrible; e-books often have typographical and layout errors that would be considered egregious in a printed book; and e-readers are simply not up to the task of displaying highly-visual books. But these aren’t intrinsic flaws of e-books, they are implementation flaws. It’s like complaining about printed books in a land of incompetent printers. Conversely, I won’t claim that the fact that videos play badly on paper books is a point for e-books, even though that’s true and it’s unlikely to get better.

In the second case I had digital copies of all the papers I had to read and discuss. When I found things I didn’t understand I could instantly look them up, capture and download them when appropriate, find related or contrary pieces and download those, and quickly skip around to relevant pieces or find things I had read using text search (for the papers that were text PDFs rather than scanned documents, at least). I remember doing this kind of thing with paper manuscripts and it was horrible. Heck, the readings for Anthropology A01 were heavier than a laptop and far less useful.

When you’re trying to explain why something is better or worse you don’t just need data, you also need a mechanism of action. Data tells you something is going on, but mechanics tell you what to look for. (The researcher who likes how books feel and smell goes looking for evidence that this somehow improves reading comprehension; I prefer some kind of more direct mechanism of action than that.) The “mechanics” the researchers claim make printed books better sound like cargo cultism or fetishism to me. E.g. the idea that the number of pages in your left hand versus your right somehow gives you cues to help keep track of the temporal order of events does not stand up to inspection. What if I am reading a short story in a large anthology? What if I am reading a temporally confusing book (such as Iain M Banks’s Use of Weapons) where chronological order does not in any way match page order? What about constantly having to shift posture to cope with reading left-vs-right pages in a heavy book? This is not going to show up in a study of a 28 page short story, but it’s a real consideration for actual books. I can show some real, tangible advantages to reading electronically, and it would be easy to devise experiments to test whether these advantages are real (although getting subjects to read 3000 page trilogies might make getting volunteer test subjects difficult).

How Wrong Can You Be?

Now, let’s pause and balance this out by considering the idea that the liberal consensus in favor of gun control might equally be a moral panic. This is similar to some of the arguments people who I somehow seem to know on Facebook have made against gun control. “Don’t blame the guns.” Let’s ignore the vapidity of that argument and assume that the more cogent argument (that this is a moral panic) was made.

First, let’s look at the degrees to which an assumed causation can be wrong. I hope you already know that “correlation does not equal causation”. E.g. if places with fewer murders happen to have fewer guns, all you have is a correlation. It may be that in places where people hate each other they are more likely to buy weapons. The example my Stats professor liked to use was that the birthrate in Stockholm is strongly correlated with the stork population.

When a correlation turns out not to reflect causation, the technical term used is that there are mediating factors — in this case season. In extreme cases like this, you’re likely to find that the mediating factor completely eliminates the original correlation (or perhaps even makes it slightly negative); e.g. we can probably find outliers where for some reason there were fewer storks but the birthrate did not dip and vice versa, so taking season into account the correlation between storks and babies becomes much lower and possibly negative.

So, assuming that correlation equals causation is a fairly common trap. You can be wrong, but you’ll find yourself in good company.

People who read text written in sans-serif fonts have poorer comprehension than people who read text written in serif fonts. Therefore sans-serif typefaces cause poorer reading comprehension.

This “fact” was well-known when I was starting out in the usability business. (The suggested “mechanism of action” was that the serifs helped readers visually line up the letters or something — given that serifs were invented to prevent rain from eroding letters carved in stone, and then copied by typesetters to lend printed text the authority of Roman carvings, this idea seems fanciful.) In the nineties a lot of contrary evidence showed up, because very low-resolution computer displays and not-especially-high-resolution laser printers both sucked at reproducing serif fonts. But it wasn’t long before the result stopped replicating even for documents printed at very high resolution. It turns out it was an example of this kind of trap. What seems to have been happening was that people who were brought up reading serif fonts (in the 1970s this was almost everyone) were simply better at reading serif fonts.

People who read text on an e-reader have poorer comprehension than people who read text on paper. Therefore e-readers cause poorer reading comprehension.

We’ve just seen that in the cases above, a specific e-reader was used and the users weren’t used to it. We also don’t know whether, for example, good typefaces were picked on the e-reader (e.g. Apple, Microsoft, and Google have each done a lot of work to maximize legibility of screen text; Amazon, not so much.) And, “comprehension” was only worse in one specific way (and the theorized reason for it could easily be addressed by improving the e-reader software, or using a better e-reader).

But again, a moral panic is frequently guilty of assuming the correlation itself.

Assuming correlation and then jumping to causation is a way of being even more wrong than actually finding a correlation and assuming causation.

And then, worst of all, there’s the availability heuristic: we hear lots of news about X, therefore X happens a lot. This is a major reason why people are more afraid of terrorists than staircases, and of flying than driving. “Swimmer eaten by shark.” “D&D player commits suicide.” “Illegal immigrant is a serial killer.” “Black guy kills cop.” The fact there is more news about something doesn’t mean there is more something. Sharks kill very few people. D&D players are less suicidal than non-D&D players. Illegal immigrants are more law-abiding than pretty much anyone else. Violence against cops is at historic lows.

So the hierarchy of wrong goes something like this:

  • Incredibly wrong. We hear about it a lot, therefore it must be common. (Availability Heuristic.)
  • Wrong. We hear about A and B together a lot, therefore A must be related to B. (Assuming correlation.) Therefore A causes B. (And then jumping to causation.)
  • Wrong but there’s probably something there. We’ve found that A seems to occur with B. Therefore A causes B. (Assuming causation.)

Moral panics are usually more wrong than merely assuming causation. E.g. it turns out that D&D players are less likely to commit suicide than non-D&D players. Similarly, the evidence linking porn to rape is pretty weak (if anything, the ready availability of sexual outlets such as porn and prostitution appears to reduce rape).

Spot the Moral Panic

Can we apply the same arguments to the gun debate? Or texting while driving?

I’ll pick these two examples because I am in favor of gun control, but I think concerns about texting while driving are overblown.

Evidence in favor of gun control:

  • Countries with more gun control have fewer gun deaths. (Factual correlation.)
  • Mass shootings disappeared in Australia after ban on semi-automatic weapons. (Factual correlation.)
  • US States with stronger gun control laws have lower gun violence when you correct for economic factors, etc. etc. (Factual correlation that’s hard to explain to people, especially those who are disinclined to believe in Math or evidence in general.)
  • Black market prices for weapons soared in Australia after ban on semi-automatic weapons. (Factual correlation.)
  • It’s hard to shoot people when you don’t have a gun. (Mechanism of Action.)
  • Many mass shooters buy guns legally. (Mechanism of Action.)
  • Studies show that people with guns do not help (in fact make things worse) in active shooter situations. (Mechanism of Action.)

Evidence/Arguments against:

  • Switzerland. (Lots of guns, but very low gun violence. That said, Swiss gun control laws — which are national, standardized, and consistently enforced — would more than satisfy the most liberal gun control advocate in the US, and gun ownership in Switzerland is roughly half that in the US and declining.)

No, this isn’t a moral panic, it’s a very solid case. The one “exception” looks, upon examination, to be more evidence in favor of the case. This is probably why the NRA has done its best to block any scientific investigation of the subject.

Let’s try texting while driving:

Here the case is weaker, but there’s still something there. It’s not that texting while driving isn’t bad, it’s just that it’s not clear whether laws against it are much use. (And we might just as well have laws against eating, listening to engaging music (or NPR), or having conversations while driving.)

Back to Dead Trees

So, let’s circle around to e-readers vs. printed books:

Evidence in favor of printed books:

  • If you get 50 people to read a short story in printed form vs. a Kindle then the people who read on the Kindle are not as good at placing a list of events in the story in the correct order. (Factual correlation.)
  • If you get 72 Norwegian 10th-graders to read a text in printed form vs. a PDF then the students who read the printed text scored “significantly better in comprehension tests”. (Factual correlation.)

I should point out that this is an actively researched field, but based on my quick searches on Google Scholar, the more nuanced research is producing much more equivocal results. The underlying problem remains, though: the population is still far more familiar with printed books (remember serif typefaces) and the software is still primitive. The points of comparison are flawed and the population samples are biased. And, even then, the pro-book results are pretty weak.

Evidence/Arguments against fall into some basic categories:

  • The media being tested aren’t equally familiar. Everyone is familiar with printed books. Even those people familiar with e-readers probably aren’t using their own e-reader with their own favorite settings; it’s very hard to run a fair test. (Correlations are based on bad experiments.)
  • In each case, the test is unnecessarily compromised. E.g. a PDF is essentially an electronic representation of a paper document. An ePub is a digital document designed for reading on a screen. It is possible to create PDFs that are not horrible to read on screen, but I doubt this was done, and even the best-designed PDF won’t allow the reader to choose their favorite typeface, font size, and contrast. A blind person won’t get an accessible copy. (Advantages of electronic media are methodically ignored.)
  • The person is being asked to read a specific document, not complete a realistic, representative task. Even if the task is “read such and such a book” then, assuming it’s available as an e-book, in the real world the person with the electronic device is already hours or days ahead because they’ll have the book more-or-less instantly. If the task is “find the answer to X” then again the reader with the device will almost certainly find more information, be able to chase up more references, and will likely find more specifically applicable information. (The wrong questions are being asked.)

In the end, most of the arguments one sees for or against electronic books have nothing to do with the relative quality of the media, e.g.:

“I am still a total paper book lover. It’s just satisfying curling up with a book, the smell of the pages, the heft of the book. And there is the classic “Three B test” – bath, bus, bed.”

If you’ve ever read a big, thick book in bed I doubt you’d consider paper books to have any real advantage over electronic. I personally end up turning over every other page to avoid keeping three pounds of paper suspended in the air. And by the same token it’s much easier to read an electronic book in a bus. As for the bath — get a waterproof case and your point is?

“I was Switzerland in this discussion, but the ebook I was reading told me I was 84% finished with the book when the book ended. The remaining 16% was excerpts from the author’s other books, an author interview, and a discussion guide. Paper books are far superior when it comes to letting you know your place in a book, and that’s why I prefer them.”

That is annoying. Of course, I first encountered this kind of crap in printed books, so I don’t really see how it’s a strike against digital. And that’s without counting the garbage put in by the author him- or herself, e.g. the story part of Return of the King ends something like 75% of the way through; the remaining 25% of the book is appendices, most of which few would want to read.

It is refreshing to see someone talk about stuff other than whether they prefer the smell of paper to that of a Kindle, e.g.:

“For me, it depends on the book—how visual it is (graphic novels I like in paper format), whether I’m more likely to race through it (a good novel) or linger and bounce around (poetry), how big it is (I wish the gigantic Robert Moses book was in eBook form), and how well the text was translated to Kindle (I heard bad things about the Game of Thrones digital versions, so went with paper for that).”

At last, some rationality. I personally would rather buy a graphic novel or a fine art book on paper, because ebooks generally have very poor image quality (and even when they don’t, it’s hard to tell before you buy whether a given book is the exception), and because even if they had great image quality there’s not really any good way to consume them (until the iPad Pro comes out, perhaps). But, if I knew that the images were going to be in great shape, and I had — say — an iPad that could project books at full resolution onto a 4K display, I would pick electronic books there as well.

It’s also worth noting that in many cases Kindle and other electronic editions will be perversely more expensive than their dead tree counterparts. (That’s why I went with dead trees for Game of Thrones, and it turned out to be a mistake because I lost the damn things.)

A Modest Proposal

I think if we’re going to have guns they shouldn’t be concealed. They should have day-glo grips, stocks, and cases (mandatorily lurid pink, I suggest); they should have built-in GPS sensors and make wah-wah noises when they’re moved around; the battery that runs the GPS and buzzer should also be what allows the gun to be fired; and every gun should have sample fired bullets and casings registered in a national database (paid for by the bullet tax, see below). After all, if guns are supposed to deter crime, shouldn’t criminals know they’re there? I certainly want to know who has guns so I can avoid them.

Now of course people will argue “if it’s illegal to conceal weapons then only criminals will have concealed weapons”. That’s true, but they’d need to be careful, especially if the penalties are harsh: if someone doesn’t like you, they can just tell the police you habitually carry a gun. It would also be illegal to sell guns without these mechanisms, and when you tear the mechanism out, your gun’s last known location would be sitting in the cloud.

The GPS sensors and buzzers will run out of batteries, and could of course be gouged out, but not keeping your batteries charged would also be a crime, and when your gun stopped responding the authorities would know when and where.

We could require gun ranges to check every bullet fired on the range, and every casing, against the database (expensive, but the bullet tax will pay for it). If a bullet doesn’t have a registered match (e.g. the gun’s owner is not the right person, or the gun’s rifling has been tampered with) then we either arrest the owner or register the new bullet.

The buzzers and day-glo would kind of mess up hunting, but the right to go hunting is not enshrined in the constitution — the second amendment is solely there to preserve us from tyranny, and at such time as we desire to overthrow the government we can always pull the crap out, right? After all, armed insurrection is also illegal. Perhaps to honor the second amendment we can require the mechanisms to be removable in some straightforward way — on the strict understanding that removing them is a felony.

All this might sound horribly draconian. It’s supposed to be. The argument is that the 2nd amendment protects our right to overthrow tyrants. I would argue the 4th amendment is far more important (and we can set up the GPS system so it merely tracks your gun anonymously until it’s involved in a shooting).

When a gun owner moves into your neighborhood they should be required to post a public notification in the “known sex offenders and gun owners” registry.

Chris Rock suggests that we simply put a huge tax on bullets. (“That guy must deserve it, they put $50,000 worth of lead in him.”) I would point out that the right to bullets is actually not enshrined in the constitution, but certainly we can put a hefty federal tax on them or require a prescription. After all, they’re kind of a potentially lethal drug (“lead poisoning”) and should be properly controlled. Better make sure you have all your tax stamps and prescriptions ready when you get your hunting license.

The bullet tax can also pay for free kevlar body armor for all citizens who want it, and perhaps provide guns and bullets (which are after all rather expensive as a result of all this) to the poor.