HyperCard, Visual Basic, Real Basic, and Me

When the Mac first appeared it was a revelation. A computer with a powerful, consistent user interface (with undo!) that allowed users to figure out most programs without ever reading a manual.

I can remember back in 1984 sitting outside the Hayden-Allen Tank (a cylindrical lecture theater on the ANU campus that tended to house the largest humanities classes and many featured speakers) playing with a Mac on display while Apple reps inside introduced the Mac to a packed house. (My friends and I figured we’d rather spend time with the computer than watch a pitch.)

How did undo work? It wasn’t immediately obvious.

When we quit an application or closed a document, how did the program know we had unsaved changes? We checked: if the document had no changes, or the changes had been saved, the computer knew.

We were hardcore math and CS geeks but computers had never, in our experience, done these kinds of things before so it took us a while to reverse-engineer what was going on. It was very, fucking, impressive.

But it was also really hard to do with the tools of the time. Initially, you couldn’t write real Mac software on a Mac. At best, there was MacPascal, which couldn’t use the toolbox and couldn’t build standalone applications, and QuickBasic, which provided no GUI for creating a GUI, and produced really clunky results.

To write Mac programs you needed a Lisa, later a Mac XL (same hardware, different software). It took over a year for the Mac SDK to appear (via pirate copies), and it was an assembler that spanned multiple disks. Eventually we got Consulair-C and Inside Macintosh but, to give you an idea, the equivalent of “hello world” was a couple of pages of C or Pascal most of which was incomprehensible boilerplate. The entire toolbox relied heavily on function pointers, really an assembly-language concept, and in some cases programmers had to manually save register state.

No-one’s to blame for this — Xerox provided much cleaner APIs for its much more mature (but less capable) GUI, and far better tooling — but the cost was a computer that ran dog slow, that no-one could afford, and that was actually functionally far inferior to the Mac.

The first really good tool for creating GUI programs was HyperCard. I can remember being dragged away from a computer lab at ADFA (where a friend was teaching a course on C) which had been stocked with new Mac SEs running HyperCard.

For all its many flaws and limitations, HyperCard was easy to use, fast, stable, and forgiving (it was almost impossible to lose your work or data, and it rarely crashed in an era when everything crashed all the time). Its programming language introduced a yet-to-be-equalled combination of being easy to read, easy to write, and easy to debug (AppleScript, which followed it, was horribly inferior). When HyperCard 2 brought a really good debugger (but sadly no color) and a plugin architecture, things looked pretty good. But then, as Apple was wont to do in those days, Apple’s attention wandered and HyperCard languished. (Paul Allen’s clone of HyperCard, Toolbook for Windows, was superb but it was a Windows product so I didn’t care.)

Eventually I found myself being forced to learn Visual Basic 3, which, despite its many flaws, was also revolutionary in that it took HyperCard’s ease of use and added the ability to create native look and feel (and native APIs if you knew what you were doing, which I did not). With Visual Basic 3 you could essentially do anything any Windows application could do, only slower. (HyperCard was notably faster than VB, despite both being interpreted languages, owing to early work on JIT compilers.)

After using VB for a year or two, I told my good friend (and a great programmer) Andrew Barry that what the Mac really needed was its own VB. The result was Realbasic (now Xojo) of which I was the first user (and for a long time I ran a website, realgurus.com, that provided the best source of support for Realbasic programmers). Realbasic was far more than a VB for the Mac since it was truly and deeply Object-Oriented and also cross-platform. I could turn an idea into a desktop application with native look and feel (on the Mac at least) in an evening.

When MP3 players started proliferating on Windows, I wrote an MP3 player called QuickMP3 in a couple of hours after a dinner conversation about the lousy state of MP3 players on the Mac. By the next morning I had a product with a website on the market (I distributed it as shareware; registration was $5 through Kagi — RIP — which was the lowest price that made sense at the time, I think Kagi took about $1.50 of each sale, and I had to deal with occasional cash and checks in random currencies).

Over the years, I wrote dozens of useful programs using Realbasic, a few commercially successful ones (e.g. Media Mover 3 and RiddleMeThis), and an in-house tool that made hundreds of thousands of dollars (over the course of several years) with a few days’ effort.

Today, I find that Xojo (as Realbasic rebranded itself) has become bloated, unstable, and expensive. It has never captured native look and feel in the post-Carbon world on the Mac, and anything that looks OK on Windows looks like crap on the Mac and vice versa, which undercuts its benefits as a cross-platform tool. Also, my career has made me an expert on Javascript and web development.

So my weapons of choice these days for desktop development are nwjs and Electron. While web-apps don’t have desktop look and feel (even if you go to extremes with frameworks like Sproutcore or Cappuccino), neither do many desktop apps (including most of Microsoft’s built-in apps in Windows 10). Many successful commercial apps either are web apps (e.g. Slack) or might as well be (e.g. Lightroom).

I mention all of this right now because it closes the loop with my work on bindinator — anything that makes web application development faster and better thus helps desktop application development. I think it also clarifies my design goals with bindinator: I feel that in many ways ease of development peaked with Realbasic, and bindinator is an attempt to recreate that ease of development while adding wrinkles such as automatic binding and literate programming that make life easier and better.

Returning to the Adobe fold… sort of

I remain very frustrated with my Photography workflow. No-one seems to get this right and it drives me nuts. (I’m unwilling to pay Apple lots of money for a ridiculous amount of iCloud storage, which might work quite well, but it still wouldn’t have simple features like prioritizing images that I’ve rated or looked at closely over others automagically, or allow me to rate JPEGs and have the rating carried over to the RAW later.)

Anyway, Aperture is sufficiently out-of-date that I’ve actually uninstalled it and Photoshop still has some features (e.g. stitching) that its competition cannot match. So, $120 for a year of Photoshop + Lightroom… let’s see how it goes.

Lightroom

I was expecting Lightroom to be awesome what with all the prominent folks who swear by it. So far I find it unfamiliar (I did actually use LR2, and of course I am a Photoshop ninja) to the point of frustration, un-Mac-like, and ugly, ugly, ugly.

Some of my Lightroom Gripes

A large part of the problem is terrible use of screen real estate. It’s easy to hide the menubar (once you find where Adobe has hidden its non-standard full screen controls), but it’s hard (impossible) to hide the idiotic mode menu “Identity Plate”. (I found the “Identity Plate Editor” (WTF?) by right-clicking hopefully all over the place, which allowed me to shrink the stupidly large lettering, but it just left the empty space behind.) How can an application that was created brand new (and initially Mac-only) have managed to look like a dog’s breakfast so quickly?

But there are many little things that just suck.

  • All the menus are horrible — cluttered and full of nutty junk. Looks like design by committee.
  • The dialog box that appears when you “Copy…” the current adjustments is a crime against humanity (it has a weird set of defaults which I overrode by clicking “check none” when I only wanted to copy some very specific settings and now I can’t figure out how to restore the defaults).
  • The green button doesn’t activate full screen mode. There are multiple full screen modes and none of them are what I want.
  • Zooming with the trackpad is weird. And the “Loupe” (nothing like or as nice as Aperture’s) changes its behavior for reasons I cannot discern. (I finally figured out that the zoom in shortcut actually goes to 1:1 by default, which is useful, although it’s such a common feature I’d have assigned a “naked” keystroke to it, such as Z, which instead toggles between display modes.)
  • The main image view seizes up after an indeterminate amount of use and shortly afterwards Lightroom crashes. (This is on a maxed-out MacBook Pro 15″.)
  • I can’t hide the stupid top bar (with your name in it). I can’t even make it smaller by reducing the font size of the crap in it.
  • Hiding the “toolbar” hides a random thing that doesn’t seem to me to be a toolbar.
  • By default the left side of the main window is wasted space. Oh, and the stupid presets are displayed as a list of words — you need to mouse over them to get a low-fidelity preview.
A crime against humanity.

I found Lightroom’s UI sufficiently annoying that I reinstalled Aperture for comparison. Sadly, Lightroom crushes Aperture in ways that really matter. E.g. its Shadow and Highlight tools simply work better than Aperture’s (I essentially need to go into Curves to do anything slightly difficult in Aperture), and it has recent features (such as Dehaze — I’m pretty sure inspired by a similar feature DxO was very proud of a while back). After processing a few carefully underexposed RAW images* in both programs, Lightroom gets me results that Aperture simply can’t match (it also makes it very tempting to make the kind of over-processed images you see everywhere these days with amped up colors, quasi-HDR effects, and exaggerated micro-contrast).

(* Quite a few years ago someone I respect suggested that it’s a good idea to “underexpose” most outdoor shots by one stop to keep more highlight detail. This is especially important if the sky is visible. These days, everyone seems to be on the “ISO Invariance” bandwagon which is essentially not doing anything to the signal off the sensor (boosting effective ISO) when capturing RAW, in essence, “expose to the left” automatically — the exact opposite of the “expose to the right” bandwagon these clowns were all on two years ago — here’s a discussion of doing both at the same time. Hilarious. On the bright side, ISO Invariance pretty much saves ETTR nuts from constantly blowing their highlights.)

The Photos App is far more competitive with Lightroom than Aperture. And its UI is simply out of Lightroom’s league (see those filters on the right? Lightroom simply has a list of names).

Funny thing though is that the new Photos app gives Lightroom a much better run for its money (um, it’s free), has Aperture’s best UI feature (organic customization), and everything runs much faster than Lightroom. The problem with Photos is that it’s missing key features of Lightroom, e.g. Dehaze, Clarity, and (most curiously) Vibrance. You just can’t get as far with Photos as you can with Lightroom. (If you have Affinity Photo you can use its Dehaze more-or-less transparently from Photos. It’s a modal, but then Lightroom is utterly modal.)

On the UI level, though, Photos simply spanks Lightroom’s Develop mode. Lightroom’s organization tools, clearly with many features requested by users, are completely out of Photos’ league.

I also tried Darktable (the open source Lightroom replacement) for comparison. I think its user interface is in many ways nicer than Lightroom’s — it looks and feels better — although much of its lack of clutter is owed to a corresponding lack of features. The sad news is that Darktable’s image-processing capabilities don’t even compete with Aperture, let alone Photos. (One thing I really like about Darktable is that it applies “orientation” (automatic horizon leveling), “sharpen”, and “base curve” automagically by default. Right now this isn’t customizable — there’s a placeholder dialog — but if it were it would be an awesome feature.)

The lack of fit and finish in Lightroom is unprofessional and embarrassing. If it’s not obvious to you, the red line shows the four different baselines used for the UI elements.
This is hilarious. Lightroom’s “About box” uses utterly non-standard buttons that behave like tab selectors. This is actually a regression for Adobe, which used to really take pride in its About boxes.

At bottom, Lightroom doesn’t look or feel like an application developed by or for professionals. It’s very capable, but its design is — ironically — horrible.

Photoshop

Photoshop’s capabilities are, by and large, unmatched, but its UI wasn’t good when it first came out and many of its worst features have pretty much made it through unscathed by taste, practicality, or a sense of a job well done. Take a look at this gem:

Adobe Photoshop’s horrible Radial Blur dialog

This was an understandably frustrating dialog back in 1991 — in fact the attempt to provide visual cues with the lines was probably as much as you could expect — but it hasn’t changed since, while every other application I use provides a GPU-accelerated live preview (in Acorn it’s non-destructive too). What’s even worse is that it looks like the dialog’s layout has been slightly tweaked to allow for too-large-and-non-standard buttons (with badly centered captions that would look worse if there were a glyph with a descender in it). At least it doesn’t waste a buttload of space on a mode menu: instead there’s a small popup that lets you pick which (customizable) “workspace” you want to use, and the rest of the bar is actually useful (it shows common settings for the currently selected tool).

In the end, Photoshop at least looks reasonably nice, and its UI foibles are things I’ve grown accustomed to over twenty-five years.

I can’t wait until I get to experience Adobe’s Updater…

Learning to write Unity Shaders

Dynamically generated infinite scrolling terrain with a custom shader (inside the Unity editor)

Unity has replaced Photoshop in terms of adding features faster than I can assimilate them. It’s been possible to write custom shaders for years, but every time I tried to do something non-trivial I would give up in frustration. Recently, however, I finally wrote some pretty nice dynamic terrain and dynamic planet code and was very frustrated with shading options.

What I wanted to do, in essence, was embed multiple tiling textures inside a single texture, and then have a shader continuously interpolate between a pair of those textures based on altitude, biome, or whatever. This doesn’t appear to be something other people are doing, so I was not going to be able to take an existing shader and tweak a couple of things to make it work. I’d actually need to understand what I was doing.

If you look at the picture, you’ll see seamless transitions (based on altitude) between a sand dune texture (at sea level) and a forest texture (at middle levels) and further up the beginning of a transition to bare rock. I’ve got a darker blue-tinged rock below the sand, so the material I’m using looks like this:

A single texture that contains multiple tiling textures. Most of the texture is blank, but you get the idea.

Obviously there’s room for expansion. I could do some really interesting things with this technique (even more so if I can interpolate between three or four samples without choking the GPU). I haven’t figured out how to benchmark this stuff — I’m not seeing any hit on the GPU using Unity’s profiler, but I haven’t tried running this on a mobile device — so far, this shader seems to run just as fast as (say) a standard diffuse shader.

How to Start

Writing shaders is actually pretty simple. The big problems are finding useful documentation (I couldn’t find any useful documentation on ShaderLab, but it turns out that nVidia’s cg documentation appears to do the trick) and tutorials (I couldn’t find any). It doesn’t help that Unity has made radical changes to its Shader language over time (not really their fault, the underlying GPU architectures have been in flux) which makes a lot of the tutorials you do find worse than useless.

For the record, I’m still using Unity 4.6.x so this may all be obsolete in Unity 5.x. That said, I was working off the latest Unity 5.x online documentation.

The closest thing to useful tutorials I could find is in the Unity documentation — specifically Surface Shaders Examples. Sadly, you’re going to need to infer a great deal from these examples, because I couldn’t find explanations of the simplest things (e.g. how data gets from the shader’s UI to the actual pixel shader code — there’s a lot of automagical linking going on).

Shader "Custom/DynamicTerrainShader" {
	Properties {
		_MainTex ("Base (RGB)", 2D) = "white" {}
		_Color ("Main Color", Color) = (0,0,0,1)
		_WorldScale("World Scale", Float) = 0.25
		_AltitudeScale("Altitude Scale", float) = 0.25
		_TerrainBands("Terrain Bands", Int) = 4
	}
	SubShader {
		Tags { "RenderType"="Opaque" }
		LOD 200
		
		CGPROGRAM
		#pragma surface surf Lambert

		sampler2D _MainTex;
		float _WorldScale;
		float _AltitudeScale;
		float4 _Color;
		int _TerrainBands;

		struct Input {
			float3 worldPos;
			float2 uv_MainTex;
		};

		void surf (Input IN, inout SurfaceOutput o) {
			// map world-space altitude into a 0..1 value, then into band space
			float y = IN.worldPos.y / _AltitudeScale + 0.5;
			float bandWidth = 1.0 / _TerrainBands;
			float s = clamp(y * (_TerrainBands + 2) - 2, 0, _TerrainBands);
			// t blends between adjacent bands, eased piecewise-linearly so the
			// transition lingers near each band and moves quickly in between
			float t = frac(s);
			t = t < 0.25 ? t * 0.5 : ( t > 0.75 ? (t - 0.75) * 0.5 + 0.875 : t * 1.5 - 0.25);
			float band = floor(s) * bandWidth;
			// tile within the selected band, inset by 0.003 to keep samples
			// away from the neighboring band's edge
			float2 uv = frac(IN.uv_MainTex * _WorldScale) * (bandWidth - 0.006) + 0.003;
			uv.y = uv.y + band;
			float2 uv2 = uv;
			uv2.y = uv2.y + bandWidth;
			// sample the two adjacent bands and blend
			half4 c = tex2D(_MainTex, uv) * (1 - t) + tex2D(_MainTex, uv2) * t;
			o.Albedo = c.rgb * 0.5;
			o.Alpha = c.a;
		}
		ENDCG
	} 
	FallBack "Diffuse"
}

So here’s my explanation for what it’s worth.

To refer to properties in your code, you need to declare them (again) inside the shader body. Look at occurrences of _MainTex as an instructional example. As far as I can tell you have to figure out which parameters are available where, and which type declarations in the shader body correspond with the (different) types in the properties declaration by osmosis.

The Input block is where you declare which bits of “ambient” information your shader uses from the rendering engine. Again, what you can declare in here, and what it means, you simply need to figure out from examples. I figured out how worldPos worked from the example which turns the soldier into strips.

Note that the Input block declaration determines what is passed as Input (referred to as IN in the body). The way the declaration works (you declare the type rather than the variable) is a bit puzzling but it kind of makes sense. The SurfaceOutput object is essentially a set of parameters for the pixel that is about to be rendered. So the simplest shader body would simply be something like o.Albedo = float3(1,0,0); which would render a constant red (depending on the basic shader type, lighting would or wouldn’t be applied, etc.).

Variables and calculations are all about vectors. Basically everything is a vector. A 3D point is a float3, a 2D point is a float2. You can add and multiply vectors (component by component) so (1,0.5) + (-0.5, 0.25) -> (0.5,0.75). You can mix scalars and vectors in some obvious and not-so-obvious ways (hint: it usually pays to be explicit about components).
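The component-wise semantics are easy to sketch in plain code — JavaScript here, purely for illustration, since Cg does this natively for its vector types:

```javascript
// Component-wise vector arithmetic, as Cg applies it to float2/float3 values.
// Plain arrays stand in for vectors in this sketch.
const add = (a, b) => a.map((v, i) => v + b[i]);
const mul = (a, b) => a.map((v, i) => v * b[i]);

add([1, 0.5], [-0.5, 0.25]); // → [0.5, 0.75]
mul([2, 3], [0.5, 2]);       // → [1, 6]
```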

The naming conventions are interesting. For vectors, you can use x, y, and z as shorthand for accessing specific components. I’m not sure if the fourth coordinate is w or a or something else. I’m also pretty sure that spatial coordinates are not in the order I think they are (so I do OK with foo.x, but get into trouble if I try to handle specific components via (,,) expressions). Hence lines like uv.y = uv.y + band instead of uv = uv + (0,band,0) (which doesn’t work).

You may have noticed some handy functions such as floor and frac being used and wonder what else there is. I couldn’t find any list or references on the Unity website, but eventually found this cg standard library documentation on nVidia’s website (for its shader language). Everything I tried from this list seemed to work (on my nVidia-powered Macbook Pro).

If you’re looking for control structures and the like, I haven’t found any aside from the ternary operator — condition ? value-if-condition-true : value-if-condition-false — which is fully supported, can be nested, etc.. This alone would probably have driven me away just five years ago before I learned to stop worrying and love the ternary operator.

Why no switch statements, loops and such? I’m writing a pixel shader here and I suspect it relies on every program executing the same instruction at the same time, so conditional loops are out. (Actually I may be wrong about this — see the cg language documentation. I don’t know how closely ShaderLab corresponds to cg though.)

Once you see that list of functions you’ll understand why I am using piecewise linear interpolation between the materials (it looks just fine to me).
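For what it’s worth, here is that piecewise-linear easing transcribed into plain JavaScript so its shape is easier to see — the constants are exactly the ones from the shader’s surf() function:

```javascript
// The shader's remapping of the blend factor t: shallow near 0 and 1, steep
// in the middle — a poor man's smoothstep built from three linear pieces.
function ease(t) {
  return t < 0.25 ? t * 0.5
       : t > 0.75 ? (t - 0.75) * 0.5 + 0.875
       :            t * 1.5 - 0.25;
}

// ease(0) → 0, ease(0.25) → 0.125, ease(0.5) → 0.5, ease(0.75) → 0.875, ease(1) → 1
```

The pieces meet exactly at t = 0.25 and t = 0.75, so the blend is continuous even though each segment is linear.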

Some final points — the shader had terrible problems with the edges of the sub-textures until I changed the bitmap sampling to point (versus linear or trilinear interpolation). I suspect this may be a wrapping issue, but (as you may find) debugging these suckers is not easy.

One final comment — even though you’re essentially writing assembler for your GPU, shader programming is pretty forgiving — I haven’t crashed my laptop once (although Unity itself seems to periodically die during compiles).

What does a good API look like?

I’m temporarily unemployed — my longest period of not having to work every day since I was laid off by ValueClick in 2010 — so I’m catching up on my rants. This rant is shaped by recent experiences I had during my recent job interview process (I was interviewed by two Famous Silicon Valley Companies, one of which hired me and the other of which “didn’t think I was a good fit”). Oddly enough, the company that hired me interviewed me in an API-agnostic (indeed almost language-agnostic) way while the company that didn’t interviewed me in terms of my familiarity with an API created by the other company (which they apparently are using religiously; although maybe not so much following recent layoffs).

Anyway, the API in question is very much in vogue in the Javascript / Web Front End world, in much the same way as CoffeeScript and Angular were a couple of years ago. (Indeed, the same bunch of people who were switching over to Angular a few years back have recently been switching over to this API, which is funny to watch). And this API and the API it has replaced in terms of mindshare is in large part concerned with a very common problem in UI programming.

A Very Common Problem in UI Programming

Probably the single most common problem in UI programming is synchronizing data with the user interface, or “binding”. There are three fundamental things almost any user-facing program has to do:

  • Populate the UI with data you have when (or, better, before) the UI first appears on screen
  • Keep the UI updated with data changed elsewhere (e.g. data that changes over time)
  • Update the data with changes made by the user

This stuff needs to be done, it’s boring to do, and it often gets churned a lot (there are constant changes made to a UI and the data it needs to sync with in the course of any real project). So it’s nice if this stuff isn’t a total pain to do. Even better if it’s pretty much automatic for simple cases.

The Javascript API in question addresses a conspicuous failing of Javascript given its role as “the language of the web”. In fact almost everything good and bad about it stems from its addressing this issue. What’s this issue? It’s the difficulty of dealing with HTML text. If you look at Perl, PHP, and JSP, the three big old server-side web-programming languages, each handles this particular issue very well. The way I used to look at it was:

  • A perl script tends to look like a bunch of code with snippets of HTML embedded in it.
  • A PHP script tends to look like a web page with snippets of code embedded in it.
  • A JSP script tends to look like a web page with horrible custom tags and/or snippets of code embedded in it.

If you’re trying to solve a simple problem like get data from your database and stick it in your dynamic web page, you end up writing that web page the way you normally would (as bog standard HTML) and just putting a little something where you want your data to be and maybe some supporting code elsewhere. E.g. in PHP you might write “<p>{$myDate}</p>” while in  JSP you’d write something like “<p><%= myDate %></p>”. These all look similar, do similar things, and make sense.

It’s perfectly possible to defy these natural tendencies, e.g. write a page that has little or no HTML in it and just looks like a code file, but this is pretty much how many projects start out.

Javascript, in comparison, is horrible at dealing with HTML. You either end up building strings manually “<p>” + myDate + “</p>” which gets old fast for anything non-trivial, or you manipulate the DOM directly through the browser’s APIs, having first added metadata to your existing HTML, e.g. you’d change “<p></p>” to “<p id=”myDate”></p>” and then write “document.getElementById(‘myDate’).text = myDate;” in a script tag somewhere else.

The common solution to this issue is to use a template language implemented in Javascript (there are approximately 1.7 bazillion of them, as there are of anything involving Javascript) which allows you to write something like “<p>{{myDate}}</p>” and then do something like “Populate(slabOfHtml, {myDate: myDate});” in the simplest case (cue discussion about code injection). The net effect is you’re writing non-standard HTML and using a possibly obscure and flawed library to manipulate markup written in this non-standard HTML (…code injection). You may also be paying a huge performance penalty because, depending on how things work, updating the page may involve regenerating its HTML and getting the browser to parse it again, which can suck — especially with huge tables (or, worse, huge slabs of highly styled DIVs pretending to be tables). OTOH you can use lots of jQuery to populate your DOM-with-metadata fairly efficiently, but this tends to be even worse for large updates.
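A toy version of such a template function — hypothetical, not any particular library — fits in a few lines, which is partly why there are so many of them (and note this sketch does no HTML-escaping at all, cue that code-injection discussion again):

```javascript
// Minimal mustache-style template renderer: replaces {{name}} tokens with
// values from a data object, leaving unknown tokens alone.
function populate(template, data) {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (token, key) =>
    key in data ? String(data[key]) : token
  );
}

populate("<p>{{myDate}}</p>", { myDate: "2017-03-14" }); // → "<p>2017-03-14</p>"
```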

The API in question solves this problem by uniting non-standard HTML and non-standard Javascript in a single new language that’s essentially a mashup of XML and Javascript that compiles into pure Javascript and [re]builds the DOM from HTML efficiently and in a fine-grained manner. So now you kind of need to learn a new language and an unfamiliar API.

My final interview with the company that did not hire me involved doing a “take home exam” where I was asked to solve a fairly open-ended problem using this API, for which I had to actually learn this API. The problem essentially involved: getting data from a server, displaying a table of data, allowing the user to see detail on a row item, and allowing the user to page through the table.

Having written a solution using this unfamiliar API, I found it very verbose and clumsy, so I tried to figure out what I’d done wrong. I tried to figure out what the idiomatic way to do things using this API was and refine it. Having spent a lot of spare time on this exercise (and I was more-than-fully-employed at the time), it struck me that the effort I was spending to learn the API and hone my implementation was far greater than the effort required to implement the same solution using an API I had written myself. So, for fun, I did that too.

Obviously, I had much less trouble using my API. Obviously, I had fewer bugs. Obviously I had no issues writing idiomatic code.

But, here’s the thing. Writing idiomatic code wasn’t actually making my original code shorter or more obvious. It was just more idiomatic.

To bind an arbitrary data object to the DOM with my API, the code you write looks like this:

$(<some-selector>).bindomatic(<data-object>);

The complex case looks like this:

$(<some-selector>).bindomatic(<data-object>, <options-object>);

Assuming you’re familiar with the idioms of jQuery, there’s nothing new to learn here. The HTML you bind to needs to be marked up with metadata in a totally standard way (intended to make immediate sense even to people who’ve never seen my code before), e.g. to bind myDate to a particular paragraph you might write: “<p data-source=”.myDate”></p>”. If you wanted to make the date editable by the user and synced to the original data object, you would write: “<input data-bind=”.myDate”>”. The only complaints I’ve had about my API are about the “.” part (and I somewhat regret it). Actually the syntax is data-source=”myData.myDate” where “myData” is simply an arbitrary word used to refer to the original bound object. I had some thoughts of actually directly binding to the object by name, somehow, when I wrote the API, but Javascript doesn’t make that easy.

In case you’re wondering, the metadata for binding tabular data looks like this: “<tr data-repeat=”.someTable”><td data-source=”.someField”></td></tr>”.
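To make the idea concrete, here’s a toy sketch of how data-source binding could work — this is not bindomatic’s actual implementation, just the concept, with plain objects standing in for DOM elements so there’s nothing browser-specific involved:

```javascript
// Walk a list of pseudo-elements; any element whose data-source names a field
// gets its text set from the bound object. (Real DOM elements would expose
// the same information via el.dataset and el.textContent.)
function bind(elements, data) {
  for (const el of elements) {
    const source = el.dataset.source;
    if (source) {
      el.text = data[source.replace(/^\./, "")]; // strip the leading "."
    }
  }
}

const els = [
  { dataset: { source: ".myDate" }, text: "" },
  { dataset: {}, text: "untouched" },
];
bind(els, { myDate: "2017-03-14" });
// els[0].text → "2017-03-14"; els[1] is left alone
```

The real library also has to watch for user edits (data-bind) and repeat rows (data-repeat), but the core trick is just this: the markup itself says where each field goes, so the binding call needs no per-field code.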

My code was leaner, far simpler, to my mind far more “obvious”, and ran faster than the code using this other, famous and voguish, API. There’s also no question my API is far simpler. Oh, and also, my library solves all three of the stated problems — you do have to tell it when your object has changes that need to be synced to the UI — without polluting the source object with new fields, methods, etc., while this other library… not so much.

So — having concluded that a programming job that entailed working every day with the second API would be very annoying — I submitted both my “correct” solution and the simpler, faster, leaner solution to the second company and there you go. I could have been laid off by now!

Here’s my idea of what a good API looks like

  • It should be focused on doing one thing and one thing well.
  • It should only require that the programmer tell it something it can’t figure out for itself and hasn’t been told before.
  • It should be obvious (or as obvious as possible) how it works.
  • It should have sensible defaults.
  • It should make simple things ridiculously easy, and complex things possible (in other words, its simplicity shouldn’t handcuff a programmer who wants to fine-tune performance, UX, and so on).

XCode and Swift

I don’t know if the binding mechanisms in Interface Builder seemed awesome back in 1989, but today — with all the improvements in both Interface Builder and the replacement of Objective-C with the (potentially) far cleaner Swift — they seem positively medieval to me, combining the worst aspects of automatic-code-generation “magic” and telling the left hand what the left hand is doing.

Let’s go back to the example of sticking myDate into the UI somewhere. IB doesn’t really have “paragraphs” (unless you embed HTML) so let’s stick it in a label. Supposing you have a layout created in IB, the way you’re taught — as a newb — to do this is:

  1. In XCode, drag from your label in IB to the view controller source code (oh, step 0 is to make sure both relevant things are visible)
  2. You’ll be asked to name the “outlet”, and then XCode will automagically write this code: @IBOutlet weak var myDate: UILabel!
  3. Now, in — say — the already-written-for-you viewDidLoad method of the controller you can write something like: myDate.text = _myDate (it can’t be myDate because you’ve used myDate to hold the outlet reference).

Congratulations, you have now solved one of the three problems. That’s two lines of code, one generated by magic, the other containing no useful information, that you’ve written to get one piece of data from your controller to your view.

Incidentally, let’s suppose I wanted to change the outlet name from “myDate” to “dateLabel”. How do I do that? Well, you can delete the outlet and create a new outlet from scratch using the above process, and then change the code referencing the outlet. Is there another way? Not that I know of.

And how do we solve the other two problems?

Let’s suppose we’d in fact bound to an input field. So now my outlet looks like this: @IBOutlet weak var myDate: UITextField! (the “!” is semantically significant, not me getting excited).

  1. In XCode, drag from the field in IB to the view controller source code.
  2. Now, instead of creating an outlet, you select Action, and you make sure the type is UITextField, and change the event to ValueChanged.
  3. In the automatically-just-written-for-you Action code add the code _myDate = sender.text!

You’ve now solved the last of the three problems. You’ve had a function written for you automagically, and you’ve written one line of dumb code. That’s three more lines of code (and one new function) to support your single field. And that’s two different things that require messing with the UI during a refactor or if a property name gets changed.

OK, what about the middle problem? That’s essentially a case of refactoring the original code so that you can call it whenever you like. So, for example, you write a showData method, call it from viewDidLoad, and then call it when you have new data.

Now, this is all pretty ugly in basic Javascript too. (And it was even uglier until browsers added document.querySelector.) The point is that it’s possible to make it very clean. How to do this in Swift / XCode?

Javascript may not have invented the hash as a fundamental data type, but it certainly popularized it. Swift, like most recent languages, provides dictionaries as a basic type. Dictionaries are God’s gift to people writing binding libraries. That said, Swift’s dictionaries are strongly typed which leads to a lot of teeth gnashing.

Our goal is to be able to write something like:

self.bindData(<dictionary>)

It would be even cooler to be able to round-trip JSON (the way my Javascript binding library can). So if this works we can probably integrate a decent JSON library.

So the things we need are:

  • Key-value-pair data storage, i.e. dictionaries — check!
  • The ability to add some kind of metadata to the UI
  • The ability to find stuff in the UI using this metadata

This doesn’t seem too intimidating until you consider some of the difficulty involved in binding data to IB.

Tables

The way tables are implemented in Cocoa is actually pretty awesome. In essence, Cocoa tables (think lists, for now) are generated minimally and managed efficiently by the following mechanism:

The minimum number of rows is generated to fill the available space.

When the user scrolls the table, new rows are created as necessary, and old rows disposed of. But, to make it even more efficient rather than disposing of unused rows, they are kept in a pool and reallocated as needed — so the row that scrolls off the top as you scroll down is reused to draw the row that just scrolled into view. (It’s more complex and clever than this — e.g. rows can be of different types, and each type is pooled separately — but that’s the gist.) This may seem like overkill when you’re trying to stick ten things in a list, but it’s ridiculously awesome when you’re trying to display a list of 30,000 songs on your first generation iPhone.
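The pooling mechanism can be sketched in a few lines (Python here for brevity; the class and field names are made up for illustration and are not Cocoa’s):

```python
# A minimal sketch of the row-reuse pool: rows scrolled out of view are
# returned to a pool, keyed by row type, and handed back out instead of
# allocating new ones.
class RowPool:
    def __init__(self):
        self.pools = {}     # one pool of off-screen rows per row type
        self.allocated = 0  # count real allocations for the demo

    def dequeue(self, row_type):
        pool = self.pools.setdefault(row_type, [])
        if pool:
            return pool.pop()          # reuse a row that scrolled off-screen
        self.allocated += 1
        return {"type": row_type}      # stand-in for a freshly created row

    def recycle(self, row):
        self.pools[row["type"]].append(row)

pool = RowPool()
visible = [pool.dequeue("song") for _ in range(10)]  # ten rows fill the screen
for _ in range(29990):                    # scroll through 30,000 songs...
    pool.recycle(visible.pop(0))          # top row scrolls off
    visible.append(pool.dequeue("song"))  # ...and is reused at the bottom
print(pool.allocated)  # → 10
```

Thirty thousand rows scrolled, ten rows ever allocated — that’s the whole trick.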

In order for this to work, there’s a tableDelegate protocol. The minimal implementation of this is that you need to tell the table how many rows of data you have and populate a row when you’re asked to.
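The shape of that minimal delegate contract, sketched in Python (names invented for illustration; the real protocol is Cocoa’s table data source/delegate):

```python
# The table owns the scrolling and reuse machinery and only asks its
# delegate two questions: how many rows, and what goes in row N.
class SongListDelegate:
    def __init__(self, songs):
        self.songs = songs

    def number_of_rows(self):
        return len(self.songs)

    def populate(self, row, index):
        row["text"] = self.songs[index]  # fill a recycled row with fresh data
        return row

delegate = SongListDelegate(["Hey Jude", "Let It Be"])
print(delegate.number_of_rows())         # → 2
print(delegate.populate({}, 1)["text"])  # → Let It Be
```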

So, for each table you’re dealing with you need to provide a delegate that knows what’s supposed to go in that specific table. Ideally, I just want to do something like self.bind(data) in the viewDidLoad method, so how do I create and hook up the necessary delegates? It’s even worse if I want to use something like RootViewController (e.g. for a paged display), which is fiddly to set up even manually. But, given how horrible all this stuff is to deal with in vanilla Swift/Cocoa, that’s just how awesome it will be not to have to do any of it ever again if I can pull this off. Not only that, but to implement this I’m going to need to understand the ugly stuff really well.

Excellent.

Adding Metadata to IB Objects

The first problem is figuring out some convenient way of attaching metadata to IB elements (e.g. buttons, text fields, and so on). After a lot of spelunking, I concluded that my first thought (to use the accessibilityIdentifier field) turns out to be the most practical (even though, as we shall see, it has issues).

There are oodles of different, promising-looking fields associated with elements in IB, e.g. you can set a label (which appears in the view hierarchy, making the control easy to find). This would be perfect, but as far as I could tell it isn’t actually accessible at runtime. There’s also User Defined Runtime Attributes which are a bit fiddly to add and edit, but worse, as far as I’ve been able to tell, safely accessing them is a pain in the ass (i.e. if you simply ask for a property by name and it’s not there — crash!). So, unless I get a clue, no cigar for now.

The nice thing about the accessibilityIdentifier is that it looks like it’s OK for it to be non-unique (so you can bind the same value to more than one place) and it can be directly edited (you don’t need to create a property and then set its name and type, as you do for User Defined Runtime Attributes). The downside is that some things — UITableViews in particular — don’t have them. (Also, presumably, they have an impact on accessibility, but it seems to me like that shouldn’t be a problem if you use sensible names.)

So my first cut of automatic binding for Swift/Cocoa took a couple of hours and handled UITextField and UILabel.

class Bindery: NSObject {
    var view: UIView!
    var data: [String: AnyObject?]!
    
    init(view v: UIView, data dict: [String: AnyObject?]) {
        view = v
        data = dict
    }
    
    // Find every subview whose accessibilityIdentifier matches the given key
    func subviews(name: String) -> [UIView] {
        var found: [UIView] = []
        for v in view!.subviews {
            if v.accessibilityIdentifier == name {
                found.append(v)
            }
        }
        return found
    }
    
    // Called when a bound control changes: copy its value back into the dictionary
    @IBAction func valueChanged(sender: AnyObject?) {
        guard let key = (sender as? UIView)?.accessibilityIdentifier where data.keys.contains(key) else {
            return // not a bound view; nothing to sync
        }
        if let field = sender as? UITextField {
            data[key] = field.text
        }
        updateKey(key) // push the new value to any other views bound to the same key
    }
    
    // Push the value for one key out to every view bound to it
    func updateKey(key: String) {
        let views = subviews(key)
        let value = data[key]
        for v in views {
            if v is UILabel {
                let label = v as? UILabel
                label!.text = value as? String ?? ""
            }
            else if v is UITextField {
                let field = v as? UITextField
                field!.text = value as? String ?? ""
                // Sync edits back into the dictionary when editing ends
                field!.addTarget(self, action: "valueChanged:", forControlEvents: .EditingDidEnd)
            }
        }
    }
    
    // Push every key's value into the UI (returns self for chaining)
    func update() -> Bindery {
        for key in data.keys {
            updateKey(key)
        }
        return self
    }
}

Usage is pretty close to my ideal with one complication (this code is inside the view controller):

    var binder: Bindery!
    var data:[String: AnyObject?] = [
        "name": "Anne Example",
        "sex": "female"
    ]

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        
        binder = Bindery(view:self.view, data: data).update()
    }

If you look closely, I have to call update() from the new Bindery instance to make things work. This is because Swift doesn’t let me refer to self inside an initializer (I assume this is designed to avoid possible issues with computed properties, or to encourage programmers to not put heavy lifting in the main thread… or something). Anyway it’s not exactly terrible (and I could paper over the issue by adding a class convenience method).

OK, so what about tables?

Well, I figure tables will need their own special binding class (which I shockingly call TableBindery) and implement it so that you need to use an outlet (or some other hard reference to the table), and then I use Bindery to populate each cell (this lets you create a cell prototype visually and then bind to it with almost no work). This is how that ends up looking (I won’t bore you with the implementation, which is pretty straightforward once I worked out that a table cell has a child view that contains all of its contents, and how to convert a [String: String] into a [String: AnyObject?]):

    var tableData: [[String: String]] = [
        ["name": "Anne Example", "sex": "female"]
    ]
    
    override func viewDidLoad() {
        ...
        tableBinder = TableBindery(table:table, array: tableData).update()
    }

In the course of getting this working, I discover that the prototype cells do have an accessibilityIdentifier, so it might well be possible to spelunk the table at runtime and identify bindings by using the attributes of the table’s children. The fact is, though, that tables — especially the sophisticated multi-section tables that Cocoa allows — probably need to be handled a little more manually than HTML tables usually do, and having to write a line or two of code to populate a table is not too bad.

Now imagine if Bindery supported all the common controls, provided a protocol for allowing custom controls to be bound, and then imagine an analog of TableBindery for handling Root view controllers. This doesn’t actually look like a particularly huge undertaking, and I already feel much more confident dealing with Cocoa’s nasty underbelly than I was this morning.

And, finally, if I really wanted to provide a self.bindData convenience function — Swift and Cocoa make this very easy. I’d simply extend UIView.

The Graph with a Trillion Edges

After looking at this thread on Hacker News about this paper from a bunch of Facebook researchers, and this excellent article by Frank McSherry, I tried to understand McSherry’s approach (since I found the paper kind of impenetrable) and this is my take on it.

The Basic Problem

Let’s suppose you’re Facebook and you have about 1B members who have an arbitrary number of links to each other. How do you scale and parallelize queries (especially iterated queries) on the data?

McSherry basically implements a single-threaded algorithm on his laptop using brute force. (He uses a publicly available database of 128B edges.) As I understand it, his approach is pretty simple:

  • Encode each connection as a 64-bit number (think: a connection from A (32 bits) to B (32 bits)).
  • Store the numbers in RAM as a sequence of variable-length-integer-encoded differences. E.g. a set of numbers beginning with 4, 100, 2003, 2005,… would be encoded as 4, 96, 1903, 2,… Since the average distance between 128B values scattered among 1T is 8, the expected distance between connections (64-bit values) will be around 9, which we can encode as ~4 bits of data (using variable-length encoding) instead of 64 bits, so the storage needed is proportional to the number of values stored [1].
  • Look for a connection from A to anyone by running through the list and looking for values starting with A’s 32 bits. (You can make this search arbitrarily efficient — trading memory requirements for performance — by partitioning the list into a lookup table.)
  • Store the numbers on disk as deltas between the numbers encoded as a Hilbert Curve [2], using variable-length integer encoding for efficiency. (The naive encoding had an average of ~5 bits between values; the optimized encoding got that down to ~2.)
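The delta-plus-varint idea in the second bullet can be sketched like so (a LEB128-style varint in Python; it’s byte-granular, so less compact than the bit-level figures quoted above, and McSherry’s actual encoding details may differ):

```python
# Store a sorted sequence as variable-length-encoded differences:
# small gaps between neighbors compress to one or two bytes each.
def encode(sorted_values):
    out, prev = bytearray(), 0
    for v in sorted_values:
        delta = v - prev
        prev = v
        while True:                # 7 bits per byte, high bit = "more follows"
            byte = delta & 0x7F
            delta >>= 7
            if delta:
                out.append(byte | 0x80)
            else:
                out.append(byte)
                break
    return bytes(out)

def decode(data):
    values, prev, shift, acc = [], 0, 0, 0
    for byte in data:
        acc |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:        # last byte of this delta
            prev += acc
            values.append(prev)
            acc, shift = 0, 0
    return values

edges = [4, 100, 2003, 2005]       # the example sequence from above
blob = encode(edges)
assert decode(blob) == edges
print(len(blob))  # → 5 bytes, instead of 4 × 8 = 32
```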

[1] Each entry will be log(n) bits in size, there’ll be c of them, and the average difference (to be encoded) will be log(c/n) bits. If we assume c scales as n^2 (highly dubious — it means if the world’s population increased by a factor of 2, I’d have twice as many friends) then we get n^2 * log(n) for storage. If we assume c scales as n log(n) (probably still generous) then we get n log(n) * log(log(n)). At any given scale that function looks pretty linear, although it is actually faster than linear — the gradient hangs around 5 for n between 1B and 100B — I don’t think it’s a cause for concern.

[2] A simple way of looking at Hilbert Curve encoding is that it treats a binary number as a series of 2-bit numbers, with each pair of bits selecting a sector of a 2×2 square (in a somewhat non-obvious way). So all numbers starting with a given pair of bits are found in the same sector. Then, look at the next two bits and subdivide that sector in a similarly non-obvious way until you run out of bits. (In an earlier post, McSherry explains the Hilbert Curve implementation graphically.) Incidentally, simply interleaving the bits of the two numbers has much the same effect.
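The bit-interleaving shortcut mentioned at the end can be shown in a few lines (Python, illustrative only; this is Morton/Z-order rather than a true Hilbert curve, which preserves locality somewhat better):

```python
# Interleave the bits of the two 32-bit endpoints so that edges whose
# endpoints share their leading bits land in the same "sector" of codes.
def interleave(a, b):
    code = 0
    for i in range(32):
        code |= ((a >> i) & 1) << (2 * i + 1)  # bits of A land in odd positions
        code |= ((b >> i) & 1) << (2 * i)      # bits of B land in even positions
    return code

print(hex(interleave(0b10, 0b11)))  # → 0xd  (interleaved bits: a1 b1 a0 b0 = 1101)
```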

That’s it in a nutshell. Obviously there are plenty of simple ways to optimize the lookups.

He points out that the database he used comes in 700 files. These could each be processed by 700 different computers (stored slightly less efficiently, because the average distance between deltas goes up slightly) and any request could easily be split among them.

The interesting thing is that running in one thread on a laptop, this approach runs faster and scales better than the 128 core systems he benchmarks against.

Optimizing

So let’s suppose I wanted to emulate Facebook and have access to 256 virtual machines. (I could easily afford to do this, as a startup, using AWS or similar.) How would I do this in practice? Remember that Facebook also has to deal with users creating new connections all the time.

First of all (kind of obviously) every connection gets stored twice (i.e. connections are two-way). We need this for worst-case scenarios.

Let’s suppose I number each of my virtual machines with an 8-bit value. When I store a connection between A and B I encode the connection twice (as AB and BA), take the last 8 bits of each value, and record the connection on the machine with the corresponding number. Each machine stores the value in its own representation of its link database.

How would this work in practice?

Well, each value being stored is effectively 56 bits (8 bits are pulled off the 64-bit value to pick the machine, and are thus the same for everything that machine stores). Let’s divide that into 24 bits and 32 bits and store (in some efficient manner) 2^24 lists of 32-bit numbers, again stored in some efficient manner (we can break down and use McSherry’s approach at this point — effectively we’re progressively applying McSherry’s approach to different portions of the numbers anyway).

So any lookup will entail grabbing 24 bits of the remaining 56 bits, leaving us with 1/2^32 of the original data to search. Our efficient encoding algorithm would be to split the remaining data using the same approach among the cores, but I’m sure you get the idea by now.

So in a nutshell:

To record the existence of the edge AB:

  1. Encode it as two 64-bit values (AB and BA).
  2. Reduce those to two 56-bit values and two 8-bit values.
  3. Send each 56-bit value to the machine designated by its 8-bit value (occasionally both go to the same machine).
  4. Each machine reduces the 56-bit value to a 32-bit value with a 24-bit lookup.
  5. Then inserts the remaining 32-bit value into a list of ~E/2^32 values (where E is the total number of edges, so ~233 values for a 1T edge DB).

Inserting a value in a list of n values is a solved O(log(n)) problem. Note that all connections Ax will be stored on one machine, but connections xA will be scattered (or vice versa) because we’re storing two-way connections. (Important for the worst case, see below.)

To find all edges from A:

  1. Grab 8-bits using the same algorithm used for storing AB.
  2. Ask the designated machine for the list of ~E/2^32 connections from A.
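The store-and-lookup scheme above can be sketched like this (Python, with 256 in-process objects standing in for real machines; the routing details are my reading of the steps above, and all names are made up):

```python
import bisect

class Machine:
    def __init__(self):
        self.buckets = {}  # 24-bit prefix of the source id -> sorted list of 32-bit targets

    def insert(self, a, b):
        # the 56-bit value splits into a 24-bit bucket lookup plus a 32-bit entry
        bisect.insort(self.buckets.setdefault(a >> 8, []), b)  # O(log n) insertion

    def edges_from(self, a):
        return self.buckets.get(a >> 8, [])

machines = [Machine() for _ in range(256)]

def record_edge(a, b):
    # store the edge both ways; the low 8 bits of the source id pick the machine
    machines[a & 0xFF].insert(a, b)
    machines[b & 0xFF].insert(b, a)

def edges_from(a):
    # a single machine holds every edge of the form Ax
    return machines[a & 0xFF].edges_from(a)

record_edge(12345, 67890)
record_edge(12345, 99999)
print(edges_from(12345))  # → [67890, 99999]
```

Note how the two directions behave exactly as described: all Ax entries cluster on one machine, while the xA entries scatter by x’s low bits.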

So to sum up — storing a connection AB involves bothering two machines. Determining A’s connections involves bothering 1 machine in most cases, and (see below) all the machines in rare cases. However, the rare cases should never actually exist.

Worst Case Response

One problem here is that the worst case response could be huge. If A has 10M outgoing edges then one machine is going to have to cough up one giant database. (If one node has more connections than can easily be stored on one machine then we can deal with that by getting our machine names from the entropic bits of both user ids, but let’s ignore that for now.)

In sufficiently bad cases we reverse the lookup. And reverse lookups will never be bad! If we hit the machine containing all connections of the form Ax and there are too many to deal with, the machine tells us to reverse the lookup, and we ask all our machines for connections xA, which we can reasonably expect to be evenly distributed (if 256 bots sign up for and follow A simultaneously, there will be 256 new entries of the form Ax on one machine, but only one connection of the form xA on each machine).

So, the worst case performance of this design comes down to the cost of transmitting the results (which we can evenly distribute across our machines) — you can’t really do better than that, and the key step is treating each machine as having responsibility for one square in a grid picked using Hilbert Curves.

Note that the worst case isn’t terribly bad if we just want to count connections, so interrogating the database to find out how many outgoing links A has is practical.

Anyway, that never happens…

One can assume in the case of most social networks that while there will often be worst cases of the form A follows B (i.e. B has 10M followers) there will never be cases where A follows 10M users (indeed anyone like that could be banned for abuse long, long before that point is reached).

It follows that when someone posts something (on Twitter say) it only gets pulled by followers (it doesn’t get pushed to them). A posts something — solved problem, just update one list. B asks for update — we need to check B’s followers — solved problem. So we don’t even need to store two-way connections or worry about reverse lookups.

Incidentally…

This is apparently very similar to PageRank (Google’s algorithm for evaluating websites), which I’ve never bothered to try to understand.

Consider that each edge represents A links to B, and we rank a page by how many incoming links it has and we throw out pages with too many outgoing links (i.e. link farms, sitemaps). Then we iterate, weighting the value by the previously calculated values of each page. Google (et al.) add to that by looking at the text in and around the link. E.g. “publicly available database of 128B edges”, above, links to a page about exactly that, so Google might infer that the linked page is “about” things in those words (or, in the case of “click here” type links, it looks at the words around the link).
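The iteration described here can be sketched as a toy PageRank (simplified, with the standard damping factor; not Google’s actual implementation):

```python
# Rank pages by incoming links, weighting each link by the rank of the
# page it comes from, and iterate until the ranks settle.
def page_rank(links, iterations=50, d=0.85):
    pages = {p for p, _ in links} | {p for _, p in links}
    rank = {p: 1.0 / len(pages) for p in pages}
    out_degree = {p: sum(1 for a, _ in links if a == p) for p in pages}
    for _ in range(iterations):
        incoming = {p: 0.0 for p in pages}
        for a, b in links:
            incoming[b] += rank[a] / out_degree[a]  # weight by the linker's own rank
        rank = {p: (1 - d) / len(pages) + d * incoming[p] for p in pages}
    return rank

# Everyone links to "hub", so iteration ranks it highest:
ranks = page_rank([("a", "hub"), ("b", "hub"), ("c", "hub"), ("hub", "a")])
print(max(ranks, key=ranks.get))  # → hub
```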

So if from incoming links we infer that a page is about “scalable algorithms”, and from the quality of the pages with incoming links the quality of the page itself — and we maintain a database of connections between topics and pages — then when someone searches for “scalable algorithms”, we take the id of “scalable” and “algorithms”, find pages about both, sort them by “quality”, create a web page, and festoon it with sponsored links and ads. By golly we’ve got a search engine!