Epic Fail

Atul Gawande describes the Epic software system being rolled out in America’s hospitals.

It reads like a potpourri of everything bad about enterprise IT: standardizing on endpoints instead of interoperability, Big Bang changes instead of incremental improvements, and failure to adhere to the simplest principles of usability.

The sad thing is that the horrors catalogued in this article are all solved problems. Unfortunately, it seems that in the process of “professionalizing” usability, the discipline has lost its way.

Reading through the article, you can just tally up the violations of my proposed Usability Heuristics, and there are very few issues described in the article that would not be eliminated by applying one of them.

The others would fall to simple principles like using battle-tested standards (ISO timestamps anyone?) and picking the right level of database normalization (it should be difficult or impossible to enter different variations of the same problem in “problem lists”, and easier to elaborate on existing problems).

There was a column of thirteen tabs on the left side of my screen, crowded with nearly identical terms: “chart review,” “results review,” “review flowsheet.”

I’m sure the tabs LOOKED nice, though. (Hint: maximize generality, minimize steps, progressive disclosure, visibility.)

“Ordering a mammogram used to be one click,” she said. “Now I spend three extra clicks to put in a diagnosis. When I do a Pap smear, I have eleven clicks. It’s ‘Oh, who did it?’ Why not, by default, think that I did it?” She was almost shouting now. “I’m the one putting the order in. Why is it asking me what date, if the patient is in the office today? When do you think this actually happened? It is incredible!”

Sensible defaults can be helpful?! Who knew? (Hint: sensible defaults, minimize steps.)

This is probably my favorite (even though it’s not usability-related):

Last fall, the night before daylight-saving time ended, an all-user e-mail alert went out. The system did not have a way to record information when the hour from 1 a.m. to 1:59 a.m. repeated in the night. This was, for the system, a surprise event.

Face, meet palm.

Date-and-time handling is a fundamental issue in all software, and the layers of sign-off that must have approved a system that couldn’t cope with daylight saving time boggle my mind.
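To make the problem concrete, here’s a minimal Javascript sketch (the dates are just for illustration) of why a bare local timestamp can’t represent that repeated hour, while an ISO 8601 timestamp with a UTC offset can:

// When US daylight saving time ends (e.g. on 2018-11-04), the wall-clock
// time 1:30 a.m. happens twice. A bare local timestamp is ambiguous:
const whenDidItHappen = "2018-11-04 01:30"; // which 1:30? No way to tell.

// An ISO 8601 timestamp that includes the UTC offset is unambiguous:
const firstPass  = new Date("2018-11-04T01:30:00-04:00"); // 1:30 EDT
const secondPass = new Date("2018-11-04T01:30:00-05:00"); // 1:30 EST
console.log((secondPass - firstPass) / 60000); // 60 -- a real hour apart

Record the offset (or just store UTC) and the repeated hour stops being a “surprise event”.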

A former colleague of mine linked to the US Web Design System as if this were some kind of intrinsically Good Thing. Hilariously, the site itself does not appear to have been designed for accessibility or even decent semantic markup, and it blocks robots.

Even if the site itself were perfect, the bigger problems are that (a) there are plenty of similar open source projects they could simply have blessed; (b) it’s a cosmetic standard; and (c) there’s pretty much no emphasis on the conceptual side of usability. So, at best, it helps make government websites look nice and consistent.

(To be continued…)

Please enter date of birth…

I’d like every person who has implemented a date picker control to enter the birthdays of their living relatives one hundred times using their own date-pickers. Then try some ancestors.

Date pickers are perhaps one of the worst controls one deals with on a daily basis. They’re pretty terrible even when used for their expected purpose (entering dates for appointments). Perhaps the simplest criterion by which to judge them is:

Is your date picker easier to use than a keyboard?

A followup question would be:

If you want to use your keyboard anyway, will you find it harder than if your date picker weren’t there?

The primary reason date pickers exist is not the user’s convenience. Entering a date using a keyboard typically requires 4 keystrokes for a nearby date (e.g. “4/1” or “12/25”) and at most 10 keystrokes for a date with a specific year. Now, if you’re picking a date within a predictable and small period of time, you can provide graphical calendars to reduce date entry to a single click, and date-range entry to a single click-and-drag. So, in a certain restricted set of use-cases, date pickers can be convenient.
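To show how little machinery the keyboard needs, here’s a rough sketch (the function name, the two-digit-year rule, and the US month/day assumption are all mine) of turning those few keystrokes into a date:

// Rough sketch: turn a short keyboard entry like "4/1", "12/25", or
// "12/25/1971" into a Date. Assumes US month/day order; a real
// implementation would follow the user's locale.
function parseShortDate(text, today = new Date()) {
  const parts = text.split("/").map(Number);
  if (parts.length < 2 || parts.some(isNaN)) return null;
  let [month, day, year] = parts;
  if (year === undefined) year = today.getFullYear();    // "4/1" means this year
  else if (year < 100) year += year < 30 ? 2000 : 1900;  // naive two-digit years
  const date = new Date(year, month - 1, day);
  // reject things like "13/45" that Date would silently "correct"
  return date.getMonth() === month - 1 && date.getDate() === day ? date : null;
}

parseShortDate("4/1");        // April 1 of the current year
parseShortDate("12/25/1971"); // December 25, 1971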

The iOS Date Picker
The iOS Date Picker makes me furious every time I use it. The absolute best case is that you want to set something in the next few days. Anything else and you need to remember where you started and then scroll. Now what if you want to schedule your next teeth cleaning in six months? OK, what if you were entering a date of birth? Good. Fucking. Luck. And there’s no option to simply type in the damn date.

So, if I’m trying to enter a new appointment, and the date picker defaults to, say, showing today’s date and time, we’re in pretty good shape. Mostly.

Similarly, if I want to move an appointment from today to tomorrow or from Tuesday to a week from Tuesday we’re again in pretty good shape.

But if I want to select the time I started working at Andersen Consulting (October 1993?) things get a lot less pleasant fast. If I want to select the time I spent living in Canberra (March 1983 to September 1993… ish) it’s getting seriously unpleasant. If I need to enter birthdates for my family… blech. Luckily with America’s healthcare system I probably only need to do that a few dozen times a year.

One reason date pickers exist is that parsing typed or written dates is difficult. To begin with, the US has chosen to use a date format that is different from that used in every other part of the world — month/day/year. Since a lot of dates are ambiguous (if the first two numbers are both 12 or less, you can’t know for sure which is which), a date picker allows the UI designer to make the user’s choice clear to the software.

The simplest option in most cases would be to provide a calendar with a text field. If you type a date, it gets parsed and displayed on the calendar. If you click on the calendar, it replaces the typed date. In most cases, the calendar can be something that’s only visible when the text field is in focus.
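Here’s a rough sketch of that wiring in the browser (the element IDs and the day-cell markup are hypothetical, and it reuses the parseShortDate sketch from above):

// Sketch: a text field that drives a calendar element, and vice versa.
// Assumes #date-text is an <input> and #date-calendar is an element whose
// day cells each carry a data-date attribute (e.g. data-date="2016-03-15").
const textField = document.querySelector("#date-text");
const calendar = document.querySelector("#date-calendar");

// Typing a date updates the calendar...
textField.addEventListener("input", () => {
  const date = parseShortDate(textField.value);
  if (date) calendar.dataset.selected = date.toDateString();
});

// ...and clicking a day in the calendar replaces whatever was typed.
calendar.addEventListener("click", (event) => {
  const day = event.target.dataset.date;
  if (day) textField.value = day;
});

// Only show the calendar while the text field has focus (a real version
// would delay the blur handler so clicks on the calendar still land).
textField.addEventListener("focus", () => { calendar.hidden = false; });
textField.addEventListener("blur", () => { calendar.hidden = true; });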

Chrome's date picker
Chrome’s date picker is actually one of the best I’ve seen. That said, it has “clever” touches that can be confusing (e.g. if you type “4” as the month, it automatically advances to the next field before you type “/” or start typing the day), which in turn means it won’t let me enter the date in non-US order and flip it, let alone enter the date in non-numeric form (e.g. March 15), and it doesn’t deal well with anything less than a four-digit year.

Another reason date pickers exist is that you often want to display dates, and date pickers allow dates to be rendered in a consistent form, often with useful context. E.g. today vs. the date selected. Also, if you’re going to display something, then it’s nice if you can also directly manipulate it.

But, directly manipulating dates doesn’t really make sense (unless maybe you’re from Gallifrey). What you want to manipulate is events embedded within calendars. In the end, the best solution is probably to figure out how to avoid date pickers altogether.

Mac OS X Calendar
Mac OS X Calendar doesn’t use a date picker. You simply type in your appointment in shorthand (and look, it figured out I had entered date/month automagically). This also happens to have been a feature of the Newton (which would actually figure out that what you had entered was an appointment).
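Just to show the shorthand idea isn’t magic, here’s a toy Javascript sketch (the parsing rules are mine, and vastly cruder than what Calendar or the Newton actually did) that pulls a weekday and a time out of free text:

// Toy sketch: extract a weekday and a time from shorthand like
// "Dentist tuesday 3pm" and turn them into an event { title, date }.
function parseShorthand(text, from = new Date()) {
  const days = ["sunday", "monday", "tuesday", "wednesday", "thursday", "friday", "saturday"];
  const words = text.toLowerCase().split(/\s+/);
  const date = new Date(from);

  const dayIndex = days.findIndex((day) => words.includes(day));
  if (dayIndex >= 0) {
    // jump to the next occurrence of that weekday (a week out if it's today)
    date.setDate(date.getDate() + ((dayIndex - date.getDay() + 7) % 7 || 7));
  }

  const time = words.map((word) => word.match(/^(\d{1,2})(am|pm)$/)).find(Boolean);
  if (time) {
    date.setHours((Number(time[1]) % 12) + (time[2] === "pm" ? 12 : 0), 0, 0, 0);
  }

  const title = words.filter((word) => !days.includes(word) && !/^\d{1,2}(am|pm)$/.test(word));
  return { title: title.join(" "), date };
}

parseShorthand("Dentist tuesday 3pm"); // { title: "dentist", date: next Tuesday at 3:00 p.m. }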

HyperCard, Visual Basic, Real Basic, and Me

When the Mac first appeared it was a revelation. A computer with a powerful, consistent user-interface (with undo!) that allowed users to figure out most programs without ever reading a manual.

I can remember back in 1984 sitting outside the Haydon-Allen Tank (a cylindrical lecture theater on the ANU campus that tended to house the largest humanities classes and many featured speakers) playing with a Mac on display while Apple reps inside introduced the Mac to a packed house. (My friends and I figured we’d rather spend time with the computer than watch a pitch.)

How did undo work? It wasn’t immediately obvious.

When we quit an application or closed a document, how did the program know we had unsaved changes? We checked: if the document had no changes, or the changes had been saved, the computer knew.

We were hardcore math and CS geeks but computers had never, in our experience, done these kinds of things before so it took us a while to reverse-engineer what was going on. It was very, fucking, impressive.

But it was also really hard to do with the tools of the time. Initially, you couldn’t write real Mac software on a Mac. At best, there was MacPascal, which couldn’t use the toolbox and couldn’t build standalone applications, and QuickBasic, which provided no GUI for creating a GUI and produced really clunky results.

To write Mac programs you needed a Lisa, later a Mac XL (same hardware, different software). It took over a year for the Mac SDK to appear (via pirate copies), and it was an assembler that spanned multiple disks. Eventually we got Consulair-C and Inside Macintosh but, to give you an idea, the equivalent of “hello world” was a couple of pages of C or Pascal, most of which was incomprehensible boilerplate. The entire toolbox relied heavily on function pointers, really an assembly-language concept, and in some cases programmers had to manually save register state.

No-one’s to blame for this — Xerox provided much cleaner APIs for its much more mature (but less capable) GUI, and far better tooling — but the cost was a computer that ran dog slow, that no-one could afford, and that was functionally far inferior to the Mac.

The first really good tool for creating GUI programs was HyperCard. I can remember being dragged away from a computer lab at ADFA (where a friend was teaching a course on C) which had been stocked with new Mac SEs running HyperCard.

For all its many flaws and limitations, HyperCard was easy to use, fast, stable, and forgiving (it was almost impossible to lose your work or data, and it rarely crashed in an era when everything crashed all the time). Its programming language introduced a yet-to-be-equalled combination of being easy to read, easy to write, and easy to debug (AppleScript, which followed it, was horribly inferior). When HyperCard 2 brought a really good debugger (but sadly no color) and a plugin architecture, things looked pretty good. But then, as was its wont in those days, Apple’s attention wandered and HyperCard languished. (Paul Allen’s clone of HyperCard, Toolbook for Windows, was superb, but it was a Windows product so I didn’t care.)

Eventually I found myself being forced to learn Visual Basic 3, which, despite its many flaws, was also revolutionary in that it took HyperCard’s ease of use and added the ability to create native look and feel (and call native APIs if you knew what you were doing, which I did not). With Visual Basic 3 you could essentially do anything any Windows application could do, only slower. (HyperCard was notably faster than VB, despite both being interpreted languages, owing to early work on JIT compilers.)

After using VB for a year or two, I told my good friend (and a great programmer) Andrew Barry that what the Mac really needed was its own VB. The result was Realbasic (now Xojo) of which I was the first user (and for a long time I ran a website, realgurus.com, that provided the best source of support for Realbasic programmers). Realbasic was far more than a VB for the Mac since it was truly and deeply Object-Oriented and also cross-platform. I could turn an idea into a desktop application with native look and feel (on the Mac at least) in an evening.

When MP3 players started proliferating on Windows, I wrote an MP3 player called QuickMP3 in a couple of hours after a dinner conversation about the lousy state of MP3 players on the Mac. By the next morning I had a product with a website on the market. (I distributed it as shareware; registration was $5 through Kagi — RIP — which was the lowest price that made sense at the time. I think Kagi took about $1.50 of each sale, and I had to deal with occasional cash and checks in random currencies.)

Over the years, I wrote dozens of useful programs using Realbasic, a few commercially successful ones (e.g. Media Mover 3 and RiddleMeThis), and an in-house tool that made hundreds of thousands of dollars (over the course of several years) with a few days’ effort.

Today, I find that Xojo (as Realbasic rebranded itself) has become bloated, unstable, and expensive. It has never captured native look and feel in the post-Carbon world on the Mac, and anything that looks OK on Windows looks like crap on the Mac and vice versa, which undercuts its benefits as a cross-platform tool. Also, my career has made me an expert on Javascript and web development.

So my weapons of choice these days for desktop development have become nwjs and Electron. While web-apps don’t have desktop look and feel (even if you go to extremes with frameworks like Sproutcore or Cappuccino), neither do many desktop apps (including most of Microsoft’s built-in apps in Windows 10). Many successful commercial apps either are web apps (e.g. Slack) or might as well be (e.g. Lightroom).

I mention all of this right now because it closes the loop with my work on bindinator — anything that makes web application development faster and better thus helps desktop application development. I think it also clarifies my design goals with bindinator: I feel that in many ways ease of development peaked with Realbasic, and bindinator is an attempt to recreate that ease of development while adding wrinkles such as automatic binding and literate programming that make life easier and better.

Returning to the Adobe fold… sort of

I remain very frustrated with my Photography workflow. No-one seems to get this right and it drives me nuts. (I’m unwilling to pay Apple lots of money for a ridiculous amount of iCloud storage, which might work quite well, but it still wouldn’t have simple features like prioritizing images that I’ve rated or looked at closely over others automagically, or allow me to rate JPEGs and have the rating carried over to the RAW later.)

Anyway, Aperture is sufficiently out-of-date that I’ve actually uninstalled it and Photoshop still has some features (e.g. stitching) that its competition cannot match. So, $120 for a year of Photoshop + Lightroom… let’s see how it goes.

Lightroom

I was expecting Lightroom to be awesome what with all the prominent folks who swear by it. So far I find it unfamiliar (I did actually use LR2, and of course I am a Photoshop ninja) to the point of frustration, un-Mac-like, and ugly, ugly, ugly.

Some of my Lightroom Gripes

A large part of the problem is terrible use of screen real estate. It’s easy to hide the menubar (once you find where Adobe has hidden its non-standard full screen controls), but it’s hard (impossible) to hide the idiotic mode menu “Identity Plate”. (I found the “Identity Plate Editor” (WTF?) by right-clicking hopefully all over the place, which allowed me to shrink the stupidly large lettering, but that just left the empty space behind.) How can an application that was created brand new (and initially Mac-only) have managed to look like a dog’s breakfast so quickly?

But there are many little things that just suck.

  • All the menus are horrible — cluttered and full of nutty junk. Looks like design by committee.
  • The dialog box that appears when you “Copy…” the current adjustments is a crime against humanity (it has a weird set of defaults, which I overrode by clicking “check none” when I only wanted to copy some very specific settings, and now I can’t figure out how to restore the defaults).
  • The green button doesn’t activate full screen mode. There are multiple full screen modes and none of them are what I want.
  • Zooming with the trackpad is weird. And the “Loupe” (nothing like or as nice as Aperture’s) changes its behavior for reasons I cannot discern. (I finally figured out that the zoom in shortcut actually goes to 1:1 by default, which is useful, although it’s such a common feature I’d have assigned a “naked” keystroke to it, such as Z, which instead toggles between display modes.)
  • The main image view seizes up after an indeterminate amount of use and shortly afterwards Lightroom crashes. (This is on a maxed-out MacBook Pro 15″.)
  • I can’t hide the stupid top bar (with your name in it). I can’t even make it smaller by reducing the font size of the crap in it.
  • Hiding the “toolbar” hides a random thing that doesn’t seem to me to be a toolbar.
  • By default the left side of the main window is wasted space. Oh, and the stupid presets are displayed as a list of words — you need to mouse over them to get a low-fidelity preview.
A crime against humanity.

I found Lightroom’s UI sufficiently annoying that I reinstalled Aperture for comparison. Sadly, Lightroom crushes Aperture in ways that really matter. E.g. its Shadow and Highlight tools simply work better than Aperture’s (I essentially need to go into Curves to do anything slightly difficult in Aperture), and it has recent features (such as Dehaze — I’m pretty sure inspired by a similar feature DxO was very proud of a while back). After processing a few carefully underexposed RAW images* in both programs, Lightroom gets me results that Aperture simply can’t match (it also makes it very tempting to make the kind of over-processed images you see everywhere these days with amped up colors, quasi-HDR effects, and exaggerated micro-contrast).

(* Quite a few years ago someone I respect suggested that it’s a good idea to “underexpose” most outdoor shots by one stop to keep more highlight detail. This is especially important if the sky is visible. These days, everyone seems to be on the “ISO Invariance” bandwagon, which essentially means doing nothing to the signal off the sensor (rather than boosting effective ISO) when capturing RAW — in essence, “expose to the left” automatically, the exact opposite of the “expose to the right” bandwagon these clowns were all on two years ago — here’s a discussion of doing both at the same time. Hilarious. On the bright side, ISO Invariance pretty much saves ETTR nuts from constantly blowing their highlights.)

The Photos App is far more competitive with Lightroom than Aperture. And its UI is simply out of Lightroom’s league (see those filters on the right? Lightroom simply has a list of names).

Funny thing, though: the new Photos app gives Lightroom a much better run for its money (um, it’s free), has Aperture’s best UI feature (organic customization), and everything runs much faster than Lightroom. The problem with Photos is that it is missing key features of Lightroom, e.g. Dehaze, Clarity, and (most curiously) Vibrance. You just can’t get as far with Photos as you can with Lightroom. (If you have Affinity Photo you can use its Dehaze more-or-less transparently from Photos. It’s a modal, but then Lightroom is utterly modal.)

On the UI level, though, Photos simply spanks Lightroom’s Develop mode. Lightroom’s organization tools, clearly with many features requested by users, are completely out of Photos’ league.

I also tried Darktable (the open source Lightroom replacement) for comparison. I think its user interface is in many ways nicer than Lightroom’s (it looks and feels better, although much of its lack of clutter is owed to a corresponding lack of features), but the sad news is that Darktable’s image-processing capabilities don’t even compete with Aperture, let alone Photos. (One thing I really like about Darktable is that it applies “orientation” (automatic horizon leveling), “sharpen”, and “base curve” automagically by default. Right now this isn’t customizable — there’s a placeholder dialog — but if it were it would be an awesome feature.)

The lack of fit and finish in Lightroom is unprofessional and embarrassing. If it’s not obvious to you, the red line shows the four different baselines used for the UI elements.
This is hilarious. Lightroom’s “About box” uses utterly non-standard buttons that behave like tab selectors. This is actually regression for Adobe, which used to really take pride in its About boxes.

At bottom, Lightroom doesn’t look or feel like an application developed by or for professionals. It’s very capable, but its design is — ironically — horrible.

Photoshop

Photoshop’s capabilities are, by and large, unmatched, but its UI wasn’t good when it first came out and many of its worst features have pretty much made it through unscathed by taste, practicality, or a sense of a job well done. Take a look at this gem:

Adobe Photoshop’s horrible Radial Blur dialog

This was an understandably frustrating dialog back in 1991 — in fact, the attempt to provide visual cues with the lines was probably as much as you could expect — but it hasn’t changed since, while every other application I use provides a GPU-accelerated live preview (in Acorn it’s non-destructive too). What’s even worse is that it looks like the dialog’s layout has been slightly tweaked to allow for too-large-and-non-standard buttons (with badly centered captions that would look worse if there were a glyph with a descender in it). At least Photoshop doesn’t waste a buttload of space on a mode menu: instead there’s a small popup that lets you pick which (customizable) “workspace” you want to use, and the rest of the bar is actually useful (it shows common settings for the currently selected tool).

In the end, Photoshop at least looks reasonably nice, and its UI foibles are things I’ve grown accustomed to over twenty-five years.

I can’t wait until I get to experience Adobe’s Updater…

Usability on the Underside

A minimalist Cocoa app

I’ve always thought it ironic that Apple makes the most usable computers and the least usable APIs. I’m referring, of course, to AppleScript — just kidding. At first I thought it was a huge failing (one that belies Steve Jobs’s claimed obsession with making even the stuff the user never sees beautiful; but then he likely never looked at APIs); then I excused it as Apple wanting to make dealing with its APIs a pons asinorum that less capable programmers wouldn’t be able to cross.

(Note: I’m not kidding that AppleScript is horrible. Just that it’s twenty years old and that horse has been beaten to death.)

I think I was right the first time.

But, it has gotten a lot better.

Perhaps the single most impressive piece of software Apple ever shipped was HyperCard. There was, basically, nothing wrong with HyperCard that couldn’t have been easily fixed by version 3. The chief problems with HyperCard were that:

  • Images were not a first class entity (in VB3, for example, you could load an image into an image variable, much like you could load a text file into a string variable in almost any language with decent string manipulation (i.e. not C/C++ or Pascal, but every other popular language)).
  • Binary blobs were not a first class entity (something that you’d probably implement for free while implementing the above).
  • The UI gadgets weren’t native UI gadgets.
  • There was no bridge to the native Toolbox short of writing a plugin in C/C++ or using the crazy-but-genius third party HyperTalk compiler.

That may seem like a lot of key flaws, but there’s nothing there that can’t be solved with a bunch of rudimentary programming. Consider what HyperCard got right:

  • First of all, it was shockingly stable — you could often work on HyperTalk projects for days with no serious crashes (let alone System reboots). This was in an era when computers (both Mac and PC) crashed like crazy.
  • You could quickly build user interfaces with drag-and-drop, including bitmap drawing tools
  • State was persistent by default
  • Every application was its own database
  • By default you ran inside the development environment — you lived in the development environment
  • Shockingly fast — once HyperTalk 2 came out with its JIT compiler, performance was amazingly good (see note below).
  • Its language was amazingly accessible — both readable and writable — more readable than AppleScript and far, far more writable. Most people, including non-coders, could figure out how to do simple stuff within a few hours.
  • Awesome levels of introspection (which allowed metaprogramming)
  • First class debugging tools
  • Always-available REPL

Many of these virtues have yet to be matched by any programming language. VB3 essentially took a few of these virtues, replaced the incredibly easy to work with but hard-to-implement HyperTalk with the very easy to use and already-implemented BASIC, fixed all the obvious faults, and became the foundation of Microsoft’s entire approach to software development. And, say what you will about Microsoft, they do make good development tools.

Note on performance: we had an ambitious multimedia project written in VB3 that barely ran on maximally configured Windows PCs. We were using then-bleeding-edge 486 DX2/66 desktops for development, and our target platform was the then just-shipped IBM ThinkPad 750 (which cost $11,000 in Australia, had a 486DX4/75 CPU by my recollection, and was the first name-brand MS-DOS compatible laptop with both color graphics and sound). I cloned the project on my aging Quadra 700 (25MHz 68040) using HyperCard and it ran circles around the PCs.

Usable Languages

It’s not easy to create a really approachable programming language.

Obviously.

It’s been done a few times — notably with BASIC. Javascript is a major achievement, but compared to HyperTalk it’s still very difficult to learn. Javascript essentially thrived by being ubiquitous, indispensable, and based on a simplified subset of the widely understood C-ish syntax:

if (x == y && z != q){ p = foo ? bar : baz; }

could be in any one of a dozen C-ish languages, including Javascript. HyperCard was based on a simplified subset of the even more widely understood English syntax:

get the width of button ok
get it * 2
set the width of button ok to it

Javascript was chiefly indispensable because browsers lacked obvious functionality and simply wouldn’t address it — so instead of simple ways to center stuff or make image-buttons change when the user clicked on them, we got a weird new programming language. (E.g. it took the addition of CSS for us to make it possible for something to change its appearance based on mouse events without programming, and having to code CSS is hardly an improvement over having to code in Javascript.) I’m not sure if this was a conspiracy by Netscape to make Javascript popular — never ascribe to any other cause that which is adequately explained by incompetence — but it really took a huge amount of community effort for programming in Javascript to become even vaguely tolerable.

When I finally was forced to get good at programming Javascript (around 2005), just figuring out how to get some kind of debugger working was a major pain in the neck that many developers never bothered with, and writing browser-portable code was so difficult that most developers threw up their hands and just targeted IE6.

My point is that Javascript isn’t a usable language, it’s just more usable than C or Java (not saying much) and supported by a ubiquitous and indispensable environment (the browser) which has grown to have many of the virtues of HyperCard over time (e.g. browsers are pretty stable now, you get a REPL and a decent debugger) while retaining many of the faults (native UI elements are conspicuously missing, and would be amazingly useful given how much effort goes into making half-assed faux elements).

(It’s worth noting that the web was at least partially inspired by HyperCard. The syntax for comments:

<!-- comment -->

looks to me like a little ode to HyperCard, whose comment syntax was

-- comment

although I’m told they both got it from some precursor; still the influence of HyperCard on the web is pervasive.)

I’m not saying that we should have to write

set the borderLeft of the style of button "OK" to "2px solid black" 

instead of the far more economical (but hardly intuitive):

$("#ok").css({borderLeft: "2px solid black"})

but the fact all web developers find the latter programming style tolerable is something of a miracle, and relies on the rise of jQuery — the currently preferred library for papering over the manifold stupidities of web browsers.

I think it’s safe to say that something like

find("button.ok").style.borderLeft = "2px solid black"

would be reasonably intuitive, C-ish, and decently economical, but despite 25 years of progress and a huge amount of effort by some of the best programmers working at some of the richest companies in the world, the best we can come up with right now out-of-the-box is something like:

document.querySelector("button.ok").style.borderLeft = "2px solid black"

which is both far less readable and less intuitive than the HyperTalk version and yet less concise.

A tangential rant on idiotic naming and the misuse of namespaces

How is it that, despite everything being namespaced to hell and back these days, “querySelector” isn’t called — I don’t know — “find”? Why not have a global function named “find”, for fuck’s sake, or a global find object that supported find.byURL(), find.byCSSSelector()? What’s the point of sticking everything in its own private namespace if you can’t give the things people use constantly a short, meaningful, intuitive name?
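For what it’s worth, papering over this yourself takes a couple of lines; here’s a minimal sketch of the kind of global find I mean (the names are made up, obviously, and it’s just a thin wrapper over what already exists):

// A minimal sketch of a global "find" -- thin wrappers over existing APIs.
const find = (selector) => document.querySelector(selector);
find.all = (selector) => [...document.querySelectorAll(selector)];
find.byCSSSelector = find;
find.byURL = (url) => fetch(url); // returns a Promise for the resource

// Now the earlier example reads the way it always should have:
find("button.ok").style.borderLeft = "2px solid black";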

End rant.

Which gets us back to Apple

The first thing in the original Inside Macintosh (after introductory stuff) was a minimalist Macintosh application (I can’t remember whether it was in Pascal or C) that launched, opened up a Window with a text editor inside it, and had a menu and a main event loop. It was something like four pages of code. Note that this application did not handle undo, did not save or load files (maybe it did), and didn’t support multiple documents, and it didn’t behave nicely with other applications (e.g. redraw its Window after being sent to the background) because MacOS was single-tasking at the time. It was barely more than “hello world” in a Window.

This was because, in the original toolbox, when you created a window you had to do things like track mouse behavior in the window’s close box yourself. The fact you didn’t need to handle every keystroke inside the text editor was actually one of the miraculously cool things about the Mac toolbox in 1984 (but the Inside Mac documentation and natively-hosted compilers were still a couple of years away in 1984).

Thanks to over thirty years of progress, and the efforts of some of the greatest programmers and the most design-and-usability-focused (and now richest) company the world has ever known, we can now do the equivalent of the above in — I have no idea how much code, so let’s find out. I tried googling for an example somewhere and the best I could come up with was this pure-code Obj-C “Hello World” for iOS.

This approach isn’t necessarily the first thing you should learn when learning to develop applications for a platform, but it should probably be the second or third. You should have a pretty good idea how absolutely everything works at some level.

#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>

@interface MyDelegate : UIResponder< UIApplicationDelegate >
@end

@implementation MyDelegate
- ( BOOL ) application: ( UIApplication * ) application
           didFinishLaunchingWithOptions: ( NSDictionary * ) launchOptions {
  UIWindow *window = [ [ [ UIWindow alloc ] initWithFrame: 
    [ [ UIScreen mainScreen ] bounds ] ] autorelease ];
  window.backgroundColor = [ UIColor whiteColor ];
  UILabel *label = [ [ UILabel alloc ] init ];
  label.text = @"Hello, World!";
  label.center = CGPointMake( 100, 100 );
  [ label sizeToFit ];
  [ window addSubview: label ];
  [ window makeKeyAndVisible ];
  [ label release ];

  return YES;
}
@end

int main( int argc, char *argv[ ] )
{
  UIApplicationMain( argc, argv, nil, NSStringFromClass( [ MyDelegate class ] ) );
}

That’s actually a lot better than I would have guessed, and a gigantic improvement over writing a minimal Mac application in 1986 (but about on par with writing a MacApp 2.x application in 1993). I can’t find a similar example for Swift, and I’d prefer to implement a desktop application (as its minimal functionality is less minimal than an iPhone app, which, for example, is killed by the OS rather than being quit). So I used this longer but similarly minimal Obj-C desktop example as a starting point. Note that this example doesn’t even display “Hello World”, so I added that.

One thing I really like about this second example is that it doesn’t require defining a new class or subclass — like the original Inside Mac example, it’s just a function, so it doesn’t seem as much like magic. (Create an instance of the Application class, send it the .run() method, and you’re done — but now WTF does that class do?) Sure, it’s dealing with objects, but it’s really clear what’s going on and where you need to drill down to understand a particular thing better. Similarly, it gives you a very good idea of (for example) what loading a XIB (or NIB) does for you and where it fits into the application’s life cycle.

import Cocoa

func main() -> Int {
    // give me an application instance
    let app = NSApplication.sharedApplication()
    app.setActivationPolicy(.Regular) // no clue if this is necessary or default
                                      // behavior would be fine
    
    // build out our menu (iOS apps do not need to do any of this)
    let menubar = NSMenu()
    let appMenuItem = NSMenuItem()
    menubar.addItem(appMenuItem)
    app.mainMenu = menubar
    let appMenu = NSMenu()
    let appName = NSProcessInfo.processInfo().processName
    let quitTitle = "Quit " + appName
    
    // the only thing we're making that actually DOES something is implementing 
    // the Quit menu item
    let quitMenuItem = NSMenuItem(title: quitTitle, action: "stop:", keyEquivalent: "q")
    quitMenuItem.target = app
    appMenu.addItem(quitMenuItem)
    appMenuItem.submenu = appMenu
    
    // this is where we create our window and completely define how it looks -- it 
    //could be entirely replaced by loading a XIB
    let window = NSWindow(
        contentRect: NSRect(x: 0, y: 0, width: 400, height: 400),
        styleMask: NSTitledWindowMask,
        backing: .Buffered,
        `defer`: false // I assume defer is a keyword
    )
    window.cascadeTopLeftFromPoint(NSPoint(x: 20, y: 20)) // playing nice with 
                                 // other apps -- iOS apps don't do this either
    window.title = appName // iOS apps don't have a title, and we don't really 
                           //need to set this
    
    // now we're putting "Hello, world" in nice big letters in the Window
    let label = NSText(frame: NSRect(x: 0, y: 250, width: 400, height: 40))
    label.editable = false
    label.selectable = false
    label.string = "Hello, world"
    label.font = NSFont(name: "Helvetica Neue", size: 30.0)
    label.alignment = .Center
    label.backgroundColor = NSColor.windowBackgroundColor()
    window.contentView?.addSubview(label)
    
    // Having defined the window we're good to go
    window.makeKeyAndOrderFront(nil)
    app.activateIgnoringOtherApps(true)
    app.run() // Magic happens here!
    return 0;
}

I think this is really neat. You can copy and paste this code into an Xcode Swift playground, invoke main(), and voilà!

This is longer than the iPhone example, but a lot of the lines could be skipped if I went for brevity. If I didn’t prettify the text or define lots of constants (per the example I copied) then it would be just as short as the iOS example while, I think, using less magic and being clearer in what it does. I’m particularly proud of figuring out that I could get the Quit item to send a message (“stop:”) to NSApp without creating a selector referring to the local context (which would have forced me down the “define a subclass and yada yada magic happens” route… I think — I tried just defining a local function and passing its name as a selector but that did not work). That said, the keyboard shortcut for the quit menu item doesn’t work (as I note in the comments).

I actually don’t think the current Apple APIs are all that bad. They’re about on par with MacApp. There is a pretty big impedance mismatch between what the base initializers for the different UI elements do and what you would want them to do if you wanted to write applications this way, which I suspect is because almost no-one does, but while the defaults may not look good, they’re not dysfunctional. I could just create the label and set its string property and it would work fine — but why is the default background color not the window background color or — better — transparent? Why is the default font not the standard font and size for a label or a text field but something else entirely (the default text settings for a 1989 NeXT machine, perhaps)?

I think the problem is that this isn’t where learning app development starts (after, perhaps, a simpler, more engaging introduction — look how I can create a “Hello, world” app in one minute using Xcode). OK, fine, but how the hell would I make an application from scratch if I needed to? How is all this being hooked together? How does the Window hook up to its view controller? How can I use the same view controller for different views? (By the way, this is not addressed by my little example.) Understanding how the basic wiring works also makes Xcode’s ridiculously complicated UI more comprehensible — since you now know certain things exist, you know to look for them, and you also know how to simply make stuff work without guessing where it’s buried in the Xcode UI.

That said, this is far better than I expected. It would be great if there were initializers that allowed a typical task to be completed in one line (e.g. create a label, position it, and set its content) and if there were better defaults (and the label’s default font settings corresponded to reasonable expectations). If you omit the call to NSWindow.cascadeTopLeftFromPoint then the window appears in a really stupid place. Similarly, why isn’t there an NSMenuItem initializer that lets you stick the new item in a menu? I’m clearly missing something with respect to event handling, and that’s because the relevant initializers aren’t making it easy to do the common, correct thing. I don’t think you can argue that the badness of Apple’s Cocoa APIs is acting as a pons asinorum. In fact, it’s more like it’s creating extra work for mediocre programmers.

(Correction: ignore the stuff I say about the menu item not working and the correct event wiring not being implemented by default — by putting a capital “Q” as the parameter for the keyboard equivalent I inadvertently made the shortcut command+shift+q, which is why it appeared not to work when I didn’t bother to look at my menu. So my estimation of the APIs improves by a small notch — better default behavior, but more magic going on: how does the window know it belongs to the app?)