Heterogeneous Lists

b8r’s demo site uses a heterogeneous list to display source files with embedded documentation, tests, and examples

One of the things I wanted to implement in bindinator was heterogeneous lists, i.e. lists of things that aren’t all the same. Typically, this is implemented by creating homogeneous lists and then subclassing the list element, even though the individual list elements may have nothing more in common with one another than the fact that, for the moment, they’re in the same list.

This ended up being pretty easy to implement in two different ways.

The first thing I tried was effectively an empty (as in no markup) “router” component. Instead of binding the list to a superclass component, you bind to a content-free component (code, no content) which figures out which component you really want programmatically (so it can be arbitrarily complex) and inserts one of those over itself. This is a satisfactory option because it handles both simple and complex cases quite nicely, and it didn’t require touching the core of bindinator.

Here’s the file-viewer code in its entirety:

<script>
    // data, component, and b8r are provided to the component's script by bindinator
    switch (data.file_type || data.url.split('.').pop()) {
        case 'md':
        case 'markdown':
            b8r.component('components/markdown-viewer').then(viewer => {
                b8r.insertComponent(viewer, component, data);
            });
            break;

        case 'text':
            b8r.component('components/text-viewer').then(viewer => {
                b8r.insertComponent(viewer, component, data);
            });
            break;

        case 'js':
            b8r.component('components/literate-js-viewer').then(viewer => {
                b8r.insertComponent(viewer, component, data);
            });
            break;
    }
</script>

(I should note that this router is not used for a list in the demo site, since the next approach turned out to meet the specific needs for the demo site.)

The example of this approach in the demo code is the file viewer (used to display markdown, source files, and so on). You pass it a file and it figures out, from the file type or extension, which viewer component is appropriate to display it with. In turn this means that a PNG viewer, say, need have nothing in common with a markdown viewer, or an SVG viewer. Or, to put it another way, we can use a standalone viewer component directly, rather than creating a special list component and mixing in or subclassing the necessary stuff.
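
Using the viewer programmatically looks something like the following; b8r.component and b8r.insertComponent are the same calls the router itself uses, while the component path, target element, and file record here are hypothetical:

// load the router component and insert it with a file record as its data;
// the target element and the file object are made up for illustration
b8r.component('components/file-viewer').then(viewer => {
  b8r.insertComponent(viewer, document.querySelector('.preview'), {
    url: 'README.md'  // the router will pick markdown-viewer for this
  });
});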

You’ll note that this case is pretty trivial — we’re making a selection based directly on the file_type property, and it seemed to me that it shouldn’t be necessary to use a component or write any code for such a simple case.

The second approach was that I added a toTarget called component_map that lets you pick a component based on the value of a property. This maps onto a common JSON pattern where you have a heterogeneous array of elements, each of which has a common property (such as “type”). In essence, it’s a toTarget that acts like a simple switch statement, complete with a default option.

The example of this in the demo app is the source code viewer which breaks up a source file into a list of different kinds of things (source code snippets, markdown fragments, tests, and demos). These in turn are rendered with appropriate components.
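
For the source-code viewer, the parts list is roughly shaped like this (a hypothetical sketch: the type values match the map below, but the other property names and content are assumptions):

// each part carries a 'type' the binding below switches on;
// 'source' is a hypothetical property name for the fragment's content
const parts = [
  {type: 'markdown', source: 'documentation fragment'},
  {type: 'js', source: 'code snippet'},
  {type: 'test', source: 'inline test'},
  {type: 'component', source: 'interactive demo'}
];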

This is what a component_map looks like in action:

<div
  data-list="_component_.parts"
  data-bind="component_map(
    js:js-viewer|
    markdown:markdown-viewer|
    component:fiddle|
    test:test
  )=.type"
>
  <div data-component="loading"></div>
</div>

From my perspective, both of these seem like pretty clean and simple implementations of a common requirement. The only strike against component_map is a lack of obviousness, in particular the quasi-magical difference between binding to _component_.parts vs. .type, which makes me think that while the latter is convenient to type, forcing the programmer to explicitly bind to _instance_.type might be clearer in the long run.

P.S.

Anyone know of a nice way to embed code in blog posts in WordPress? All I can find are tools for embedding hacks in WordPress.

Node-Webkit Development

I’m in the process of porting RiddleMeThis from Realbasic (er Xojo) to Node-Webkit (henceforth nw). The latest version of nw allows full menu customization, which means you can produce pretty decently behaved applications with it, and they have the advantage of launching almost instantly and being able to run the same codebase everywhere (including online and, using something like PhoneGap, on mobile). It has the potential to be cross-platform nirvana.

Now, there are IDEs for nw, but I’m not a big fan of IDEs in general, and I doubt that I’m going to be converted to them any time soon. In the meantime, I’ve been able to knock together applications using handwritten everything pretty damn fast (as fast as with Realbasic, for example, albeit with many of the usual UI issues of web applications).

Here’s my magic sauce — a single shell script that I simply put in a project directory and double-click to perform a build:

#!/bin/bash
# work from the directory this .command file lives in
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
echo "$DIR"
cd "$DIR"
pwd
zip app.nw *.html package.json *.png *.js *.css *.map
mv app.nw ../node-webkit.app/Contents/Resources/
cp nw.icns ../node-webkit.app/Contents/Resources/
cp info.plist ../node-webkit.app/Contents/
open ../node-webkit.app

What this shell script does is set the working directory to the project directory, zip the source files into an archive named app.nw, then move the archive (and copy the icon and the modified info.plist) into the right places inside the node-webkit app (assuming it’s in the parent directory), and launch it. This basically takes maybe 2s before my app is running in front of me.

info.plist is simply copied from the basic nw app, and modified so that the application name and version are what I want to appear in the about box and menubar.

This is how I make sure the standard menus are present (on Mac builds):

var gui = require('nw.gui'),
    menubar = new gui.Menu({ type: 'menubar' });
if(process.platform === 'darwin'){
    // appName is whatever name you want shown in the Mac application menu
    menubar.createMacBuiltin(appName);
}
gui.Window.get().menu = menubar;

And finally, to enable easy debugging:

// 'dev' is an ordinary boolean flag set elsewhere in the app
if(dev){
    var debugItem = new gui.MenuItem({ label: 'Dev' }),
        debugMenu = new gui.Menu(),
        menuItem;
    debugItem.submenu = debugMenu;
    menubar.append(debugItem);
    menuItem = new gui.MenuItem({ label: "Open Debugger" });
    debugMenu.append(menuItem);
    menuItem.click = function(){
        // opens the Chromium developer tools for this window
        gui.Window.get().showDevTools();
    };
}

So with a few code snippets you get a Mac menubar (where appropriate), a Dev menu (if dev is truthy) and a single double-click to package and build your application in the blink of an eye. You can build your application with whatever Javascript / HTML / CSS combination suits you best.

Obviously, you don’t get a real “native” UI (but the next best thing is a web UI, since people expect to deal with web applications on every platform), and this isn’t going to be a great way to deliver performant applications that do heavy lifting (e.g. image processing), but it’s very easy to orchestrate command-line tools, and of course the chrome engine offers a ridiculous amount of functionality out of the box.
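
For example, shelling out to a command-line tool from a node-webkit app is just a matter of node’s child_process module; the command below is a stand-in (sips is the Mac’s built-in image tool):

var exec = require('child_process').exec;

// run an external tool asynchronously; the command is just an example
exec('sips -Z 128 input.png --out thumb.png', function(err, stdout, stderr){
    if(err){
        console.error(stderr);
    } else {
        console.log('thumbnail written');
    }
});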

I should mention that Xojo is actually quite capable of producing decent Cocoa applications these days. (It sure took a while.) But its performance is nowhere near nw in practical terms. For example, C3D Buddy is able to load thousands of PNGs in the blink of an eye; the equivalent Xojo application is torpid. Now if I were doing heavy lifting with Javascript, perhaps it would tilt Xojo’s way, but it’s hard to think of just how heavy the lifting would need to get.

node-webkit

C3D Buddy 2.0 in action — built in an evening using node-webkit

I don’t do very much desktop software development these days, and I’ve become increasingly frustrated with Realbasic (now Xojo), so the prospect of forking out $500 for an upgrade that may or may not be usable did not fill me with glee. Furthermore, as the years roll on, there’s more and more functionality tied to the web that desktop-centric development tools struggle to work with.

I was dimly interested in a thread on Hacker News this morning concerning a workflow for developing desktop applications with node. Turns out that the workflow was mostly to do with things like angular and jade, which I’m not especially keen on, and not so much with the actual desktop stuff — the heavy lifting for which is all done by node-webkit. So I started looking into node-webkit.

I like a tool which has you up-and-running in about three paragraphs.

It goes something like this:

  • Download an appropriate node-webkit binary. (Disappointingly, the Mac version is restricted to 32-bit.)
  • Write an index.html file. (The sample “hello world” app calls node inline.)
  • Write a package.json file. (The documentation suggests it’s like a “manifest” file, but really it’s more like a configuration file; see the minimal example after this list.)
  • Zip this stuff into an archive whose name ends in .nw.
  • Double-click the archive.
  • Voila, your app works.
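
The manifest really is minimal; something along these lines is enough to launch (the name and window dimensions are placeholders, and the window block is optional):

{
  "name": "hello-world",
  "main": "index.html",
  "window": {
    "width": 800,
    "height": 600
  }
}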

After working on this for about fifteen minutes, I knocked up a bash script (which I turned into a .command file, allowing me to double-click it in Finder) which does the archiving and then launches the file in one step. (It does leave dead terminal windows lying around, and it doesn’t quit the app if it’s still running. I think I may build an app for building apps if I do much more of this.)

And to build a distributable application you can simply put the archive inside the application bundle (on the Mac), renaming it “app.nw”, or name the file package.nw and put it in the same directory as the application (for those benighted platforms that don’t have bundles). One minor annoyance is that it’s not clear exactly what needs to be changed in the Info.plist file to fully customize your application.

So what is node-webkit? It’s essentially a version of Chromium that puts node.js alongside the DOM in your web pages, along with a few custom HTML and DOM extensions that let you do things like handle platform-native file requesters.
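
The “calls node inline” bit means something like this works as an entire index.html (it is close to the shipped hello-world example); process and require come from node, which is running alongside the DOM:

<!DOCTYPE html>
<html>
  <head><title>Hello World</title></head>
  <body>
    <h1>Hello World</h1>
    <!-- process comes from node, not the browser -->
    We are using node.js <script>document.write(process.version)</script>.
  </body>
</html>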

I should note that it’s a potential security nightmare as it stands. There’s no sandboxing (that’s kind of the point), so deploying an app that loads your banking website and then watches you press keys is totally doable. That said, it’s a desktop app so it can also delete all your files or encrypt them and hold them hostage.

Anyway, I had a lot of fun this evening rewriting a utility application I originally wrote in Realbasic a few years ago. It probably took me about twice as long to write it as it took me to do it in Realbasic. Part of that is I know (or knew) Realbasic very well, and I don’t know node very well. Another part of it is that Realbasic is almost always synchronous while node is almost always asynchronous. Writing code that spelunks a directory tree asynchronously is a lot more complex both conceptually and in practice than doing so in linear fashion, but the payoff is quite impressive — despite running an “interpreted” language, a task that took significant time in Realbasic (loading and displaying hundreds of thumbnails) happens in the blink of an eye.
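
To give a feel for it, here is a minimal sketch of that kind of asynchronous directory walk using node’s callback-style fs API (no error handling, hidden-file filtering, or symlink checks):

var fs = require('fs'),
    path = require('path');

// walk a directory tree asynchronously, collecting PNG paths;
// the callback fires once every subdirectory has reported back
function findPNGs(dir, callback) {
    fs.readdir(dir, function(err, names) {
        if (err || !names.length) { return callback([]); }
        var results = [],
            pending = names.length;
        names.forEach(function(name) {
            var file = path.join(dir, name);
            fs.stat(file, function(err, stats) {
                if (stats && stats.isDirectory()) {
                    findPNGs(file, function(found) {
                        results = results.concat(found);
                        if (--pending === 0) { callback(results); }
                    });
                } else {
                    if (/\.png$/i.test(name)) { results.push(file); }
                    if (--pending === 0) { callback(results); }
                }
            });
        });
    });
}

// usage (hypothetical path)
findPNGs(process.env.HOME + '/Pictures', function(pngs) {
    console.log(pngs.length + ' PNGs found');
});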

One of the things that’s especially intriguing about node-webkit is that it gives you control over the browser you normally don’t have (e.g. window placement, file system access) — these are a constant usability sore-point in the project I am currently working on (which is a web app that is replacing a suite of desktop apps). It would be so much easier to get a lot of the things we want to do working smoothly using a hybrid desktop application built on top of something like node-webkit — it’s kind of a lemma of Alan Kay’s observation that anyone who wants to write really good software needs to build hardware — anyone who wants to write really good web apps really needs to build the browser.

The github repo for my project is here. It’s not a very general-purpose application; if you don’t use Cheetah 3D it’s completely useless.

A Little Privacy

Securing data from prying eyes is pretty much a solved problem. PGP is just as good as ever. So all you need to do to receive communications securely from another person is to create a PGP Private/Public key pair, broadcast your public key (hint — it’s shorter) to anyone who might want to contact you, and then decrypt incoming messages using your private key on the way in.
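
In concrete GnuPG terms the whole workflow is a handful of commands (the address and filenames are placeholders):

# generate your key pair (interactive)
gpg --gen-key

# export your public key so anyone can encrypt messages to you
gpg --armor --export alice@example.com > alice-public.asc

# a correspondent imports your public key and encrypts a message with it
gpg --import alice-public.asc
gpg --armor --encrypt --recipient alice@example.com message.txt

# you decrypt incoming mail with your private key
gpg --decrypt message.txt.asc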

This only addresses security. Authentication is a separate issue, possibly just as important, and if anything harder to address (because it involves trusting third parties), and I won’t deal with it here. Privacy is plenty to deal with right now.

So we’ve heard that secure communications providers are shutting down or destroying their servers rather than surrender to demands from the US government (NSA, FBI, CIA? We don’t know which branch or branches because they’re not allowed to say — lovely, huh?). What demands might these service providers be concerned about?

  • Surrender private keys (why would they even have these?)
  • Install malware on their servers or on users’ machines (why would a secure email provider install any software on its users’ machines?)
  • Help surveil users (e.g. notify government agency when a specific user addresses his/her mail)
  • Monitor metadata (e.g. while the body of an email might be encrypted, the header information has to be plaintext).

Can you think of other things?

There’s a recent thriller (you probably haven’t heard of it — it tanked at the box office) starring John Cusack called Numbers Station. The idea is that the CIA maintains a network of shortwave broadcast stations that send out encrypted messages to sleeper agents. To do this they need a specially trained cryptographer and a network of highly fortified shortwave transmitters. Or something. It’s a stupid, stupid premise. (But not as bad as 2012.)

Let’s suppose we want to communicate with field agents securely. Well, before leaving HQ our field agent creates a private/public key pair and leaves the public key behind. He/she secretes the private key on his/her person (committing it to memory is probably impossible, so it might be in a tiny subcutaneous LED projector!) and then goes on his/her merry way, having told his/her handlers to post messages on usenet using his/her public key. There’s no other step required.

Now, how do we handle authentication? Hey, I said this wasn’t about authentication! In any event, same way we handle it using any other less secure communication channel. Perhaps authentic messages are agreed to end with “Signed Bob” or “The peanut walks by night”. Doesn’t matter — we’re talking about security not authentication.

How does Double Secret Agent VII find the publicly posted messages on usenet? Any number of ways. Perhaps they’re in messages entitled “but I like wesley” on alt.wesley.crusher.die.die.die. Perhaps they’re embedded in the comment tags of PNG images posted on alt.sex.donkeys. It doesn’t matter.

Heck, you could just use mailinator. Want to email Double Secret Agent VII? Send an email to [email protected] and use the correct key. Done.

The beauty of the usenet example is that thousands of people will be downloading the message accidentally as a matter of course, and the message will be automatically distributed to thousands of servers whether anyone reads it or not. I really don’t know how PRISM, et al, would help against a determined, competent opponent communicating this way. This is probably why PGP had the US Government so riled up back in the 90s.

So, what about losing track of Agent VII? Simple. You’re Control (or whatever). If a communications channel is compromised (e.g. Kaos figures out you’re posting messages as EXIF data in pornographic images and deletes them or posts confusing spam) then Agent VII can use the Control’s public key to phone home. It’s not complicated.

So, here’s my modest suggestion for creating a secure replacement for email that everyone can use, and which can be gradually migrated to.

  1. set up a standard mail server.
  2. configure it to bounce any email that does not appear to be encrypted using PGP, with a message saying “if you want to contact [email protected] then use [email protected]’s public key to encrypt the message, and provide your own public key so a secure response can be sent”, and provide a link to a web page for securely sending such emails if the sender doesn’t want to deal with PGP directly (a sketch of this check appears after the list).
  3. outgoing emails are decorated with a public key for securely replying to the sender.
  4. account holders can have any number of handles (“email addresses”) associated with a given public key. They can access their email simply by asking for it. (Either there are no passwords or everyone has the same password.)
  5. the server holds public keys so it can send the messages in item 2 (and provide a convenient system for sending the messages).
  6. Provide a simple to use web-based client for the service (which does all its encryption / decryption client-side) and provide links to a number of alternative open source clients. Make all the clients as transparent as possible.
  7. Provide a web-based client that deals only in encrypted data. (I.e. requires the user to manually extract and decrypt incoming messages, and encrypt outgoing messages.)
  8. Pay for all of this by charging a small amount (say $0.01) for each message sent to a user. (This is Bill Gates’s proposed solution to spam from way back, and if we’re going to migrate off email, we might as well cash in that idea.) Any profits could be donated to MSF, or the campaign to drown Jenny McCarthy in cat vomit.
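
As a sketch of item 2: recognizing an encrypted message is easy, because ASCII-armored PGP messages begin with a standard header line. This is illustrative node code, not any existing service’s API:

// an ASCII-armored PGP message always begins with this header line
var PGP_HEADER = '-----BEGIN PGP MESSAGE-----';

// item 2: anything that doesn't contain an armored PGP message gets bounced
// with the "please encrypt, and include your public key" response
function shouldBounce(messageBody) {
    return messageBody.indexOf(PGP_HEADER) === -1;
}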

Now, practically speaking, we could use passwords simply to prevent nuisance denial of service attacks, but we’d have absolutely no problem giving those passwords to anyone who showed up to our office in a sufficiently impressive suit, or driving a big enough SUV.

So, this gives us a pretty secure email system that is fairly interoperable with existing email systems (modulo requiring users “outside” the system to opt into using it, at least to contact its users) and which doesn’t hold any private information or keys at all. Heck, it can simply expose all of its data to Google. (Indeed, it could keep its code repositories exposed so that suspicious users could review changes to its codebase.) Now, it can’t be used with idiotic services that send you your login details, but you can either use another email service (e.g. gmail or mailinator) for those or implement a cryptographic bridge (e.g. if you subscribe using an email address prefixed with “insecure-” then it might do the encryption serverside for you).

Note that as described, the system doesn’t conceal metadata. So if [email protected] sends [email protected] orders to assassinate that pesky reporter, the fact that such a communication occurred (if not its content) is stored on the server. Of course, you could use the web client to anonymously send and/or receive the message, and use Tor to avoid leaving too much of a trace of having done that, but it’s kind of inconvenient, so normal people won’t do it very often. A normal person wants an email client that Just Works (this can provide that) and to exchange email with other people (this can get you there).

The proposed system provides end-to-end encryption of message content without the server needing to store any private keys and would allow all key components of the system to run in the browser (and thus have openly inspectable runtime code that could be monitored for changes). But it won’t stop the NSA from hitting you with a $5 wrench until you tell them where you keep your private key.

The GIMP Revisited

GIMP in Single-Window Mode

As part of my quest to replace Adobe’s Creative Suite with less expensive alternatives, I revisited The GIMP (“The GNU Image Manipulation Program”), the best-known open-source Photoshop clone. Earlier versions of GIMP were hard to install, required X11, tended to ship with non-standard fonts, and had a confusing multi-window interface. All of these issues have been addressed as of August 2012 (yes, I’m late to the party).

(Inkscape still requires X11, and since there are plenty of inexpensive Illustrator alternatives that do not require X11, I am ignoring it for now.)

The version of GIMP I played with is the one that emphasizes Python scriptability. I didn’t actually try scripting it, but I’ll assume that the Python stuff is there and works (scriptability is generally the most robust feature of open source projects).

First the good:

  • It’s free and open source.
  • Supports Photoshop’s pair-kerning shortcuts (option-left/right-arrow).
  • You no longer need to install X11 or jump through hoops to get a Mac version of GIMP. You can go straight to the main download page and get a .dmg.
  • The user interface is much improved over earlier versions. In many respects just as refined as Photoshop’s (and just as cluttered).
  • Single window mode.
  • Photoshop files are reasonably well supported (layers come out pre-rendered, but that’s about as much as you get from anything other than Adobe’s applications).
  • Native fonts are supported.

There are some warts:

  • The cross-platform imaging code is clearly not nearly as performant as Apple’s Core Image stuff — filters and even simple screen redraws are quite sluggish compared with any Core Image based editor (or Photoline for that matter).
  • There’s no >32bpp color support, which means it does not replace Photoshop for HDR work. I could not open .hdr files.
  • There are no layer effects.
  • The online help did not work (I tried the “Read Online” option and got a weird error message).
  • The windows, including file pickers, while quite attractively laid out are non-native, which can make simple things (like accessing non-boot-drives) a little annoying (you need to type in paths to get to things — luckily you can shortcut them once you get there).
  • The UI has some quirks, including palettes that float above dialogs and a funky “no document is open” document window that when closed quits the application.
  • Key filters copied from Photoshop are notably less useful (e.g. the difference cloud filter doesn’t create tiling fractals).

In a nutshell, GIMP is free and very capable, but lacks the non-destructive filters / layer effects of Acorn. Compared with Photoline, it’s scriptable but lacks non-destructive filters and >32bpp support. Compared with Pixelmator, it has a less refined user interface and lacks Core Image support.

The GIMP ★★★★★ (relative to the other Photoshop alternatives) — and note that I do not consider price in my ratings.