RAW Power 2.0

RAW Power 2.0 in action — the new version features nice tools for generating black-and-white images.

My favorite tool for quickly making adjustments to RAW photos just turned 2.0. It’s a free upgrade but the price for new users has increased to (I believe) $25. While the original program was great for quickly adjusting a single image, the new program allows you to browse directories full of images quickly and easily, to some extent replacing browsing apps like FastRAWViewer.

The major new features — and there are a lot of them — are batch processing, copying and pasting adjustments between images, multiple window and tab support, hot/cold pixel overlays (very nicely done), depth effect (the ability to manipulate depth data from dual camera iPhones), perspective correction and chromatic aberration support.

The browsing functionality is pretty minimal. It’s useful enough for selecting images for batch-processing, but it doesn’t offer filtering (beyond the ability to show only RAW images) or a way to quickly modify image metadata (e.g. rate images), so FastRAWViewer is still the app to beat for managing large directories full of images.

While the hot/cold pixel feature is lovely, the ability to show in-focus areas (another great FastRAWViewer feature) is also missing.

As before, RAW Power both functions as a standalone application and a plugin for Apple’s Photos app (providing enhanced adjustment functionality).

Highly recommended!

Epic Fail

Atul Gawande describes the Epic software system being rolled out in America’s hospitals.

It reads like a potpourri of everything bad about enterprise IT: standardizing on endpoints instead of interoperability, big-bang changes instead of incremental improvements, and a failure to adhere to the simplest principles of usability.

The sad thing is that the litany of horrors in this article is a list of solved problems. Unfortunately, it seems that in the process of “professionalizing” usability, the discipline has lost its way.

Reading through the article, you can just tally up the violations of my proposed Usability Heuristics, and there are very few issues described in the article that would not be eliminated by applying one of them.

The others would fall to simple principles like using battle-tested standards (ISO timestamps anyone?) and picking the right level of database normalization (it should be difficult or impossible to enter different variations of the same problem in “problem lists”, and easier to elaborate on existing problems).

There was a column of thirteen tabs on the left side of my screen, crowded with nearly identical terms: “chart review,” “results review,” “review flowsheet.”

I’m sure the tabs LOOKED nice, though. (Hint: maximize generality, minimize steps, progressive disclosure, viability.)

“Ordering a mammogram used to be one click,” she said. “Now I spend three extra clicks to put in a diagnosis. When I do a Pap smear, I have eleven clicks. It’s ‘Oh, who did it?’ Why not, by default, think that I did it?” She was almost shouting now. “I’m the one putting the order in. Why is it asking me what date, if the patient is in the office today? When do you think this actually happened? It is incredible!”

Sensible defaults can be helpful?! Who knew? (Hint: sensible defaults, minimize steps.)

This is probably my favorite (even though it’s not usability-related):

Last fall, the night before daylight-saving time ended, an all-user e-mail alert went out. The system did not have a way to record information when the hour from 1 a.m. to 1:59 a.m. repeated in the night. This was, for the system, a surprise event.

Face meet palm.

Date-and-time handling is a fundamental issue in all software, and the layers of stupidity that must have signed off on a system that couldn’t cope with daylight saving time boggle my mind.
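For the record, the repeated hour is only ambiguous if you store local wall-clock times; store UTC and the problem evaporates. A quick illustration using the 2018 US fall-back (November 4th, when clocks went from 2 a.m. EDT back to 1 a.m. EST) — the formatting setup is just for the demo:

```javascript
// Two distinct instants, one hour apart in UTC:
// 05:30Z is 1:30 a.m. EDT; 06:30Z is 1:30 a.m. EST (after the fall-back).
const first = new Date('2018-11-04T05:30:00Z');
const second = new Date('2018-11-04T06:30:00Z');

const wallClock = new Intl.DateTimeFormat('en-US', {
  timeZone: 'America/New_York',
  hour: 'numeric',
  minute: '2-digit',
});

// Both instants display as the same local wall-clock time...
console.log(wallClock.format(first), wallClock.format(second));
// ...but as UTC timestamps they remain distinct and correctly ordered.
console.log(first.toISOString(), second.toISOString());
```

A system that records the UTC instant (and, if needed, the UTC offset in effect) simply never hits the “surprise event.”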

A former colleague of mine linked to the US Web Design System as if this were some kind of intrinsically Good Thing. Hilariously, the site itself does not appear to have been designed for accessibility or even decent semantic markup, and it blocks robots.

Even if the site itself were perfect, the bigger problems are that (a) there are plenty of similar open source projects, and they could simply have blessed one; (b) it’s a cosmetic standard; and (c) there’s pretty much no emphasis on the conceptual side of usability. So, at best, it helps make government websites look nice and consistent.

(To be continued…)

Lost Productivity

A couple of months ago I listened to a Planet Money podcast discussing the mysterious slowdown in US productivity growth (the link is to one of several podcasts on this topic). Like most NPR content, the story got recycled through a number of different programs, such as Morning Edition.

The upshot was that productivity — which is essentially GDP/work — has stalled since the — I dunno — 90s, and it doesn’t make sense given the apparent revolutions in technology — faster computers, better networks, etc.

Anyway, the conclusion — and I’m basing this on memory because I can’t find the exact transcript — is that there’s a mysterious hole in productivity growth which, if it were filled, would add up to several trillion dollars’ worth of lost value added.

Well, I think it’s there to be found, because Free Open Source Software on its own adds up to several trillion dollars’ worth of stuff that isn’t measured by GDP.

Consider the dominant tech platforms of our time — Android and iOS. Both are fundamentally built on Open Source. If it weren’t for Open Source, iOS at minimum would have been significantly altered (let’s assume NeXTStep would have had a fully proprietary, but still fundamentally POSIX base) and Android could not have existed at all. Whatever was in their place would have had to pay trillions in licenses.

On a micro level, having worked through a series of tech booms from 1990 to the present — in the 90s, to do my job my employer or I had to spend about $2000-5000 for software licenses every year just to keep up-to-date with Photoshop, Director, Illustrator, Acrobat, Strata 3d, 3dsmax, Form*Z, and so on and so forth. By the mid aughts it was maybe $1000 per year and the software was So. Much. Better. Today, it’s probably down to less than $500.

And, over this same period, the kind of work I do has come to be done by far more people.

That’s just software. This phenomenon is also affecting hardware.

The big problem with this “lost productivity” is that the benefits are chiefly being reaped by billionaires.

Photo-backup, Hubic, and Other Stories

So, a year ago I started backing up to HubiC after my previous backup service decided to stop servicing retail customers. At the time, HubiC seemed on paper like a great option — potentially offering the simplicity and utility of DropBox with effectively unlimited storage capacity.

In practice, HubiC is useless. I’ve had a 2012 Mac Pro constantly connected to HubiC via a fast cable connection for 12 months and managed to back up only about 1/4 of the files I’ve pointed at it. The damn thing breaks down constantly. Every time I log in it wants me to change passwords. They just billed me for renewal, but I’m two days late and I can’t cancel my account without paying for another year. Good luck with that, guys.

Now, I haven’t been sitting on my hands while I watch HubiC fail to deliver on any of its promises. Most of my stuff continues to be backed up locally via Time Machine. The stuff I work on is stored in the cloud (iCloud, GitHub, Google Drive, and/or DropBox).

My big problem is photographs and video.

The disaster that is my Flickr account
Flickr’s auto-uploader turned my Flickr account into an unfixable mess

Now, when Flickr raised its “free” tier to offer 1TB of cloud storage for JPEGs, I jumped on that. It may not be RAW storage, but it’s better than nothing, and 1TB is enough space for a huge number of JPEGs. The big problem: Flickr’s (since discontinued) auto-uploader was so stupidly designed that it successfully rendered my Flickr account borderline useless. It created an album for every folder it found an image in, and it uploaded every image it found (including things like UI images inside applications and development trees), so I have “albums” comprising sprites from sample game development projects and logos for PHP templates. Meanwhile, Flickr’s account management tools look and work like something an intern abandoned in 2005, so just deleting stuff is an exercise in frustration. It looks to me like Flickr’s abandonware API isn’t really up to the task of even supporting a third-party application to untangle the mess.

And of course, since Yahoo changed hands and various security scandals unfolded, just logging into Yahoo accounts is a pain, and you need to navigate ads just to get into your account. Yahoo is the GeoCities of 2018.

Google photos in "action"
When I scroll to some arbitrary point on my timeline, this is what I see for some random (long) amount of time…

Recently, Google raised the “free” tier of photos.google.com to unlimited storage of photos, where RAW files and JPEGs are processed into high-but-not-full-quality JPEGs on-the-fly. I’ve tried it and it’s pretty damn good. The uploader is smart enough to skip files that are clearly not important photos (e.g. too small, wrong format) and ignore obvious duplicates. The problems are (i) that the uploader application periodically just hangs and needs to be manually killed and restarted, (ii) the web app seems to be weirdly slow and unreliable (I can log on with two machines side-by-side and they’ll see different subsets of my photos), (iii) no Apple TV support, and (iv) the online photo editor seems to need one or two extra clicks to accomplish anything (but it’s a lot better than nothing). I’m pretty confident that my stuff is there; I’m just not confident in my ability to see a given photo from a given machine on a given day. It’s certainly the most complete, easy-to-navigate, and shareable archive I’ve ever managed to create of my photographs. And if I can find a photo there, I can locate the original RAW image pretty easily.

Now, the absolute best system for dealing with my photographs thus far is iCloud. If I could simply rent 10TB from iCloud for a reasonable price (let’s say, $25/month) and get my Mac to automatically sync multiple volumes to iCloud, my problem would be solved. Obviously, I’m a happy Apple customer. If I were a more-than-casual Windows or Linux user then this would not be a useful option to me, and I’m not sure what I’d do, because I’m pretty sure there’s no equivalently seamless option for people who don’t want to pay the “Apple Tax”. Google Drive isn’t even a tolerable substitute for DropBox (although I think it has Sharepoint beaten).

Here’s where iCloud beats all other options:

  • I don’t need to think about it or do anything. (Well, on a desktop machine, I do need to actually store my data in iCloud.) If I take a photo, it ends up in the cloud pretty quickly (basically, when the device gets recharged while on a LAN, if not sooner).
  • By default, full-resolution images are not propagated to all my devices (as would be the case for DropBox, or Hubic if it actually worked). Instead, as with everything in iCloud it’s available on-demand. (Indeed, it’s a bit reminiscent of the way iTunes deals with movies… superficially less convenient than pure streaming, but a lot more flexible and useful in practice).
  • If I ingest a RAW photo from a camera onto a device, then it’s in the cloud and available from any device on-demand (but it’s not wasting space on all my devices).
  • If I want to work on a photo, I can use the best native tools that are available on the device I’m using — seamlessly (although I’m inclined to actively avoid Adobe applications because Adobe’s workflow involves use of Adobe’s barely functional Cloud ecosystem).

The big problem — of course there has to be one — is that Apple’s highest storage tier is 2TB. I’m currently on 200GB which is plenty for the stuff I need that isn’t photos and videos, but hopelessly inadequate for photos and videos. 2TB (the next tier up, and it’s competitively priced) would be sufficient for my photos and videos if I were to curate them, but I don’t want to curate shit. I want to dump it in the cloud and not think about it.

Missing in Action

All of this adds up to a bunch of pretty disappointing non-solutions. Even though Apple provides a file sync system that works pretty well for personal photographs, it wouldn’t work for, say, a small photography business. (I guess you could use some kind of “family plan”, but I’m pretty sure that would run you into weirdness pretty fast.) And it’s not like we’re talking advanced workflow support here — I just want my photos backed up and available.

Where is a tool that automatically detects blurred, underexposed, or overexposed photos and flags them as less worthy of backup? (Google’s photos app does a pretty good job of automatically correcting exposure, I wonder if it’s smart enough to task the uploader with going back to the RAW and reprocessing and re-uploading the photo?)
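I don’t know of such a tool either, but at least the exposure half seems tractable. Here’s a crude sketch that flags an image as under- or overexposed when most of its tonal mass piles up at one end of the histogram (the function name and thresholds are my invention, not any real product’s):

```javascript
// pixels: array of grayscale values in 0..255.
// Flags an image whose tonal mass is crowded into the darkest or
// brightest bins -- a rough proxy for under-/overexposure.
function exposureFlag(pixels, { lowCut = 32, highCut = 223, fraction = 0.8 } = {}) {
  let dark = 0, bright = 0;
  for (const v of pixels) {
    if (v <= lowCut) dark++;
    else if (v >= highCut) bright++;
  }
  if (dark / pixels.length > fraction) return 'underexposed';
  if (bright / pixels.length > fraction) return 'overexposed';
  return 'ok';
}

// A frame that is almost entirely near-black:
const murky = new Array(1000).fill(10);
console.log(exposureFlag(murky)); // -> "underexposed"
```

A real backup tool would presumably combine something like this with blur detection and let you tune how aggressive the triage is.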

Where is the tool that remembers which photos have been opened or zoomed in and flags them as more interesting or worthy of backup?

Where is the tool that correlates the GPS location data of your iPhone photos and tentatively applies them to your corresponding camera photos?
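That tool is basically a nearest-timestamp join: for each camera photo, find the phone photo taken closest in time and tentatively borrow its location if it falls within some window. A sketch (the function name, data shape, and the 10-minute window are all mine):

```javascript
// phonePhotos: [{time, lat, lon}], cameraPhotos: [{time}]; times in ms.
// Tentatively copies each camera photo's location from the phone photo
// taken nearest in time, if one falls within maxGapMs.
function borrowLocations(cameraPhotos, phonePhotos, maxGapMs = 10 * 60 * 1000) {
  return cameraPhotos.map(photo => {
    let best = null;
    for (const p of phonePhotos) {
      const gap = Math.abs(p.time - photo.time);
      if (gap <= maxGapMs && (best === null || gap < Math.abs(best.time - photo.time))) {
        best = p;
      }
    }
    // Mark borrowed coordinates as guesses so they can be reviewed.
    return best ? { ...photo, lat: best.lat, lon: best.lon, guessed: true } : photo;
  });
}
```

Camera photos with no phone photo nearby in time are left untouched.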

Aperture used to collect photos from bursts into a single set and represent them with what it guessed was the best one. Where did this idea disappear to?

There’s a ton of low-hanging fruit here. Someone, please do something. I’m busy.

A Brief Foray into Random Name Generation

I got a bee in my bonnet about name generation this morning, so here’s a simple Javascript module for randomly generating names:

/*
# NameGenerator

Usage:

  const starNameGenerator = new NameGenerator(['Ceti Alpha', 'Scorpio', 'Draconis'...]);
  starNameGenerator.generate(); // -> random name

Works better with a decent-sized set (hundreds) of thematically similar examples to work from.
*/
/* global module */

function pick(array) {
  return array[Math.floor(Math.random() * array.length)];
}

class NameGenerator {
  // data is a map from character pairs to observed successors;
  // consider the examples "how", "now", "brown", "cow":
  // the pair "ow" would have the following successors
  // [undefined, undefined, "n", undefined] (undefined -> end of word)

  constructor(examples) {
    const data = {'': []};
    examples.
    map(s => s.toLowerCase()).
    forEach(example => {
      let pair = '';
      data[pair].push(example[0]);

      for(let i = 0; i < example.length; i++) {
        pair = pair.slice(-1) + example[i];
        if (! data[pair]) data[pair] = [];
        data[pair].push(example[i + 1]);
      }
    });

    this.data = data;
  }

  generate() {
    let s = pick(this.data['']);
    let next = pick(this.data[s]);
    while(next){
      s += next;
      next = pick(this.data[s.slice(-2)]);
    }
    return s;
  }
}

module.exports = NameGenerator;

I wrote a much more convoluted (but simple-minded) star name generator for my galaxy generator several years ago. The approach I took was to break a collection of star names into syllables (starting, middle, and ending) and then, given a range of syllable counts, assemble a name out of random pieces.
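That older approach can be sketched in a few lines (the syllable lists below are made-up stand-ins, not the ones the original generator used):

```javascript
// Assemble a name from a random starting syllable, zero or more
// middle syllables, and a random ending syllable.
function syllableName(starts, middles, ends, minMiddles = 0, maxMiddles = 2) {
  const pick = arr => arr[Math.floor(Math.random() * arr.length)];
  const count = minMiddles + Math.floor(Math.random() * (maxMiddles - minMiddles + 1));
  let name = pick(starts);
  for (let i = 0; i < count; i++) name += pick(middles);
  return name + pick(ends);
}

// Hypothetical example syllables:
console.log(syllableName(['al', 'be', 'ga'], ['de', 'ri'], ['neb', 'ran', 'tar']));
```

The weakness, of course, is that the pieces are chopped up by hand and the joins between syllables carry no information about which combinations actually occur.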

Today it occurred to me that I’ve never explicitly implemented anything using Markov chains before, so how about I build something that way and see how it compares? If you follow the link, you’ll see examples of star names generated randomly the old way (names with “bad words” in them are rejected).

I took a list of proper star names from Wikipedia and cleaned it up (e.g. the Greek-letter prefix of a star name simply indicates its brightness relative to other stars in the same constellation, while a Roman numeral is simply an indicator of a star being part of a binary or trinary system). This gave me a bit over 600 star names with which to seed the generator, and the results are pretty nice. (Again, no need for bad word filtering.)

The major disadvantage of the new generator is that it can easily generate really long names, because of the way it terminates. A more sophisticated generator that weighed the probabilities by things like overall length and length of the current word would probably help here, but that might be overthinking it; it’s pretty easy to just reject overly long names.
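Rejection is easy to bolt on. A sketch of a wrapper (hypothetical, not part of the module above) that re-rolls until the name fits, with a retry cap so it can’t loop forever:

```javascript
// Wraps anything with a generate() method; re-rolls until the name
// is within maxLength (or attempts run out, returning the last try).
function generateBounded(generator, maxLength = 12, attempts = 20) {
  let name = generator.generate();
  while (name.length > maxLength && --attempts > 0) {
    name = generator.generate();
  }
  return name;
}

// Usage with any generator-shaped object:
const demo = { generate: () => 'x'.repeat(5 + Math.floor(Math.random() * 15)) };
console.log(generateBounded(demo, 12));
```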

In general, I think the new generator produces more pronounceable names than my earlier attempt, and some of the new names it generates seem like they should be real names, which is exactly what I’m hoping for.

Algorithm

The algorithm is very simple. The constructor looks at which characters follow a given pair of characters in the input data, so if you start with “how”, “now”, “brown”, “cow” you get the following data for the pair “ow”: [undefined, undefined, ‘n’, undefined]. So, using this data to generate names, 75% of names will end immediately after an ‘ow’ and 25% will have an ‘n’.

In this system, the first character of a name follows the empty string, while the second character of a name follows the first letter. It follows that, using [“how”, “now”, “brown”, “cow”], all names will begin with ‘h’, ‘n’, ‘b’, or ‘c’, most will end in ‘w’, and the rest will end in ‘n’. Not super interesting.

Let’s try slightly more interesting seed data: [‘Frodo’, ‘Peregrin’, ‘Meriadoc’, ‘Bilbo’, ‘Adalgrim’, ‘Bandobras’, ‘Celandine’, ‘Doderic’, ‘Erin’, ‘Griffo’]. This doesn’t seem like it will be enormously fruitful at first glance, but it immediately generates lots of new names that, to me, sound authentic: Adalgrin, Adobris, Grine, Bandine, Froderim.
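Condensed into a single function (the same technique as the class above, just inlined for illustration), the whole thing is barely a dozen lines:

```javascript
// Builds the pair -> successors table from the examples and generates
// one name; undefined as a successor marks end-of-name.
function markovName(examples) {
  const data = { '': [] };
  for (const example of examples.map(s => s.toLowerCase())) {
    let pair = '';
    data[pair].push(example[0]);
    for (let i = 0; i < example.length; i++) {
      pair = pair.slice(-1) + example[i];
      (data[pair] = data[pair] || []).push(example[i + 1]);
    }
  }
  const pick = arr => arr[Math.floor(Math.random() * arr.length)];
  let s = pick(data['']);
  for (let next = pick(data[s]); next; next = pick(data[s.slice(-2)])) s += next;
  return s;
}

console.log(markovName(['Frodo', 'Peregrin', 'Meriadoc', 'Bilbo', 'Adalgrim']));
```

With a single example there’s only one possible walk through the table, so `markovName(['Frodo'])` always yields “frodo” — a handy sanity check.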

And here’s a link to a jsfiddle to see it in action (with a bigger set of names from Middle Earth as the seed). One of the really nice things about it is that you don’t need to filter out bad words, because they pretty much don’t get created if they’re not in the source data.

It occurs to me that a lot of the random content generation stuff I’ve done in the past was, in effect, recreating Markov chains naively, and understanding what I’ve done in those terms is powerful and clarifying.

And with that, I leave you with a random Jabberwockish word generator. Don’t bewortle! And have a borpallith day.