A Modest Proposal

I think if we’re going to have guns they shouldn’t be concealed. They should have day-glo grips, stocks, and cases (mandatorily lurid pink, I suggest), have built-in GPS sensors, and make wah-wah noises when they’re moved around; the battery that runs the GPS and buzzer should also be required for the gun to fire; and every gun should have sample fired bullets and casings registered in a national database (paid for by the bullet tax, see below). After all, if they’re supposed to deter crime shouldn’t criminals know they’re there? I certainly want to know who has guns and avoid them.

Now of course people will argue “if it’s illegal to conceal weapons then only criminals will have concealed weapons”. That’s true, but they need to be careful, especially if the penalties are harsh. E.g. if someone doesn’t like you they can just tell the police you habitually carry a gun. Similarly, it would be illegal to sell guns without these things and when you tear out the mechanism your last known location would be in the cloud.

The GPS sensors and buzzers will run out of batteries and could also be gouged out, but not keeping your batteries charged would also be a crime, and when your gun stopped responding the authorities would know when and where it happened.

We could require gun ranges to match every bullet fired on the range, and every casing, against the database (expensive, but the bullet tax will pay for it). If a bullet doesn’t have a registered match (e.g. the gun’s owner is not the right person or the gun’s rifling has been tampered with) then we either arrest the owner or register the new bullet.

The buzzers and day-glo would kind of mess up hunting, but the right to go hunting is not enshrined by the constitution — the second amendment is solely there for purposes of preserving us from tyranny, and at such time as we desire to overthrow the government we can always pull the crap out, right? After all, armed insurrection is also illegal. Perhaps to honor the second amendment we can require the mechanisms to be removable in some straightforward way — on the strict understanding that removing them is a felony.

All this might sound horribly draconian. It’s supposed to be. The argument is that the 2nd amendment protects our right to overthrow tyrants. I would argue the 4th amendment is far more important (and we can set up the GPS system so it merely tracks your gun anonymously until it’s involved in a shooting).

When a gun owner moves into your neighborhood they should be required to post a public notification in the “known sex offenders and gun owners” registry.

Chris Rock suggests that we simply put a huge tax on bullets. (“That guy must deserve it, they put $50,000 worth of lead in him.”) I would point out that the right to bullets is actually not enshrined in the constitution, but certainly we can put a hefty federal tax on them or require a prescription. After all, they’re kind of a potentially lethal drug (“lead poisoning”) and should be properly controlled. Better make sure you have all your tax stamps and prescriptions ready when you get your hunting license.

The bullet tax can also pay for free kevlar body armor for all citizens who want it, and perhaps provide guns and bullets (which are after all rather expensive as a result of all this) to the poor.

Google’s New Logo

Google’s new logo overlaid with circles (top) and Futura (bottom)

Let me begin by saying that I don’t hate it — and I like it a lot more than the old logo, which seems to have been the word “Google” in Times New Roman or whatever browsers showed by default as Serif back in the day. The old logo had the chief virtue of being unpretentious and not looking like, say, Marissa Mayer had devoted a whole weekend to it.

I saw someone claim that the new Google logo was an awesome piece of economy because it could be built entirely from booleans of circles and rectangles. I think that would be pretty cool — and in fact conceptually far more interesting than what it actually is, but you can see at a glance that it’s not so.

The new logo is simply a slightly bespoke version of Futura, a nice, modernist typeface which, in very much the style of the Bauhaus that inspired it, looks simple and industrial but is actually the product of careful design. Google’s logo is essentially some Futura variant, hand-kerned, with slightly modified glyphs; much like the subtly rounded corners on iOS 7’s icons and rounded rectangles, the tweaks are easy to miss but deliberate.

Many corporate logos are simply examples of slightly customized typography — in this case Google seems to have taken the most common variant of Futura (the default weight of the version bundled with Mac OS X) and tweaked it a bit. Not great, not horrible, and better than Times New Roman. I imagine a lot of designers are incensed either because they’re anti-Google and like to disparage anything it does or they’re pro-Google and would like to see it put a bit of effort into picking a logo.

Google in modified Futura (top) vs. DIN Alternate (bottom)

Personally, I think Google should have picked a more interesting typeface that somehow reflected its values. I would have picked a member of the DIN family (or similar) — a typeface designed first and foremost for legibility while being more attractive (in my opinion) than Futura. Google is, at its heart, a supremely utilitarian company, and I think DIN would reflect this deeply. Futura, with its Bauhaus underpinnings, strikes me as pretentiously faux-utilitarian: it looks geometric at the cost of practicality and legibility, yet still quietly sacrifices geometric purity in the interest of aesthetics.

The Graph with a Trillion Edges

After looking at this thread on Hacker News about this paper from a bunch of Facebook researchers, and this excellent article by Frank McSherry, I tried to understand McSherry’s approach (since I found the paper kind of impenetrable) and this is my take on it.

The Basic Problem

Let’s suppose you’re Facebook and you have about 1B members who have an arbitrary number of links to each other. How do you scale and parallelize queries (especially iterated queries) on the data?

McSherry basically implements a single-threaded brute force algorithm on his laptop. (He uses a publicly available database of 128B edges.) As I understand it, his approach is pretty simple:

  • Encode each connection as a 64-bit number (think: a connection from A (32 bits) to B (32 bits)).
  • Store the numbers in RAM as a sequence of variable-length-integer-encoded differences (see the sketch after this list). E.g. a set of numbers beginning with 4, 100, 2003, 2005,… would be encoded as 4, 96, 1903, 2,… Since the average distance between 128B values scattered among 1T is 8, the expected distance between connections (64-bit values) will be around 9, which we can encode in ~4 bits (using variable-length encoding) instead of 64 bits, and the storage needed is proportional to the number of values stored [1].
  • Look for a connection from A to anyone by running through the list and looking for values starting with A’s 32 bits. (You can make this search arbitrarily efficient — trading memory requirements for performance — by partitioning the list into a lookup table.)
  • Store the numbers on disk as deltas between the numbers encoded as a Hilbert Curve [2], using variable-length integer encoding for efficiency. (The naive encoding averaged ~5 bits between values; the optimized encoding got this down to ~2.)
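Here’s a rough sketch of what delta-plus-varint encoding looks like in Python (illustrative only: this is not McSherry’s implementation, and a byte-oriented varint like this one gets small deltas down to a byte or two rather than the ~4 bits a bit-level code would achieve):

    # Illustrative delta + variable-length-integer encoding of a sorted list
    # of 64-bit edge codes. Byte-oriented (LEB128-style) for simplicity.

    def varint_encode(n: int) -> bytes:
        """7 bits per byte; the high bit means 'more bytes follow'."""
        out = bytearray()
        while True:
            byte = n & 0x7F
            n >>= 7
            if n:
                out.append(byte | 0x80)
            else:
                out.append(byte)
                return bytes(out)

    def encode_edges(sorted_codes):
        """Store a sorted list of edge codes as varint-encoded deltas."""
        out, prev = bytearray(), 0
        for code in sorted_codes:
            out += varint_encode(code - prev)  # small gaps take 1-2 bytes, not 8
            prev = code
        return bytes(out)

    def decode_edges(data):
        """Recover the original sorted codes by accumulating the deltas."""
        codes, value, acc, shift = [], 0, 0, 0
        for byte in data:
            acc |= (byte & 0x7F) << shift
            shift += 7
            if not (byte & 0x80):  # last byte of this varint
                value += acc
                codes.append(value)
                acc, shift = 0, 0
        return codes

    edges = [4, 100, 2003, 2005]  # the example from the list above
    packed = encode_edges(edges)
    assert decode_edges(packed) == edges
    print(len(packed), "bytes instead of", 8 * len(edges))  # 5 bytes instead of 32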

[1] Each entry will be log(n) bits in size, there’ll be c of them, and the average difference (to be encoded) will be log(c/n) bits. If we assume c scales as n^2 (highly dubious — it means if the world’s population increased by a factor of 2, I’d have twice as many friends) then we get n^2 * log(n) for storage. If we assume c scales as n log(n) (probably still generous) then we get n log(n) * log(log(n)). At any given scale that function looks pretty linear, although it is actually faster than linear — the gradient (bits per stored difference) hangs around 5 for n between 1B and 100B — so I don’t think it’s a cause for concern.
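If you want to sanity-check that last claim under the c = n log(n) assumption, the bits-per-difference term log(c/n) = log(log(n)) is easy to evaluate (a quick back-of-envelope script using the footnote’s own formula):

    from math import log2

    # Bits per stored difference under the footnote's assumption c = n*log(n),
    # i.e. log2(c/n) = log2(log2(n)).
    for n in (1e9, 1e10, 1e11):  # populations of 1B to 100B
        print(f"n = {n:.0e}: ~{log2(log2(n)):.1f} bits per entry")
    # n = 1e+09: ~4.9 bits per entry
    # n = 1e+10: ~5.1 bits per entry
    # n = 1e+11: ~5.2 bits per entry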

[2] A simple way of looking at Hilbert Curve encoding is that it treats a binary number as a series of 2-bit numbers, with each pair of bits selecting a sector of a 2×2 square (in a somewhat non-obvious way). So all numbers starting with a given pair of bits are found in the same sector. Then, look at the next two bits and subdivide that sector in a similarly non-obvious way until you run out of bits. (In an earlier post, McSherry explains Hilbert Curve implementation graphically.) Incidentally, simply interleaving the bits of the two numbers has much the same effect.
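That bit-interleaving alternative (a Z-order or Morton code rather than a true Hilbert curve) is only a few lines, and it shows why the deltas stay small: pairs that share their high bits end up numerically close. A sketch:

    # Z-order / Morton code: interleave the bits of two 32-bit vertex ids.
    def interleave(a: int, b: int, bits: int = 32) -> int:
        out = 0
        for i in range(bits):
            out |= ((a >> i) & 1) << (2 * i + 1)  # A's bit i goes to an odd position
            out |= ((b >> i) & 1) << (2 * i)      # B's bit i goes to an even position
        return out

    # Two edges from the same A to adjacent Bs differ by exactly 1:
    print(interleave(0x12345678, 0x1001) - interleave(0x12345678, 0x1000))  # 1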

That’s it in a nutshell. Obviously there are plenty of simple ways to optimize the lookups.

He points out that the database he used comes in 700 files. These could each be processed by one of 700 different computers (with the data stored slightly less efficiently, because the average delta goes up slightly) and any request could easily be split among them.

The interesting thing is that running in one thread on a laptop, this approach runs faster and scales better than the 128 core systems he benchmarks against.


So let’s suppose I wanted to emulate Facebook and have access to 256 virtual machines. (I could easily afford to do this, as a startup, using AWS or similar.) How would I do this in practice? Remember that Facebook also has to deal with users creating new connections all the time.

First of all (kind of obviously) every connection gets stored twice (i.e. connections are two-way). We need this for worst-case scenarios.

Let’s suppose I number each of my virtual machines with an 8-bit value. When I store a connection between A and B I encode the connection twice (as AB and BA), take the last 8 bits of each value, and record the connection on the machine with the corresponding number. Each machine stores the value in its representation of its link database.

How would this work in practice?

Well, each value being stored is effectively 56 bits (8 bits are pulled off the 64-bit value to pick the machine, and are therefore the same for every value that machine stores). Let’s divide that into 32 bits and 24 bits and store 2^24 lists of 32-bit numbers, each list stored in some efficient manner (we can break down and use McSherry’s approach at this point — effectively we’re progressively applying McSherry’s approach to different portions of the numbers anyway).

So any lookup will entail grabbing 24 bits of the remaining 56 bits, leaving us with 1/2^32 of the original data to search. A further refinement would be to split that remaining data among cores using the same approach, but I’m sure you get the idea by now.

So in a nutshell:

To record the existence of the edge AB:

  1. Encode it as two 64-bit values (AB and BA).
  2. Reduce each to a 56-bit value and an 8-bit value.
  3. Ask the two (or occasionally one) machines designated by the 8-bit values to store it.
  4. Each machine reduces its 56-bit value to a 32-bit value with a 24-bit lookup.
  5. Then it inserts the remaining 32-bit value into a list of ~E/2^32 values (where E is the total number of edges, so ~233 values for a 1T edge DB).

Inserting a value in a list of n values is a solved O(log(n)) problem. Note that all connections Ax will be stored on one machine, but connections xA will be scattered (or vice versa) because we’re storing two-way connections. (Important for the worst case, see below.)

To find all edges from A:

  1. Grab 8-bits using the same algorithm used for storing AB.
  2. Ask the designated machine for the list of ~E/2^32 connections from A.

So to sum up — storing a connection AB involves bothering two machines. Determining A’s connections involves bothering one machine in most cases, and (see below) all the machines in rare cases. However, the rare cases should never actually exist.
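Here’s a minimal sketch of the bookkeeping just described, with 256 in-process dictionaries standing in for the 256 machines. The names and exact bit layout are my own assumptions (low 8 bits pick a machine, next 24 bits pick a bucket, the remaining 32 bits go into a sorted list), not something from the paper:

    import bisect
    from collections import defaultdict

    NUM_MACHINES = 256  # one per 8-bit machine id

    # Each "machine" holds up to 2^24 sorted lists of 32-bit values, keyed by a
    # 24-bit bucket id. In real life each list would itself use the delta/varint
    # encoding sketched earlier, and the machines would be actual servers.
    machines = [defaultdict(list) for _ in range(NUM_MACHINES)]

    def _store(code: int) -> None:
        machine = code & 0xFF            # low 8 bits pick the machine
        bucket = (code >> 8) & 0xFFFFFF  # next 24 bits pick one of 2^24 lists
        value = code >> 32               # remaining 32 bits get stored
        bisect.insort(machines[machine][bucket], value)  # keep the list sorted

    def store_edge(a: int, b: int) -> None:
        """Record the edge twice (as AB and BA), as described above."""
        _store((a << 32) | b)  # filed under B's low bits, stores A
        _store((b << 32) | a)  # filed under A's low bits, stores B

    def neighbours(a: int) -> list:
        """With both directions stored, everything touching A sits in one bucket."""
        return machines[a & 0xFF][(a >> 8) & 0xFFFFFF]

    store_edge(12345, 67890)
    store_edge(12345, 99999)
    print(neighbours(12345))  # [67890, 99999]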

Worst Case Response

One problem here is that the worst case response could be huge. If A has 10M outgoing edges then one machine is going to have to cough up one giant database. (If one node has more connections than can easily be stored on one machine then we can deal with that by getting our machine names from the entropic bits of both user ids, but let’s ignore that for now.)

In sufficiently bad cases we reverse the lookup. And reverse lookups will never be bad! If we hit the machine containing all connections of the form Ax and there are too many to deal with, the machine tells us to reverse the lookup, and we ask all our machines for connections xA, which we can reasonably expect to be evenly distributed (if 256 bots sign up for and follow A simultaneously, there will be 256 new entries of the form Ax on one machine, but only one connection of the form xA on each machine).

So, the worst case performance of this design comes down to the cost of transmitting the results (which we can evenly distribute across our machines) — you can’t really do better than that, and the key step is treating each machine as having responsibility for one square in a grid picked using Hilbert Curves.

Note that the worst case isn’t terribly bad if we just want to count connections, so interrogating the database to find out how many outgoing links A has is practical.

Anyway, that never happens…

One can assume in the case of most social networks that while there will often be worst cases of the form A follows B (i.e. B has 10M followers) there will never be cases where A follows 10M users (indeed anyone like that could be banned for abuse long, long before that point is reached).

It follows that when someone posts something (on Twitter, say) it only gets pulled by followers (it doesn’t get pushed to them). A posts something — solved problem, just update one list. B asks for updates — we need to check the list of people B follows — solved problem. So we don’t even need to store two-way connections or worry about reverse lookups.


This is apparently very similar to PageRank (Google’s algorithm for evaluating websites), which I’ve never bothered to try to understand.

Consider that each edge represents A links to B, and we rank a page by how many incoming links it has, and we throw out pages with too many outgoing links (i.e. link farms, sitemaps). Then we iterate, weighting the value by the previously calculated values of each page. Google (et al) add to that by looking at the text in and around the link (e.g. “publicly available database of 128B edges”, above, links to a page about exactly that, so Google might infer that the linked page is “about” things in those words; in the case of “click here” type links, it can look at the words around the link).

So if from incoming links we infer that a page is about “scalable algorithms”, and from the quality of the pages linking to it we infer the quality of the page itself — and we maintain a database of connections between topics and pages — then when someone searches for “scalable algorithms”, we take the ids of “scalable” and “algorithms”, find pages about both, sort them by “quality”, create a web page, and festoon it with sponsored links and ads. By golly we’ve got a search engine!
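A toy version of that iterate-and-reweight idea (not Google’s actual PageRank; the damping factor, iteration count, and the little web below are all invented for illustration, and it doesn’t bother throwing out link farms):

    # Toy iterated link ranking: every page starts with the same score, then each
    # round a page's score is shared out equally among the pages it links to.
    def rank(links, iterations=20, damping=0.85):
        """links: dict mapping each page to the list of pages it links to."""
        pages = set(links) | {p for targets in links.values() for p in targets}
        score = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new = {p: (1 - damping) / len(pages) for p in pages}
            for page, targets in links.items():
                if targets:
                    share = damping * score[page] / len(targets)
                    for target in targets:
                        new[target] += share
            score = new
        return score

    toy_web = {               # hypothetical pages and links
        "a": ["b", "c"],
        "b": ["c"],
        "c": ["a"],
        "d": ["c"],           # d links out, but nothing links to d
    }
    for page, s in sorted(rank(toy_web).items(), key=lambda kv: -kv[1]):
        print(page, round(s, 3))  # c ranks highest, d lowest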

The Myth of the $500 FX Sensor

Bubble defects in a silicon wafer — SEM image

Disclaimer: I am not an electrical engineer and have no special knowledge about any of this.

Some time ago Thom Hogan estimated the cost of an FX camera sensor to be around $500 (I don’t have the reference, but I’m pretty sure this is true since he said as much recently in a comment thread). Similarly, E. J. Pelker, who is an electrical engineer, estimated an FX sensor to cost around $385 based on industry standard cost and defect rates in 2006. So it seems like there’s this general acceptance of the idea that an FX sensor costs more than 10x what a DX sensor costs (Pelker estimates $34 for a Canon APS sensor, which is slightly smaller than DX, and $385 for a 5D sensor).

My assumptions can be dramatically off but the result will be the same.

E.J. Pelker

I don’t mean to be mean to Pelker. It’s a great and very useful article — I just think the problem isn’t that the assumptions he knows he’s making are off, it’s that the tacit assumptions he doesn’t realize he’s making are completely and utterly wrong.

The assumption is that if you get an 80% yield making DX sensors then you’ll get a 64% (80% squared) yield from FX sensors (let’s ignore the fact that you’ll get slightly fewer than half as many possible FX sensors from a wafer owing to fitting rectangles into circles).

Here are Pelker’s “unknown unknowns”:

Sensors are fault-tolerant, CPUs aren’t

First, Pelker assumes that a defect destroys a sensor. In fact if all the defect is doing is messing up a sensel then the camera company doesn’t care – it finds the bad sensel during QA, stores its location in firmware, and interpolates around it when capturing the image. How do we know? They tell us they do this. Whoa — you might say — I totally notice bad pixels on my HD monitors, I would totally notice bad pixels when I pixel peep my 36MP RAW files. Nope, you wouldn’t, because the camera writes interpolated data into the RAW file, and unless you shoot ridiculously detailed test charts and examine the images pixel by pixel or perform statistical analysis of large numbers of images you’ll never find the interpolated pixels. In any event (per the same linked article) camera sensors acquire more bad sensels as they age, and no-one seems to mind too much.

Sensor feature sizes are huge, so most “defects” won’t affect them

Next, Pelker also assumes industry standard defect rates. But industry standard defect rates are for things like CPUs — which usually have very small features and cannot recover from even a single defect. The problem with this assumption is that the vast majority of a camera sensor comprises sensels and the wires hooking them up. Each sensel in a 24MP FX sensor is roughly 4,000nm across, and the supporting wiring is maybe 500nm across, with 500nm spacing — which is over 17x the minimum feature size for 28nm process wafers. If you look at what a defect in a silicon wafer actually is, it’s a slight smearing of a circuit usually around the process size — if your feature size is 17x the process size, the defect rate will be vanishingly close to zero. So the only defects that affect a camera sensor will either be improbably huge or (more likely) land in one of the areas with delicate supporting logic (i.e. a tiny proportion of any given camera sensor). Even if the supporting logic were similar in size to a CPU (which it isn’t; it’s far smaller), the yield rate would merely be in line with CPU yields; since it’s far smaller, the yield rate will be much higher.

This eliminates the whole diminishing yield argument (in fact, counter-intuitively, yield rates should be higher for larger sensors since their feature size is bigger and the proportion of the sensor given over to supporting logic is smaller).
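To make that concrete, here’s a toy calculation using the standard Poisson yield model (yield = e^(−area × defect density)), which is exactly the model behind the “square the yield when you double the area” assumption. The defect density and the “fatal fraction” below are made-up numbers for illustration, not industry figures:

    from math import exp

    defect_density = 0.25        # assumed fatal defects per cm^2 (made up)
    dx_area, fx_area = 3.7, 8.6  # approximate DX and FX sensor areas in cm^2

    def yield_rate(area_cm2, fatal_fraction=1.0):
        """Poisson yield; fatal_fraction = share of the die a defect can actually kill."""
        return exp(-area_cm2 * fatal_fraction * defect_density)

    # Pelker-style assumption: any defect anywhere on the die is fatal.
    print(yield_rate(dx_area), yield_rate(fx_area))              # ~0.40 vs ~0.12

    # This post's argument: bad sensels are mapped out in firmware, so only
    # defects hitting (say) the 5% of the die taken up by supporting logic are fatal.
    print(yield_rate(dx_area, 0.05), yield_rate(fx_area, 0.05))  # ~0.95 vs ~0.90

With the defect rate held constant, shrinking the fatal area does far more for yield than the DX-to-FX size difference does, which is the point of the argument above.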

(Note: there’s one issue here that I should mention. Defects are three dimensional, and the thickness of features is going to be constant. This may make yields of more three-dimensional designs, e.g. BSI sensors, more problematic. Thom Hogan recently suggested — I don’t know if he has inside information — that Sony’s new (i.e. BSI) FX sensors are turning out to have far lower yields — and thus far higher costs — than expected.)

Bottom Line

To sum up — an FX sensor would cost no more than slightly over double a DX sensor (defect rates are the same or lower, but you can fit slightly fewer than half as many sensors onto a die owing to geometry). So if a DX sensor costs $34, an FX sensor should cost no more than $70.

Affinity Photo — No Good For Photography

Sydney Harbor by night — processed using Photos

This is a pretty important addition to my first impressions article.

After reading a comment thread on a photography blog it occurred to me that I had not looked particularly hard at a core feature of Affinity Photo, namely the develop (i.e. semi-non-destructive RAW-processing) phase.

I assumed Affinity Photo used Apple’s OS-level RAW-processing (which is pretty good) since just writing a good RAW importer is a major undertaking (and an ongoing commitment, as new cameras with new RAW formats are released on an almost daily basis) and concentrated my attention on its editing functionality.

(There is a downside to using Apple’s RAW processor — Apple only provides updates for new cameras for recent OS releases, so if you were using Mac OS X 10.7 (Lion) and just bought a Nikon D750 you’d be out of luck.)

In the thread, one commenter suggested Affinity Photo as a cheaper alternative to Phase One (which misses the point of Phase One entirely) to which someone had responded that Affinity Photo was terrible at RAW-processing. I wanted to check if this was simply a random hater or actually true and a quick check showed it to be not only true but horribly true.

Photos default RAW import
Affinity Photo default RAW input

White Balance

Acorn’s RAW import dialog — it respects the camera’s white balance metadata and also lets you see what it is (temperature and tint).
Affinity simply ignores WB metadata by default
Affinity with WB adjustment turned on and the settings copied from Acorn (note that it still doesn’t match Acorn, and I know which I trust more at this point).

Affinity Photo ignores the white balance metadata in the RAW file. If you toggle on the white balance option in develop mode you still need to find out the white balance settings (somehow) and type them in yourself.

Good cameras do a very good job of automatically setting white balance for scenes. Serious photographers will often manually set white balance after every lighting change on a shoot. Either way, you want your RAW-processing software to use this valuable information.

Noise Reduction

Top-Right Corner at 200% — Photos on the left, Affinity on the right

Affinity Photo’s RAW processing is terrible. It somehow manages to create both chroma and tonal noise even for well-exposed images shot in bright daylight — night shots at high ISO? Don’t even ask. (If you must, see the Sydney Harbor comparison, earlier.) It’s harder to say this definitively, but it seems to me that it also smears detail. It’s as if whoever wrote the RAW importer in Affinity Photo doesn’t actually know how to interpolate RAW images.

Incidentally, Affinity Photo’s noise reduction filter appears to have little or no effect. An image with noise reduction maxed out using Affinity Photo is far noisier than the same image processed without noise reduction using any decent program or Apple’s RAW importer’s noise reduction.

Now, if you’re using Affinity Photo in concert with a photo management program like Lightroom, Aperture, Photos, or iPhoto — programs which do the RAW processing and simply hand over a 16-bit TIFF image — you simply won’t notice a problem with the lack of white balance support or the noise creation. But if you actually use Affinity Photo to work on RAW images (i.e. if you actually try to use its semi-non-destructive “develop” mode) you’re basically working with garbage.

I can only apologize to any photographers who might have bought Affinity Photo based on my earlier post. I mainly use would-be Photoshop replacements for editing CG images where RAW processing isn’t a factor, but my failure to carefully check its RAW processing is egregious.

If you want to use Affinity Photo for working on photographs I strongly recommend you wait until its RAW processing is fixed (or it simply adopts the RAW processing functionality Apple provides “for free”).

Remember when I discovered that Affinity Designer’s line styling tools simply didn’t work at all? That’s ridiculous. Well, a self-declared photo editing tool that doesn’t do a halfway decent job of RAW processing is just as ridiculous.

So, what to do?

Photos offers powerful RAW processing if you figure out how to turn it on

Apple’s new(ish) Photos application is actually surprisingly good once you actually expose its useful features. By default it doesn’t even show a histogram, but with a few clicks you can turn it into a RAW-processing monster.

And, until Apple somehow breaks it, Aperture is still an excellent piece of software.

Acorn does a good job of using Apple’s RAW importer (it respects the camera’s metadata but allows you to override it). Unfortunately, the workflow is destructive (once you’ve used the RAW importer, if you want to second-guess your import settings you need to start again from scratch).

Adobe still offers a discounted subscription for Photographers, covering Lightroom and Photoshop. It’s annoying to subscribe to software, but it may be the best and cheapest option right now (especially with Apple abandoning Aperture).

If noise reduction is your main concern, Lightroom, Aperture, Photoshop, and other generalist programs just don’t cut it. You either need a dedicated RAW processing program or a dedicated noise reduction program.

RAW Right Now speeds up RAW previews and makes QuickLook more useful

Finally, if you’re happy to use different programs for image management (I mainly use Finder these days), RAW processing, and editing then you have a lot of pretty attractive options. FastRAWViewer is incredibly good for triaging RAW photos (its Focus Peaking feature is just wonderful). DxO and Phase One offer almost universally admired RAW-processing capabilities and exceptionally good built-in noise handling. Many serious photographers consider the effect of switching to either of these programs for RAW processing as important as using a better lens. Even the free software offered by camera makers usually does a very good job of RAW processing (it just tends to suck for anything else). If you don’t use Affinity Photo for RAW processing there’s not much wrong with it (but you don’t have a non-destructive workflow).