The Myth of the $500 FX Sensor

Bubble defects in a silicon wafer — SEM image

Disclaimer: I am not an electrical engineer and have no special knowledge about any of this.

Some time ago Thom Hogan estimated the cost of an FX camera sensor to be around $500 (I don’t have the reference, but I’m pretty sure this is true since he said as much recently in a comment thread). Similarly, E. J. Pelker, who is an electrical engineer, estimated an FX sensor to cost around $385, based on industry-standard costs and defect rates in 2006. So there seems to be general acceptance of the idea that an FX sensor costs more than 10x as much as a DX sensor (Pelker estimates $34 for a Canon APS sensor, which is slightly smaller than DX, and $385 for a 5D sensor).

My assumptions can be dramatically off but the result will be the same.

E.J. Pelker

I don’t mean to be mean to Pelker. It’s a great and very useful article. I just think the problem isn’t that the assumptions he knows he’s making are off; it’s that he’s made tacit assumptions he doesn’t realize he’s making, and those are completely and utterly wrong.

The assumption is that if you get an 80% yield making DX sensors then you’ll get a 64% (80% squared) yield making FX sensors (let’s ignore the fact that you’ll get slightly fewer than half as many possible FX sensors from a wafer, owing to fitting rectangles into circles).
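For the curious, here is the standard yield arithmetic behind that assumption as a tiny Python sketch, using a simple Poisson defect model. The areas and the backed-out defect density are illustrative numbers of mine, not Pelker’s:

```python
import math

def die_yield(defect_density, die_area):
    # Poisson model: the probability that a die contains zero "killer" defects
    return math.exp(-defect_density * die_area)

dx_area = 370.0        # mm^2, roughly a DX-sized die (illustrative)
fx_area = 2 * dx_area  # treat FX as double the DX area, as the argument above does

# Back out the defect density implied by the assumed 80% DX yield...
d = -math.log(0.80) / dx_area

# ...and apply the same density to a die of twice the area.
print(round(die_yield(d, dx_area), 2))  # 0.8
print(round(die_yield(d, fx_area), 2))  # 0.64, i.e. 80% squared
```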

Here are Pelker’s “unknown unknowns”:

Sensors are fault-tolerant, CPUs aren’t

First, Pelker assumes that a defect destroys a sensor. In fact, if all a defect does is mess up a sensel, the camera company doesn’t care: it finds the bad sensel during QA, stores its location in firmware, and interpolates around it when capturing the image. How do we know? They tell us they do this. Whoa, you might say, I totally notice bad pixels on my HD monitors; I would totally notice bad pixels when I pixel-peep my 36MP RAW files. Nope, you wouldn’t, because the camera writes interpolated data into the RAW file, and unless you shoot ridiculously detailed test charts and examine the images pixel by pixel, or perform statistical analysis of large numbers of images, you’ll never find the interpolated pixels. In any event (per the same linked article), camera sensors acquire more bad sensels as they age, and no one seems to mind too much.
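To illustrate why a bad sensel isn’t a killer defect, here is a rough sketch of the kind of map-and-interpolate fix-up a camera could apply. It’s purely illustrative: real cameras use their own unpublished schemes, and a Bayer sensor would interpolate from same-color neighbors rather than from every adjacent sensel.

```python
import numpy as np

def patch_bad_sensels(raw, bad_coords):
    """Replace each sensel flagged as bad during QA with the mean of its
    good neighbors. Illustrative only -- not any manufacturer's algorithm."""
    fixed = raw.astype(float).copy()
    good = np.ones(raw.shape, dtype=bool)
    for r, c in bad_coords:
        good[r, c] = False
    for r, c in bad_coords:
        r0, r1 = max(r - 1, 0), min(r + 2, raw.shape[0])
        c0, c1 = max(c - 1, 0), min(c + 2, raw.shape[1])
        window, ok = fixed[r0:r1, c0:c1], good[r0:r1, c0:c1]
        fixed[r, c] = window[ok].mean()   # interpolate around the dead sensel
    return fixed

# A 4x4 toy "sensor" with one dead sensel at (1, 2), as recorded in "firmware".
frame = np.arange(16).reshape(4, 4)
print(patch_bad_sensels(frame, [(1, 2)]))
```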

Sensor feature sizes are huge, so most “defects” won’t affect them

Next, Pelker assumes industry-standard defect rates. But industry-standard defect rates are for things like CPUs, which have very small features and cannot recover from even a single defect. The problem with this assumption is that the vast majority of a camera sensor comprises sensels and the wires hooking them up. Each sensel in a 24MP FX sensor is roughly 4,000nm across, and the supporting wiring is maybe 500nm across, with 500nm spacing — over 17x the minimum feature size of a 28nm process. If you look at what a defect in a silicon wafer actually is, it’s a slight smearing of a circuit, usually around the process size; if your features are 17x the process size, the defect rate for them will be vanishingly close to zero. So the only defects that affect a camera sensor will either be improbably huge or (more likely) land in one of the areas of delicate supporting logic, i.e. a tiny proportion of any given camera sensor. Even if the supporting logic were similar in extent to a CPU (and it isn’t), the yield would merely be in line with CPU yields, i.e. much higher than the rates Pelker assumes.

This eliminates the whole diminishing yield argument (in fact, counter-intuitively, yield rates should be higher for larger sensors since their feature size is bigger and the proportion of the sensor given over to supporting logic is smaller).
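To make that concrete, here is the same toy Poisson model with only a small, assumed “vulnerable” fraction of the die treated as defect-sensitive. The defect density and the 5% logic fraction are pure guesses for illustration:

```python
import math

defect_density = 0.0006   # killer defects per mm^2 (assumed, illustrative)
fx_area = 864.0           # mm^2, a 36 x 24 mm FX die
logic_fraction = 0.05     # assumed share of the die with CPU-scale features

naive = math.exp(-defect_density * fx_area)                             # whole die vulnerable
fault_tolerant = math.exp(-defect_density * fx_area * logic_fraction)   # only the logic vulnerable

print(round(naive, 2), round(fault_tolerant, 2))   # ~0.6 vs ~0.97
```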

(Note: there’s one issue here that I should mention. Defects are three-dimensional, and the thickness of features stays roughly constant even as their lateral size shrinks. This may make yields more problematic for three-dimensional designs, e.g. BSI sensors. Thom Hogan recently suggested — I don’t know if he has inside information — that Sony’s new (i.e. BSI) FX sensors are turning out to have far lower yields — and thus far higher costs — than expected.)

Bottom Line

To sum up: an FX sensor should cost no more than slightly over double a DX sensor (defect rates are the same or lower, but you can fit slightly fewer than half as many sensors onto a wafer owing to geometry). So if a DX sensor costs $34, an FX sensor should cost no more than about $70.
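The bottom-line arithmetic looks something like the sketch below. The wafer cost and die counts are made-up but plausible numbers of mine (not Pelker’s or Hogan’s); the point is only that the ratio comes out a bit over 2x, not 10x.

```python
# Cost per *good* sensor = wafer cost / (dies per wafer x yield).
wafer_cost = 5000.0     # dollars per processed wafer (assumed)
dx_per_wafer = 140      # DX dies per wafer (assumed)
fx_per_wafer = 65       # slightly fewer than half as many, due to geometry (assumed)
dx_yield = fx_yield = 0.97   # roughly equal if sensors really are fault-tolerant

dx_cost = wafer_cost / (dx_per_wafer * dx_yield)
fx_cost = wafer_cost / (fx_per_wafer * fx_yield)
print(round(dx_cost), round(fx_cost), round(fx_cost / dx_cost, 2))  # ~37, ~79, ~2.15
```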

Affinity Photo — No Good For Photography

Sydney Harbor by night — processed using Photos

This is a pretty important addition to my first impressions article.

After reading a comment thread on a photography blog it occurred to me that I had not looked particularly hard at a core feature of Affinity Photo, namely the develop (i.e. semi-non-destructive RAW-processing) phase.

I assumed Affinity Photo used Apple’s OS-level RAW-processing (which is pretty good) since just writing a good RAW importer is a major undertaking (and an ongoing commitment, as new cameras with new RAW formats are released on an almost daily basis) and concentrated my attention on its editing functionality.

(There is a downside to using Apple’s RAW processor — Apple only provides updates for new cameras for recent OS releases, so if you were using Mac OS X 10.7 (Lion) and just bought a Nikon D750 you’d be out of luck.)

In the thread, one commenter suggested Affinity Photo as a cheaper alternative to Phase One (which misses the point of Phase One entirely), to which someone responded that Affinity Photo was terrible at RAW processing. I wanted to check whether this was just a random hater or actually true, and a quick check showed it to be not only true but horribly true.

Photos default RAW import
Affinity Photo default RAW import

White Balance

Acorn’s RAW import dialog — it respects the camera’s white balance metadata and also lets you see what it is (temperature and tint).
Affinity simply ignores WB metadata by default
Affinity with WB adjustment turned on and the settings copied from Acorn (note that it still doesn’t match Acorn, and I know which I trust more at this point).

Affinity Photo ignores the white balance metadata in the RAW file. If you toggle on the white balance option in develop mode you still need to find out the white balance settings (somehow) and type them in yourself.

Good cameras do a very good job of automatically setting white balance for scenes. Serious photographers will often manually set white balance after every lighting change on a shoot. Either way, you want your RAW-processing software to use this valuable information.

Noise Reduction

Top-Right Corner at 200% — Photos on the left, Affinity on the right

Affinity Photo’s RAW processing is terrible. It somehow manages to create both chroma and tonal noise even for well-exposed images shot in bright daylight. Night shots at high ISO? Don’t even ask. (If you must, see the Sydney Harbor comparison, earlier.) It’s harder to say definitively, but it seems to me that it also smears detail. It’s as if whoever wrote the RAW importer in Affinity Photo doesn’t actually know how to interpolate RAW images.

Incidentally, Affinity Photo’s noise reduction filter appears to have little or no effect. An image with noise reduction maxed out in Affinity Photo is far noisier than the same image processed without any noise reduction by any decent program, or with Apple’s RAW importer’s noise reduction.

Now, if you’re using Affinity Photo in concert with a photo management program like Lightroom, Aperture, Photos, or iPhoto — programs which do the RAW processing and simply hand over a 16-bit TIFF image — you won’t notice a problem with the lack of white balance support or the noise creation. But if you actually use Affinity Photo to work on RAW images (i.e. if you try to use its semi-non-destructive “develop” mode) you’re basically working with garbage.

I can only apologize to any photographers who might have bought Affinity Photo based on my earlier post. I mainly use would-be Photoshop replacements for editing CG images where RAW processing isn’t a factor, but my failure to carefully check its RAW processing is egregious.

If you want to use Affinity Photo for working on photographs I strongly recommend you wait until its RAW processing is fixed (or it simply adopts the RAW processing functionality Apple provides “for free”).

Remember when I discovered that Affinity Designer’s line styling tools simply didn’t work at all? That’s ridiculous. Well, a self-declared photo editing tool that doesn’t do a halfway decent job of RAW processing is just as ridiculous.

So, what to do?

Photos offers powerful RAW processing if you figure out how to turn it on

Apple’s new(ish) Photos application is actually surprisingly good once you expose its useful features. By default it doesn’t even show a histogram, but with a few clicks you can turn it into a RAW-processing monster.

And, until Apple somehow breaks it, Aperture is still an excellent piece of software.

Acorn does a good job of using Apple’s RAW importer (it respects the camera’s metadata but allows you to override it). Unfortunately, the workflow is destructive: once you’ve imported a RAW file, if you want to second-guess your import settings you need to start again from scratch.

Adobe still offers a discounted subscription for Photographers, covering Lightroom and Photoshop. It’s annoying to subscribe to software, but it may be the best and cheapest option right now (especially with Apple abandoning Aperture).

If noise reduction is your main concern, Lightroom, Aperture, Photoshop, and other generalist programs just don’t cut it. You either need a dedicated RAW processing program or a dedicated noise reduction program.

RAW Right Now speeds up RAW previews and makes QuickLook more useful

Finally, if you’re happy to use different programs for image management (I mainly use the Finder these days), RAW processing, and editing, then you have a lot of pretty attractive options. FastRAWViewer is incredibly good for triaging RAW photos (its Focus Peaking feature is just wonderful). DxO Optics Pro and Phase One offer almost universally admired RAW-processing capabilities and exceptionally good built-in noise handling. Many serious photographers consider the effect of switching to either of these programs for RAW processing as important as using a better lens. Even the free software offered by camera makers usually does a very good job of RAW processing (it just tends to suck for anything else). If you don’t use Affinity Photo for RAW processing there’s not much wrong with it (but you don’t have a non-destructive workflow).

Affinity Photo — First Impressions

Affinity Photo in action

Affinity Photo has just come out of beta and is being sold for a discounted price of $40 (its regular price will be $50). As with Affinity Designer, it’s well-presented, with an attractive icon and a dark interface that is reminiscent of late model Adobe Creative Cloud and Apple Pro software. So, where does it fit in the pantheon of would-be Photoshop alternatives?

In terms of core functionality, it appears to fit in above Acorn and below Photoline. In particular, Photoline supports HDR as well as 16-bit and LAB color, while Affinity Photo lacks support for HDR editing. Unless you work with HDR (and clearly not many people do), Affinity Photo is both less expensive than Photoline and far more polished in terms of the features it does support.

Affinity Photo supports non-destructive import of RAW files. When you open a RAW file you enter “Develop” mode, where you can perform adjustments to exposure, curves, noise, and so forth on the RAW data before it gets converted to 8- or 16-bit RGB. Once you leave Develop mode, you can return and second-guess your adjustments (on a layer-by-layer basis). This alone is worth the price of admission, and leaves Acorn, Pixelmator, and Photoline in the dust.

In essence you get the non-destructive workflow of Lightroom and the pixel-manipulation capabilities of Photoshop in a single package, with the ability to move from one to the other at any point in your workflow. Let me repeat that: you can “develop” your RAW file, go mess with pixels in the resulting image, then go back and second-guess your “develop” settings (while retaining your pixel-level manipulations), and so on.

This feature isn’t quite perfect. E.g. you can’t go back and second-guess a crop, and vector layer operations, such as text overlays, get reduced to a “pixel” layer if you go back to develop mode. But it’s a big step in the right direction and for a lot of purposes it’s just dandy.

These are just my first impressions, but there are some things that could be better.

Affinity Photo provides adjustment layers, live filter layers, filters, and layer effects, in many cases providing multiple versions of the same filter in different places. Aside from having functionality scattered across arbitrary buckets, you get several different user interfaces. This is a mess, and it is a direct result of copying Photoshop’s crazy UI (the accumulation of decades of added functionality) rather than having a consolidated, unified approach the way Acorn does.

At first I thought Affinity Photo didn’t support layer styles, but it does. Unfortunately you can’t simply copy and paste layer styles the way you can in Photoshop and Acorn, so the workflow is a bit more convoluted: you need to create a style from a selection and then apply it elsewhere. Often you just want to copy a style from A to B without creating a reusable (or linked) style, so this is a bit unfortunate.

I really like the fact that the RGB histogram gives a quick “approximate” view but shows a little warning symbol on it. When you click it, it does a per-pixel histogram (quite quickly, at least on my 24MP images).

I don’t see any support for stitching images, so if that’s important to you (and it’s certainly very important to landscape photographers) then you’ll need to stick with Adobe, or specialized plugins or software.

It also seems to lack smart resize, smart delete, and Photoshop’s new motion blur removal functions. (Photoline also does smart delete and smart resize.)

Anyway, it’s a great first release, and definitely fulfills the promise of the public betas. It seems to me that it’s a more solid overall effort than Affinity Designer was when first released, and I’m probably a more demanding user of Photoshop-like programs than I am of Illustrator-like programs. I can understand the desire to provide a user interface familiar to users of Adobe products, even at the cost of making it unnecessarily confusing and poorly organized, but I hope that sanity prevails in the long run.

Bottom line: a more complete and attractive package than either Photoline or Acorn (its most credible competitors) and better in some ways than Photoshop.

Email E. Neumann

How stupid is email?

Actually, email is great. It’s robust, widely-supported, and highly accessible (in the 508 and economic senses of the word). The problem is email clients.

Security

A colleague and I once considered starting a business around a new email client. The problem, though, is that such a client works best when someone sends email from your email client to someone else using your email client. E.g. you can easily implement PGP encryption:

  • if you’ve previously exchanged email, you both have each other’s keys — snap, you’re done;
  • if you haven’t, your client asks whether you want the message sent insecurely, or asks you for authentication information (something you know about the person that a man-in-the-middle probably doesn’t, or an out-of-band mechanism for authentication such as calling you on the phone), and then sends an email initiating a secure authentication process or letting the recipient contact you and opt to receive insecure communication. All of this can happen pretty seamlessly if the recipient is using your email client: they get asked the question, and if they answer correctly, keys get exchanged. (A rough sketch of this decision flow appears below.)
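Here is a very rough sketch of that decision flow, just to show how little machinery is involved. Every function and field name is hypothetical; this is a thought experiment, not any real client’s API.

```python
known_keys = {}   # public keys learned from earlier correspondence

def encrypt(message, key):
    return {"type": "encrypted", "key_id": key, "body": message}   # stand-in

def deliver(envelope):
    print("would send:", envelope)                                  # stand-in

def send(message, recipient, ask_user):
    key = known_keys.get(recipient)
    if key is not None:
        deliver(encrypt(message, key))   # keys already exchanged: snap, done
        return
    if ask_user("No key on file. Send insecurely?"):
        deliver({"type": "plain", "body": message})
        return
    # Otherwise start an authenticated exchange: a question only the recipient
    # can answer, or an out-of-band channel such as a phone call.
    challenge = ask_user("Enter a question only the recipient can answer: ")
    deliver({"type": "handshake", "challenge": challenge})

# Example: no key yet, and the sender opts for the authenticated handshake.
send("hi", "alice@example.com",
     ask_user=lambda q: False if "insecurely" in q else "Where did we first meet?")
```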

It’s relatively easy to create a secure encryption system if you (a) opt out of email, and (b) have a trusted middleman (e.g. if both parties trust a specific website and https then you’re done — even a simple forum will work). But then you lose the universality of email, which is kind of important.

The obvious goal was to create a transparently secure email client. The benefits are huge — e.g. spam can be dealt with more easily (even insecure email can be sent with authentication) and then you can add all the low-hanging fruit. But it’s the low-hanging fruit I really care about. After all, I figure if the NSA can hack my storage device’s firmware, my network card’s firmware, and subvert https, encryption standards, and TOR — and that’s just stuff we know about — the only paths to true security are anonymity (think of it as “personal steganography”) or extreme paranoia. When dealing with anyone other than the NSA, Google, China, Iran, etc. you can probably use ordinary caution.

Well, how come Windows Mail / Outlook and Apple Mail don’t do exactly what I’ve just described: automatically handshake, exchange keys and authentication questions, and make email between their own clients secure? If it’s that easy (and really, it is that easy) why the hell don’t they? Oddly enough, Apple has done exactly this (using a semi-trusted middleman — itself) with Messages. Why not Mail?

OK, set all that aside.

Why?

  • Why can’t I conveniently send a new message the way I send a reply (i.e. “Reply with new subject and empty body” or “Reply all with new subject and empty body”)? When using an email client, most people probably use Reply / Reply All most often, then creating a new message and copying and pasting email addresses from some other message second, and creating a new message and typing in the email address (or using some kind of autocomplete) last. Furthermore, many replies are actually intended to be new emails to the sender, or to the sender and recipients. Yet no email client I know of supports this second, very frequent, usage.
  • Why does my email client start me in the subject line? Here’s an idea: when you create a new email, you start in the body. As you type the body, the email client infers the subject from what you type (let’s say using the first sentence if it’s short, the first clause with an ellipsis if that works, or a reasonable chunk of it with an ellipsis otherwise). A small sketch of this heuristic appears after this list.
  • Why does my OS treat email, IMs, and SMSs as completely separate things? Studies show grown-ups use email and hardly use SMS; younger people use SMS and hardly use email. Both groups probably need to communicate with each other, and both are generally sending short messages to a person, not to a phone number or an email address.
  • (While I’m at it, why does an iPhone treat email and IMs as different buckets? How come they had the nous to merge IMs and SMSs, and even to allow semi-transparent switching between secure, free iMessages and less secure, not-necessarily-free SMSs based on whether the recipient is using an Apple device? I don’t ask why Android (or, heaven forfend, Windows) does this because (a) Android generally hasn’t even integrated mailboxes, and (b) you shouldn’t expect real UI innovation from Google; they can imitate, but when they originate it tends to be awful — aside from Google’s home page, which remains one of the most brilliant UI decisions in history.)
  • Oh yeah, and voicemail.
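Here is the subject-inference idea from the list above as a tiny sketch. The heuristics (first sentence if short, first clause with an ellipsis, otherwise a truncated chunk) are exactly the guesses described there, not any mail client’s actual behavior.

```python
import re

def infer_subject(body, max_len=60):
    """Guess a subject line from the start of a message body.
    Heuristics are illustrative only, not any mail client's behavior."""
    first_line = body.strip().split("\n")[0]
    # First sentence, if it's short enough.
    sentence = re.split(r"(?<=[.!?])\s", first_line)[0]
    if len(sentence) <= max_len:
        return sentence
    # Otherwise the first clause, with an ellipsis.
    clause = re.split(r"[,;:]", sentence)[0]
    if len(clause) <= max_len:
        return clause + "…"
    # Otherwise a reasonable chunk, cut at a word boundary, with an ellipsis.
    return clause[:max_len].rsplit(" ", 1)[0] + "…"

print(infer_subject("Can we move Friday's meeting to 3pm? I need to pick up the kids."))
# -> "Can we move Friday's meeting to 3pm?"
```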

Nirvana

Now imagine a Contacts app that did all this stuff. I’d suggest it needs to be built around the email client, because email is the richest of these channels in terms of complexity and functionality, but let’s call it Contact. Consider the nirvana it would lead to:

  • Instantly, four icons on your iPhone merge into one (Mail, Phone, Messages, and Contacts; the existence of that last one has always bothered me, and now it would make sense). Three of those are likely on your home screen; now you have more space.
  • You no longer have to check for messages in four different places (e.g. if you have a voicemail system that emails you transcripts of voicemails, you can mark them both as read in one place, or possibly even have them linked automatically.)
  • Similarly, when you reply to a given message, you can decide how to do so. (Is it urgent? Are they online? Is it the middle of the night? What is your preferred method of communicating with this person?) Maybe even multiple linked channels.
  • Message threads can cross message domains (imagine if you reply to an email with a phone call and Contacts knows this and attaches the record of the call to the thread containing the emails, SMSs, iMessages, voicemails, and so on). Some of this would require cleverness (e.g. Apple owns iMessages, so it could do things like add subject threads to messages on the side, but SMSs are severely constrained and would lose their thread context).
  • Oh, and you can use the same transparent encryption implementation across whichever channels make sense.
  • Obviously some of these things won’t work with some message channels. E.g. you can’t do much with SMS because the messages aren’t big enough, but MMS, which is what most of us are actually using, works fine. Similarly, Visual Voicemail could support metadata, but doing it with legacy voicemail systems isn’t going to happen.

Consider for a moment how much rocket science was involved in getting Continuity to work on iOS and OS X devices. To begin with, it requires hardware that isn’t in older Macs and iOS devices. And what it does is pretty magical: I’m working on a Keynote presentation on one device, walk over to my Mac, and automagically I’m working on the same document in the same place. But how useful is this, really, and how often? Usually when I switch devices I am also switching tasks. Maybe that’s because I grew up in a world without Continuity.

Now consider how this stuff would require almost no rocket science and how often it would be useful.

The Second Amendment

A well regulated militia being necessary to the security of a free state, the right of the people to keep and bear arms shall not be infringed.

As a result of the sideshow over the Confederate flag that has replaced any substantive debate about racism and gun violence in the US (something had to, right?) I ended up having a bit of an argument with a pro-gun commenter on an Economist article suggesting that it may not [just] be guns that are the problem in the US.

This isn’t a particularly novel argument. People generally assume Bowling for Columbine is a standard left-wing anti-gun polemic, but at the end Michael Moore — a card-carrying member of the NRA — ends up discussing Canada, which is nearly as well-armed as the US and yet has a far lower homicide rate, and concludes that there’s something paranoid at the heart of American culture that may be the real problem. Well, this blog post isn’t about the flaws in American culture — it’s about the right to bear arms.

Anyhow, my anonymous adversary argued that the point of the second amendment is that it has kept the US safe from the kind of ethnic cleansing and other large-scale atrocities that afflicted Europe and Asia during the 20th century. In other words, for the enormous benefit of not occasionally suffering large-scale ethnic cleansing, we pay the price of a high murder and suicide rate. “A well regulated militia” is meant to be understood as “the regulation of the militia by civilians”.

OK, I get it. My adversary is right. The NRA is right. The right-wing militias are right. The purpose of the second amendment is to allow us to regulate the militia — i.e. to overthrow the government so as to maintain our “free state”. Their interpretation is correct.

My adversary is wrong, I think, on his history.

The US has had plenty of opportunities for unjust government or corporate actions to be prevented by the armed populace — consider Douglas MacArthur’s use of cavalry and tanks against the Bonus Marchers (unemployed veterans, no less!), or the Battle of Blair Mountain (John Sayles’s movie Matewan depicts the prelude to it). Oh yeah, and slavery. Where is there an example of real government excess being prevented by the right to bear arms? There are plenty of examples of government excess being resisted by the right to bear arms, the largest and most depressing being the resistance of some American Indians to the government; others (such as Ruby Ridge and Waco) were simply unsuccessful.

Perhaps the best example in favor of this argument is the Little Bighorn (and that victory was Pyrrhic).

And if you believe that the Confederacy was right, then that’s the largest example of the populace (including a large proportion of the military) being unable to prevent government overreach, no?

On the other hand, Mahatma Gandhi defeated a superpower without using weapons. And when the injustices that were not prevented by the right to bear arms were mitigated (Congress paid the Bonus Army, Roosevelt allowed the miners to unionize), it wasn’t the right to bear arms that made it happen.

It seems quite clear that given the intent of the amendment, we should have the right to nuclear submarines, tanks, nerve gas, atomic warheads, and so forth. After all, how can we credibly regulate the militia with semi-automatic rifles, shotguns, and handguns? The disparity in power between the government’s forces — military and paramilitary — and ordinary citizens has never been greater and shows no sign of narrowing. Even when the gap was considerably smaller, the second amendment proved of little use in preventing horrible injustices. The only real conclusion is that we need to abolish the second amendment — it costs us too much and gives us nothing.