The Myth of the $500 FX Sensor

Bubble defects in a silicon wafer — SEM image

Disclaimer: I am not an electrical engineer and have no special knowledge about any of this.

Some time ago Thom Hogan estimated the cost of an FX camera sensor to be around $500 (I don't have the reference, but I'm pretty sure this is true since he said as much recently in a comment thread). Similarly, E. J. Peltzer, who is an electrical engineer, estimated an FX sensor to cost around $385 based on industry standard cost and defect rates in 2006. So it seems like there's this general acceptance of the idea that an FX sensor costs more than 10x what a DX sensor costs (Peltzer estimates $34 for a Canon APS sensor, which is slightly smaller than DX, and $385 for a 5D sensor).

My assumptions can be dramatically off but the result will be the same.

E.J. Peltzer

I don't mean to be mean to Peltzer. It's a great and very useful article. I just think the problem isn't that the assumptions he knows he's making are off; it's that he's made tacit assumptions, ones he doesn't realize he's making, that are completely and utterly wrong.

The assumption is that if you get an 80% yield making DX sensors then you'll get a 64% (80% squared) yield from FX sensors (let's ignore the fact that you'll get slightly fewer than half as many possible FX sensors from a wafer owing to fitting rectangles into circles).
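
Peltzer's article doesn't spell out his yield model, but the squaring behaviour is what the standard area-scaling (Poisson) model predicts. Here's a minimal sketch of that model; the defect density is back-derived from the 80% DX figure and the sensor areas are my own round numbers:

```python
import math

def poisson_yield(defects_per_cm2, die_area_cm2):
    """Classic Poisson yield model: Y = exp(-D * A)."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

DX_AREA = 3.7   # cm^2 (~23.5mm x 15.6mm)
FX_AREA = 8.6   # cm^2 (~36mm x 24mm)

# Back out the defect density that would give an 80% yield on a DX-sized die,
# then see what the same density implies for an FX-sized die.
d = -math.log(0.80) / DX_AREA
print(f"DX yield: {poisson_yield(d, DX_AREA):.0%}")   # 80% by construction
print(f"FX yield: {poisson_yield(d, FX_AREA):.0%}")   # ~59%, roughly 80% squared
```

(An FX die is actually about 2.3x the area of a DX die, so the model gives something slightly worse than a straight squaring, but the shape of the argument is the same.)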

Here are Peltzer’s “unknown unknowns”:

Sensors are fault-tolerant, CPUs aren’t

First, Peltzer assumes that a defect destroys a sensor. In fact if all the defect is doing is messing up a sensel then the camera company doesn’t care – it finds the bad sensel during QA, stores its location in firmware, and interpolates around it when capturing the image. How do we know? They tell us they do this. Whoa — you might say — I totally notice bad pixels on my HD monitors, I would totally notice bad pixels when I pixel peep my 36MP RAW files. Nope, you wouldn’t because the camera writes interpolated data into the RAW file and unless you shoot ridiculously detailed test charts and examine the images pixel by pixel or perform statistical analysis of large numbers of images you’ll never find the interpolated pixels. In any event (per the same linked article) camera sensors acquire more bad sensels as they age, and no-one seems to mind too much.
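
For what it's worth, here's a toy illustration of that remapping idea. This is not what any real camera firmware does; the bad-sensel detection and the plain neighbour averaging are gross simplifications (a real sensor would, for a start, interpolate from same-colour neighbours in the Bayer mosaic):

```python
import numpy as np

def map_bad_sensels(flat_frame, threshold=0.5):
    """QA step: flag sensels whose response deviates wildly from a flat-field exposure."""
    expected = np.median(flat_frame)
    return np.argwhere(np.abs(flat_frame - expected) > threshold * expected)

def interpolate_bad_sensels(raw, bad_locations):
    """Capture step: overwrite each flagged sensel with the mean of its neighbours."""
    fixed = raw.astype(float).copy()
    for y, x in bad_locations:
        ys = slice(max(y - 1, 0), min(y + 2, raw.shape[0]))
        xs = slice(max(x - 1, 0), min(x + 2, raw.shape[1]))
        patch = raw[ys, xs].astype(float)
        fixed[y, x] = (patch.sum() - raw[y, x]) / (patch.size - 1)
    return fixed

# Tiny demo: one hot and one dead sensel in an otherwise uniform frame.
frame = np.full((6, 6), 100.0)
frame[2, 3] = 4000.0   # hot sensel
frame[4, 1] = 0.0      # dead sensel
bad = map_bad_sensels(frame)
print(interpolate_bad_sensels(frame, bad))  # both defects come back as ~100
```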

Sensor feature sizes are huge, so most “defects” won’t affect them

Next, Peltzer also assumes industry standard defect rates. But industry standard defect rates are for things like CPUs — which usually have very small features and cannot recover from even a single defect. The problem with this assumption is that the vast majority of a camera sensor comprises sensels and the wires hooking them up. Each sensel in a 24MP FX sensor is roughly 4,000nm across, and the supporting wiring is maybe 500nm across, with 500nm spacing — which is over 17x the minimum feature size for 28nm process wafers. If you look at what a defect in a silicon wafer actually is, it's a slight smearing of a circuit, usually around the process size — if your feature size is 17x the process size, the defect rate will be vanishingly close to zero. So the only defects that affect a camera sensor will either be improbably huge or (more likely) will fall in one of the areas with delicate supporting logic (i.e. a tiny proportion of any given camera sensor). Even if the supporting logic were similar in size to a CPU (which it isn't), the yield rate would simply be in line with CPU yields, i.e. much higher than Peltzer assumes.

This eliminates the whole diminishing yield argument (in fact, counter-intuitively, yield rates should be higher for larger sensors since their feature size is bigger and the proportion of the sensor given over to supporting logic is smaller).
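
To put rough numbers on that: if only defects landing in the fine-pitch supporting logic can actually kill a die (defects in the sensel array just become mapped-out pixels), the effective yield depends on the critical fraction of the die rather than its total area. A minimal sketch, reusing the defect density from the earlier sketch, with the critical fractions invented purely for illustration:

```python
import math

def yield_with_critical_area(defects_per_cm2, die_area_cm2, critical_fraction):
    """Only defects landing in the fine-pitch 'critical' area are fatal;
    defects in the sensel array are mapped out in firmware instead."""
    return math.exp(-defects_per_cm2 * die_area_cm2 * critical_fraction)

D = 0.06        # defects/cm^2, the same round number that gave ~80% naive DX yield
FX_AREA = 8.6   # cm^2

for frac in (1.00, 0.10, 0.02):
    y = yield_with_critical_area(D, FX_AREA, frac)
    print(f"critical fraction {frac:4.0%}: FX yield ~{y:.0%}")
# 100% -> ~60%, 10% -> ~95%, 2% -> ~99%
```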

(Note: there's one issue here that I should mention. Defects are three-dimensional, and the thickness of features is going to be constant. This may make yields more problematic for sensors with more three-dimensional structures, e.g. BSI sensors. Thom Hogan recently suggested — I don't know if he has inside information — that Sony's new (i.e. BSI) FX sensors are turning out to have far lower yields — and thus far higher costs — than expected.)

Bottom Line

To sum up — an FX sensor would cost no more than slightly over double what a DX sensor costs (defect rates are the same or lower, but you can fit slightly fewer than half as many sensors onto a wafer owing to geometry). So if a DX sensor costs $34, an FX sensor should cost no more than $70.

AppleTV

Yesterday, 9to5Mac noticed that Apple had rejigged its online store so as to position AppleTV as a product category. Also interestingly, Lee Clow has apparently hinted that, for the first time since 1984, Apple may be airing a Super Bowl spot. And then during Apple's first quarter earnings call, Tim Cook foreshadowed new product categories for 2014. We've also had rumors of Apple cutting content deals over the last year that never turned into announcements.

It seems pretty clear that one new product category is going to be AppleTV. And here’s where things get really interesting.

Facts

  • Apple’s online store now treats AppleTV as a product category rather than an accessory.
  • Apple is not currently selling an Apple-branded 4K display (the 4K displays it is selling are from Sharp)
  • Apple’s OS-level support for 4K displays is conspicuously poor (they need to be treated as Retina displays)
  • iOS now provides proper (API) support for bluetooth game controllers
  • The price for high quality 4K displays is about to drop well under $1000
  • The current AppleTV does not support 4K displays
  • The current AppleTV does not support 802.11ac

Opinions

  • The last crop of consoles (Xbox One, PS4, Wii U) had the most anemic rollout (in terms of launch titles) in recent memory
  • The way AppleTV’s remote app works is primitive compared to the way Chromecast can be “handed” a playback task (and Apple knows this)
  • AppleTV currently needs a system update in order to add a new content channel; the tools for managing “apps” in AppleTV are primitive to put it mildly
  • There is already an ecosystem of iOS-compatible controllers and iOS games supporting those controllers
  • 4K displays blur or even erase the line between monitors and TVs

Rumors

  • Apple has bought a Super Bowl spot
  • Nintendo has suggested it is looking at developing titles for mobile platforms
  • Apple has been negotiating content deals with major players (movie studios, etc.) but it has borne no visible fruit as yet

Predictions

  • Apple is at last going to release an AppleTV console (whether it’s called AppleTV or not remains to be seen)
    • It will have access to major new sources of content
    • It will have an App Store
    • It will support Bluetooth controllers
    • It will support the use of other iOS devices as controllers
    • It will be powered by the A7 or something more powerful
    • If it is powered by a new chip (e.g. “A7x”) it will support 4K (the A7 can drive 2K)
    • It will have a shockingly good set of launch titles (how else to explain the lackluster launch titles for all the other consoles?)
    • It will not have a tuner or Cablecard support or any other horrific kludge
    • It may introduce streaming video with ads for content from networks (effectively on-demand playback of licensed content with ads)
    • It will cost $199-399 (I’d predict $199, but Apple might actually sell a range of products with varying storage capacities)
    • The ghastly Apple Remote iOS app will be given a proper overhaul, and work in more of a peer-to-peer manner (and be able to hand off tasks to the AppleTV)
  • An even smaller $99 version which doesn’t play games might continue as AppleTV Nano or some such
  • We’re going to see extensive 4K support across Apple’s product lines over the next 12 months
  • We’re going to see Apple-branded 4K displays (“Retina HD” perhaps?) designed to work seamlessly with all this new stuff

Apple’s To-Do List

Mavericks (courtesy of Wikipedia)

Based on my own wishes, preconceptions, and random thoughts — here's what Apple might be doing tomorrow. My predictions (which are worth nothing) are in black.

  • TV
    • Apple-branded TV — Implausible. I think this is a stupid idea and doubt it’s in the offing, unless it’s an Apple-branded 4K Display. Apple could leverage its manufacturing, supply chain, and scale capabilities to do something amazing in the 4K space. If I’m wrong, I suspect it will be a big monitor you can use as a TV (after all, there’s pretty much no 4K TV content anyway).
    • AppleTV Console (“iOS TV”?) — Unlikely. I want one. I think this is a great idea, it would cut Microsoft and Sony off at the knees, but there’s no reason to think it will happen. Besides, it would probably hurt Microsoft and Sony even more if Apple waits until after Microsoft and Sony have shipped a few million razor blade handles as loss leaders.
    • AppleTV Refresh — Likely — because the current model lacks 802.11ac support.
  • Mac OS
    • Mavericks — certain
    • Macbook Pro with Haswell (and Iris Pro in 13″) — almost certain
    • Macbook Air Retina — improbable: would be nice, but seems unlikely (Macbook Air was just refreshed)
    • Mac Mini Haswell — 50/50: possibly a stealth update or bare mention
    • Mac Pro — almost certain.
      • Price — I think it needs to be cheaper than the existing Mac Pro, otherwise being a cute cylinder does not make up for lack of internal expansion.
      • Upgradeable? Can we swap new CPUs/GPUs into it?
      • External upgrade chassis (either from Apple or third parties)?
      • 4K displays?
      • Thunderbolt displays with built-in (upgradeable?!) GPU?
  • iOS
    • 10″ iPad (gen 5, A7X, retina, iPad Mini style case) — almost certain, except for the “X” part.
    • 8″ iPad (gen 2, A7/A7X, retina) — almost certain:  maybe it will be A6/A6X to keep the performance gap relative to the larger model. (Recall that the Mini shipped with the A5 versus the Gen 4 iPad’s A6X.)
    • Bigger iPad — unlikely.
    • Hybrid Device — implausible. (You can’t ship two new OSes and a new platform in quick succession, can you?)
    • iPod Touch Gen 6 (802.11ac, A6 CPU, iPhone 5c-style hardware) — likely: the fact that the current iPod Touch actually looks like a classier device than the iPhone 5c, and that it lacks 802.11ac support, makes me think this might be coming.
  • Miscellanea
    • iWatch? — seems highly unlikely
    • non-iOS iPod refreshes? — shrug. It would be interesting to see the Nano become a mini (non-Retina) iPod Touch.

Note that my guess is that as initially shipped, the new Mac Pro will not be an attractive product (too expensive and too non-upgradeable). I think this may change in the future (the necessary pieces of the puzzle may simply not be available yet) — it will be a bit like FCP X — howls of pain from the pro community followed by demands for the old boxes to stay on sale (if they get taken off the market), followed by a slow clawback of goodwill as Apple addresses the obvious shortcomings. On the other hand, maybe Apple learned something from FCP X.

My predictions are very conservative. I hope there’s at least one surprise.

A Little Privacy

Securing data from prying eyes is pretty much a solved problem. PGP is just as good as ever. So all you need to do to receive communications securely from another person is to create a PGP private/public key pair, broadcast your public key (hint — it's the shorter one) to anyone who might want to contact you, have senders encrypt their messages with that public key, and then decrypt incoming messages using your private key on the way in.
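
To make the flow concrete, here's a minimal sketch using RSA-OAEP from Python's cryptography package as a stand-in for PGP; real PGP (GnuPG, say) layers hybrid encryption, armored key formats, and signatures on top of the same basic idea:

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# 1. You generate a key pair once and broadcast only the public half.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_pem = private_key.public_key().public_bytes(
    serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)

# 2. Anyone holding the public key can encrypt a message to you...
sender_key = serialization.load_pem_public_key(public_pem)
ciphertext = sender_key.encrypt(b"the peanut walks by night", oaep)

# 3. ...and only the private key you kept to yourself can read it.
print(private_key.decrypt(ciphertext, oaep))
```

It doesn't matter whether the ciphertext travels by email, usenet, or comment tags in a PNG: only the holder of the private key can recover the plaintext.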

This only addresses security. Authentication is a separate issue, possibly just as important, and if anything harder to address (because it involves trusting third parties), and I won't deal with it here. Privacy is plenty to deal with right now.

So we’ve heard that secure communications providers are shutting down or destroying their servers rather than surrender to demands from the US government (NSA, FBI, CIA? We don’t know which branch or branches because they’re not allowed to say — lovely, huh?). What demands might these service providers be concerned about?

  • Surrender private keys (why would they even have these?)
  • Install malware on their servers or on users’ machines (why would a secure email provider install any software on its users’ machines?)
  • Help surveil users (e.g. notify government agency when a specific user addresses his/her mail)
  • Monitor metadata (e.g. while the body of an email might be encrypted, the header information has to be plaintext).

Can you think of other things?

There's a recent thriller (you probably haven't heard of it — it tanked at the box office) starring John Cusack called The Numbers Station. The idea is that the CIA maintains a network of shortwave broadcast stations that send out encrypted messages to sleeper agents. To do this they need a specially trained cryptographer and a network of highly fortified shortwave transmitters. Or something. It's a stupid, stupid premise. (But not as bad as 2012.)

Let’s suppose we want to communicate with field agents securely. Well, before leaving HQ our field agent creates a private/public key pair and leaves the public key behind. He/she secretes the private key on his/her person (committing it to memory is probably impossible, so it might be in a tiny subcutaneous LED projector!) and then goes on his/her merry way, having told his/her handlers to post messages on usenet using his/her public key. There’s no other step required.

Now, how do we handle authentication? Hey, I said this wasn’t about authentication! In any event, same way we handle it using any other less secure communication channel. Perhaps authentic messages are agreed to end with “Signed Bob” or “The peanut walks by night”. Doesn’t matter — we’re talking about security not authentication.

How does Double Secret Agent VII find the publicly posted messages on usenet? Any number of ways. Perhaps they’re in messages entitled “but I like wesley” on alt.wesley.crusher.die.die.die. Perhaps they’re embedded in the comment tags of PNG images posted on alt.sex.donkeys. It doesn’t matter.

Heck, you could just use mailinator. Want to email Double Secret Agent VII? Send an email to [email protected] and use the correct key. Done.

The beauty of the usenet example is that thousands of people will be downloading the message accidentally as a matter of course, and the message will be automatically distributed to thousands of servers whether anyone reads it or not. I really don’t know how PRISM, et al, would help against a determined, competent opponent communicating this way. This is probably why PGP had the US Government so riled up back in the 90s.

So, what about losing track of Agent VII? Simple. You’re Control (or whatever). If a communications channel is compromised (e.g. Kaos figures out you’re posting messages as EXIF data in pornographic images and deletes them or posts confusing spam) then Agent VII can use the Control’s public key to phone home. It’s not complicated.

So, here’s my modest suggestion for creating a secure replacement for email that everyone can use, and which can be gradually migrated to.

  1. set up a standard mail server.
  2. configure it to bounce any email that appears not to be encrypted with PGP, returning a message along the lines of "if you want to contact [email protected] then use [email protected]'s public key to encrypt the message, and provide your own public key so a secure response can be sent", along with a link to a web page for securely composing such emails if the sender doesn't want to set up PGP themselves (a minimal sketch of this bounce check follows the list).
  3. outgoing emails are decorated with a public key for securely replying to the sender.
  4. account holders can have any number of handles ("email addresses") associated with a given public key. They can access their email simply by asking for it. (Either there are no passwords or everyone has the same password.)
  5. the server holds public keys so it can send the messages in item 2 (and provide a convenient system for sending the messages).
  6. Provide a simple to use web-based client for the service (which does all its encryption / decryption client-side) and provide links to a number of alternative open source clients. Make all the clients as transparent as possible.
  7. Provide a web-based client that deals only in encrypted data. (I.e. requires the user to manually extract and decrypt incoming messages, and encrypt outgoing messages.)
  8. Pay for all of this by charging a small amount (say $0.01) for each message sent to a user. (This is Bill Gates’s proposed solution to spam from way back, and if we’re going to migrate off email, we might as well cash in that idea.) Any profits could be donated to MSF, or the campaign to drown Jenny McCarthy in cat vomit.
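
Item 2 is the only mechanically interesting step, so here's a minimal sketch of that bounce check, assuming incoming mail is only accepted when it looks like an ASCII-armored PGP message. The address, URL, and bounce wording are illustrative, not part of any real service:

```python
ARMOR_HEADER = "-----BEGIN PGP MESSAGE-----"

def looks_encrypted(body: str) -> bool:
    # Crude test: a proper server would also check that the armored blob parses.
    return ARMOR_HEADER in body

def handle_incoming(recipient: str, body: str) -> str:
    if looks_encrypted(body):
        return "accepted"
    return (f"bounced: to contact {recipient}, encrypt your message with "
            f"{recipient}'s public key and include your own public key so a "
            f"secure reply can be sent, or compose it at https://example.com/send")

print(handle_incoming("[email protected]", "hi, plaintext here"))
print(handle_incoming("[email protected]",
                      ARMOR_HEADER + "\nhQEMA2x...\n-----END PGP MESSAGE-----"))
```

The point of the sketch is that the server can enforce the policy while never holding anything more sensitive than public keys.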

Now, practically speaking, we could use passwords simply to prevent nuisance denial of service attacks, but we’d have absolutely no problem giving those passwords to anyone who showed up to our office in a sufficiently impressive suit, or driving a big enough SUV.

So, this gives us a pretty secure email system that is fairly interoperable with existing email systems (modulo requiring users "outside" the system to opt into using it, at least to contact its users) and which doesn't hold any private information or keys at all. Heck, it can simply expose all of its data to Google. (Indeed, it could keep its code repositories exposed so that suspicious users could review changes to its codebase.) Now, it can't be used with idiotic services that send you your login details, but you can either use another email service (e.g. gmail or mailinator) for those or implement a cryptographic bridge (e.g. if you subscribe using an email address prefixed with "insecure-" then it might do the encryption server-side for you).

Note that as described, the system doesn’t conceal metadata. So if [email protected] sends [email protected] orders to assassinate that pesky reporter, the fact that such a communication occurred (if not its content) is stored on the server. Of course, you could use the web client to anonymously send and/or receive the message, and use Tor to avoid leaving too much of a trace of having done that, but it’s kind of inconvenient, so normal people won’t do it very often. A normal person wants an email client that Just Works (this can provide that) and to exchange email with other people (this can get you there).

The proposed system provides end-to-end encryption of message content without the server needing to store any private keys and would allow all key components of the system to run in the browser (and thus have openly inspectable runtime code that could be monitored for changes). But it won’t stop the NSA from hitting you with a $5 wrench until you tell them where you keep your private key.

Apple OS

I saw an interesting blog post saying that rumors are rife that iOS and OS X are going to be merged (now that the relevant engineering teams are both under one person). It seems pretty clear to me that Apple would have to plan not to have two OSes at some time in the future, and the options are:

  • Phase iOS out in favor of OSX
  • Merge iOS and OSX
  • Phase OSX out in favor of iOS

The first option seems laughable, although it does have the advantage that Apple already has it all working (OSX can emulate iOS pretty well and could presumably be engineered to run it perfectly, probably even natively). It's also hard to credit simply because Apple appears to be moving to use iOS across its product line (the new Nano is all but an iOS device, for example), and pushing the far heavier OSX onto those devices wouldn't help with this.

The second option, which is what the latest rumors suggest, seems like the most desirable option. The idea here would be that Cocoa and Cocoa Touch live on top of the same kernel, side-by-side, and run natively or possibly have a Rosetta-style ARM/x86 emulator sitting on the side to run older software (assuming Apple decides to pick one or the other CPU architecture for a given device — they could “easily” just stick some ARM CPUs in every Mac if they wanted to).

The third option has the advantage of delivering a simpler, lighter operating system in the long run, with the disadvantages of abandoning a huge amount of software, having to port or create from whole cloth entire slabs of functionality, and not actually gaining all that much in the process (after all, OSX is pretty lean thanks to almost merciless shedding of "legacy" functionality). Ultimately, the third option would probably be to OSX what OSX was to Mac OS — devices would boot into iOS and then load the OSX "compatibility box" only as needed, and eventually not at all.

A resulting "AppleOS" would have a complete OS on each side of the divide, allowing Apple to ship touch-only devices with pure touch interfaces, non-touch devices with pure keyboard/mouse interfaces, and hybrid devices, such as a "Macbook Flip", which can mix and match. Beefier devices can have all the software installed, while leaner devices can essentially just have the iOS components.

So, I predict that iPhone OS will subsume Mac OS X within three years. Obviously, it will long since have ceased being iPhone OS, of course. Hence, the title of this post.

That's me, in 2010, predicting the third option's inevitability. I guess I could point out that Steve Jobs died in the meantime, which may have slowed things down (one can only imagine that the process of merging the two operating systems caused significant internal tensions). I guess I've got another year before I'm wrong, but I still think it will happen by 2014. I would further argue that the accelerated pace of Mac OS X releases (which would deliver 10.10 in 2014) gels with this speculation.

But there's actually no real way to tell the latter two options apart — assuming iOS has been built with an eye towards eventual reunion, it's quite possible options 2/3 are working in the lab right now (just as NeXT had NeXTStep running on PowerPC hardware years before cutting the deal with Apple).