Apple Music… Woe Woe Woe

Meryn Cadell’s “Sweater” — I particularly recommend “Flight Attendant” and “Job Application” but they don’t have video clips that I can find.

We’ve been an Apple Music family pretty much from day one. I used to spend a lot of time and money in stores shopping for albums. Now I have access to pretty much everything there is (including comedy albums) for the price of a CD per month.

I love the fact that my kids can just play any music they like and aren’t forced to filter down their tastes to whatever is in our CD collection, or their friends like, or what’s on commercial radio (not that we listen to commercial radio).

I also love the fact that when I get a new Apple device it just effortlessly has everything in my library on the device the next day.

But…

As I said, I used to spend quite a bit of time buying music. So I have some unusual stuff. Also, I lived in Australia for a long time, so I have a lot of stuff that isn’t available in the US or is available in subtly different form in the US. Similarly, my wife is a huge David Bowie fan and has some hard-to-get Bowie albums, e.g. Japanese and British imports.

We’ve both been ripping CDs for a long time, and in 2012 we ripped everything we hadn’t already ripped as part of packing for a move.

So now we have music that isn’t quite recognized by iTunes. To some extent it gets synced across our devices, probably via a process that went like this:

  1. (Before Apple Music existed) explicitly send music to iPhone
  2. When we get new phone, restore phone from backup on Mac.
  3. (Later) Restore phone from cloud backup.
  4. (Apple Music arrives) Hey, as a service we’ll look through your music library, match each track to what we think it is in the iTunes catalog, and rather than waste backup space we’ll simply give you copies of our (superior!?) versions. Oh, and they’re DRMed, because we need to disable the tracks if you stop paying the subscription fee.

Now this mostly works swimmingly. But sometimes we encounter one of three failure modes:

  • You have something Apple Music doesn’t recognize
  • You have something Apple Music misrecognizes (this happened in the old days, when the hacky way iTunes identified tracks, using the CDDB et al, would identify an album incorrectly and misname it and all your tracks; but you could fix it and rename them manually)
  • You have something Apple Music recognizes correctly as something it doesn’t sell in your current region, and disables your ability to play it!

The first failure means that Apple may be able to restore the track from a backup of a device that had it, but it won’t restore it otherwise. So you have to find a machine with the original (non-DRMed) file and fix it.

The second failure means that Apple’s Music (iTunes on a Mac) application will start playing random crap. If you’re lucky you can find the correct thing in Apple Music and just play that instead, but now you’re stuck with Apple’s DRM and may even end up losing track of or deleting your (non-DRMed) version.

The third mode is particularly pernicious. I have a fantastic album by Canadian performance artist Meryn Cadell (Angel Food for Thought) that I bought after hearing a couple of the tracks played on Phillip Adams’ “Late Night Live” radio program many years ago. I freaking love that album. For years, Apple would sync the album from device to device because it had no freaking clue what it was…

But sometime recently, Apple added Meryn Cadell’s stuff to Apple Music. As far as I can tell, it has added a second album (that I didn’t know existed) to the Apple Music US region, but not Angel Food for Thought. So it knows that Angel Food for Thought exists but it won’t let me play it.

Now, I happen to know where my backups are. I fired up iTunes on my old Mac Pro and there’s Angel Food for Thought. It plays just fine. Then I turned on “sync to cloud” and all the tracks got disabled. It’s magical, but not in a good way.

This is ongoing… I will report further if anything changes.

Update

After escalation, I’ve been told iTunes is working as intended. So, basically, it will play music it can’t identify or that is DRM-free and already installed, but what it won’t do is download and play music from iTunes that (it thinks) matches music that (it thinks) you have if it doesn’t have the rights to that music in your jurisdiction.

So, I have the album “Angel Food for Thought” which iTunes used not to know about, so it just worked. But, now iTunes knows that it exists BUT it doesn’t have US distribution rights, so it won’t propagate copies of “Angel Food for Thought” to my new devices (but it won’t stop me from manually installing them). Super annoying, but not actively harmful.

It does seem to mean that there’s a market for something that lets you put all your own music in iCloud and stream it back to you.

Blender 2.8 is coming

Unbiased rendering? Check. Realtime rendering? Check. Unified shader model? Check. Class-leading user interface? Check. Free, open source, small? Check. Blender 2.8 offers everything you want in a 3d package and nothing you don’t (dongles, copy protection, ridiculous prices, massive hardware requirements).

There aren’t many pieces of open source software that have been under continuous active development for twenty years without going through a single “major version change”. When I started using Blender in the early 2000s, it was version 2.3-something. In the last year it’s been progressing from 2.79 to 2.8 (I think technically the current “release” version is 2.79b, b as in the third 2.79 release, not beta).

What brought me to blender was a programming contract for an updated application which, in my opinion, needed an icon. I modeled a forklift for the icon in Silo 3D (which introduced me to “box-modeling”) but needed a renderer, and none of my very expensive 3d software (I owned licenses for 3ds max, ElectricImage, and Strata StudioPro, among other things) ran on my then-current hardware. Blender’s renderer even supported motion blur (kind of).

The blender I started using had a capable renderer that was comparatively slow and hard to configure, deep but incomprehensible functionality, and a user interface that was so bad I ended up ranting about it on the blender forums and got so much hatred in response that I gave up being part of the community. I’ve also blogged pretty extensively about my issues with blender’s user interface over the years. Below is a sampling…

Blender now features not one, not two, but three renderers. (And it supports the addition of more renderers via a plugin architecture.) The old internal ray-tracing renderer is gone, replaced by a fast OpenGL viewport engine (Workbench), which is accompanied by a real-time game-engine style shader-based renderer (Eevee) and a GPU-accelerated unbiased (physically-based) renderer (Cycles). All three are fully integrated into the editor view, meaning you can see the effects of lighting and procedural material changes interactively.

The PBR revolution has slowly brought us to a reasonably uniform conceptualization of what a 3d “shader” should look like. Blender manages to encapsulate all of this into one extremely versatile shader, the Principled BSDF (although it may not be the most efficient option, especially for realtime applications).

Eevee and Cycles also share the same shader architecture (Workbench does not) meaning that you can use the exact same shaders for both realtime purposes (such as games) and “hero renders”.

Blender 2.8 takes blender from — as of say Blender 2.4 — having one of the worst user interfaces of any general-purpose 3D suite, to having arguably the best.

The most obvious changes in Blender 2.8 are in the user interface. The simplification, reorganization, and decluttering that has been underway for the last five or so years has culminated in a user interface that is bordering on elegant — e.g. providing a collection of reasonably simple views that are task-focused yet not modal — while still letting you instantly find any tool by searching (now command-F for find instead of space by default; I kind of miss space). Left-click to select is now the default and a first-class citizen in the user interface (complaining about Blender’s right-click-to-select, left-click-to-move-the-“cursor”-and-screw-yourself behavior is literally what got me chased off Blender’s forums in 2005).

Blender still uses custom file requesters that are simply worse in every possible way than the ones the host OS provides. Similarly, but less annoyingly, Blender uses a custom in-window menu bar, which means it simply wastes a lot of screen real estate when not used in full-screen mode.

OK so the “globe” means “world” and the other “globe” means “shader”…

Blender relies a lot on icons to reduce the space required for the — still — enormous numbers of tabs and options, and it’s pretty hard to figure out what is supposed to mean what (e.g. the “globe with a couple of dots” icon refers to scene settings while the nearly identical “globe” icon refers to materials — um, what?). The instant search tool is great but doesn’t have any support for obvious synonyms, so you need to know that it’s a “sphere” and not a “ball”, a “cube” and not a “box”, and that while you “snap” the cursor, you “align” objects and cameras.

Finally, Blender can still be cluttered and confusing. Some parts of the UI are visually unstable (i.e. things disappear or appear based on settings picked elsewhere, and it may not be obvious why). Some of the tools have funky workflows (e.g. several common tools only spawn a helpful floating dialog AFTER you’ve done something with the mouse that you probably didn’t want to do) and a lot of keyboard shortcuts seem to be designed for Linux users (ctrl used where command would make more sense).

The blender 2.8 documentation is pretty good but also incomplete. E.g. I couldn’t find any documentation of particle systems in the new 2.8 documentation. There are plenty of websites with documentation or tutorials on blender’s particle systems, but which variant of the user interface they pertain to is pretty much luck-of-the-draw (and blender’s UI is in constant evolution).

Expecting a 3D program with 20 years of development history and a ludicrously wide-and-deep set of functionality to be learnable by clicking around is pretty unreasonable. That said, blender 2.8 comes close, generally having excellent tooltips everywhere. “Find” will quickly find you the tool you want — most of the time — and tell you its keyboard shortcut — if any — but won’t tell you where to find it in the UI. I am pretty unreasonable, but even compared to Cheetah 3D, Silo, or 3ds max (the most usable 3D programs I have previously used) I now think Blender more than holds its own in terms of learnability and ease-of-use relative to functionality.

Performance-wise, Cycles produces pretty snappy previews despite, at least for the moment, not being able to utilize the Nvidia GPU on my MBP. If you use Cycles in previews expect your laptop to run pretty damn hot. (I can’t remember which, if any, versions of Blender did, and I haven’t tried it out on either the 2013 Mac Pro/D500 or the 2012 Mac Pro/1070 we have lying around the house because that would involve sitting at a desk…)

Cranked up, Eevee is able to render well beyond the requirements for broadcast animated TV shows. This frame was rendered on my laptop at 1080p in about 15s. Literally no effort has been made to make the scene efficient (there’s a big box of volumetric fog containing the whole scene, with a spotlight illuminating a bunch of high polygon models with subsurface scattering and screenspace reflections).

Perhaps the most delightful feature of blender 2.8 though is Eevee, the new OpenGL-based renderer, which spans the gamut from nearly-fast-enough-for-games to definitely-good-enough-for-Netflix TV show rendering, all in either real time or near realtime. Not only does it use the same shader model as Cycles (the PBR renderer) but, to my eye, for most purposes it produces nicer results and it does so much, much faster than Cycles does.

Blender 2.8, now in late beta, is a masterpiece. If you have any interest in 3d software, even or especially if you’ve tried blender in the past and hated it, you owe it to yourself to give it another chance. Blender has somehow gone from having a user interface that only someone with Stockholm Syndrome could love to an arguably class-leading user interface. The fact that it’s an open source project, largely built by volunteers, and competing in a field of competitors with, generally, poor or at best quirky user interfaces, makes this something of a software miracle.

Eon and π

One important spoiler! (Again, it’s a great book. Go read it.)

Another of my favorite SF books from the 80s and 90s is Greg Bear’s Eon. At one point it seemed to me that Greg Bear was looking through the SF pantheon for books with great ideas that never really went anywhere and started doing rewrites where he took some amazing concept off the shelf and wrote a story around it. Eon seemed to me to be taking the wonder and promise of Rendezvous with Rama and going beyond it in all respects.

One of the interesting things about Eon is that it was a book with a lot of Russian — or more accurately Soviet — characters written in the pre-Glasnost era. For those of you not familiar with the Cold War, our relationship with the USSR went through a rapid evolution. From the early 70s, we signed a bunch of arms control agreements, fell in love with Eastern bloc gymnasts, and things seemed to be improving. Then came the Soviet invasion of Afghanistan and the so-called “Star Wars” missile defense program, when things got markedly worse, followed by the death of Leonid Brezhnev and a quick succession of guys who were dying when they got to power, and then the appearance of Mikhail Gorbachev, who — with the help of Reagan and Thatcher — reduced tensions and eventually made the peaceful end of the Cold War possible in, shall we say, 1989.

Eon was published in 1985, which means it was probably written a year or two earlier — at the height of US-Soviet tensions — and it portrays both the Americans and their Russian counterparts as more doctrinaire, paranoid, and xenophobic than the reader is likely to be. Set around 2005, the story depicts an Earth significantly more advanced technologically than we now know it was at the time, but a lot of the timeline is ridiculously optimistic (and there are throwaway comments such as the US orbital missile defense platforms having been far more effective than anyone expected in the limited nuclear exchange of the 90s).

When I first read Eon, I can remember being on the cusp of giving up on it as the politics seemed so right-wing. The Russians were bad guys and the Americans were kind of idiotically blinkered. I made allowances for the fact that the setting included a historical nuclear exchange between Russia and the US which certainly would justify bad feelings, but which itself was predicated on the US and Russians being a lot more bone-headed than the real US and Russians seemed to be.

I should note that I read Eon shortly after it was published, and Gorky Park had been published five years earlier and adapted as a movie in 1983. So it’s not like there weren’t far more nuanced portrayals of Soviet citizens in mainstream popular culture despite increasing Cold War tensions and the invasion of Afghanistan.

The Wikipedia entry for Eon is pretty sparse, but claims that:

the concept of parallel universes, alternate timelines and the manipulation of space-time itself are major themes in the latter half of the novel.

Note: I don’t really think books have “themes”. I think it’s a post-hoc construction by literary critics that some writers (and film makers) have been fooled by.

It’s surprising to me then that something Greg Bear makes explicit and obvious not long into the novel is that the entire story takes place in an alternate universe. He actually compromises the science in what is a pretty hard SF novel to make the point clear: Patricia Vasquez (the main protagonist) is a mathematician tasked with understanding the physics of the seventh chamber. To do this she has technicians make her a “multimeter” that measures various mathematical and physical constants, making it possible to detect distortions in space-time — boundaries between universes. Upon receiving it, she immediately checks to see that it is working and looks at the value of π, which is shown to be wrong.

Anyone with a solid background in Physics or Math will tell you that changing the value of π is just not possible without breaking Math. (The prominence of π in Carl Sagan’s Contact is slightly less annoying, since it is taken to be a message from The Creator. The implication being that The Creator exists outside Math, which is more mind-boggling than living outside Time, say.) It’s far more conceivable to mess with measured and — seemingly — arbitrary constants such as the permittivity of free space, the Planck constant, gravity, the charge on the electron, and so forth, and some of these are mentioned. But most people don’t know them to eight or ten places by heart and lack a ubiquitous device that will display them, so (I assume) Bear chose π.

My point is, once it’s clear and explicit that Eon is set in an alternate universe the question switches from “is Bear some kind of right-wing nut-job like so many otherwise excellent SF writers” (which he doesn’t seem to be, based on his other novels) to “does the universe he is describing make internal sense”? It also, I suspect, makes it harder to turn this novel into a commercially successful TV series or movie. Which is a damn shame.

Use of Weapons, Revisited

I just finished listening to Use of Weapons. I first read it shortly after it was published, and it remains my favorite book — well, maybe second to Excession — by Iain M. Banks (who is sorely missed), and one of my favorite SF novels ever.

Spoilers. Please, if you haven’t, go read the book.

First of all, after it finished and I had relistened to the last couple of chapters just to get them straight in my head, I immediately went looking for any essays about the end, and found this very nice one. What follows was intended to be a comment on this post, but WordPress.com wouldn’t cooperate so I’m posting it here.

I’d like to add my own thoughts, which run a little counter to the referenced post’s wholly negative take on Elethiomel. First, he never tries to blurt out a justification for his actions to Livueta, despite many opportunities. Even in his own internal monologues he never tries to justify his actions. Similarly, if anything Livueta remembers him more fondly than he remembers himself (at least before the chair). If he’s a psychopath, he’s remarkably wracked by conscience.

In an earlier flashback he wonders what it is he wants from her, and considers and (if I recall correctly) rejects forgiveness.

If we read between the lines, we might conclude that the regime to which the Zakalwes are loyal is actually pretty horrible. The strong implication is that Elethiomel’s family narrowly escapes annihilation only owing to their being sheltered by the Zakalwes. It has the feel of Tsarist Russia about it.

Elethiomel, for all his negative qualities, seems naturally attracted to the nicer side in every scrap he ends up in. When he freelances in the early flashbacks, he’s not doing anything public; he’s quietly and secretly using his wealth and power to (crudely) attempt to do the kinds of things the Culture does.

The book is full of symmetries. If you’d like one more, the Zakalwes are “nice” people loyal to a terrible regime, whereas Elethiomel is a ruthless bastard who works for good, or at least less terrible, regimes.

So it’s perfectly possible that the rebellion he led was in fact very much a heroic and well-intentioned thing, but at the end, when it was doomed, he fell victim to his two great weaknesses — the unwillingness to back down from an untenable position (if he looks like he’s losing, he simply keeps on fighting to the bitter end) and his willingness to use ANYTHING as a weapon no matter how terrible. I think it’s perfectly possible that he did not kill Darkense, but was willing to use her corpse as a weapon because it gave him one more roll of the dice. What did he want to say to Livueta, after all?

I further submit that his final unwillingness to perform the decapitation attack in his last mission shows that he has actually learned something at long last. And this is the thing in him that has changed and caused him to start screwing up (from Special Circumstances’ point of view) in missions since he was, himself, literally decapitated.

b8r v2

b8r v2 is going to adopt web-components over components and import over require

Having developed b8r in a period of about two weeks after leaving Facebook (it was essentially a distillation / brain-dump of all the opinions I had formed about writing front-end code while working there), I’ve just spent about two years building a pretty solid social media application using it.

But, times change. I’m done with social media, and the question is, whither b8r?

In the last few weeks of my previous job, one of my colleagues paid me a huge compliment: he said I’d “really hit it out of the park” with b8r. We’d just rebuilt half the user interface of our application, replacing some views completely and significantly redesigning others, in two weeks, with one of the three front-end coders on leave.

I designed b8r to afford writing complex applications quickly, turn hacky code into solid code without major disruption, make maintenance and debugging as easy as possible, and to allow new programmers to quickly grok existing code. As far as I can tell, it seems to do the trick. But it’s not perfect.

Two significant things have emerged in the front-end world in the last few years: import and web-components (custom DOM elements in particular).

import

When I wrote b8r, I found the various implementations of require to be so annoying (and mutually incompatible) I wrote my own, and it represents quite an investment in effort since then (e.g. I ended up writing an entire packaging system because of it).

Switching to import seems like a no-brainer, even if it won’t be painless (for various reasons, import is pretty inimical to CommonJS and not many third-party libraries are import-friendly, and it appears to be impossible for a given javascript file to be compatible with both import and require).

I experimentally converted all of the b8r core over to using import in an afternoon — enough that it could pass all the unit tests, although it couldn’t load the documentation system because any component that uses require or require.lazy won’t work.

Which got me to thinking about…

web-components

I’ve been struggling with improving b8r’s component architecture. The most important thing I wanted was for components to naturally provide a controller (in effect, for components to be instances of a class), and pave some good paths and wall up some bad ones. But, after several abortive attempts and then thinking about the switch from require to import, I’ve decided to double-down on web-components. The great thing about web-components is that they have all the virtues I want from v2 components and absolutely no dependency on b8r.

I’ve already added a convenience library called web-components.js. You can check it out along with a few sample components. The library makes it relatively easy to implement custom DOM elements, and provides an economical javascript idiom for creating DOM elements that doesn’t involve JSX or other atrocities.

Using this library you can write code like this (this code generates the internals of one of the example components):

fragment(
  div({classes: ['selection']}),
  div({content: '▾', classes: ['indicator']}),
  div({classes: ['menu'], content: slot()}),
)

I think it’s competitive with JSX while not having any of the dependencies (such as requiring a transpile cycle for starters).

<div className={'selection'}></div>
<div className={'indicator'}>▾</div>
<div className={'menu'}>
  {...props.children}
</div>
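For the curious, the element-builder idiom can be approximated in a handful of lines of plain JavaScript. The sketch below is my own illustration, not the actual web-components.js implementation; to keep it runnable outside a browser it builds a plain-object tree rather than calling document.createElement, which is presumably what the real library does.

```javascript
// Hypothetical sketch of an element-builder idiom (NOT the real
// web-components.js). Builders accept child elements, text, or an
// options object with `classes` and `content`, and return a plain
// object describing the node.
const makeBuilder = (tag) => (...args) => {
  const node = { tag, classes: [], children: [] };
  for (const arg of args) {
    if (arg && arg.tag) {
      node.children.push(arg);                 // a child element
    } else if (arg && typeof arg === 'object') {
      if (arg.classes) node.classes.push(...arg.classes);
      if (arg.content !== undefined) node.children.push(arg.content);
    } else {
      node.children.push(arg);                 // text content
    }
  }
  return node;
};

const div = makeBuilder('div');
const slot = makeBuilder('slot');
const fragment = (...children) => ({ tag: 'fragment', classes: [], children });

// Mirrors the example above:
const menu = fragment(
  div({classes: ['selection']}),
  div({content: '▾', classes: ['indicator']}),
  div({classes: ['menu'], content: slot()}),
);
```

The appeal of the idiom is that it’s just function calls: no transpile step, and children and attributes compose with ordinary JavaScript (map, spread, conditionals) instead of a template dialect.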

To see just how lean a component implemented using this library can be, you can compare the new switch component to the old CSS-based version.

Aside — An Interesting Advantage of Web Components

One of the interesting properties of web-components is that internally the only part of the DOM they need to care about is whether they have a child <slot>. Web-components don’t need to use the DOM at all except for purposes of managing hierarchical relationships. (Annoyingly, web-components cannot be self-closing tags. You can’t even explicitly self-close them.)

E.g. imagine a web-component that creates a WebGL context and child components that render into container’s scene description.

In several cases while writing b8r examples I really wanted to be able to have abstract components (e.g. the asteroids in the asteroids example or the character model in the threejs example). This is something that can easily be done with web-components but is impossible with b8r’s HTML-centric components. It would be perfectly viable to build a component library that renders itself as WebGL or SVG.

Styling Custom Elements for Fun and Profit

One of the open questions about custom DOM elements is how to allow them to be styled. So far, I’ve seen one article suggesting subclassing, which seems to me like a Bad Idea.

I’m currently leaning towards one or both of (a) making widgets as generic and granular as possible (e.g. implement a custom <select> and a custom <option> and let them be styled from “outside”) and (b) when necessary, driving styles via CSS variables (e.g. you might have a widget named foo that has a border, and give it a generic name (--widget-border-color), a specific name (--foo-border-color), and a default to fall back to).
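The fallback chain in (b) maps directly onto CSS custom properties, since var() takes a default as its second argument and those defaults can nest. The snippet below is a hypothetical sketch (the foo-widget element and the color values are made up for illustration):

```css
/* Hypothetical <foo-widget>: the specific variable wins, then the
   generic one, then a hard-coded default. */
foo-widget {
  border: 1px solid var(--foo-border-color, var(--widget-border-color, #ccc));
}

/* Re-skin every widget at once... */
:root {
  --widget-border-color: rebeccapurple;
}

/* ...or just this widget, in one context. */
.fancy foo-widget {
  --foo-border-color: orange;
}
```

Because custom properties inherit through the shadow boundary, this lets consumers style a component’s internals without the component exposing its markup.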

So, in essence, b8r v2 will be smaller and simpler — because it’s going to be b8r minus require and components. You won’t need components, because you’ll have web-components. You won’t need require because you’ll have import. I also plan one more significant change in b8r v2 — it will be a proper node package, so you can manage it with npm and yarn and so forth.

<b8r-bindery>

One idea that I’m toying with is to make b8r itself “just a component”. Basically, you’d get a b8r component that you simply stick anywhere in the DOM and you’d get b8r’s core functionality.

In essence the bindery’s value — presumably an object — becomes accessible (via paths) to all descendants, and the component handles all events the usual way.

I’m also toying with the idea of supporting Redux (rather than b8r’s finer-grained two-way bindings). There’s probably not much to do here — just get Redux to populate a bindery and then instead of the tedious passing of objects from parent-to-child-to-grandchild that characterizes React-Redux coding you can simply bind to paths and get on with your life.
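To make the idea concrete, here’s a rough sketch. The createStore below is a tiny stand-in for Redux’s (just getState/dispatch/subscribe), and bindery is a plain object standing in for whatever b8r would actually expose; none of this is real b8r or Redux code.

```javascript
// Minimal stand-in for Redux's createStore: enough API to illustrate
// mirroring store state into a hypothetical b8r "bindery".
const createStore = (reducer, state) => {
  const listeners = [];
  return {
    getState: () => state,
    dispatch: (action) => {
      state = reducer(state, action);
      listeners.forEach((fn) => fn());
    },
    subscribe: (fn) => listeners.push(fn),
  };
};

const reducer = (state = { count: 0 }, action) =>
  action.type === 'increment' ? { ...state, count: state.count + 1 } : state;

const store = createStore(reducer, { count: 0 });

// Hypothetical bindery: a plain object views would bind to by path,
// e.g. data-bind="text=bindery.value.count", instead of prop-drilling.
const bindery = { value: store.getState() };
store.subscribe(() => {
  bindery.value = store.getState();
});

store.dispatch({ type: 'increment' });
// bindery.value.count is now 1; anything bound to that path would update.
```

The point is that the glue is a one-line subscription: Redux remains the source of truth, while components address state by path rather than receiving it through layers of parents.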

Summing Up

After two years, I’m still pretty happy with b8r. Over the next year or so I hope to make it more interoperable (“just another package”), and to migrate it from (its) require and HTML components to import and web-components. Presumably, we’ll have import working in node (and hence Electron and nwjs) by then.

Happy new year!