So, I came across this Dell 4K display while visiting the new Nebraska Furniture Mart that opened close to us. (It’s quite an amazing place — retail’s revenge on Amazon.com — and it sells a lot more than furniture.) Anyway, I got it home, plugged it into my MacBook Pro 15″ (2014), and it just works (at 60Hz). That’s about all I can say about it.
Just as Core Graphics created a race to replace Photoshop (at least for the great unwashed masses who don’t care about color spaces, CMYK, Lab color, HDR, stitching, content-aware resizing and deletion, seamless integration with other Adobe software, and so forth), there is now a race to replace Illustrator. Part of the problem is that the bewildering range of screen densities has made working primarily with bitmaps essentially a mug’s game: even Apple-targeted designers, used to supporting two resolutions, are suddenly faced with four (one of which is seriously weird).
You may recall some old stalwarts, such as Inkscape, Intaglio Designer, iDraw, Artboard, ZeusDraw, EazyDraw — of which I like iDraw — and Sketch; some of these are pretty credible Illustrator replacements (at least for casual users). But there are even more entrants in the field now, and they’re even more interesting:
Let me pause here to say this: I freaking love Sketch (2). I probably use it more than any other non-3d graphics program (I use it in preference to iDraw these days). I probably use it more than all other non-3d graphics software put together. Sketch is my go-to app for UI graphics and textures for (cartoony) 3d models.
Finally, Affinity Designer comes from Serif, a company dedicated to competing with Adobe at the low end of the Windows market, now switching over to the Mac-only market with a bang. Their plan is to start with an Illustrator-killer, then proceed with Affinity Photo and Affinity Publisher. (Publisher? Really? Do they want to take on Pages and InDesign at once? That seems to me to be a losing proposition.) Of the three, Affinity Designer is clearly the most Illustrator-like, while Sketch 3 is kind of an Illustrator/Fireworks hybrid, and PaintCode/WebCode are simply unique.
I played with Affinity Designer briefly during the free beta, but it didn’t leave me with a strong impression. When they announced its release, I ponied up the (discounted) $40 (iDraw, my current favorite, is $25, but there have been no significant improvements to it over the last year, and the things that annoy me about it still annoy me). The first thing I noticed when I launched Affinity Designer is that — like Illustrator — it defaults to print usage (CMYK, paper-oriented layout). It’s nice to discover that it also has web- and device-centric settings and defaults, and @2x retina support out of the box (though unlike PaintCode it hasn’t figured out what to do about the iPhone 6/Plus).
My first real test was to take a pretty damn complicated SVG file (with layers and typography), export it from iDraw, and import it into Affinity Designer. Every font failed to import correctly (Helvetica Bold and DIN Condensed), but otherwise it seemed to do a pretty good job — overall, a better job than Sketch (2) did importing the same file. I think the problem lies with how SVG stores font information (Sketch had the same issue when importing the file; note that iDraw can import its own SVGs flawlessly).
But here’s where things get ugly — when I tried to fix the font issues, I discovered that I can’t change the character style settings for more than one object at a time. (And this is not a problem in Sketch or iDraw.) As a workaround, I tried to create a style from one object and apply it to others but that didn’t work at all — styles seem to be limited to fill color and the like (and fill color doesn’t seem to be the same thing as text color). Bad start.
Time to look at the program on its own terms. One of the best things about Sketch relative to iDraw is its support for gaussian blur as a style. Affinity Designer has this and more (e.g. emboss, and a weird “3D” effect that I’m not sure what it’s supposed to do). What it doesn’t do (and what Sketch and iDraw both do) is allow you to apply the same filter multiple times, much the same way you can stack box-shadow effects in CSS. Another annoyance with Affinity Designer’s effects is that important settings are buried in a modal dialog box (iDraw is annoying in a different way, in that you need to disclose the settings with an extra double-click, but that’s a pretty minor annoyance). So far, I’d call this a mixed result.
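For readers who haven’t run into the CSS trick I’m alluding to: box-shadow accepts a comma-separated list, so you can layer as many shadow “effects” on one element as you like — exactly the kind of stacking Sketch and iDraw allow for their filters. (A quick sketch; the class name is mine, purely for illustration.)

```css
/* Three stacked shadows on a single element — the kind of
   effect-layering that a one-instance-per-effect model
   can't express. */
.card {
  box-shadow:
    0 1px 2px rgba(0, 0, 0, 0.25),   /* tight contact shadow */
    0 4px 8px rgba(0, 0, 0, 0.15),   /* soft mid-range shadow */
    0 12px 24px rgba(0, 0, 0, 0.1);  /* diffuse ambient glow */
}
```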
Here’s an example of Affinity Designer at its worst. I draw a test Bezier curve and then try to apply a stroke to it. So far so good. But it’s stroked in the center of the curve.
- I almost always want to stroke inside the curve, and sometimes outside, but almost never centered on the curve. So I look at the stroke settings, and all that is exposed is color, opacity, and radius.
- To access the extra properties I need to click on the little “gear” icon that lets you configure the other settings of a given filter.
- As I’ve mentioned before, this dialog is modal; it also defaults to showing some random filter’s settings, not the one you were working on (which confused me for a minute, since it was showing bevel/emboss options rather than line options).
- OK, so having switched over to the line settings, I discover the alignment options (inside, outside, centered), which — for some reason — are not the top setting (compositing mode is at the top).
- Here’s the rub — the different options (a) don’t work, and (b) appear to override and block the modeless setting (i.e. once I’ve touched them, changing the radius in the modeless view no longer works. WTF?).
This is a freaking disaster. First of all, how can an Illustrator clone go out the door with broken strokes?
I do like the basic selection affordances. In particular, rotation gets its own affordance (the little dot out on its own) rather than requiring mouse/keyboard chording. The basic Bezier drawing tools seem to be solid.
But there’s one more global observation I need to make before I move on: the tools all feel wrong. There’s a nuance to the rules that govern how 2d graphics tools, in drawing programs especially, behave: when they should stay sticky versus revert to the selection tool, and so forth. This stuff is so basic that it happens below the level of conscious decision-making. For better or (mostly) worse, a lot of us have Illustrator’s behavior in our muscle memory (where it displaced MacDraw, which was generally more intuitive).
In any event, just as iDraw and Sketch and many other Mac graphics programs get this somehow right, Affinity Designer gets this somehow wrong, and it bugs the hell out of me. If the program were in a more functional state, I might even spend the time to go figure out exactly what’s wrong and write some kind of detailed bug report for the development team, but I find the program, as a whole, to be so fractally unusable that I just can’t be bothered.
At this point, it’s not worth continuing the review. Affinity Designer is a promising and polished looking piece of software, but basic functionality is completely broken, and it has horrific workflow problems (styles don’t work with text formatting, you can’t edit multiple selections in a useful way, the wrong properties are disclosed in the modeless floater, and the modal dialog is both weird and buggy). So, in summary:
- Some features that are lacking in iDraw (e.g. a Gaussian Blur effect)
- A single-window UI (unlike iDraw, which suffers from palette-itis)
- Better SVG import than most rivals
- Limited effects options (can’t apply multiple instances of a given effect to a single element)
- Editing effects is clumsy (useful stuff is buried in modal dialog, which does not open on the right effect) and buggy (the settings don’t work properly).
- Affinity Designer’s tools just feel wrong; they stick in modes when they shouldn’t (or I don’t expect them to) and it’s just infuriating.
Affinity Designer manages to be promising, attractive, and completely useless in its current form.
Note: I purchased Affinity Designer from the App Store after using the public beta a few times. I was so frustrated with the release version that I have requested a refund from Apple, and have deleted the app. (I think this is maybe the second time I have ever asked for an App purchase to be refunded.)
One of the biggest changes in Blender over the last two years has been the major reorganization of its user interface. The goal was to address Blender’s glaring usability issues, and indeed many of them have been addressed. Yet Blender remains a dauntingly difficult program to use, and — worse — a difficult program to remember how to use. I’d say that to me (and I am hardly an expert user of any of these programs) Blender is still somewhat harder to use and more disorganized than, say, Modo (I have Modo 601) or Cinema 4D (I have v11 XL), but that’s not saying much.
All serious 3D applications, and Blender unquestionably qualifies as one, are plagued by the intrinsic complexity of what they’re trying to do, combined with a lack of agreement on the “right way to do things” and the fact that real technical improvements are constantly being made in 3D graphics, leading to a proliferation of overlapping tools and terminology. Consider that there are multiple implementations of subdivision surfaces — such as Catmull-Clark and Stam-Loop — and it’s important for a 3D program to support as many as possible if it’s to be compatible with other 3D programs. Similarly, there are many different tools for modeling organic objects and curved surfaces, including NURBS, subdivision surfaces, deformation lattices, and multiple approaches to sculpting (including voxels and displacement).
The “Right Way” isn’t obvious
If you go back to the era of DOS word-processors there was a similar lack of agreement — Word Perfect, for example, had a completely different paradigm for editing and formatting text than did Microsoft Word, and on the Mac side MacWrite and FullWrite used a “ruler” formatting paradigm while WriteNow and Microsoft Word used “styles” (but each wrapped them in a unique user interface and terminology). Pages and Microsoft Word, by comparison, are virtually identical in how they solve most problems, the differences coming down to nuances. The Mac toolbox (along with excellent free software such as MacPaint and MacWrite, and — eventually — written Human Interface Guidelines) standardized basic text editing operations (selection, inserting the caret, clipboard operations, and undo) but it took close to ten years for the world to settle on the next level of convention — word and paragraph styles with “spot” overrides. (And indeed, Windows still hasn’t quite settled on clipboard or text edit field behavior standards.)
And then let’s return for a moment to the point that — unlike with word-processing — almost every aspect of 3D technology is constantly churning. With word-processors and even page layout programs there have been some technical improvements to be sure (e.g. typefaces have become more sophisticated with things like multiple master fonts and automatic ligatures, there’s Unicode of course, and compositing has improved markedly) but these have been relatively simple changes conceptually, and in general they have just made things easier (e.g. instead of having to render a drop shadow in Photoshop, export it as EPS, and then embed a reference to it in Quark XPress, you can now perform the entire operation in a click or two in your page layout program or word-processor). Contrast this with the world of 3D which has seen major new technologies emerge in pretty much every piece of core functionality from modeling to rigging to texture-mapping to lighting to rendering — and all these different technologies have to live side-by-side, inter-operate, and somehow fit into a single user interface.
As you’ll discover if you visit Blender’s website right now, Blender is now 20 years old, and that means it has 20 years’ worth of functionality clogging up its user interface. It supports NURBS and subdivision surfaces and multi-resolution displacement sculpting. Similarly, it has support for rigging using bone envelopes, vertex-painting, and heat-mapping. It has three built-in renderers. It has two different node-based shader architectures (or is it three?). I’m sure you’re getting the picture.
Left-Click to Select
Blender remains unusually bad at a fundamental level because it refuses to adhere to such cross-application conventions as do exist. The most hilarious example is left-click to select. It’s almost impossible to find another application anywhere that uses left-click for something other than selection (unless there is no concept of selection at all). It’s simply bizarre. What’s worse, left click is used to position the 3d “cursor” object — which is actually a very clever concept — except that it’s a far less common operation than simple selection, and doing it by accident is really annoying.
(If you consider the behavior of a text cursor, clicking somewhere in a body of text positions the cursor rather than selecting anything. The cursor position determines where stuff you type will appear. This is exactly the behavior of the Blender cursor, but it’s not the behavior of any 2D or 3D graphics program I’ve ever seen. And to select text you left-click and drag, or double-click; this is definitely not how Blender works, so justifying Blender’s weirdness by analogy to text doesn’t get you very far.)
Now I need to note that Blender does allow you to make left-click into select as a preference, but the program really doesn’t work properly in this mode (and you need to reset the preference when you upgrade Blender) so that’s not a satisfying solution. Just to give you one example: there’s a shortcut in Blender to allow you to “lasso select” using ctrl+left-mouse (why left-mouse when selection is normally with the right mouse?) — this becomes ctrl+right-mouse if you’ve set left-click to select. What I want is to select stuff with the left mouse button. Not flip a ridiculous set of defaults and create a different ridiculous set of defaults. On the whole, I end up sticking with right-mouse to select simply because it means not having to reverse engineer every tutorial I find.
If it were up to me, selecting something (with the left mouse button) would also, by default, position the cursor at the selected object’s local origin (sometimes referred to as its “pivot”). You could explicitly position the cursor with the right mouse button (or perhaps setting the cursor could be a mode or tool), and there would also be an option to lock the cursor (many 3d programs effectively have a “cursor” locked at the origin). This would actually maximize the conceptual mapping between the 3d cursor and the text cursor — left clicking sets your insertion point – and not compromise users’ familiarity with programs other than Blender.
Blender’s improved user interface — and it is definitely improved! — has two more major annoyances from my point of view. First, contextual tool panes have been added to the 3D view (which is great) but the process of showing them and hiding them is asymmetric. To disclose them you click on a little + widget — which then disappears. To hide them you either need to learn the keyboard shortcut, or use the mouse to scale them down to nothing (which causes the little + widget to reappear).
Second, with the advent of the new Cycles renderer, to get a live-rendered viewport you need to switch to the Cycles renderer (one step) and then in a completely different place you need to set the viewport mode to Rendered. For your trouble you are rewarded with a wonderful live-rendered view that gives you no UI affordances (e.g. you can’t tell what’s selected) but remains live for editing purposes. Ugh. So to make this workable, you then need to create a new viewport and tweak its settings.
Finally, to modify the quality of Cycles renders — and I spent quite a bit of time googling to figure this out — you need to delve deep into the rendering panel, where, slightly hilariously, the place where you determine how many samples to take for a render is the only place I know of in Blender’s interface where the preview settings are at the bottom and the final render settings are at the top. If you’re used to Blender’s “conventions” everywhere else (e.g. subdivision settings), it’s always the other way around.
These are most of the first-order problems in Blender today (as of 2.69). Clutter and ambiguity are also first-order problems — I’d suggest that a lot of that could be improved simply by deciding which slabs of functionality to hide by default. I hope we can some day get to second-order issues, but on the positive side, most of the new stuff being added to Blender seems to have a much higher standard of usability than the older features. E.g. to do a physics simulation you simply click on the Dynamics tab, select your objects, assign them some properties (e.g. passive rigid body), and then click the play button in the timeline panel, and it just works. More of that, please! Similarly, the node-based material system associated with Cycles is very easy to work with (indeed, you don’t really need to use the nodes at all), although right now it’s missing a lot of functionality.
You may recall that I was a longtime user and fan of GraphicConverter, but gave up on it when v7 was a paid upgrade. I haven’t even launched GC in years. Well, v8 just came out (it’s still a paid upgrade for me, but not for folks who upgraded to 7) and it addresses one of my major gripes with v7 — it supports layers. It doesn’t address other concerns (it’s torpid and bloated — over 300MB!) but it does a fantastic job of displaying directories full of RAW images and allowing them to be rated quickly (even if, strangely, ratings made in the document window can’t be saved to NEFs, while those made in the browser window can). Despite importing RAW images and providing layer support, it doesn’t allow you to directly adjust RAW import settings (the way Acorn and all serious photo editors do), provide a non-destructive RAW workflow, or attempt to remove lens distortions (iPhoto, Aperture, Lightroom, and many other photography-oriented image editors do all of these things).
So, it’s awesome that it supports layers, but image adjustments are incredibly slow (and don’t appear to work at all, though I assume that’s a bug), and the lack of (non-destructive) image adjustments means that GC remains a pretty useless program for the time being.
So, for now, my photo-management software of choice remains Path Finder.
For years I’ve been talking about life after Adobe, and reading yet another glowing review of Pixelmator 3.0 FX (a product I reviewed in favorable, but less-than-glowing terms), it struck me that I’ve actually gone cold turkey on Adobe for over a year now and there’s nothing pulling me back.
As in all articles I write about Adobe, this one carries the caveat “except for Adobe Ideas on the iPad, which I love”. (I should note that I bought Adobe Photoshop Touch when it came out, and I never use it.)
Now, I’m not going to argue that Adobe’s amazingly capable programs aren’t needed by anyone. If I were a full-time graphics professional I would doubtless use Photoshop (perhaps no longer Illustrator). If I were still working in print I would use InDesign or Quark (or FrameMaker). If I were doing video, I might use After Effects (if not Fire / Flame / Combustion / Pyromaniac / Napalm or whatever). But I’m not, and I suspect a good many of Adobe’s long-term customers aren’t either. Furthermore, a lot of people I see “professionally” using Photoshop aren’t really doing anything with it that they couldn’t do faster and more easily in Acorn or whatever.
But it doesn’t matter — for me, Adobe is irrelevant. (Except for Adobe Ideas, which I love.)
Acorn replaces Photoshop
Probably the main thing I miss in Acorn from Photoshop is solid typographic tools but, frankly, I don’t do very much typographic work any more so I don’t care. (And Acorn provides direct support for some of Cocoa’s core typographic tools that Photoshop either doesn’t or successfully buries.) For logo work all you really need is “convert to outlines” (or in Acorn’s terms: “Convert to Bezier Shape”) and Acorn has that covered. Acorn launches faster, and has deeper and more powerful non-destructive filter support. And when I encounter bugs I report them and usually see a fix within one or two releases (which are frequent).
Pixelmator is occasionally useful. Mischief and Art Rage deserve mention too. I have utterly given up on Photoline — its lack of attention to detail (in terms of producing excellent output) always seems to bite me so I stopped paying for upgrades.
iDraw replaces Illustrator
iDraw not only replaces Illustrator for my purposes, it exceeds it (and also Illustrator used in combination with Photoshop) for my purposes. It’s not perfect, but neither was Illustrator. My current main use for iDraw is creating textures for my 3D projects.
I briefly flirted with Artboard (but soured on it), and I’ve also tried Inkscape (which is free and very solid), Intaglio (a straighter Illustrator replacement, but overpriced), ZeusDraw, EazyDraw, and Lineform. They’re all acceptable, but iDraw is downright awesome. I also bought the iPad version, which is staggeringly capable (although I still find touch-based drawing programs hard to use).
I was never a big Fireworks user, but Sketch is something of a replacement/successor for Fireworks — a vector-centric UI creation tool that happens to live and breathe SVG (making it highly interoperable with iDraw).
Now, if I were going to do Flash-type stuff, I would probably want to use Hype or Adobe Edge, but certainly not Flash. Just to give you an idea of how dead Flash is: Coherent UI is a product that lets you develop video game interfaces using WebKit instead of Flash (or Scaleform), and it’s taking the Unity community, at least, by storm. Indeed, given how far behind Unity’s next-gen UI seems to be, perhaps Unity will license Coherent instead of continuing with its apparently stalled project.
Aperture does not replace Lightroom; perhaps Finder does
I don’t currently use Lightroom, but every time I use Aperture or iPhoto I’m tempted to switch (back — I used LR2). I’m honestly hoping for a major upgrade to Aperture in the next few months, or I’m going to switch to something. That said, I didn’t much care for Lightroom’s overarching user interface, so I’m not sure I’d switch to Lightroom in any event. (What I’ve already halfway done is switch to Finder for managing my photos and then dump them into Lightroom for processing, which it is great at.)
Motion replaces After Effects and Premiere
If I did more video editing, I’d probably mention Final Cut Pro X, but I don’t. Motion does pretty much everything I need in a video editor, pretty much anything After Effects can do (minus deep Photoshop integration), and some things besides. I love it. This is lucky, because video editors are one thing the indie and open-source communities appear to suck at.
Markdown and CSS replace InDesign and FrameMaker
I used to suggest Pages for casual page layout, but bugs in Pages’ ePub export have driven me to despair. Every year for the last ten years or so I’ve used Markdown for more things. Thanks to Mou, I’m using Markdown for word-processing now, and the amazing thing is that, together with CSS (and electronic publishing), it essentially removes the need for page layout software. If you want something fancier than Mou there are quite a lot of options, including Ulysses, Texts, and Markdown Pro.
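To make that concrete, here’s the sort of minimal print stylesheet that, paired with an HTML file generated from Markdown, handles most of what I used to fire up a page layout program for. (A sketch, not a recipe — the trim size and typeface are placeholders, and the selectors just assume ordinary Markdown-generated HTML.)

```css
/* Minimal "page layout" via CSS Paged Media, applied to
   HTML generated from a Markdown source. */
@page {
  size: 6in 9in;      /* placeholder trim size */
  margin: 0.75in;
}
body {
  font: 11pt/1.5 Georgia, serif;
  text-align: justify;
  hyphens: auto;
}
h1 {
  page-break-before: always;  /* each chapter starts a fresh page */
}
p {
  orphans: 2;   /* avoid stranded lines at page breaks */
  widows: 2;
}
```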
So that’s it. Except for Adobe Ideas, I really no longer have any use for any of Adobe’s products. I might be more tempted by Lightroom if it weren’t an Adobe product with all that currently entails. I might be tempted to try out Adobe Edge if I were still working in Web Advertising. (Heck, I might still be using Flash.) But it’s remarkable to see how Adobe has gone from indispensable to dispensed-with in five short years.
Farewell, Adobe — it was great (well, usually pretty good) while it lasted.