A Quick Look at Panic’s Nova

I recently became aware that Nova (Panic’s next gen replacement for Coda) has been out since September, and decided to give it a quick spin.

Now, I’m currently working on an internal IDE that runs in Chrome, so this is of special interest to me. In particular, I was wondering if Panic had come up with new magic sauce to advance the “state of the art” in lightweight platform-native IDEs, the way they did when they first shipped Coda.

I remember my first complaint about Coda was that its CSS editing tools were not up-to-par with CSSEdit, which Cabel (who has been known to reply to me directly) said was something they’d hoped to license. (Turns out they were working on Espresso, an ambitious and interesting but perpetually not-quite-there yet Coda rival.) Anyway, Coda’s CSS tools eventually got decent, but never caught up with CSSEdit.

Luckily, I became so ridiculously fluent in CSS that I’ve not really needed anything beyond a basic text editor in years. (At the time, I regarded CSS with disdain and hoped to learn as little about it as possible.)

Anyway, some quick observations:

  1. It’s beautiful. Yay.
  2. It seems very much like a native vscode. Yay.
  3. It’s not interoperable with vscode extensions (and that’s likely very hard to do) and I haven’t looked into how hard it would be to “port” an extension from vscode. Boo.
  4. It doesn’t appear to leverage existing quasi-standards, the way Sublime leveraged TextMate stuff, and vscode leverages… stuff. Boo.
  5. It doesn’t handle terminals as gracefully as vscode. vscode’s terminal by default appears as needed, at the bottom of the main window, and tucks itself into a tiny space with a click. Boo.
  6. Running node package scripts is clunky (it requires a third-party extension that isn’t nearly as nice as what vscode has gotten me used to). I don’t want Panic to over-focus on node users, because Coda’s over-focus on PHP users is one reason I stopped using it. Boo.
  7. Nova supports themes. Not a surprise. I couldn’t quickly find documentation on how to create a new theme, but I imagine it won’t be terribly difficult and it will be feasible to convert themes from other popular things to Nova. Meh.
  8. Nova lets you theme its overall UI, code editors, and terminals separately! This is awesome, since it allows you to easily differentiate panes at a glance, rather than everything looking pretty much identical. Yay.
  9. I was disappointed that, when trying to open a Remote Project, it didn’t automatically (or with a click) import server settings from Transmit (of which I have been a user since 1.0). I had just made a quick fix to one of my demos using Transmit, which let me open the server, open the file (on the server) directly with BBEdit, make the fix, hit save the way I would for a desktop file, and reload the site to check the fix worked. That’s the workflow it needs to at least match. Boo.

Nova renders color swatches on the line-count column as opposed to the way vscode does it (inline in the text, somewhat disconcertingly). In both cases you can click to get a color-picker, but Nova’s is much nicer (it has spaces for you to keep common colors, for example, but seriously — use css variables!).

This is nice, but I don’t see any assistance for things like setting fonts, or borders. I have a license for Espresso somewhere or other, but I’m not going to reinstall it just to take a screenshot, so here’s a capture from their website (yes, it lives!):

The user’s cursor is in a rule, and in addition to the text, the values are exposed as the UI shown above. This is basically a tiny part of what CSSEdit was doing, over ten years ago — my receipt for CSSEdit 2.5 is dated 2007. CSSEdit also allowed you to open a web page and then edit its underlying styles and then save those changes directly to the sources, kind of like an automatic round-trip version of the workflow many of us use in Chrome/Safari/Firefox.

The first extension I tried to use was from standardjs, which I love. I didn’t notice this until later:

This is not good. It reminds me of Lotus Notes telling me it needs me to do five specific things to do what I obviously want to do. Automatically loading dependencies is kind of a solved problem these days… Boo.

It’s hard to compete with a free tool that’s backed by Microsoft. And vscode has become a lot of people’s preferred editor, especially web developers’, and it’s easy to see why: because any web developer pretty much has to know all the tech that underlies vscode, it has a particularly rich extension library.

JetBrains’ suite of Java-based tools has also earned a huge following. (Sublime Text—once seemingly everyone’s favorite editor—hasn’t received an update since 2019. The team seems to have switched their focus to Sublime Merge.)

Overall, my initial reaction to Nova is slightly more negative than positive. I’m sorry to say this but I’m unconvinced.

More Adventures in 3D Printing

The D&D boardgames come with plastic miniatures. They’re not especially nice, but neither is anything else out there.

So, I started playing the D&D boardgames with my kids. The games are pretty good (I initially bought them as a way of getting a buttload of materials for playing regular D&D, but that went nowhere).

But, my game designer brain won’t shut down… I had already toyed with the idea of a card-based RPG system, but couldn’t quite get it working. My (minor) frustrations with the D&D games gave me an idea. I’ll get to that in another post.

Anyway, in a nutshell, D&D boardgames with miniatures led me to think about custom miniatures, which led me (back) to HeroForge, which led me to think about 3D printing, and thus I discovered that SLA (stereolithography) printers have suddenly gotten a lot cheaper and by all signs are actually good.

So, I bought a Voxelab (or maybe Voxellab? — they can’t make up their minds) Polaris resin printer for about $160 plus tax, and a couple of 500mL bottles of resin for about $15 apiece.

This all arrived on New Year’s Eve. Woohoo! Champagne and 3D printing!

The printer didn’t come with any resin, and the instructions make reference to items not included: e.g. a container in which to dip prints in denatured alcohol, the denatured alcohol itself, and a curing box. I made do with old jars, ordinary 70% alcohol (more on this later; a costly mistake, I think), and paper napkins, and the initial results were amazing.

Setup is amazingly easy. You basically:

  1. Place it somewhere level(ish)
  2. Plug it in
  3. Pull out resin tray
  4. Place a sheet of paper over the spot the resin tray occupied
  5. Loosen two screws on the print head, allowing it to move
  6. Run the print head down until the paper is flattened
  7. Set this as ZERO
  8. Raise the print head
  9. Replace the tray
  10. Pour in some resin (word of warning: it’s really hard to tell how much transparent resin you’ve poured in).

This would have been quicker if the instructions weren’t in broken English and were decently organized and/or had an index. Even so, not bad at all.

The software is provided on a USB stick, which is big enough to double as a file-transfer device (this printer doesn’t do WiFi), so you save models (in *.fdg format) using the included Chitubox (free) software.

Chitubox is a semi-nice program. The buttons are all non-standard, tiny, nearly indistinguishable, and have no tooltips. So that’s nice.

My single biggest frustration was figuring out how to import the profile (on the USB stick) into Chitubox (once installed). Turns out there’s a video that more-or-less explains the process on the stick.

To print a 3D model (or several):

  1. Export models as OBJ files. (Ideally a model should be “water-tight” and clean, but it’s very forgiving.)
  2. Import one or more obj files into Chitubox.
  3. Scale, rotate, position the models to taste.
  4. Go to the supports tab and click Add All.
  5. Go back to the first tab and click Slice. This may take a while.
  6. Drag the slice slider (not to be confused with the view slider) up-and-down to look at the cross sections and check they seem correct. (The view slider shows you a quickly calculated cross-section of your model, the slice slider shows you the actual rendered cross-section that will be used to print a layer.)
  7. Save the model (as FDG) to a memory stick.
  8. Put the stick in the printer.
  9. On the front panel pick your model (a correctly saved file should display your model; ignore weird Mac hidden files and such) and press Play.

It’s definitely worth scanning through the layers after slicing to sanity-check before spending hours printing something that makes no sense. One annoying mistake I’ve made is making hollows with no place for the fluid to drain (or rather, making them in the wrong place).

Printing

Here’s the printer in action. The curved thing is the print platform—a solid hunk of aluminum—and it is bathing in transparent resin (you can see bubbles in the resin towards the top-left). After each layer is exposed, the head is pulled up 5mm (this is adjustable), which draws more resin under the print, and then lowered to 0.5mm above its previous exposure position, and so on.

Printing is slow (but it’s dictated by the height of the print, not the volume, so you can print a bunch of miniatures in parallel just as quickly as you can print one). It’s printing your model 0.5mm (?) at a time (with antialiased voxels!—or maybe just pixels) and, for each layer, the model is lowered to 0.5mm above the glass plate (at the bottom of the resin chamber), exposed, then raised to allow fluid to seep in. Lather, rinse, repeat.
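
The arithmetic behind “height, not volume” can be sketched as follows (the layer height and per-layer time below are my own illustrative assumptions, not Polaris specs):

```javascript
// Rough SLA print-time estimate: time scales with the number of layers,
// i.e. with print height, regardless of how many models share the plate.
function estimatePrintSeconds(heightMM, layerMM, secondsPerLayer) {
  const layers = Math.ceil(heightMM / layerMM);
  return layers * secondsPerLayer;
}

// A 30mm-tall miniature at 0.05mm layers, ~8s per layer (exposure + lift):
const seconds = estimatePrintSeconds(30, 0.05, 8);
console.log((seconds / 3600).toFixed(1) + " hours"); // "1.3 hours"
```

Ten such miniatures side-by-side would take the same 1.3 hours, which is why batching small models is the way to go.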

Each layer is being printed by shining a UV lamp through an LCD displaying the bitmap cross-section for that layer (as seen in step 6, above). That’s it! (The printer displays the layer being printed on its color touch screen.) Mechanically, this is much simpler than an FDM printer, and the quality of the output is dictated by the resolution of the LCD screen and the quality of the resin.

The printer is quiet compared to FDM (still, a pretty loud fan) and it doesn’t cause noticeable vibration (it’s only moving the head up and down with a screw drive).

When it’s done, it beeps and raises the print clear of the resin basin.

  1. Rinse it in alcohol
  2. Pull it out of the alcohol and let the alcohol drain
  3. Leave it to cure somewhere well-ventilated

So, how good are the results?

Fresh out of the printer, transparent prints look amazing. This is all covered in liquid resin—goop Gwyneth Paltrow might be proud of. (Probably not.) Below the toroid you can see the support platform (from which the model hangs during printing).

It’s hard for me not to be hyperbolic and create unreasonable expectations, given my previous frustrations with FDM (fused deposition modeling, or “robot glue guns” as I call them), but here goes.

This is the first inexpensive 3d printer that I have found not to be disappointing.

This is the first 3d printer I have used that produced a good model the first time I used it. (And, no, it wasn’t the demo model that came with the USB stick.)

This is my first print, 36h after it emerged from the resin. It is perfect. (The support platform looks nicer than anything I’ve seen out of an FDM printer.) If you pixel peep you can see “sedimentary” layers in the print surface, but the surface doesn’t feel rough. The only blemishes are tiny nubs where the supports were broken off.

I compare this to the first time I saw printed output from a laser printer, or the first time I used a good color flatbed scanner, or the first time I captured video from a broadcast quality camera with a broadcast quality capture card. In terms of output quality, this shit has arrived.

In a nutshell, the output is smooth (wet, transparent output almost looks like glass, but it becomes cloudy as it cures) and the prints feel rigid but a little flexible. The way I describe it is “somewhere between plastic army men and lego”. Prints feel about as durable as a nice plastic manufactured item of the same form (perhaps slightly nicer than the plastic D&D miniatures that sent me off on this tangent).

I should add that the printer estimates the cost of each print based on the volume of resin to be used and the cost of the resin. So far, my models have “cost” less than $0.25. But they’ve all been quite small. (But wait to find out about other consumable costs…)
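
That cost estimate is simple arithmetic; here’s a sketch using the bottle price from above (the model volume is a made-up example, not one of my actual prints):

```javascript
// Resin cost estimate, mirroring what the printer itself computes:
// cost = resin volume used × price per mL.
const pricePerML = 15 / 500; // $15 per 500mL bottle, i.e. $0.03/mL
const modelVolumeML = 6;     // hypothetical small miniature
const cost = modelVolumeML * pricePerML;
console.log("$" + cost.toFixed(2)); // "$0.18"
```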

Flies in the ointment

New printing method, new failure modes. This print of my omnidude model mid-run looks great at first glance, but turns out to have a defect—partially visible here (most of the right leg is missing, as is the back of the base). I believe the cause is deposits on the FEP film blocking uniform exposure, or possibly a portion of the print base failing to adhere to the head.

Let me begin by saying that most of this is a result of user error. I couldn’t get pure isopropyl alcohol on New Year’s Eve at 10pm and I wanted to play with my toy!

Let’s start with nasty chemicals—the resin smells bad, and is viscous, sticky, slimy, hard to clean off, and probably bad for you. One bottle of resin arrived having leaked a bit inside its container. The prints—perhaps because I’m not using pure alcohol and/or don’t have a proper “curing box”—are slightly tacky for hours after printing. Nevertheless, my first print has lost almost all its tackiness 36h or so since printing.

Recovering unused resin using the provided paper filter is horrible, and cleaning the tray is painful. But then there are some really gnarly issues…

First, my second print had some odd, minor flaws in it, which I ascribed to my modeling, but turned out to be much nastier.

Next, we had a power failure while a print was in progress and the printer lost track of where it was. I tried to recalibrate by just moving the print head down and fiddling (rather than draining all the resin, cleaning everything, and recalibrating properly), but it didn’t work.

My weasel character was one of the two figures in progress when the power failed. (Shown here atop a quarter and an old two pound coin for scale—this thing is tiny.) It shed a few layers from the back of its tail, and there’s that small blemish on the front of his abdomen (which I can’t see with the naked eye). I added some very fine detail to the eyes (an indentation on his right (cyborg) eye, and a pupil on his left eye); the latter isn’t visible.
This profile shows the flawed tail, several layers of which sloughed off at a touch.

My first print after the power outage and my dodgy recalibration was a disaster. The model fell from its supports and became a nasty blob, lightly stuck to the FEP film at the bottom of the tray.

After draining and filtering as much resin as I could (and spilling a fair bit and cleaning that up—ugh!) I cleaned the tray and found little bits of solid resin stuck to the film all over. It looked to me like one stuck deposit had been the cause of missing volume in two of my prints.

One of the stubborn spots left over after my initial attempts at cleaning the resin tray screen.

“Fixing” this involved alcohol and careful scraping with the—included—plastic scraping tool, and then wiping away the alcohol and detritus and allowing it to air dry. The results looked pretty good, but on inspection the FEP screen is quite scratched up now.

So, with this in mind, I decided to track down a decent amount of pure isopropyl alcohol, replacement FEP film, and more and—I hope—better filter cones, and then I will try it again.

(These links are not an endorsement. I provide them because I found identifying suitable items on Amazon to be quite tricky—and getting them from China was going to be slow and quite expensive.)

To be continued…

Computed Properties are Awesome…

…and should be used as little as possible

One of the things I’ve seen happen multiple times in my career is a language that didn’t have computed properties getting them, and then mediocre coders continuing to cargo-cult old behaviors that are now counter-productive and obsolete.

The Problem

Things change. In particular, data-structures change. So one day you might decide that a Person looks like:

{
  name,
  address,
  email
}

and another day you decide it really should be:

{
  firstName,
  lastName,
  streetAddress,
  city,
  state,
  postcode,
  country,
  contacts[]
}

In fact, you’re likely to move from the first model to the second model in increments. Meanwhile you’re writing other code that assumes the current model, whatever it happens to be. There’ll always be “perfectly good” stuff that works with a given representation at a given time, and breaking stuff that’s “perfectly good” because other stuff is changing is bad.

The Solution

The solution is to use “getters” and “setters”: you tell people never to expose underlying representations of data, and instead write methods, graven in stone, that don’t change. So your initial representation is:

{
  getName,
  setName,
  …,
  _name,
  _address,
  …,
} 

and now you can split _name into _firstName and _lastName, rewrite getName, figure out how to implement setName and/or find all occurrences of setName and fix them, and so on.

Problem solved!

Problems with the Solution

Before, when you started out with only three fields, your Person object had exactly three members and no code. The solution immediately changed that to nine members (3 times as many) and six of those are functions. Now, every time you access any property, it’s a function call. If you’re using a framework that transparently wraps function calls for various reasons you’re maybe calling 3 functions to do a simple property lookup.

But aside from that, it works, and the fact that it prevents unnecessary breakage is a win, even if it means more (and less readable) code, makes experimentation harder, and costs performance and memory.

A better solution

A better solution is immediately obvious to anyone who has used a language with computed properties. In this case you go back to your original object to start with, and when things change, you replace the changed properties with getters and setters so that for legacy code (and everything is legacy code eventually) nothing changes.

E.g. when name becomes firstName and lastName, you implement name as a computed property (implemented in the obvious way) and code that expects a name property to exist just keeps on working (with a slight performance cost, which is the same performance cost you would have had starting out with the previous solution).
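
Here’s what that migration looks like in JavaScript (a sketch with made-up names; the naive name setter is purely illustrative):

```javascript
// Person started life with a plain `name` property. After the split into
// firstName/lastName, `name` becomes a computed property, so legacy code
// that reads or writes `person.name` keeps working unchanged.
class Person {
  constructor(firstName, lastName) {
    this.firstName = firstName;
    this.lastName = lastName;
  }
  get name() {
    return `${this.firstName} ${this.lastName}`;
  }
  set name(value) {
    // Naive split; real code would need something smarter.
    const [first, ...rest] = value.split(" ");
    this.firstName = first;
    this.lastName = rest.join(" ");
  }
}

const person = new Person("Ada", "Lovelace");
console.log(person.name);   // "Ada Lovelace"
person.name = "Grace Hopper"; // old-style code still "just works"
console.log(person.firstName); // "Grace"
```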

The Antipattern

This exact thing happened when Objective-C added computed properties. All the folks who told you to write getters and setters for your Objective-C properties told you to ignore computed properties and keep doing things the old way or, perhaps worse, use computed properties to write getters and setters, so now you start with:

{
  get name() { return this._name },
  set name(x) { this._name = x },
  _name,
  /* and so on */
}

Is there a good reason for this?

There can be arguments at the margins in some cases that computed properties will be less performant than getters and setters (although in Obj-C that’s a huge stretch, since method dispatch is, by default, not lightning fast in Obj-C — indeed it’s a known performance issue with a standard workaround: method calls in Obj-C involve looking up methods by name, and the optimization is that you essentially grab a function pointer and hang on to it).

There’s absolutely no way that writing computed property wrappers around concrete properties just for the sake of it has any benefit whatsoever.

The short answer is “no”.

Aside: there’s one more marginal argument for getters and setters: if you’re deprecating a property (e.g. name) and you’d rather people didn’t call a hacky name setter that tries to compute firstName and lastName from a name string, it’s easier to codemod or grep calls to setName than calls to, say, .name =. I’ll reference this argument here even though I find it about as solid as arguments about how to write Javascript based on how well it plays with obfuscation.

I’m pretty sure cargo cult coders are still writing getters and setters in Swift, which has had computed properties since first release. (By the way, lazy computed properties are even more awesome.)
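
For what it’s worth, you can approximate a lazy computed property in JavaScript too, with a self-replacing getter (my own sketch, not a standard library feature):

```javascript
// Swift's `lazy` has no direct JavaScript keyword, but a getter that
// replaces itself with a plain value property on first access behaves
// the same way: computed once, on demand, then cached.
let calls = 0;
function expensiveComputation() {
  calls += 1;
  return 42;
}

const galaxy = {
  get stars() {
    const value = expensiveComputation(); // runs at most once
    Object.defineProperty(this, "stars", { value }); // shadow the getter
    return value;
  },
};

galaxy.stars; // computed here
galaxy.stars; // served from the cached value
console.log(calls); // 1
```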

Must all data-tables kind of suck?

b8r's data-table in action
b8r now has a data-table component. It’s implemented in a little over 300loc and supports custom header and content cells, fixed headers, a correctly-positioned scrollbar, sorting, resizing, and showing/hiding of columns.

Bindinator, up until a few weeks ago, did not have a data-table component. It has some table examples, but they’ve not seen much use in production, because b8r is a rewrite of a rewrite of a library named bindomatic that I wrote for the USPTO. Bindomatic was built to do all the data-binding outside of data-tables in a large, complex project, and it ended up replacing all the data-tables too.

It turns out that almost any complex table ends up being horrifically special-case and bindomatic, by being simpler, made it easier to build complex things bespoke. (I should note that I also wrote the original data-table component that bindomatic replaced.)

Ever since I first wrote a simple file-browser using b8r-native I’ve been wishing to make the list work as well as Finder’s windows. It would also benefit my RAW file manager if that ever turns into a real product.

The result is b8r’s data-table component, which is intended to scratch all my itches concerning data-tables (and tables in general) — e.g. fixed headers, resizable columns, showing/hiding columns, sorting, affording customization, etc. — while not being complex or bloated.

At a little over 300 loc I think I’ve avoided complexity and bloat, at any rate.

As it is, it can probably meet 80% of needs with trivial configuration (which columns do you want to display?), another 15% with complex customization. The remaining 5% — write your own!

Implementation Details

The cutest trick in the implementation is using a precisely scoped css-variable to control a css-grid to conform all the column contents to desired size. In order to change the entire table column layout, exactly one property of one DOM element gets changed. When you resize a column, it rewrites the css variable (if the number of rows is below a configurable threshold, it live updates the whole table, otherwise it only updates the header row until you’re done).
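
A sketch of the idea (the names here are mine, not b8r’s actual API): every row is a grid whose track sizes come from one custom property, so updating that property on the table element reflows every row at once.

```javascript
// The CSS side would look something like:
//
//   .data-table > .row {
//     display: grid;
//     grid-template-columns: var(--column-widths);
//   }
//
// Resizing a column then just means rebuilding this one value:
function columnWidthsValue(widths) {
  // ["40px", "1fr", "120px"] -> "40px 1fr 120px"
  return widths.join(" ");
}

// In the browser (not runnable outside the DOM), a resize handler would do:
//   table.style.setProperty("--column-widths", columnWidthsValue(widths));
```

Because the variable is scoped to the table element, two tables on the same page can have independent column layouts with zero extra machinery.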

Another cute trick is to display the column borders (and the resizing affordance) using a pseudo-element that’s 100vh tall and clipped to the table component. (A similar trick could be used to hilite columns on mouse-over.)

Handling the actual drag-resizing of columns would be tricky, but I wrote b8r’s track-drag library some time ago to manage dragging once-and-for-all (it also deals with touch interfaces).

Next Steps…

There’s a to-do list in the documentation. None of it’s really a priority, although allowing columns to be reordered should be super easy and implementing virtualization should also be a breeze (I scratched that itch some time back).

The selection column shown in the animation is actually implemented using a custom headerCell and custom contentCell (there’s no selection support built in to data-table). I’ll probably improve the example and provide it as a helper library that provides a “custom column” for data-table and also serves as an example of adding custom functionality to the table.

I’d like to try out the new component with some hard-core use-cases, e.g. the example RAW file manager (which can easily generate lists of over 10,000 items), and the file-browser (I have an idea for a Spotlight-centric file browser), and — especially — my galaxy generator (since I have ideas for a web-based MMOG that will leverage the galaxy generator and use Google Firebase as its backend).

Blender 2.8 is coming

Unbiased rendering? Check. Realtime rendering? Check. Unified shader model? Check. Class-leading user interface? Check. Free, open source, small? Check. Blender 2.8 offers everything you want in a 3d package and nothing you don’t (dongles, copy protection, ridiculous prices, massive hardware requirements).

There aren’t many pieces of open source software that have been under continuous active development for twenty years without going through a single “major version change”. When I started using Blender in the early 2000s, it was version 2.3-something. In the last year it’s been progressing from 2.79 to 2.8 (I think technically the current “release” version is 2.79b, b as in the third 2.79 release, not beta).

What brought me to blender was a programming contract for an updated application which, in my opinion, needed an icon. I modeled a forklift for the icon in Silo 3D (which introduced me to “box-modeling”) but needed a renderer, and none of my very expensive 3d software (I owned licenses for 3ds max, ElectricImage, and Strata StudioPro, among other things) ran on my then-current hardware. Blender’s renderer even supported motion blur (kind of).

The blender I started using had a capable renderer that was comparatively slow and hard to configure, deep but incomprehensible functionality, and a user interface that was so bad I ended up ranting about it on the blender forums and got so much hatred in response that I gave up being part of the community. I’ve also blogged pretty extensively about my issues with blender’s user interface over the years. Below is a sampling…

Blender now features not one, not two, but three renderers. (And it supports the addition of more renderers via a plugin architecture.) The original renderer (a ray-tracing engine now referred to as Workbench) is still there, somewhat refined, but it is now accompanied by a real-time game-engine style shader based renderer (Eevee) and a GPU-accelerated unbiased (physically-based) renderer (Cycles). All three are fully integrated into the editor view, meaning you can see the effects of lighting and procedural material changes interactively.

The PBR revolution has slowly brought us to a reasonably uniform conceptualization of what a 3d “shader” should look like. Blender manages to encapsulate all of this into one, extremely versatile shader (although it may not be the most efficient option, especially for realtime applications).

Eevee and Cycles also share the same shader architecture (Workbench does not) meaning that you can use the exact same shaders for both realtime purposes (such as games) and “hero renders”.

Blender 2.8 takes blender from — as of say Blender 2.4 — having one of the worst user interfaces of any general-purpose 3D suite, to having arguably the best.

The most obvious changes in Blender 2.8 are in the user-interface. The simplification, reorganization, and decluttering that has been underway for the last five or so years has culminated in a user interface that is bordering on elegant — e.g. providing a collection of reasonably simple views that are task-focused but not modal — while still having the ability to instantly find any tool by searching (now command-F for find instead of space by default; I kind of miss space). Left-click to select is now the default and is a first class citizen in the user interface (complaining about Blender’s right-click to select, left-click to move the “cursor”, and screw yourself, is literally what got me chased off Blender’s forums in 2005).

Blender still uses custom file-requesters that are simply worse in every possible way than the ones the host OS provides. Similarly, but less annoyingly, Blender uses a custom-in-window-menubar that means it’s simply wasting a lot of screen real estate when not used in full screen mode.

OK so the “globe” means “world” and the other “globe” means “shader”…

Blender relies a lot on icons to reduce the space required for the — still — enormous numbers of tabs and options, and it’s pretty hard to figure out what is supposed to mean what (e.g. the “globe with a couple of dots” icon refers to scene settings while the nearly identical “globe” icon refers to materials — um, what?). The instant search tool is great but doesn’t have any support for obvious synonyms, so you need to know that it’s a “sphere” and not a “ball” and a “cube” and not a “box” but while you “snap” the cursor you “align” objects and cameras.

Finally, Blender can still be cluttered and confusing. Some parts of the UI are visually unstable (i.e. things disappear or appear based on settings picked elsewhere, and it may not be obvious why). Some of the tools have funky workflows (e.g. several common tools only spawn a helpful floating dialog AFTER you’ve done something with the mouse that you probably didn’t want to do) and a lot of keyboard shortcuts seem to be designed for Linux users (ctrl used where command would make more sense).

The blender 2.8 documentation is pretty good but also incomplete. E.g. I couldn’t find any documentation of particle systems in the new 2.8 documentation. There are plenty of websites with documentation or tutorials on blender’s particle systems, but which variant of the user interface they pertain to is pretty much luck of the draw (and blender’s UI is in constant evolution).

Expecting a 3D program with 20 years of development history and a ludicrously wide-and-deep set of functionality to be learnable by clicking around is pretty unreasonable. That said, blender 2.8 comes close, generally having excellent tooltips everywhere. “Find” will quickly find you the tool you want — most of the time — and tell you its keyboard shortcut — if any — but won’t tell you where to find it in the UI. I am pretty unreasonable, but even compared to Cheetah 3D, Silo, or 3ds max (the most usable 3D programs I have previously used) I now think Blender more than holds its own in terms of learnability and ease-of-use relative to functionality.

Performance-wise, Cycles produces pretty snappy previews despite, at least for the moment, not being able to utilize the Nvidia GPU on my MBP. If you use Cycles in previews, expect your laptop to run pretty damn hot. (I can’t remember which versions of Blender, if any, could use it, and I haven’t tried it out on either the 2013 Mac Pro/D500 or the 2012 Mac Pro/1070 we have lying around the house, because that would involve sitting at a desk…)

Cranked up, Eevee is able to render well beyond the requirements for broadcast animated TV shows. This frame was rendered on my laptop at 1080p in about 15s. Literally no effort has been made to make the scene efficient (there’s a big box of volumetric fog containing the whole scene, with a spotlight illuminating a bunch of high-polygon models with subsurface scattering and screenspace reflections).

Perhaps the most delightful feature of blender 2.8 though is Eevee, the new OpenGL-based renderer, which spans the gamut from nearly-fast-enough-for-games to definitely-good-enough-for-Netflix TV show rendering, all in either real time or near realtime. Not only does it use the same shader model as Cycles (the PBR renderer) but, to my eye, for most purposes it produces nicer results and it does so much, much faster than Cycles does.

Blender 2.8, now in late beta, is a masterpiece. If you have any interest in 3d software, even or especially if you’ve tried blender in the past and hated it, you owe it to yourself to give it another chance. Blender has somehow gone from having a user interface that only someone with Stockholm Syndrome could love to an arguably class-leading user interface. The fact that it’s an open source project, largely built by volunteers, and competing in a field of competitors with, generally, poor or at best quirky user interfaces, makes this something of a software miracle.