More Adventures in 3D Printing

The D&D boardgames come with plastic miniatures. They’re not especially nice, but neither is anything else out there.

So, I started playing the D&D boardgames with my kids. The games are pretty good (I initially bought them as a way of getting a buttload of materials for playing regular D&D, but that went nowhere).

But, my game designer brain won’t shut down… I had already toyed with the idea of a card-based RPG system, but couldn’t quite get it working. My (minor) frustrations with the D&D games gave me an idea. I’ll get to that in another post.

Anyway, in a nutshell, D&D boardgames with miniatures led me to think about custom miniatures, which led me (back) to HeroForge, which led me to think about 3D printing, and thus I discovered that SLA (stereolithography) printers have suddenly gotten a lot cheaper and, by all signs, are actually good.

So, I bought a Voxelab (or maybe Voxellab? — they can’t make up their minds) Polaris resin printer for about $160 plus tax, and a couple of 500mL bottles of resin for about $15 apiece.

This all arrived on New Year’s Eve. Woohoo! Champagne and 3D printing!

The printer didn’t come with any resin, and the instructions refer to items not included: e.g. a container for dipping prints in denatured alcohol, the denatured alcohol itself, and a curing box. I made do with old jars, ordinary 70% alcohol (more on this later; a costly mistake, I think), and paper napkins, and the initial results were amazing.

Setup is amazingly easy. You basically:

  1. Place it somewhere level(ish)
  2. Plug it in
  3. Pull out resin tray
  4. Place a sheet of paper over the spot the resin tray occupied
  5. Loosen two screws on the print head, allowing it to move
  6. Run the print head down until the paper is flattened
  7. Set this as ZERO
  8. Raise the print head
  9. Replace the tray
  10. Pour in some resin (word of warning: it’s really hard to tell how much transparent resin you’ve poured in).

This would have been quicker if the instructions weren’t in broken English and were decently organized and/or had an index. Even so, not bad at all.

The software is provided on a USB stick, which is big enough to double as a file-transfer device (this printer doesn’t do WiFi), so you save models (in *.fdg format) using the included Chitubox (free) software.

Chitubox is a semi-nice program. The buttons are all non-standard, tiny, nearly indistinguishable, and have no tooltips. So that’s nice.

My single biggest frustration was figuring out how to import the profile (on the USB stick) into Chitubox (once installed). Turns out there’s a video on the stick that more-or-less explains the process.

To print a 3D model (or several):

  1. Export models as OBJ files. (Ideally a model should be “water-tight” and clean, but it’s very forgiving.)
  2. Import one or more obj files into Chitubox.
  3. Scale, rotate, position the models to taste.
  4. Go to the supports tab and click Add All.
  5. Go back to the first tab and click Slice. This may take a while.
  6. Drag the slice slider (not to be confused with the view slider) up-and-down to look at the cross sections and check they seem correct. (The view slider shows you a quickly calculated cross-section of your model, the slice slider shows you the actual rendered cross-section that will be used to print a layer.)
  7. Save the model (as an .fdg file) to a memory stick.
  8. Put the stick in the printer.
  9. On the front panel pick your model (a correctly saved file should display your model; ignore weird Mac hidden files and such) and press Play.

It’s definitely worth scanning through the layers after slicing as a sanity check before spending hours printing something that makes no sense. One annoying mistake I’ve made is creating hollows with no place for the fluid to drain (or rather, making them in the wrong place).

Printing

Here’s the printer in action. The curved thing is the print platform (a solid hunk of aluminum), and it is bathing in transparent resin (you can see bubbles in the resin towards the top-left). After each layer is exposed, the head is pulled up 5mm (this is adjustable), which draws more resin under the print, then lowered to 0.5mm above its previous exposure position, and so on.

Printing is slow (but the time is dictated by the height of the print, not the volume, so you can print a bunch of miniatures in parallel just as quickly as you can print one). It’s printing your model 0.5mm (?) at a time (with antialiased voxels! Or maybe just pixels), and for each layer the model is lowered to 0.5mm above the glass plate (at the bottom of the resin chamber), exposed, then raised to allow fluid to seep in. Lather, rinse, repeat.
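Since print time is dictated by layer count rather than volume, a back-of-the-envelope estimate is easy to sketch. (The layer height, exposure, and lift numbers below are illustrative guesses, not the Polaris’s actual settings.)

```javascript
// Rough SLA print-time estimate: time scales with layer count (model height),
// not with volume, so six miniatures take as long as one.
const estimatePrintSeconds = ({heightMm, layerMm, exposureS, liftS}) =>
  Math.ceil(heightMm / layerMm) * (exposureS + liftS);

// e.g. a 40mm miniature at 0.05mm layers, 8s exposure, 4s of lift/retract:
const t = estimatePrintSeconds({heightMm: 40, layerMm: 0.05, exposureS: 8, liftS: 4});
// 800 layers × 12s = 9600s, i.e. about 2.7 hours
```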

Each layer is being printed by shining a UV lamp through an LCD displaying the bitmap cross-section for that layer (as seen in step 6, above). That’s it! (The printer displays the layer being printed on its color touch screen.) Mechanically, this is much simpler than an FDM printer, and the quality of the output is dictated by the resolution of the LCD screen and the quality of the resin.

The printer is quiet compared to FDM (still, a pretty loud fan) and it doesn’t cause noticeable vibration (it’s only moving the head up and down with a screw drive).

When it’s done, it beeps and raises the print clear of the resin basin.

  1. Rinse it in alcohol
  2. Pull it out of the alcohol and let the alcohol drain
  3. Leave it to cure somewhere well-ventilated

So, how good are the results?

Fresh out of the printer, transparent prints look amazing. This is all covered in liquid resin (goop Gwyneth Paltrow might be proud of… probably not). Below the toroid you can see the support platform (from which the model hangs during printing).

It’s hard for me not to be hyperbolic and create unreasonable expectations, given my previous frustrations with FDM (fused deposition modeling, or “robot glue guns” as I call them), but here goes.

This is the first inexpensive 3d printer that I have found not to be disappointing.

This is the first 3d printer I have used that produced a good model the first time I used it. (And, no, it wasn’t the demo model that came with the USB stick.)

This is my first print, 36h after it emerged from the resin. It is perfect. (The support platform looks nicer than anything I’ve seen out of an FDM printer.) If you pixel-peep you can see “sedimentary” layers in the print surface, but the surface doesn’t feel rough. The only blemishes are tiny nubs where the supports were broken off.

I compare this to the first time I saw printed output from a laser printer, or the first time I used a good color flatbed scanner, or the first time I captured video from a broadcast quality camera with a broadcast quality capture card. In terms of output quality, this shit has arrived.

In a nutshell, the output is smooth (wet, transparent output almost looks like glass, but it becomes cloudy as it cures) and the prints feel rigid but a little flexible. The way I describe it is “somewhere between plastic army men and lego”. Prints feel about as durable as a nice plastic manufactured item of the same form (perhaps slightly nicer than the plastic D&D miniatures that sent me off on this tangent).

I should add that the printer estimates the cost of each print based on the volume of resin to be used and the cost of the resin. So far, my models have “cost” less than $0.25, but they’ve all been quite small. (But wait to find out about other consumable costs…)
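The arithmetic behind those estimates is straightforward: resin price per mL times the model’s volume. Using my bottle prices from above (the 5mL figure is just an assumed miniature volume):

```javascript
// Resin cost: a $15, 500mL bottle works out to $0.03/mL.
const resinCostUSD = (volumeMl, bottleUSD = 15, bottleMl = 500) =>
  volumeMl * (bottleUSD / bottleMl);

const cost = resinCostUSD(5); // a ~5mL miniature: about $0.15 of resin
```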

Flies in the ointment

New printing method, new failure modes. This print of my omnidude model, shown mid-run, looks great at first glance, but turns out to have a defect, partially visible here (most of the right leg is missing, as is the back of the base). I believe the cause is deposits on the FEP film blocking uniform exposure, or possibly a portion of the print base failing to adhere to the head.

Let me begin by saying that most of this is a result of user error. I couldn’t get pure isopropyl alcohol on New Year’s Eve at 10pm and I wanted to play with my toy!

Let’s start with nasty chemicals—the resin smells bad, and is viscous, sticky, slimy, hard to clean off, and probably bad for you. One bottle of resin arrived having leaked a bit inside its container. The prints—perhaps because I’m not using pure alcohol and/or don’t have a proper “curing box”—are slightly tacky for hours after printing. Nevertheless, my first print has lost almost all its tackiness 36h or so since printing.

Recovering unused resin using the provided paper filter is horrible, and cleaning the tray is painful. But then there are some really gnarly issues…

First, my second print had some odd, minor flaws in it, which I ascribed to my modeling, but turned out to be much nastier.

Next, we had a power failure while a print was in progress and the printer lost track of where it was. I tried to recalibrate by just moving the print head down and fiddling (rather than draining all the resin, cleaning everything, recalibrating properly, and so on), but it didn’t work.

My weasel character was one of the two figures in progress when the power failed. (Shown here atop a quarter and an old two pound coin for scale—this thing is tiny.) It shed a few layers from the back of its tail, and there’s that small blemish on the front of his abdomen (which I don’t see with my eyes). I added some very fine detail to the eyes (an indentation on his right (cyborg) eye, and a pupil on his left eye). The latter isn’t visible.

This profile shows the flawed tail, several layers of which sloughed off at a touch.

My first print after the power outage and my dodgy recalibration was a disaster. The model fell from its supports and became a nasty blob, lightly stuck to the FEP film at the bottom of the tray.

After draining and filtering as much resin as I could (and spilling a fair bit and cleaning that up—ugh!) I cleaned the tray and found little bits of solid resin stuck to the film all over. It looked to me like one stuck deposit had been the cause of missing volume in two of my prints.

One of the stubborn spots left over after my initial attempts at cleaning the resin tray screen.

“Fixing” this involved alcohol and careful scraping with the (included) plastic scraping tool, and then wiping away the alcohol and detritus and allowing it to air dry. The results looked pretty good, but on inspection the FEP screen is quite scratched up now.

So, with this in mind, I decided to track down a decent amount of pure isopropyl alcohol, replacement FEP film, and more (and, I hope, better) filter cones, and then I will try again.

(These links are not an endorsement. I provide them because I found identifying suitable items on Amazon to be quite tricky, and getting them from China was going to be slow and quite expensive.)

To be continued…

Must all data-tables kind of suck?

b8r's data-table in action
b8r now has a data-table component. It’s implemented in a little over 300 loc and supports custom headers, fixed headers, a correctly-positioned scrollbar, custom content cells, sorting, column resizing, and showing/hiding of columns.

Bindinator, up until a few weeks ago, did not have a data-table component. It has some table examples, but they’ve not seen much use in production. That’s because b8r is a rewrite of a rewrite of a library named bindomatic that I wrote for the USPTO; bindomatic was built to do all the data-binding outside of data-tables in a large, complex project, and then it ended up replacing all the data-tables as well.

It turns out that almost any complex table ends up being horrifically special-cased, and bindomatic, by being simpler, made it easier to build complex things bespoke. (I should note that I also wrote the original data-table component that bindomatic replaced.)

Ever since I first wrote a simple file-browser using b8r-native I’ve been wishing to make the list work as well as Finder’s windows. It would also benefit my RAW file manager if that ever turns into a real product.

The result is b8r’s data-table component, which is intended to scratch all my itches concerning data-tables (and tables in general) — e.g. fixed headers, resizable columns, showing/hiding columns, sorting, affording customization, etc. — while not being complex or bloated.

At a little over 300 loc I think I’ve avoided complexity and bloat, at any rate.

As it is, it can probably meet 80% of needs with trivial configuration (which columns do you want to display?), another 15% with complex customization. The remaining 5% — write your own!

Implementation Details

The cutest trick in the implementation is using a precisely scoped CSS variable to control a CSS grid, conforming all the column contents to the desired sizes. To change the entire table’s column layout, exactly one property of one DOM element gets changed. When you resize a column, the component rewrites the CSS variable (if the number of rows is below a configurable threshold it live-updates the whole table; otherwise it only updates the header row until you’re done).
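A sketch of the idea, with hypothetical names (this is not b8r’s actual code): all the column widths are folded into one value, written to a single CSS variable that the grid consumes.

```javascript
// Build a grid-template-columns value from a list of column widths;
// numbers become px, strings ('auto', '1fr', etc.) pass through untouched.
const gridSpec = widths =>
  widths.map(w => (typeof w === 'number' ? `${w}px` : w)).join(' ');

// In the browser, resizing a column then touches exactly one property
// of one element, e.g.:
//   table.style.setProperty('--column-widths', gridSpec([120, 80, 'auto']));
// with CSS along the lines of:
//   .data-table .row { display: grid; grid-template-columns: var(--column-widths); }
```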

Another cute trick is to display the column borders (and the resizing affordance) using a pseudo-element that’s 100vh tall and clipped to the table component. (A similar trick could be used to highlight columns on mouse-over.)

Handling the actual drag-resizing of columns would be tricky, but I wrote b8r’s track-drag library some time ago to manage dragging once-and-for-all (it also deals with touch interfaces).
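Generic drag tracking boils down to remembering where the pointer started and reporting deltas; something like this sketch (this is not track-drag’s actual API):

```javascript
// Minimal drag tracker: wire start/move/end to pointerdown/pointermove/pointerup
// (or their touch equivalents); onDrag receives (dx, dy) from the drag origin.
function makeDragTracker(onDrag) {
  let origin = null;
  return {
    start: e => { origin = {x: e.clientX, y: e.clientY}; },
    move: e => { if (origin) onDrag(e.clientX - origin.x, e.clientY - origin.y); },
    end: () => { origin = null; },
  };
}
```

Column resizing then just feeds the horizontal delta into the width of the column being dragged.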

Next Steps…

There’s a to-do list in the documentation. None of it’s really a priority, although allowing columns to be reordered should be super easy and implementing virtualization should also be a breeze (I scratched that itch some time back).

The selection column shown in the animation is actually implemented using a custom headerCell and custom contentCell (there’s no selection support built in to data-table). I’ll probably improve the example and provide it as a helper library that provides a “custom column” for data-table and also serves as an example of adding custom functionality to the table.

I’d like to try out the new component with some hard-core use-cases, e.g. the example RAW file manager (which can easily generate lists of over 10,000 items), and the file-browser (I have an idea for a Spotlight-centric file browser), and — especially — my galaxy generator (since I have ideas for a web-based MMOG that will leverage the galaxy generator and use Google Firebase as its backend).

Blender 2.8 is coming

Unbiased rendering? Check. Realtime rendering? Check. Unified shader model? Check. Class-leading user interface? Check. Free, open source, small? Check. Blender 2.8 offers everything you want in a 3d package and nothing you don’t (dongles, copy protection, ridiculous prices, massive hardware requirements).

There aren’t many pieces of open source software that have been under continuous active development for twenty years without going through a single “major version change”. When I started using Blender in the early 2000s, it was version 2.3-something. In the last year it’s been progressing from 2.79 to 2.8 (I think technically the current “release” version is 2.79b, b as in the third 2.79 release, not beta).

What brought me to Blender was a programming contract for an updated application which, in my opinion, needed an icon. I modeled a forklift for the icon in Silo 3D (which introduced me to “box-modeling”) but needed a renderer, and none of my very expensive 3d software (I owned licenses for 3ds max, ElectricImage, and Strata StudioPro, among other things) ran on my then-current hardware. Blender’s renderer even supported motion blur (kind of).

The blender I started using had a capable renderer that was comparatively slow and hard to configure, deep but incomprehensible functionality, and a user interface that was so bad I ended up ranting about it on the blender forums and got so much hatred in response that I gave up being part of the community. I’ve also blogged pretty extensively about my issues with blender’s user interface over the years. Below is a sampling…

Blender now features not one, not two, but three renderers. (And it supports the addition of more renderers via a plugin architecture.) There’s a fast viewport-oriented engine (Workbench), a real-time game-engine-style shader-based renderer (Eevee), and a GPU-accelerated unbiased (physically-based) renderer (Cycles). All three are fully integrated into the editor view, meaning you can see the effects of lighting and procedural material changes interactively.

The PBR revolution has slowly brought us to a reasonably uniform conceptualization of what a 3d “shader” should look like. Blender manages to encapsulate all of this into one, extremely versatile shader (although it may not be the most efficient option, especially for realtime applications).

Eevee and Cycles also share the same shader architecture (Workbench does not) meaning that you can use the exact same shaders for both realtime purposes (such as games) and “hero renders”.

Blender 2.8 takes blender from — as of say Blender 2.4 — having one of the worst user interfaces of any general-purpose 3D suite, to having arguably the best.

The most obvious changes in Blender 2.8 are in the user interface. The simplification, reorganization, and decluttering that has been underway for the last five or so years has culminated in a user interface that is bordering on elegant: it provides a collection of reasonably simple views that are task-focused yet not modal, while still having the ability to instantly find any tool by searching (now command-F for find instead of space by default; I kind of miss space). Left-click to select is now the default and a first class citizen in the user interface (complaining about Blender’s right-click-to-select, left-click-to-move-the-“cursor”-and-screw-yourself is literally what got me chased off Blender’s forums in 2005).

Blender still uses custom file requesters that are simply worse in every possible way than the ones the host OS provides. Similarly, but less annoyingly, Blender uses a custom in-window menubar, which means it wastes a lot of screen real estate when not used in full-screen mode.

OK so the “globe” means “world” and the other “globe” means “shader”…

Blender relies a lot on icons to reduce the space required for the — still — enormous numbers of tabs and options, and it’s pretty hard to figure out what is supposed to mean what (e.g. the “globe with a couple of dots” icon refers to scene settings while the nearly identical “globe” icon refers to materials — um, what?). The instant search tool is great but doesn’t have any support for obvious synonyms, so you need to know that it’s a “sphere” and not a “ball” and a “cube” and not a “box” but while you “snap” the cursor you “align” objects and cameras.

Finally, Blender can still be cluttered and confusing. Some parts of the UI are visually unstable (i.e. things disappear or appear based on settings picked elsewhere, and it may not be obvious why). Some of the tools have funky workflows (e.g. several common tools only spawn a helpful floating dialog AFTER you’ve done something with the mouse that you probably didn’t want to do) and a lot of keyboard shortcuts seem to be designed for Linux users (ctrl used where command would make more sense).

The Blender 2.8 documentation is pretty good but also incomplete. E.g. I couldn’t find any documentation of particle systems in the new 2.8 documentation. There are plenty of websites with documentation or tutorials on Blender’s particle systems, but which variant of the user interface they pertain to is pretty much luck-of-the-draw (and Blender’s UI is in constant evolution).

Expecting a 3D program with 20 years of development history and a ludicrously wide-and-deep set of functionality to be learnable by clicking around is pretty unreasonable. That said, blender 2.8 comes close, generally having excellent tooltips everywhere. “Find” will quickly find you the tool you want — most of the time — and tell you its keyboard shortcut — if any — but won’t tell you where to find it in the UI. I am pretty unreasonable, but even compared to Cheetah 3D, Silo, or 3ds max (the most usable 3D programs I have previously used) I now think Blender more than holds its own in terms of learnability and ease-of-use relative to functionality.

Performance-wise, Cycles produces pretty snappy previews despite, at least for the moment, not being able to utilize the Nvidia GPU on my MBP. If you use Cycles in previews, expect your laptop to run pretty damn hot. (I can’t remember which versions of Blender, if any, could use the GPU, and I haven’t tried it out on either the 2013 Mac Pro/D500 or the 2012 Mac Pro/1070 we have lying around the house because that would involve sitting at a desk…)

Cranked up, Eevee is able to render well beyond the requirements for broadcast animated TV shows. This frame was rendered on my laptop at 1080p in about 15s. Literally no effort has been made to make the scene efficient (there’s a big box of volumetric fog containing the whole scene, with a spotlight illuminating a bunch of high-polygon models with subsurface scattering and screenspace reflections).

Perhaps the most delightful feature of blender 2.8 though is Eevee, the new OpenGL-based renderer, which spans the gamut from nearly-fast-enough-for-games to definitely-good-enough-for-Netflix TV show rendering, all in either real time or near realtime. Not only does it use the same shader model as Cycles (the PBR renderer) but, to my eye, for most purposes it produces nicer results and it does so much, much faster than Cycles does.

Blender 2.8, now in late beta, is a masterpiece. If you have any interest in 3d software, even or especially if you’ve tried blender in the past and hated it, you owe it to yourself to give it another chance. Blender has somehow gone from having a user interface that only someone with Stockholm Syndrome could love to an arguably class-leading user interface. The fact that it’s an open source project, largely built by volunteers, and competing in a field of competitors with, generally, poor or at best quirky user interfaces, makes this something of a software miracle.

As the Wwworm Turns

Microsoft’s recent announcement that it is, in effect, abandoning the unloved and unlamented Edge browser stack in favor of Chromium is, well, both hilarious and dripping in irony.

Consider at first blush the history of the web in the barest terms:

  • 1991 — http, html, etc. invented using NeXT computers
  • 1992 — Early browsers (Mosaic, NetScape, etc.) implement and extend the standard, notably NetScape adds Javascript and tries to make frames and layers a thing. Also, the <blink> tag.
  • 1995 — Microsoft “embraces and extends” standards with Internet Explorer and eventually achieves a 95% stranglehold on the browser market.
  • 1997 — As NetScape self-destructs and Apple’s own OpenDoc-based browser “Cyberdog” fails to gain any users (mostly due to being OpenDoc-based), Apple begs Microsoft for a slightly-less-crummy version of IE5 to remain even vaguely relevant/useful in an era where most web stuff is only developed for whatever version of IE (for Windows) the web developer is using.
  • 2002 — Firefox rises from the ashes of NetScape. (It is essentially a cross-platform browser based on Camino, a similar Mac-only browser that was put together by developers frustrated by the lack of a decent Mac browser.)
  • 2003 — Stuck with an increasingly buggy and incompatible IE port, Apple develops its own browser based on KHTML after rejecting Netscape’s Gecko engine. The new browser is called “Safari”, and Apple’s customized version of KHTML is open-sourced as Webkit.
  • As a scrappy underdog, Apple starts a bunch of small PR wars to show that its browser is more standards-compliant and runs javascript faster than its peers.
  • Owing to bugginess, neglect, and all-round arrogance, Microsoft gradually loses a significant portion of market share to Firefox (and, on the Mac, Safari, which is at least as IE-compatible as the aging version of IE that runs on Macs). Google quietly funds Firefox via ad-revenue-sharing, since it is in Google’s interest to break Microsoft’s stranglehold on the web.
  • 2007 — Safari having slowly become more relevant to consumers as the best browser on the Mac (at least competitive with Firefox functionally and much faster and more power efficient than any competitor) is suddenly the only browser on the iPhone. Suddenly, making your stuff run on Safari matters.
  • 2008 — Google starts experimenting with making its own web browser. It looks around for the best open source web engine, rejects Gecko, and picks Webkit!
  • Flooded with ad revenue from Google, and divorced from any sense of user accountability, Firefox slowly becomes bloated and arrogant, developing an email client and new languages and mobile platforms rather than fixing or adding features to the only product it produces that anyone cares about. As Firefox grows bloated and Webkit improves, Google Chrome benefits as, essentially, Safari for Windows. (Especially since Apple’s official Safari for Windows is burdened with a faux-macOS “metal” UI, and users are tricked into installing it with QuickTime.) When Google decides to turn Android from a Sidekick clone into an iPhone clone, it uses its Safari clone as the standard browser. When Android becomes a success, suddenly Webkit compatibility matters a whole lot more.
  • 2013 — Google is frustrated by Apple’s focus on end-users (versus developers). For any proposed web feature, Apple asks: is the increase in size and power consumption justified by some kind of end-user benefit? If “no”, then Apple simply won’t implement it. Since Google is trying to become the new Microsoft (“developers, developers, developers”) it forks Webkit so it can stop caring about users and just add features developers think they want at an insane pace. It also decides to completely undermine the decades-old conventions of software version numbering, making new major releases at an equally insane pace.
  • Developers LOOOOVE Chrome (for the same reason they loved IE). It lets them reach lots of devices, it has lots of nooks and crannies, it provides functionality that lets developers outsource wasteful tasks to clients, if they whine about some bleeding edge feature Google will add it, whether or not it makes sense for anyone. Also it randomly changes APIs and adds bugs fast enough that you can earn a living by memorizing trivia (like the good old days of AUTOEXEC.BAT) allowing a whole new generation of mediocrities to find gainful employment. Chrome also overtakes Firefox as having the best debug tools (in large part because Firefox engages in a two year masturbatory rewrite of all its debugging tools which succeeds mainly in confusing developers and driving them away).
  • 2018 — Microsoft, having seen itself slide from utter domination (IE6) to laughingstock (IE11/Edge), does the thing-that-has-been-obvious-for-five-years and decides to embrace and extend Google’s Webkit fork (aptly named “Blink”).

RAW Power 2.0

RAW Power 2.0 in action — the new version features nice tools for generating black-and-white images.

My favorite tool for quickly making adjustments to RAW photos just turned 2.0. It’s a free upgrade but the price for new users has increased to (I believe) $25. While the original program was great for quickly adjusting a single image, the new program allows you to browse directories full of images quickly and easily, to some extent replacing browsing apps like FastRAWViewer.

The major new features — and there are a lot of them — are batch processing, copying and pasting adjustments between images, multiple window and tab support, hot/cold pixel overlays (very nicely done), depth effect (the ability to manipulate depth data from dual camera iPhones), perspective correction and chromatic aberration support.

The browsing functionality is pretty minimal. It’s useful enough for selecting images for batch-processing, but it doesn’t offer filtering (beyond the ability to show only RAW images) or the ability to quickly modify image metadata (e.g. rate images), so FastRAWViewer is still the app to beat for managing large directories full of images.

While the hot/cold pixel feature is lovely, the ability to show in-focus areas (another great FastRAWViewer feature) is also missing.

As before, RAW Power functions both as a standalone application and as a plugin for Apple’s Photos app (providing enhanced adjustment functionality).

Highly recommended!