Announcing bindinator.js

Having recently set up bindinator.com, I am “officially” announcing my side-project bindinator.js (formerly Bind-O-Matic.js). It’s a small (currently 7kB gzipped and minified) Javascript library designed to make developing in vanilla Javascript better in every way than using one or more frameworks. It embodies my current ideas about Javascript, the web, UI development, and programming in general — for whatever that’s worth.

Also, I’m having a ton of fun hacking on it.

By way of “dogfooding”, I’m simultaneously building a skunkworks version of my main work project (an Electron-based desktop application) with it, adapting whatever code I can to it, building b8r’s own demo environment, and slowly porting various other components and code snippets to it.

Above is my old galaxy generator, updated with a bunch of SVG goodness, and implemented using b8r (it was originally cobbled together using jQuery).

Why another framework?

I’ve worked with quite a few frameworks over the years, and in the end I like working in “vanilla” js (especially now that modern browsers have made jQuery pretty much unnecessary). Bindinator is intended to provide a minimal set of tools for making vanilla js development more:

  • productive
  • reusable
  • debuggable
  • maintainable
  • scalable

Without ruining the things that make vanilla js development as pleasant as it already is:

  • Leverage debugging tools
  • Leverage browser behavior (e.g. accessibility, semantic HTML)
  • Leverage browser strengths (e.g. let it parse and render HTML)
  • Be mindful of emerging ideas (e.g. semantic DOM, import)
  • Super fast debug cycle (no transpiling etc.) — see “leverage debugging tools”
  • Don’t require the developer to deal with different, conflicting models

The last point is actually key: pretty much every framework tries to abstract away the behavior of the browser (which, these days, is actually pretty reasonable) with some idealized behavior that the designer(s) of the framework come up with. The downside is that, like it or not, the browser is still there, so you (a) end up having to unlearn your existing, generally useful knowledge of how the browser works, (b) learn a new — probably worse — model, and then (c) reconcile the two when the abstraction inevitably leaks.

Being Productive

Bindinator is designed to make programmers and designers more (and separately) productive, decouple their activities, and be very easy to pick up.

To make something appear in a browser you need to create markup or maybe SVG. The easiest way to create markup or SVG that looks exactly like what you want is — surprise — to create what you want directly, not write a whole bunch of code that — if it works properly — will create what you want.

Guess what? Writing Javascript to create styled DOM nodes is slower, more error-prone, less exact, probably involves writing in pseudo-languages, adds compilation/transpilation steps, doesn’t leverage something the browser is really good at (parsing and rendering markup), and probably involves adding a mountain of dependencies to your code.

Bindinator lets you take HTML and bind it or turn it into reusable components without translating it into Javascript, some pseudo-language, a templating language, or transpilation. It also follows that a designer can style your markup.

Here’s a button:

<button class="awesome">Click Me</button>

Now it’s bound — asynchronously and by name.

<button class="awesome" data-event="click:reactor.selfDestruct">
  Click Me
</button>

When someone clicks on it, an object registered as “reactor” will have its “selfDestruct” property (presumably a function) called. If the controller object hasn’t been loaded, b8r’s event handler will store the event and replay it when the controller is registered.
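
The other side of that binding is just a plain object registered by name. Here is a minimal sketch (assuming b8r’s register method; the handler body is a placeholder):

const b8r = require('b8r');

// register the controller; any click events queued before this point
// are replayed once 'reactor' is registered
b8r.register('reactor', {
  selfDestruct: evt => console.log('boom'), // placeholder handler
});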

Here’s an input:

<input type="range">

And now its value is bound to the fuel_rod_position of an object registered as “reactor”:

<input type="range" data-bind="value=reactor.fuel_rod_position">

And maybe we want to allow the user to edit the setting manually as well, so something like this:

<input type="range" data-bind="value=reactor.fuel_rod_position">
<input type="number" data-bind="value=reactor.fuel_rod_position">

…just works.

Suppose later we find ourselves wanting lots of sliders like this, so we want to turn it into a reusable component. We take that markup, modify it slightly, and add some setup to make it behave nicely:

<input type="range" data-bind="value=_component_.value">
<input type="number" data-bind="value=_component_.value">
<script>
 const slider = findOne('[type="range"]');
 slider.setAttribute('min', component.getAttribute('min') || 0);
 slider.setAttribute('max', component.getAttribute('max') || 10);
 register(data || {value: 0});
</script>

This is probably the least self-explanatory step. The script tag of a component executes in a private context where there are some useful local variables:

component is the element into which the component is loaded; findOne and find are syntax sugar for component.querySelector and component.querySelectorAll (converted to a proper array), respectively; and register is syntax sugar for registering the specified object under the component’s unique id.

Finally, we save it as “slider-numeric.component.html”. We can invoke it thus:

<span 
  data-component="slider-numeric"
  data-bind="component(value)=reactor.fuel_rod_position"
></span>

And load it asynchronously thus:

const {component} = require('b8r');
component('slider-numeric');

To understand exactly what goes on under the hood, we can look at the resulting markup in (for example) the Chrome debugger:

Chrome debugger view of a simple b8r component

Some things to note: data-component-id is human-readable and tells you what kind of component it is. The binding mechanism (change and input event handlers) is explicit and self-documented in the DOM, and the binding has become concrete (_component_ has been replaced with the id of that component’s instance). No special debugging tools required.
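
For illustration (the exact id format shown is my assumption, not necessarily b8r’s literal output), the instantiated component might look something like this:

<span
  data-component="slider-numeric"
  data-component-id="c#slider-numeric#1"
  data-bind="component(value)=reactor.fuel_rod_position"
>
  <input type="range" data-bind="value=c#slider-numeric#1.value">
  <input type="number" data-bind="value=c#slider-numeric#1.value">
</span>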

Code Reuse

Bindinator makes it easy to separate presentation (and presentation logic) from business logic, making each individually reusable with little effort. Components are easily constructed from pieces of markup, making “componentization” much like ordinary refactoring.

A bindinator component looks like this:

<style>
  /* style rules go here */
</style>
<div>
  <!-- markup goes here -->
</div>
<script>
  /* component logic goes here */
</script>

All the parts are optional. E.g. a component need not have any actual markup at all.

When a component is loaded, the HTML is rendered into DOM nodes, the script is converted into the body of a function, and the style sheet is inserted into the document head. When a component is instanced, the DOM elements are cloned and the factory function is executed in a private context.
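
Conceptually, the factory step amounts to something like this (a sketch of the idea, not b8r’s actual source):

// the component's script text becomes the body of a function whose
// parameters are the 'local variables' available inside the script tag
const factory = new Function(
  'component', 'b8r', 'find', 'findOne', 'data', 'register',
  scriptText
);

// instancing: clone the component's DOM, then run the factory in a private context
factory(componentElement, b8r, find, findOne, data, register);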

Debugging

Bindinator is designed to have an incredibly short debug cycle, to add as little cognitive overhead as possible, and work well with debugging tools.

To put it another way, it’s designed not to slow down the debug cycle you’d have if you weren’t using it. Bindinator requires no transpilation, templating languages, or parallel DOM implementations; it’s designed to leverage your existing knowledge of the browser’s behavior rather than subvert and complicate it; and if you inspect or debug code written with bindinator you’ll discover the markup and code you wrote where you expect to find them. You’ll be able to see what’s going on by looking at the DOM.

Maintenance

If you’re productive, write reusable (and hence DRY) code, and your code is easier to debug, your codebase is likely to be maintainable.

Scale

Bindinator is designed to make code scalable:

Code reuse is easy because views are cleanly separated from business logic.

Code is smaller because bindinator is small, bindinator code is small, and code reuse leads to less code being written, served, and executed.

Bindinator is designed for asynchrony, making optimization processes (like finessing when things are served, when they are loaded, and so forth) easy to employ without worrying about breaking stuff.

Core Concepts

Bindinator’s core concepts are event and data binding (built on the observation that data-binding is really just event-binding, assuming that changes to bound objects generate events) and object registration (named objects with properties accessed by path).

Bindinator provides a bunch of convenient toTargets — DOM properties to which you might want to write a value, in particular value, text, attr, style, class, and so forth. In most cases bindings are self-explanatory, e.g.

data-bind="style(fontFamily)=userPrefs.uiFont"

There are fewer fromTargets (value, text, and checked) which update bound properties based on user changes — for more complex cases you can always bind to methods by name and path.
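
A few more illustrative bindings using the targets mentioned above (the property names are invented for the example):

<img data-bind="attr(src)=user.avatarUrl">
<div data-bind="class(warning)=reactor.overheated">Core temperature</div>
<input type="checkbox" data-bind="checked=userPrefs.darkMode">

The first two are toTargets (data flows into the DOM); checked is also a fromTarget, so a user’s click flows back to userPrefs.darkMode.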

Components are simply snippets of web content that get inserted the way you’d want and expect them to, with some syntax sugar for allowing each snippet to be bound to a uniquely named instance object.

And, finally, b8r provides a small number of convenience methods (which it needs to do what it does) to make it easier to work with ajax (json, jsonp), the DOM, and events.

The Future

I’m still working on implementing literate programming (allowing the programmer to mix documentation, examples, and tests into source code), providing b8r-specific lint tools, building out the standard control library (although mostly vanilla HTML elements work just fine), and adding more tests in general. I’m tracking progress publicly using Trello.

What does a good API look like?

I’m temporarily unemployed — my longest period of not having to work every day since I was laid off by ValueClick in 2010 — so I’m catching up on my rants. This rant is shaped by experiences I had during my recent job interview process (I was interviewed by two Famous Silicon Valley Companies, one of which hired me and the other of which “didn’t think I was a good fit”). Oddly enough, the company that hired me interviewed me in an API-agnostic (indeed almost language-agnostic) way, while the company that didn’t interviewed me in terms of my familiarity with an API created by the other company (which they apparently use religiously; although maybe not so much following recent layoffs).

Anyway, the API in question is very much in vogue in the Javascript / Web Front End world, in much the same way as CoffeeScript and Angular were a couple of years ago. (Indeed, the same bunch of people who were switching over to Angular a few years back have recently been switching over to this API, which is funny to watch.) And this API, like the API it has replaced in terms of mindshare, is in large part concerned with a very common problem in UI programming.

A Very Common Problem in UI Programming

Probably the single most common problem in UI programming is synchronizing data with the user interface, or “binding”. There are three fundamental things almost any user-facing program has to do:

  • Populate the UI with data you have when (or, better, before) the UI first appears on screen
  • Keep the UI updated with data changed elsewhere (e.g. data that changes over time)
  • Update the data with changes made by the user

This stuff needs to be done, it’s boring to do, and it often gets churned a lot (there are constant changes made to a UI and the data it needs to sync with over the course of any real project). So it’s nice if this stuff isn’t a total pain to do. Even better if it’s pretty much automatic for simple cases.

The Javascript API in question addresses a conspicuous failing of Javascript given its role as “the language of the web”. In fact almost everything good and bad about it stems from its addressing this issue. What’s this issue? It’s the difficulty of dealing with HTML text. If you look at Perl, PHP, and JSP, the three big old server-side web-programming languages, each handles this particular issue very well. The way I used to look at it was:

  • A perl script tends to look like a bunch of code with snippets of HTML embedded in it.
  • A PHP script tends to look like a web page with snippets of code embedded in it.
  • A JSP script tends to look like a web page with horrible custom tags and/or snippets of code embedded in it.

If you’re trying to solve a simple problem like getting data from your database and sticking it in your dynamic web page, you end up writing that web page the way you normally would (as bog standard HTML) and just putting a little something where you want your data to be, and maybe some supporting code elsewhere. E.g. in PHP you might write “<p>{$myDate}</p>” while in JSP you’d write something like “<p><%= myDate %></p>”. These all look similar, do similar things, and make sense.

It’s perfectly possible to defy these natural tendencies, e.g. write a page that has little or no HTML in it and just looks like a code file, but this is pretty much how many projects start out.

Javascript, in comparison, is horrible at dealing with HTML. You either end up building strings manually (“<p>” + myDate + “</p>”), which gets old fast for anything non-trivial, or you manipulate the DOM directly through the browser’s APIs, having first added metadata to your existing HTML: e.g. you’d change “<p></p>” to “<p id="myDate"></p>” and then write document.getElementById('myDate').textContent = myDate; in a script tag somewhere else.
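
To make that concrete, here are both approaches (rows and container are assumed to exist for the sketch):

// string building gets old fast for anything non-trivial:
let html = '<table>';
for (const row of rows) {
  html += '<tr><td>' + row.name + '</td><td>' + row.date + '</td></tr>';
}
html += '</table>';
container.innerHTML = html;

// versus direct DOM manipulation, keyed off metadata added to the markup:
document.getElementById('myDate').textContent = myDate;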

The common solution to this issue is to use a template language implemented in Javascript (there are approximately 1.7 bazillion of them, as there are of anything involving Javascript) which allows you to write something like “<p>{{myDate}}</p>” and then do something like Populate(slabOfHtml, {myDate: myDate}); in the simplest case (cue discussion about code injection). The net effect is you’re writing non-standard HTML and using a possibly obscure and flawed library to manipulate markup written in this non-standard HTML (…code injection). You may also be paying a huge performance penalty because, depending on how things work, updating the page may involve regenerating its HTML and getting the browser to parse it again, which can suck — especially with huge tables (or, worse, huge slabs of highly styled DIVs pretending to be tables). OTOH you can use lots of jQuery to populate your DOM-with-metadata fairly efficiently, but this tends to be even worse for large updates.
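
A hypothetical, deliberately naive version of such a Populate function (real template libraries add escaping, nesting, and caching, which is exactly where the code-injection discussion comes in):

// minimal mustache-style interpolation; does no escaping whatsoever
function Populate(slabOfHtml, data) {
  return slabOfHtml.replace(/{{\s*(\w+)\s*}}/g, (match, key) => data[key]);
}

element.innerHTML = Populate('<p>{{myDate}}</p>', { myDate: myDate });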

The API in question solves this problem by uniting non-standard HTML and non-standard Javascript in a single new language that’s essentially a mashup of XML and Javascript that compiles into pure Javascript and [re]builds the DOM from HTML efficiently and in a fine-grained manner. So now you kind of need to learn a new language and an unfamiliar API.

My final interview with the company that did not hire me involved doing a “take home exam” where I was asked to solve a fairly open-ended problem using this API, for which I had to actually learn this API. The problem essentially involved: getting data from a server, displaying a table of data, allowing the user to see detail on a row item, and allowing the user to page through the table.

Having written a solution using this unfamiliar API, it seemed very verbose and clumsy, so I tried to figure out what I’d done wrong. I tried to figure out what the idiomatic way to do things using this API was and refine them. Having spent a lot of spare time on this exercise (and I was more-than-fully-employed at the time) it struck me that the effort I was spending to learn the API, and to hone my implementation, were far greater than the effort required to implement the same solution using an API I had written myself. So, for fun, I did that too.

Obviously, I had much less trouble using my API. Obviously, I had fewer bugs. Obviously I had no issues writing idiomatic code.

But, here’s the thing. Writing idiomatic code wasn’t actually making my original code shorter or more obvious. It was just more idiomatic.

To bind an arbitrary data object to the DOM with my API, the code you write looks like this:

$(<some-selector>).bindomatic(<data-object>);

The complex case looks like this:

$(<some-selector>).bindomatic(<data-object>, <options-object>);

Assuming you’re familiar with the idioms of jQuery, there’s nothing new to learn here. The HTML you bind to needs to be marked up with metadata in a totally standard way (intended to make immediate sense even to people who’ve never seen my code before), e.g. to bind myDate to a particular paragraph you might write: “<p data-source=".myDate"></p>”. If you wanted to make the date editable by the user and synced to the original data object, you would write: “<input data-bind=".myDate">”. The only complaints I’ve had about my API are about the “.” part (and I somewhat regret it). Actually the syntax is data-source="myData.myDate", where “myData” is simply an arbitrary word used to refer to the original bound object. I had some thoughts of directly binding to the object by name, somehow, when I wrote the API, but Javascript doesn’t make that easy.

In case you’re wondering, the metadata for binding tabular data looks like this: “<tr data-repeat=".someTable"><td data-source=".someField"></td></tr>”.
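
Putting the pieces together, here is a sketch of a bound table using the conventions above (the data shape is invented for the example):

<table id="people">
  <tr data-repeat=".rows">
    <td data-source=".name"></td>
    <td data-source=".date"></td>
  </tr>
</table>

<script>
  $('#people').bindomatic({
    rows: [
      { name: 'Anne Example', date: '2015-06-01' },
      { name: 'Sam Specimen', date: '2015-06-02' }
    ]
  });
</script>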

My code was leaner, far simpler, to my mind far more “obvious”, and ran faster than the code using this other, famous and voguish, API. There’s also no question my API is far simpler. Oh, and also, my library solves all three of the stated problems — you do have to tell it if you have changes in your object that need to be synced to the UI — (without polluting the source object with new fields, methods, etc.) while this other library — not-so-much.

So — having concluded that a programming job that entailed working every day with the second API would be very annoying — I submitted both my “correct” solution and the simpler, faster, leaner solution to the second company and there you go. I could have been laid off by now!

Here’s my idea of what a good API looks like

  • It should be focused on doing one thing and one thing well.
  • It should only require that the programmer tell it something it can’t figure out for itself and hasn’t been told before.
  • It should be obvious (or as obvious as possible) how it works.
  • It should have sensible defaults.
  • It should make simple things ridiculously easy, and complex things possible (in other words, its simplicity shouldn’t handcuff a programmer who wants to fine-tune performance, UX, and so on).

XCode and Swift

I don’t know if the binding mechanisms in Interface Builder seemed awesome back in 1989, but today — with all the improvements in both Interface Builder and the replacement of Objective-C with the (potentially) far cleaner Swift — they seem positively medieval to me, combining the worst aspects of automatic-code-generation “magic” and telling the left hand what the left hand is doing.

Let’s go back to the example of sticking myDate into the UI somewhere. IB doesn’t really have “paragraphs” (unless you embed HTML) so let’s stick it in a label. Supposing you have a layout created in IB, the way you’re taught — as a newb — to do this is:

  1. In XCode, drag from your label in IB to the view controller source code (oh, step 0 is to make sure both relevant things are visible)
  2. You’ll be asked to name the “outlet”, and then XCode will automagically write this code: @IBOutlet weak var myDate: UILabel!
  3. Now, in — say — the already-written-for-you viewDidLoad method of the controller you can write something like: myDate.text = _myDate (it can’t be myDate because you’ve used myDate to hold the outlet reference).

Congratulations, you have now solved one of the three problems. That’s two lines of code, one generated by magic, the other containing no useful information, that you’ve written to get one piece of data from your controller to your view.

Incidentally, let’s suppose I wanted to change the outlet name from “myDate” to “dateLabel”. How do I do that? Well, you can delete the outlet and create a new outlet from scratch using the above process, and then change the code referencing the outlet. Is there another way? Not that I know of.

And how do we solve the other two problems?

Let’s suppose we had in fact bound to an input field. So now my outlet looks like this: @IBOutlet weak var myDate: UITextField! (the “!” is semantically significant, not me getting excited).

  1. In XCode, drag from the field in IB to the view controller source code.
  2. Now, instead of creating an outlet, you select Action, and you make sure the type is UITextField, and change the event to ValueChanged.
  3. In the automatically-just-written-for-you Action code add the code _myDate = sender.text!

You’ve now solved the last of the three problems. You’ve had a function written for you automagically, and you’ve written one line of brain-dead code. That’s three more lines of code (and one new function) to support your single field. And that’s two different things that require messing with the UI during a refactor or if a property name gets changed.

OK, what about the middle problem? That’s essentially a case of refactoring the original code so that you can call it whenever you like. So, for example, you write a showData method, call it from viewDidLoad, and then call it when you have new data.
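
A sketch of that refactor (the names are illustrative):

override func viewDidLoad() {
    super.viewDidLoad()
    showData()
}

// all view population lives here, so it can be called whenever the data changes
func showData() {
    myDate.text = _myDate
}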

Now, this is all pretty ugly in basic Javascript too. (And it was even uglier until browsers added document.querySelector.) The point is that it’s possible to make it very clean. How to do this in Swift / XCode?

Javascript may not have invented the hash as a fundamental data type, but it certainly popularized it. Swift, like most recent languages, provides dictionaries as a basic type. Dictionaries are God’s gift to people writing binding libraries. That said, Swift’s dictionaries are strongly typed which leads to a lot of teeth gnashing.

Our goal is to be able to write something like:

self.bindData(<dictionary>)

It would be even cooler to be able to round-trip JSON (the way my Javascript binding library can). So if this works we can probably integrate a decent JSON library.

So the things we need are:

  • Key-value-pair data storage, i.e. dictionaries — check!
  • The ability to add some kind of metadata to the UI
  • The ability to find stuff in the UI using this metadata

This doesn’t seem too intimidating until you consider some of the difficulty involved in binding data to IB.

Tables

The way tables are implemented in Cocoa is actually pretty awesome. In essence, Cocoa tables (think lists, for now) are generated minimally and managed efficiently by the following mechanism:

The minimum number of rows is generated to fill the available space.

When the user scrolls the table, new rows are created as necessary, and old rows disposed of. But, to make it even more efficient rather than disposing of unused rows, they are kept in a pool and reallocated as needed — so the row that scrolls off the top as you scroll down is reused to draw the row that just scrolled into view. (It’s more complex and clever than this — e.g. rows can be of different types, and each type is pooled separately — but that’s the gist.) This may seem like overkill when you’re trying to stick ten things in a list, but it’s ridiculously awesome when you’re trying to display a list of 30,000 songs on your first generation iPhone.

In order for this to work, there’s a table delegate/data source protocol. The minimal implementation of this is that you need to tell the table how many rows of data you have and populate a row when you’re asked to.
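
That minimal contract looks something like this (Swift 2-era syntax to match the rest of this post; tableData is an assumed array of strings):

// tell the table how many rows of data there are
func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return tableData.count
}

// populate a (possibly recycled) row when asked
func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
    let cell = tableView.dequeueReusableCellWithIdentifier("Cell", forIndexPath: indexPath)
    cell.textLabel?.text = tableData[indexPath.row]
    return cell
}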

So, for each table you’re dealing with you need to provide a delegate that knows what’s supposed to go in that specific table. Ideally, I just want to do something like self.bind(data) in the viewDidLoad method; how do I create and hook up the necessary delegates? It’s even worse if I want to use something like RootViewController (e.g. for a paged display), which is fiddly to set up even manually. But, given how horrible all this stuff is to deal with in vanilla Swift/Cocoa, that’s just how awesome it will be to never have to do any of it again if I can pull this off. Not only that, but to implement this I’m going to need to understand the ugly stuff really well.

Excellent.

Adding Metadata to IB Objects

The first problem is figuring out some convenient way of attaching metadata to IB elements (e.g. buttons, text fields, and so on). After a lot of spelunking, I concluded that my first thought (to use the accessibilityIdentifier field) turns out to be the most practical (even though, as we shall see, it has issues).

There are oodles of different, promising-looking fields associated with elements in IB, e.g. you can set a label (which appears in the view hierarchy, making the control easy to find). This would be perfect, but as far as I could tell it isn’t actually accessible at runtime. There are also User Defined Runtime Attributes, which are a bit fiddly to add and edit, but worse, as far as I’ve been able to tell, safely accessing them is a pain in the ass (i.e. if you simply ask for a property by name and it’s not there — crash!). So, unless I get a clue, no cigar for now.

The nice thing about the accessibilityIdentifier is that it looks like it’s OK for it to be non-unique (so you can bind the same value to more than one place) and it can be directly edited (you don’t need to create a property, and then set its name, set its type as you do for User Defined Runtime Attributes). The downside is that some things — UITableViews in particular — don’t have them. (Also, presumably, they have an impact on accessibility, but it seems to me like that shouldn’t be a problem if you use sensible names.)

So my first cut of automatic binding for Swift/Cocoa took a couple of hours and handled UITextField and UILabel.

import UIKit

class Bindery: NSObject {
    var view: UIView!
    var data: [String: AnyObject?]!
    
    init(view v: UIView, data dict: [String: AnyObject?]) {
        view = v
        data = dict
    }
    
    // find all direct subviews whose accessibilityIdentifier matches a key
    func subviews(name: String) -> [UIView] {
        var found: [UIView] = []
        for v in view!.subviews {
            if v.accessibilityIdentifier == name {
                found.append(v)
            }
        }
        return found
    }
    
    // sync a user-edited value back into the data dictionary (UI -> data)
    @IBAction func valueChanged(sender: AnyObject?) {
        var key: String? = nil
        if sender is UIView {
            key = sender!.accessibilityIdentifier
        }
        // ignore views that aren't bound to a key in the dictionary
        if key == nil || !data.keys.contains(key!) {
            return
        }
        if sender is UITextField {
            let field = sender as? UITextField
            data[key!] = field!.text
        }
        // push the change to any other views bound to the same key
        updateKey(key!)
    }
    
    // push a single key's value into every view bound to it (data -> UI)
    func updateKey(key: String) {
        let views = subviews(key)
        let value = data[key]
        for v in views {
            if v is UILabel {
                let label = v as? UILabel
                label!.text = value! is String ? value as! String : ""
            }
            else if v is UITextField {
                let field = v as? UITextField
                field!.text = value! is String ? value as! String : ""
                // listen for edits so changes flow back into the dictionary
                field!.addTarget(self, action: "valueChanged:", forControlEvents: .EditingDidEnd)
            }
        }
    }
    
    // push every key into the UI; returns self so it can be chained off the initializer
    func update() -> Bindery {
        for key in (data?.keys)! {
            updateKey(key)
        }
        return self
    }
}

Usage is pretty close to my ideal with one complication (this code is inside the view controller):

    var binder: Bindery!
    var data: [String: AnyObject?] = [
        "name": "Anne Example",
        "sex": "female"
    ]

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        
        binder = Bindery(view: self.view, data: data).update()
    }

If you look closely, I have to call update() from the new Bindery instance to make things work. This is because Swift doesn’t let me refer to self inside an initializer (I assume this is designed to avoid possible issues with computed properties, or to encourage programmers to not put heavy lifting in the main thread… or something). Anyway it’s not exactly terrible (and I could paper over the issue by adding a class convenience method).

OK, so what about tables?

Well, I figure tables will need their own special binding class (which I shockingly call TableBindery) and implement it so that you need to use an outlet (or some other hard reference to the table), and then I use Bindery to populate each cell (this lets you create a cell prototype visually and then bind to it with almost no work). Here’s how that ends up looking (I won’t bore you with the implementation, which is pretty straightforward once I worked out that a table cell has a child view containing all of its contents, and how to convert a [String: String] into a [String: AnyObject?]):

    var data:[String: AnyObject?] = [
        "name": "Anne Example",
        "sex": "female"
    ]
    
    override func viewDidLoad() {
        ...
        tableBinder = TableBindery(table:table, array: tableData).update()
    }

In the course of getting this working, I discover that the prototype cells do have an accessibilityIdentifier, so it might well be possible to spelunk the table at runtime and identify bindings by using the attributes of the table’s children. The fact is, though, that tables — especially the sophisticated multi-section tables that Cocoa allows — probably need to be handled a little more manually than HTML tables usually do, and having to write a line or two of code to populate a table is not too bad.

Now imagine if Bindery supported all the common controls, provided a protocol for allowing custom controls to be bound, and then imagine an analog of TableBindery for handling Root view controllers. This doesn’t actually look like a particularly huge undertaking, and I already feel much more confident dealing with Cocoa’s nasty underbelly than I was this morning.

And, finally, if I really wanted to provide a self.bindData convenience function — Swift and Cocoa make this very easy. I’d simply extend UIView.
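
A sketch of that extension (it just wraps the Bindery class above):

extension UIView {
    // lets any view bind a dictionary to its subviews: someView.bindData(data)
    func bindData(data: [String: AnyObject?]) -> Bindery {
        return Bindery(view: self, data: data).update()
    }
}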

Electronic vs. Printed Books, Gun Control, and Moral Panic

Future Shock (book) cover — cover with rather anti-prophetic quotation

Back in the early days of the Internet you’d frequently find sensationalist stories along the lines of “teenager learns how to make bomb from internet” in the media. Similarly, we’ve seen headlines along the lines of “mass murderer played violent video games”, or “rapist had huge collection of porn” (no links because my search results are dominated by stories about pedophiles, which is a whole different topic), or — one of my favorites — “teenager who played D&D commits suicide”. Another very common example these days is “woman causes fatal car accident while texting”. All of these headlines tend to have some things in common:

  • The basic facts are true, e.g. the teenager who committed suicide did in fact play D&D.
  • The implied causal factor is something novel to, or stigmatized by, society, or both.
  • There is no actual attempt to determine if there is even a correlation, let alone causation (e.g. are D&D players more or less likely to commit suicide than non-D&D players?) — usually the stories don’t bother to even cite relevant statistics.

It’s also worth noting that you almost never see the opposite kind of story. E.g. “teenager who plays D&D volunteers at suicide help line,” even though there are doubtless plenty of examples, because they don’t fit the desired narrative of “thing you already don’t like and are kind of suspicious of is actually morally bad and here’s why”.

Back in the 90s, whenever I heard relatives quoting insane alarmist stories about the internet to me (and I was “the guy” in my extended family who “knew about computers” so I got a lot of this) I would say “replace the word internet in the story with books or letters and see whether it is equally applicable”. After all, there are plenty of stories about women being seduced by serial killers by mail, or people learning to do terrible things in a library.

In a recent discussion with my very smart wife and some of her very smart colleagues, the question of electronic vs. printed books came up. All of the colleagues were young (well, younger than us), technically savvy, and had a background in either psychology or communications or both. All claimed there was a body of research showing that retention and comprehension rates are lower when you read electronic vs. printed material.

(You know, I remember when some programmers liked to print hard copies of their code for debugging because reading code on crappy CRTs sucked. But you don’t see that much any more.)

Later, there was discussion of how, for example, “things that girls do” tend to be trivialized and stigmatized by society (e.g. “texting” is something generally ascribed more to girls than boys), and the term “moral panic” came up. This is the general term for when “society” (e.g. the mass media, or whatever) encounters some new or marginalized phenomenon and — finding cases where it appears to be linked to some other bad thing (e.g. a fatal car accident) — loses its shit. The current vogue example is selfies (it was hard to find the linked story since most such stories are about people accidentally or deliberately shooting themselves while taking selfies — hooray for gun owners — but let’s not get ahead of ourselves).

I pointed out that the anti-eBook stance they were all taking was an example of moral panic.

First, I tried to find the research that shows lower comprehension rates when reading eBooks. Here’s an example. Let me quote the article:

“When you read on paper you can sense with your fingers a pile of pages on the left growing, and shrinking on the right,” said Mangen. “You have the tactile sense of progress, in addition to the visual … [The differences for Kindle readers] might have something to do with the fact that the fixity of a text on paper, and this very gradual unfolding of paper as you progress through a story, is some kind of sensory offload, supporting the visual sense of progress when you’re reading. Perhaps this somehow aids the reader, providing more fixity and solidity to the reader’s sense of unfolding and progress of the text, and hence the story”

If you read the article, they have two studies, one of 50 people and another of 72. In the first case the researcher mostly got null results (i.e. the readers of both media had equal success in answering comprehension questions) but discovered that the readers of eBooks had more difficulty placing events in order. In the second case the researcher found “students who read texts in print scored significantly better on the reading comprehension test than students who read the texts digitally”. Of course both tests were badly flawed. In the first test only two subjects were familiar with the software and hardware being used (everyone, of course, is familiar with printed material). In the second case the “digital” version was a PDF not, say, an ePub which is designed to be read on screen. Imagine if I did such a test comparing reading a web page with a printed version of that web page in whatever revolting format the browser elected to print it in.

So you could equally draw the conclusion that: “For most intents and purposes, even inexperienced eBook readers using terrible hardware and software have the same comprehension rates as readers of paper books.” Let me also suggest that based on the elaborate and completely unfounded theory that one of the researchers has for why these rather shaky results were obtained, I suspect we’re looking at confirmation bias. She had a theory that eBooks were somehow worse and then went looking for data.

Mechanism of Action

My biggest methodological problem with this research is that it is horribly artificial. Let’s suppose you have an actual task — e.g. you watched Ken Burns’s The Civil War, were incredibly impressed by Shelby Foote, and want to read his magnum opus, The Civil War: A Narrative. Or you have to write a paper discussing a bunch of papers for your Art History class. How quickly and easily are you able to accomplish the task you set out to complete? How happy are you with the outcome? And how did this figure into a bigger picture? E.g. how much did you actually learn (not just about the text in question, but in general)?

E.g. one of the things I treasure about reading electronically is how quickly I can skip over crap. E.g. if I read something interesting but wonder if it covers a specific point, I can quickly search for it and skip to the next article if not (or decide whether what the article has to say on that point prompts me to read the entire article more carefully). If you were testing my comprehension of the article, I would fare worse for an article that didn’t interest me — but I would have spent far less time deciding I didn’t need to read it; time I could spend reading something else.

I pick both of these examples from personal experience. In the former case I read The Civil War in paperback. I borrowed the three enormous books from a friend; it was absolutely riveting; and I enjoyed it very much. But, if I were to read it today I would far prefer to read it in electronic form and (a) I would be able to highlight or annotate anything I thought was interesting or which I didn’t understand without fear of damaging my friend’s books; (b) I would be able to instantly look up any word I didn’t understand or whose meaning I was unsure of without losing my spot; (c) I could jump to-and-from end-notes without losing my spot; and (d) I would be saved a great deal of back and neck pain caused by trying to read three trade-sized 1000+ page books in bed.

That is not to say that e-books are in all ways superior to printed books. E.g. the reproduction of images is often terrible; e-books often have typographical and layout errors that would be considered egregious in a printed book; and e-readers are simply not up to the task of displaying highly visual books. But these aren’t intrinsic flaws of e-books, they are implementation flaws. It’s like complaining about printed books in a land of incompetent printers. Conversely, I won’t claim the fact that videos play badly on paper as a point for e-books, even though that’s true and it’s unlikely to get better.

In the second case I had digital copies of all the papers I had to read and discuss. When I found things I didn’t understand I could instantly look them up, capture and download them when appropriate, find related or contrary pieces and download those, and quickly skip around to relevant pieces or find things I had read using text search (for the papers that were text PDFs rather than scanned documents, at least). I remember doing this kind of thing with paper manuscripts and it was horrible. Heck, the readings for Anthropology A01 were heavier than a laptop and far less useful.

When you’re trying to explain why something is better or worse you don’t just need data, you also need a mechanism of action. Data tells you something is going on, but mechanics tell you what to look for. (The researcher who likes how books feel and smell goes looking for evidence that this somehow improves reading comprehension; I prefer a more direct mechanism of action than that.) The “mechanics” the researchers claim make printed books better sound like cargo cultism or fetishism to me. E.g. the idea that the number of pages in your left hand versus your right somehow gives you cues to help keep track of the temporal order of events does not stand up to inspection. What if I am reading a short story in a large anthology? What if I am reading a temporally confusing book (such as Iain M Banks’s Use of Weapons) where chronological order does not in any way match page order? What about constantly having to shift posture to cope with reading left-vs-right pages in a heavy book? This is not going to show up in a study of a 28 page short story, but it’s a real consideration for actual books. I can show some real, tangible advantages to reading electronically, and it would be easy to devise experiments to test whether these advantages are real (although asking subjects to read 3000 page trilogies might make recruiting volunteers difficult).

How Wrong Can You Be?

Now, let’s pause and balance this out by considering the idea that the liberal consensus in favor of gun control might equally be a moral panic. This is similar to some of the arguments people who I somehow seem to know on Facebook have made against gun control. “Don’t blame the guns.” Let’s ignore the vapidity of that argument and assume that the more cogent argument (that this is a moral panic) was made.

First, let’s look at the degrees to which an assumed causation can be wrong. I hope you already know that “correlation does not equal causation”. E.g. if places with fewer murders happen to have fewer guns, all you have is a correlation. It may be that in places where people hate each other they are more likely to buy weapons. The example my Stats professor liked to use was that the birthrate in Stockholm is strongly correlated with the stork population.

When a correlation turns out not to reflect causation, the technical term used is that there are mediating factors — in this case season. In extreme cases like this, you’re likely to find that the mediating factor completely eliminates the original correlation (or perhaps even makes it slightly negative); e.g. we can probably find outliers where for some reason there were fewer storks but the birthrate did not dip and vice versa, so taking season into account the correlation between storks and babies becomes much lower and possibly negative.

So, assuming that correlation equals causation is a fairly common trap. You can be wrong, but you’ll find yourself in good company.

People who read text written in sans serif fonts have poorer comprehension than people who read text written in serif fonts. Therefore sans-serif typefaces cause poorer reading comprehension.

This “fact” was well-known when I was starting out in the usability business. (The suggested “mechanism of action” was that the serifs helped readers visually line up the letters or something — given that serifs were invented to prevent rain from eroding letters carved in stone, and then copied by typesetters to lend printed text the authority of Roman carvings, this idea seems fanciful.) In the nineties a lot of contrary evidence was showing up because very low resolution computer displays and not-especially-high-resolution laser printers both sucked at reproducing serif fonts. But it wasn’t long before the result stopped replicating even for documents printed at very high resolution. It turns out it was an example of this kind of trap. What seems to have been happening was that people who were brought up reading serif fonts (in the 1970s this was almost everyone) were better at reading serif fonts.

People who read text on an e-reader have poorer comprehension than people who read text on paper. Therefore e-readers cause poorer reading comprehension.

We’ve just seen that in the cases above, a specific e-reader was used and the users weren’t used to it. We also don’t know whether, for example, good typefaces were picked on the e-reader (e.g. Apple, Microsoft, and Google have each done a lot of work to maximize legibility of screen text; Amazon, not so much.) And, “comprehension” was only worse in one specific way (and the theorized reason for it could easily be addressed by improving the e-reader software, or using a better e-reader).

But again, moral panic is frequently guilty of assuming correlation itself.

Assuming correlation and then jumping to causation is a way of being even more wrong than actually finding a correlation and assuming causation.

And then, worst of all, there’s the availability heuristic: we hear lots of news about X, therefore X happens a lot. This is a major reason why people are more afraid of terrorists than staircases, and of flying than driving. “Swimmer eaten by shark.” “D&D player commits suicide.” “Illegal immigrant is a serial killer.” “Black guy kills cop.” The fact there is more news about something doesn’t mean there is more something. Sharks kill very few people. D&D players are less suicidal than non-D&D players. Illegal immigrants are more law-abiding than pretty much anyone else. Violence against cops is at historic lows.

So the hierarchy of wrong goes something like this:

  • Incredibly wrong. We hear about it a lot, therefore it must be common. (Availability Heuristic.)
  • Wrong. We hear about A and B together a lot, therefore A must be related to B. (Assuming correlation.) Therefore A causes B. (And then jumping to causation.)
  • Wrong but there’s probably something there. We’ve found that A seems to occur with B. Therefore A causes B. (Assuming causation.)

Moral panics are usually more wrong than merely assuming causation. E.g. it turns out that D&D players are less likely to commit suicide than non-D&D players. Similarly, the evidence linking porn to rape is pretty weak (if anything, the ready availability of sexual outlets such as porn and prostitution appears to reduce rape).

Spot the Moral Panic

Can we apply the same arguments to the gun debate? Or texting while driving?

I’ll pick these two examples because I am in favor of gun control, but I think concerns about texting while driving are overblown.

Evidence in favor of gun control:

  • Countries with more gun control have fewer gun deaths. (Factual correlation.)
  • Mass shootings disappeared in Australia after ban on semi-automatic weapons. (Factual correlation.)
  • US States with stronger gun control laws have lower gun violence when you correct for economic factors, etc. etc. (Factual correlation that’s hard to explain to people, especially those who are disinclined to believe in Math or evidence in general.)
  • Black market prices for weapons soared in Australia after ban on semi-automatic weapons. (Factual correlation.)
  • It’s hard to shoot people when you don’t have a gun. (Mechanism of Action.)
  • Many mass shooters buy guns legally. (Mechanism of Action.)
  • Studies show that people with guns do not help (in fact make things worse) in active shooter situations. (Mechanism of Action.)

Evidence/Arguments against:

  • Switzerland. (Lots of guns, but very low gun violence. That said, Swiss gun control laws — which are national, standardized, and consistently enforced — would more than satisfy the most liberal gun control advocate in the US, and gun ownership in Switzerland is roughly half that in the US and declining.)

No, this isn’t a moral panic, it’s a very solid case. The one “exception” looks, upon examination, to be more evidence in favor of the case. This is probably why the NRA has done its best to block any scientific investigation of the subject.

Let’s try texting while driving:

Evidence/Arguments against:

This case is weaker, but there’s still something there. It’s not that texting while driving isn’t bad, it’s just that it’s not clear whether laws against it are much use. (And we might just as well have laws against eating, listening to engaging music (or NPR), or having conversations while driving.)

Back to Dead Trees

So, let’s circle around to e-readers vs. printed books:

Evidence in favor of printed books:

  • If you get 50 people to read a short story in printed form vs. a Kindle then the people who read on the Kindle are not as good at placing a list of events in the story in the correct order. (Factual correlation.)
  • If you get 72 Norwegian 10th-graders to read a text in printed form vs. a PDF then the students who read the printed texts scored “significantly better on the reading comprehension test”. (Factual correlation.)

I should point out that this is an actively researched field, but based on my quick searches on Google Scholar, the more nuanced research is producing much more equivocal results. But the underlying problem remains: the population is still predominantly more familiar with printed books (remember serif typefaces) and the software is still primitive. The points of comparison are flawed and the population samples are biased. And, even then, the pro-book results are pretty weak.

Evidence/Arguments against fall into some basic categories:

  • The media being tested aren’t equally familiar. Everyone is familiar with printed books. Even those people familiar with e-readers probably aren’t using their own e-reader with their own favorite settings; it’s very hard to run a fair test. (Correlations are based on bad experiments.)
  • In each case, the test is unnecessarily compromised. E.g. a PDF is essentially an electronic representation of a paper document. An ePub is a digital document designed for reading on a screen. It is possible to create PDFs that are not horrible to read on screen, but I doubt this was done, and even the best designed PDF won’t allow the reader to choose their favorite typeface and font size and contrast. A blind person won’t get an accessible copy. (Advantages of electronic media are methodically ignored.)
  • The person is being asked to read a specific document, not complete a [realistic/representative] task. Even if the task is “read such and such a book” then, assuming it’s available as an e-book, in the real world the person with the electronic device is already hours or days ahead because they’ll have the book more-or-less instantly. If the task is “find the answer to X” then again the reader with the device will almost certainly find more information, be able to chase up more references, and will likely find more specifically applicable information. (The wrong questions are being asked.)

In the end, most of the arguments one sees for or against electronic books have nothing to do with the relative quality of the media, e.g.:

“I am still a total paper book lover. It’s just satisfying curling up with a book, the smell of the pages, the heft of the book. And there is the classic “Three B test” – bath, bus, bed.”

If you’ve ever read a big, thick book in bed I doubt you’d consider paper books to have any real advantage over electronic. I personally end up turning over every other page to avoid keeping three pounds of paper suspended in the air. And by the same token it’s much easier to read an electronic book in a bus. As for the bath — get a waterproof case and your point is?

“I was Switzerland in this discussion, but the ebook I was reading told me I was 84% finished with the book when the book ended. The remaining 16% was excerpts from the author’s other books, an author interview, and a discussion guide. Paper books are far superior when it comes to letting you know your place in a book, and that’s why I prefer them.”

That is annoying. Of course, I first encountered this kind of crap in printed books, so I don’t really see how that’s a strike against digital. And that’s without the garbage put in by authors themselves: e.g. the story part of Return of the King ends something like 75% of the way through; the last 25% of the book is appendices, most of which few would want to read.

It is refreshing to see someone talk about stuff other than whether they prefer the smell of paper to that of a kindle, e.g.:

“For me, it depends on the book—how visual it is (graphic novels I like in paper format), whether I’m more likely to race through it (a good novel) or linger and bounce around (poetry), how big it is (I wish the gigantic Robert Moses book was in eBook form), and how well the text was translated to Kindle (I heard bad things about the Game of Thrones digital versions, so went with paper for that).”

At last, some rationality. I personally would rather buy a graphic novel or a fine art book as paper because ebooks generally have very poor image quality (and even if they don’t it’s hard to tell before you buy them whether this book is the exception) and even if they had great image quality, there’s not really any good way to consume them (until the iPad Pro comes out, perhaps). But, if I knew that the images were going to be in great shape, and I had — say — an iPad that could project books at full resolution onto a 4K display, I would pick electronic books there as well.

It’s also worth noting that in many cases Kindle and other electronic editions will be perversely more expensive than their dead tree counterparts. (That’s why I went with dead trees for Game of Thrones, and it turned out to be a mistake because I lost the damn things.)

C# Part II

Galaxy Generated and Rendered in Unity 3D

In the end, it took me four evenings to replicate the functionality of my Javascript prototype. I should point out that I was able to do some really nice UI stuff (e.g. drag to rotate, mousewheel to zoom) in Unity that I had hesitated to mess with in Javascript (where the galaxy was rendered to a canvas).

On the whole, I have some impressions.

First, C# is very strict about types, and while I can see some benefits, I think a lot of it is utterly wasted. E.g. having to constantly cast from one numeric type to another is simply a pain in the butt (I routinely expect to have to tease apart a bunch of cryptic type-related errors every time I compile a few lines of new code).

// given an array of weights, pick an index with corresponding weighted probability,
// e.g. [1,2,3] -> 1/6 chance of 0, 2/6 chance of 1, 3/6 chance of 2
public uint Pick( uint[] weights ){
    int s = 0;
    uint idx;
    // total the weights
    foreach( uint w in weights ){
        s += (int)w;
    }
    s = Range (1, s); // returns a random int from 1 to s, inclusive
    // walk the weights until the running total crosses zero
    for( idx = 0; idx < weights.Length; idx++ ){
        s -= (int)weights[idx];
        if(s <= 0){
            break;
        }
    }
    return idx;
}

And all of this rigor didn’t actually prevent or even help debug an error I ran into with overflowing a uint: I have a utility that picks weighted random numbers, but I overflowed the weights, leading to an out-of-bounds error. (A simple fix is to change idx < weights.Length to idx < weights.Length - 1.) On the whole it would be nice if you could simply live in a world of “numbers” (as in Javascript) and only convert explicitly to a specific representation when doing something low level.
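
Concretely, the clamped loop looks like this (same names as the snippet above):

// clamp the loop so idx can never run past the last weight, even if an
// overflowed sum means s never reaches zero
for( idx = 0; idx < weights.Length - 1; idx++ ){
    s -= (int)weights[idx];
    if(s <= 0){
        break;
    }
}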

Second, there’s this weird restriction on naming classes within a namespace such that the namespace and class sometimes may not match and the class you want is often not in the namespace you expect. (E.g. I wanted to create the namespace LNM.PRNG and define the class PRNG in it, but this confuses the compiler, so I ended up calling the namespace LNM.Random — so code referring to this is “using LNM.Random” which mysteriously causes a class called PRNG to become available.) I don’t see why namespaces and class names can’t be the same.

Oddly enough in some cases I am allowed to name the class the same as the namespace, and I don’t know why. So LNM.Astrophysics implements the Astrophysics class, but I had to rename Badwords to BadwordFilter at some point because the compiler started complaining.

I’ve been using MonoDevelop, which is an editor produced by the Mono folk and lightly customized to work with Unity. It’s a bit of a slug (if it’s developed using Mono, then it’s not a terrific advertisement for Mono). In particular, its autocomplete is great when it works, but utterly frustrating far more often. It fails to match obvious words (e.g. local variable names) and often makes it impossible to type something short (which matches something longer) on the first attempt. Still, the autocomplete is darn useful when it behaves, or I’d simply switch back to Sublime.

Flying my untextured spaceship around an undecorated, partially implemented solar system

So the current reckoning is that I ended up producing the following:

  • Astrophysics.cs — defines the LNM.Astrophysics namespace and implements the Astrophysics utility class, some useful enumerations, and the Galaxy, Star, and Planet classes.
  • PRNG.cs — defines the LNM.Random namespace and implements the PRNG class, which provides convenience functions for the Meisui.Random Mersenne Twister implementation I'm using.
  • Badwords.cs — defines the LNM.Badwords namespace and implements the BadwordFilter class, which checks whether any of a pretty comprehensive list of nasty words is present in a given string. (Used to prevent obscene star names from being generated.)
  • Billboard.cs — a MonoBehaviour that keeps a sprite facing the camera. It's a little cute in that it tries to save CPU time by only updating a given star every 10 physics updates (a rough sketch follows this list). There's probably a better way.
  • FixedCamera.cs — a MonoBehaviour that implements a mouse/touch-controlled camera that keeps a fixed view of a specified target unless explicitly moved. I'm planning on using this as my main view control throughout the game.
  • NewtonianScoutship.cs — a MonoBehaviour implementing an Asteroids-style player ship. I've also experimented with a "Delta Vee" style abstracted Newtonian ship controller, but good old rotate-and-accelerate just feels better, and makes movement a pleasant challenge in and of itself. (It also makes becoming "stationary" in space almost impossible, which I think is a Good Thing.)
  • GalaxyGenerator.cs — a MonoBehaviour that instantiates a galaxy and renders it with sprites. (Right now every single star is a Draw Call, so I'm going to need to do some optimization at some point.)
  • Starsystem.cs — a MonoBehaviour that instantiates a Star (and its planets) and renders them, along with a navigation grid (to provide a sense of movement when passing through empty space) and orbits, using a bunch of Prefabs.
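Here's roughly what that Billboard behaviour looks like. This is a from-memory sketch rather than the actual file; the per-instance stagger is the CPU-saving trick mentioned above:

using UnityEngine;

// Keeps a sprite facing the camera, but only re-orients on every 10th
// physics update; the counter starts at a random offset so thousands of
// stars don't all re-orient on the same tick.
public class Billboard : MonoBehaviour {
    private int counter;

    void Start () {
        counter = Random.Range (0, 10);
    }

    void FixedUpdate () {
        counter += 1;
        if( counter % 10 != 0 ){
            return;
        }
        // matching the camera's rotation is cheaper than a per-star LookAt
        transform.rotation = Camera.main.transform.rotation;
    }
}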

So, while I continue to find C# quite tedious to develop with, I'm making significant progress for the first time in over a year. And while C# feels much less productive than real Javascript, I do think it's very competitive with Unityscript, and it's been quite easy to get over the hump.

C#

Screen shot of my galaxy generator in action

I've been developing stuff with Unity in my spare time for something like eight years (I started at around v1.5). I was initially drawn in by its alleged Javascript support. Indeed, I liked Unityscript so much that I defended it vocally against charges that C# is better, and wrote this article to help others avoid some of my early stumbles. I also contributed significant improvements to JSONParse — although the fact that you need a JSON module for Unityscript tells you something about just how unlike Javascript it really is.

I’m a pretty hardcore Javascript coder these days. I’ve learned several frameworks, written ad unit code that runs pretty much flawlessly on billions of computers every day (or used to — I don’t know what’s going on with my former employers), created a simple framework from scratch, helped develop a second framework from scratch (sorry, can’t link to it yet), built services using node.js and phantom.js, built workflow automation tools in javascript for Adobe Creative Suite and Cheetah 3D, and even written a desktop application with node-webkit.

The problem with Unityscript is that it’s not really Javascript, and the more I use it, the more the differences irk me.

Anyway, one evening in 2012 I wrote a procedural galaxy generator using barebones Javascript. When I say barebones, I mean that I didn’t use any third-party libraries (I’d had a conversation with my esteemed colleague Josiah Ulfers about the awfulness of jQuery’s iterators some time earlier and so I went off on a tangent and implemented my own iteration library for fun that same evening).

Now, this isn’t about the content or knowledge that goes into the star system generator itself. It’s basic physics and astrophysics, a bit of googling for things like the mathematics of log spirals, and finding Knuth’s algorithm for generating gaussian random distributions. Bear in mind that some of this stuff I know by heart, some of it I had googled in an idle moment some time earlier, and some of it I simply looked up on the spot. I’m talking about the time it takes to turn an algorithm into working code.
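For the curious, the gaussian piece is the polar variant of the Box-Muller transform that Knuth describes. Here's a minimal C# sketch (the naming is mine, and it uses System.Random purely for brevity):

using System;

// Polar (Marsaglia) variant of the Box-Muller transform: returns a
// normally distributed value with the given mean and standard deviation.
static double Gaussian( Random rng, double mean, double stdDev ){
    double u, v, s;
    do {
        u = rng.NextDouble () * 2 - 1;
        v = rng.NextDouble () * 2 - 1;
        s = u * u + v * v;
    } while( s >= 1.0 || s == 0.0 );
    return mean + stdDev * u * Math.Sqrt (-2.0 * Math.Log (s) / s);
}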

So the benchmark is: coding the whole thing, from scratch, in one long evening, using Javascript.

Now, the time it took to port it into Unityscript — NaN. Abandoned after two evenings.

I’m about halfway through porting this stuff to C# (in Unity), and so far I’ve devoted part of an afternoon and part of an evening. Now bear in mind that with C# I am using the Mono project’s buggy auto-completing editor, which is probably a slight productivity win versus using a solid text editor with no autocomplete for Javascript (and Unityscript). Also note that I am far from fluent as a C# programmer.

So far here are my impressions of C# versus Javascript.

C#'s data structures and types are a huge pain. Consider this method in my PRNG class (which wraps a MersenneTwister implementation I found somewhere in a far more convenient API):

// return a value in [min,max]
public float RealRange( double min, double max ){
    return (float)(mt.genrand_real1 () * (max - min) + min);
}

I need to cast the double (that results from mt.genrand_real1 ()). What I’d really like to do is pick a floating point format and just use it everywhere, but it’s impossible. Some things talk floats, others talk double, and of course there are uints and ints, which must also be cast to and fro. Now I’m sure there are bugs caused by, for example, passing signed integers into code that expects unsigned, but seriously. It doesn’t help that the Mono compiler generates cryptic error messages (not even telling you, for example, what it is that’s not the right type).

How about some simple data declarations:

Javascript:

var stellarTypes = {
    "O": {
        luminosity: 5E+5,
        color: 'rgb(192,128,255)',
        planets: [0,3]
    },
    ...
};

C#:

public static Dictionary<string, StellarType> stellarTypes = new Dictionary<string, StellarType> {
    {"O", new StellarType(){
        luminosity = 50000F,
        color = new Color(0.75F,0.5F,1.0F),
        minPlanets = 0, 
        maxPlanets = 3
    }},
    ...
};

Off-topic, here's a handy mnemonic — Oh Be A Fine Girl Kiss Me (Right Now Smack). Although I think that R and N are now referred to as C-R and C-N and have been joined by C-H and C-J, so we probably need a replacement.

Note that the C# version requires the StellarType class to be defined appropriately. (I could have simply used a dictionary of dictionaries or something, but the declaration gets uglier fast, and it's pretty damn ugly as it is.) I also need to use the System.Collections.Generic namespace; that took me a while to figure out, since I thought that by using System.Collections I would get System.Collections.Generic for free.
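For reference, StellarType needn't be much more than a bag of fields. Something like this sketch would satisfy the declaration above (the field names are the ones it uses):

using UnityEngine;

// Minimal data class backing the stellarTypes dictionary; C# makes you
// spell out even a plain record like this before the initializer syntax
// in the declaration above will work.
public class StellarType {
    public float luminosity;
    public Color color;
    public int minPlanets;
    public int maxPlanets;
}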

Now I don’t want to pile on C#. I actually like it a lot as a language (although I prefer Objective-C so far). It’s a shame it doesn’t have some obvious syntax sugar (e.g. public static auto or something to avoid typing the same damn type twice) and that its literal notation is so damn ugly.

Another especially annoying declaration pattern is public int foo { get; private set; } — note the lack of terminal semicolon, and the fact that it’s public/private. And note that this should probably be the single most common declaration pattern in C#, so it really should be the easiest one to write. Why not public int foo { get; }? (You shouldn’t need set at all — you have direct internal access to the member.)

I'm also a tad puzzled as to why I can't declare static variables inside methods. (I thought I might be doing it wrong, but this explanation argues it's a design choice — but I don't see how a static method variable would or should be different from an instance variable, only scoped to the method.) So instead I'm using private member variables which need to be carefully commented. How is this better?
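Concretely, the workaround looks something like this (a sketch with made-up names):

public class StarNamer {
    // used ONLY by NextName() -- this would be a static local variable
    // if C# allowed them, but instead it leaks into class scope and the
    // comment has to do the scoping
    private int nameCounter = 0;

    public string NextName(){
        nameCounter += 1;
        return "Star-" + nameCounter;
    }
}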

So in a nutshell, I need to port the following code from Javascript to C#:

  • astrophysics.js — done
  • badwords.js — done; simple code to identify randomly generated names containing bad words and eliminate them
  • iter.js — C# has pretty good built-in iterators (and I don’t need most of the iterators I wrote) so I can likely skip this
  • mersenne_twister — done; replaced this with a different MT implementation in C#; tests written
  • planet.js — I’ve refactored part of this into the Astrophysics module; the rest will be in the Star System generator
  • pnrg.js — done; tests written; actually works better and simpler in C# than Javascript (aside from an hour spent banging my head against weird casting issues)
  • star.js — this is the galaxy generator (it's actually quite simple) — it basically produces a random collection of stars offset from a log spiral using a gaussian distribution (a rough sketch of the approach follows this list).
  • utils.js — random stuff like a string capitalizer, roman numeral generator, and object-to-HTML renderer; will probably go into Astrophysics or be skipped
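Since the log-spiral-plus-gaussian trick is the heart of star.js, here's a hedged sketch of how the C# port approaches it. GenerateStars and its parameters are invented for illustration, and Gaussian is the Box-Muller helper sketched earlier:

using System;
using UnityEngine;

// Scatter stars around the arms of a logarithmic spiral (r = a·e^(bθ)),
// with gaussian noise supplying the spread around each arm.
static Vector3[] GenerateStars( int count, int arms, double a, double b, double spread ){
    var stars = new Vector3[count];
    var rng = new System.Random (); // qualified to avoid UnityEngine.Random
    for( int i = 0; i < count; i++ ){
        double t = rng.NextDouble () * 4 * Math.PI;        // angle along the arm
        double r = a * Math.Exp (b * t);                   // log spiral radius
        double arm = (2 * Math.PI / arms) * (i % arms);    // distribute stars between arms
        stars[i] = new Vector3 (
            (float)(r * Math.Cos (t + arm) + Gaussian (rng, 0, spread)),
            (float)Gaussian (rng, 0, spread * 0.1),        // keep the disc thin
            (float)(r * Math.Sin (t + arm) + Gaussian (rng, 0, spread))
        );
    }
    return stars;
}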

Once I’ve gotten the darn thing working, I’ll package up a web demo. (When Unity 5 Pro ships I should be able to put up a pure HTML version, which will bring us full circle.) Eventually it will serve as the content foundation for Project Weasel and possibly a new version of Manta.