Savage Worlds

When I mentioned to a friend that I’d been looking at D&D 4th Edition, he mentioned he’d bought a copy of Savage Worlds (from Pinnacle Entertainment Group) which was — shock, horror — skill-based. There’s a fully playable free version of Savage Worlds available for download on the website — I recommend it to you. The thing which I really like about Savage Worlds is that — like ForeSight — it’s designed for shorthand, and you can play with “all the rules you can remember” and whatever’s written on character sheets. And, unlike ForeSight, you don’t need to remember quite so many rules.

Savage Worlds is one whole level more abstract than ForeSight, making it a lot simpler, and on top of that it’s designed to reduce pretty much everything (explicitly including settings and adventures) to bullet points. It’s easy to imagine you could design a character in five minutes, and prepare to run a module in ten. Here’s the nutshell version:

Attributes and Skills are represented by a die type, e.g. D4 is the worst, D6 is average, and so on up to D12 (this effectively gives a range of five possible values). All resolution rolls are open-ended — if you roll maximum, roll again and add on. So if you have a D4 ability you can roll 4, 4, 2 and get 10. A typical task requires a 4 to succeed, which means a D6 ability has a 50/50 chance of success. Each attribute covers a number of skills, and it’s cheap to buy a skill up to the attribute, and then more expensive to go beyond. So if Shooting is based on Agility and your agility is D8, then it’s cheap to buy Shooting up to D8, but gets more expensive to get it to D10 or whatever. Degree of success is measured in increments of 4 by which you beat the target (each increment is called a “raise”).
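For the curious, the arithmetic of open-ended (“exploding”) dice is easy to check exactly. Here’s a quick Python sketch (the function name is mine):

```python
from fractions import Fraction

def explode_success(die: int, target: int) -> Fraction:
    """Chance an open-ended die totals `target` or more.

    Rolling the maximum face lets you roll again and add on,
    so even a D4 can reach 10 (e.g. 4, 4, 2)."""
    if target <= 1:
        return Fraction(1)          # any roll succeeds
    if target <= die:
        return Fraction(die - target + 1, die)
    # Only the maximum face keeps the roll alive.
    return Fraction(1, die) * explode_success(die, target - die)

print(explode_success(6, 4))   # a D6 vs. the typical 4: 1/2
print(explode_success(4, 10))  # the 4, 4, 2 -> 10 territory: 3/64
```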

Powers represent supernatural (or perhaps just zany) elements, they draw from a points pool (you start with a 10 point pool and get back one point per hour), and they’re abstracted out from settings. So a power called “blast” might allow you to fire off magic missiles, cones of fire, and fireballs (yes, powers are quite versatile) — or psionic stabs, psychic waves, and telekinetic explosions. What a power does is covered by rules, how it’s flavored is a matter of setting, GM whim, etc.

Edges and hindrances are Savage Worlds‘s equivalent of Hero System/GURPS Advantages and Disadvantages or Fallout‘s Perks. In general, I like this system better than any similar system I’ve seen in a paper game (although Fallout’s Perks remain supreme), but many of the hindrances are things I think a player should take on voluntarily rather than be bribed to take. What I like about Fallout’s Perks is that the “bummer” is generally tied to the “goody” in a sensible way (“I’m a gifted healer but I hesitate to inflict harm on others”), versus the “I’ll be afraid of spiders so I can buy expert marksman” tradeoffs you inevitably see in these systems.

Oh, and as in any sane system, armor blocks damage and skillful people are harder to hit.

From the game master’s point of view, things like hit points are pretty much abstracted out. Unless an NPC is major (e.g. the main villain or a named henchman), he/she can be alive, “shaken”, or out of the action (wounded, incapacitated, dead — doesn’t matter). An ability called toughness determines how hard it is to one-shot an “extra”, and how likely they are to recover from (or expire from) being shaken. Major characters (“Wild Cards”) only get “wounded” when other folks would be eliminated — each “wound” inflicts a -1 modifier to all actions. A typical NPC is all D6s except for anything requiring special attention.

And finally, players and the DM get “bennies” — the equivalent of James Bond 007‘s “Hero Points” — which allow extra rolls in tight spots (you always use the better roll). Each player gets three bennies per session (they are lost if not spent), and the DM gets one for each player.

How does it compare to D&D4?

Unlike any version of D&D, Savage Worlds is a game that lets you think up the character you want to play, build that character, and play it right now (modulo perhaps being more or less capable). Relative to D&D4, the amount of book-keeping required (by players or the GM) is hugely reduced. And the designers of Savage Worlds are aware of the criticality of delivering key information when it’s needed, versus the information overload in every aspect of D&D4.

Probably my only real misgiving about Savage Worlds is the rather crude resolution system, but it’s better to have a crude resolution system that’s at least pointing in the right direction, than a finer-grained system that’s simply wrong (e.g. AC makes you harder to hit).

Best of all, Savage Worlds is designed to fit into your setting, and be complete by and of itself, versus D&D4 which imposes its will on your setting (unless you’re willing to do a humungous amount of work) and requires you to buy a ridiculous number of over-priced rulebooks.

How does it compare to ForeSight?

ForeSight is a much more complex game than Savage Worlds, and it’s intended to be more “realistic”. Aside from that, the two games are quite similar in terms of the way they bind to narrative (the level of abstraction). There are some things in Savage Worlds that I would steal for the next edition of ForeSight — the general-purpose power framework is absolutely brilliant, for example, and the way edges and hindrances work is pretty good — and while I think that any “goodies and bummers” system is open to massive abuse by rules lawyers, Savage Worlds is not a game that really lends itself to rules lawyering.

Whenever I see a game that’s fundamentally well thought out and much simpler than ForeSight it makes me wonder what the complexity in ForeSight really buys me. One interesting point made in the article in which the designer describes how he went about designing Savage Worlds is that he was annoyed by the slow pace of combat in every game system he’d used, citing the D20 system in particular. A fight involving 20 NPCs might easily take a couple of hours (I’ve personally been stuck in D&D fights that occupied an entire night, and they weren’t terribly interesting fights either). One area where ForeSight excels is speed. Even large, complex fights in ForeSight tend to be over pretty quickly. (One reason for this is that ForeSight was designed very much for people who can’t be bothered with miniatures.)

Savage Worlds is way ahead of ForeSight in terms of its streamlining. There’s no question that ForeSight can express much more detailed and complex things than Savage Worlds — everything is described both in more terms and in a finer-grained way. But there’s a huge difference between games that old married people play and games that college students and teenagers play. We don’t need fine-grained item definitions and skill progression if we’re only going to manage one gaming session every two months — all this detail is really only useful if you’re gaming often enough to care.

Finally, the rough-and-ready nature of Savage Worlds‘s game mechanics lends itself very well to less-than-serious settings. ForeSight‘s detail and expressiveness lend themselves to more serious settings. In this sense, the two game systems complement each other.

What would I do differently?

Simply put, I’d replace the resolution system and then rescale everything to match the replacement system.

Because the system is so reliant on the crazy polyhedral dice, it doesn’t have a very large sliding window of probability to play with. How can you represent — for example — differences within similar populations with a large gap between them (e.g. normal people and superheroes)? What’s more, open-ended die rolls are a bit of a mess when it comes to analysis, and it’s even worse when each different kind of die has a different chance of an add-on roll. Also I hate D4s. They simply don’t roll.

But there’s a much more important and — unfortunately — lethal argument against using the existing system: it’s hard to set challenges that are hard for idiots but reasonably routine for competent people. Let’s suppose we decide that a computer system is pretty tough to hack, and give it a difficulty of 7. Then a “world-class” (D12) hacker has a 1/2 chance of success (less, presumably, if the hacker is getting a blowjob at the time…), while a complete idiot (D4) hacker has a 1/8 chance of success. If I wanted to give a D8 hacker a 50% chance of success I’d need to make the difficulty 5, which is only one notch above “typical”, and this gives a moron a 25% chance of success. This makes it virtually impossible to create a challenge that will probably be handled by a “pretty competent” character (D8 or D10) but almost certainly not by a gumby — yet this is the exact kind of challenge you need for adventure stories, otherwise why be good at anything?
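These numbers are easy to verify. A small Python helper (my naming) computes the exact chance for any die and difficulty:

```python
from fractions import Fraction

def chance(die: int, difficulty: int) -> Fraction:
    """Chance an open-ended die totals `difficulty` or more (difficulty >= 1)."""
    p = Fraction(1)
    while difficulty > die:
        p /= die            # must roll the maximum face to keep going
        difficulty -= die
    return p * Fraction(die - difficulty + 1, die)

print(chance(12, 7))  # world-class hacker vs. difficulty 7: 1/2
print(chance(4, 7))   # complete idiot vs. difficulty 7: 1/8
print(chance(8, 5))   # D8 hacker vs. difficulty 5: 1/2
print(chance(4, 5))   # moron vs. difficulty 5: 1/4
```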

Note: while revisiting this post I noticed there are also some pretty bad “poverty traps” (situations in which having more makes you worse off) in the resolution system. E.g. a D4 skill has a better chance of performing a difficulty 6 task (3/16) than a D6 skill (1/6). The fact that higher skill dice are less likely to reroll also means that having higher skills isn’t much of an advantage when attempting truly difficult tasks (e.g. the difference between attempting a difficulty 25 task with skill D12 vs. D10 is minuscule (0.69% vs. 0.6%), whereas the difference for a difficulty 10 task is enormous (25% vs. 10%)). This is the opposite of what one would expect and is likely to make the game feel wrong.
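The same kind of helper makes the poverty traps explicit (exact fractions, assuming as above that only the maximum face explodes):

```python
from fractions import Fraction

def chance(die: int, difficulty: int) -> Fraction:
    """Chance an open-ended die totals `difficulty` or more."""
    p = Fraction(1)
    while difficulty > die:
        p /= die
        difficulty -= die
    return p * Fraction(die - difficulty + 1, die)

# The poverty trap: the smaller die is MORE likely to hit difficulty 6.
print(chance(4, 6), chance(6, 6))      # 3/16 vs. 1/6
# Skill barely matters at difficulty 25...
print(chance(12, 25), chance(10, 25))  # 1/144 (~0.69%) vs. 3/500 (0.6%)
# ...but matters enormously at difficulty 10.
print(chance(12, 10), chance(10, 10))  # 1/4 vs. 1/10
```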

I think that a better alternative would be to base everything on D10s (my favorite kind of dice — they roll better than D6s or D8s and, unlike D20s, can actually stop rolling on carpet) and use simple additive modifiers. E.g. attributes could range from 3 to 10 or more (i.e. ForeSight’s range), and be added to D10 resolution rolls. You could leave rolls open-ended, but you’d only have one lumpy distribution to worry about. In such a system, if (say) 2 is average for normal folk, 4 is average for heroes, and a total of 10 means success (a really nice round number :-)), then a normal person succeeds 30% of the time and a hero 50% of the time (very much like the rules as they are). This gives you twice as much “grain” to work with for weapon damage, situational modifiers, character advancement, and the like, with no additional complexity. As for raises — make them 5 (another nice round number).

If we return to our hacker example, we give the system a difficulty of 12, say, which gives our “world class” (+10) hacker a 90% chance of success, our idiot (+1) hacker a 0% chance, and a reasonably competent (+6) hacker a 50% chance. Now this is starting to sound useful.
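With flat D10 + modifier the whole calculation collapses to one line (a sketch; the function name is mine):

```python
def p_success(bonus: int, difficulty: int) -> float:
    """Chance that D10 + bonus meets or beats the difficulty (closed roll)."""
    need = difficulty - bonus              # minimum face required
    return max(0, min(10, 11 - need)) / 10

print(p_success(10, 12))  # world-class hacker: 0.9
print(p_success(1, 12))   # idiot: 0.0
print(p_success(6, 12))   # reasonably competent: 0.5
```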

Bare success and failure are very useful concepts (dramatically!), and can be implemented by making any success where the roll was exactly the number needed a bare success, and “missed by one” a bare failure. Each is guaranteed to be something that happens 10% of the time at most, which is very nice.

The system of “raises” in Savage Worlds serves the role of “crits”, but there’s a question as to the (very lumpy) distribution. E.g. a D8 skill vs. difficulty 4 has exactly a 1/8 chance of achieving 1 or more raises, and in fact has a 3/64 chance of getting one raise, a 4/64 chance of getting two raises, and a 1/64 chance of getting three or more raises. This is just weird. Indeed, based on whether you can get a raise without a reroll, the distribution of probabilities fluctuates wildly. E.g. D10 vs. 4 has a 21% chance of one raise, a 4% chance of two raises, a 4% chance of getting three raises, and so on — so a D8 vs difficulty 4 has a better chance of two raises than D10 vs difficulty 4. This is just one example — open-ended die rolls will do this everywhere.
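Enumerating the exploding-die totals exactly confirms the lumpiness (Python sketch, names mine):

```python
from fractions import Fraction
from collections import defaultdict

def total_dist(die: int, depth: int = 4):
    """Exact distribution of an open-ended die's total (explosions capped
    at `depth`, which is plenty for the totals examined here)."""
    dist = defaultdict(Fraction)
    def walk(base, p, d):
        for face in range(1, die + 1):
            if face == die and d > 0:
                walk(base + die, p / die, d - 1)   # max face: roll again, add
            else:
                dist[base + face] += p / die
    walk(0, Fraction(1), depth)
    return dist

def p_raises(die: int, diff: int, n: int) -> Fraction:
    """Chance of beating `diff` by exactly n raises (4 points per raise)."""
    lo, hi = diff + 4 * n, diff + 4 * (n + 1)
    return sum(p for total, p in total_dist(die).items() if lo <= total < hi)

print(p_raises(8, 4, 2), p_raises(10, 4, 2))   # 1/16 vs. 1/25: the D8 wins
```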

My alternative: going back to the D10 resolution system, if you roll a (solid) success, you roll for success again at -2 (cumulative) to get a “raise”. So if you’re a decent (6) hacker working on a pretty tough (10) network, you have a 20% chance of failure, a 10% chance of a “bare failure”, a 10% chance of a “bare success”, and a 60% chance of a solid success (which has a 50% chance of getting a raise). I haven’t put all this into a spreadsheet, but it looks to me like a system with legs, although the -2 modifier may need some fiddling.

Simple example: our +6 hacker vs. the difficulty 10 system has a 30% chance of failure, a 10% chance of a bare success, a 30% chance of a solid success, a 21% chance of one raise, and a 9% chance of two or more raises. (I’m assuming a bare success on a roll to “raise” terminates.) Increase the difficulty to 12 and the hacker has a 50% chance of failure, 10% chance of bare success, 28% chance of solid success, a roughly 11% chance of one raise, and a 1% chance of two or more raises. Now increase the difficulty to 15 and we have a 10% chance of bare success, 9% chance of a solid success, and a 1% chance of one or more raises. As you can see we have a pretty nice “rolling window” of probability (pretty easy to very hard is a range of six) with nice probability distributions all round (nicer than ForeSight’s, I’d say).
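Rather than a spreadsheet, a few lines of Python will do. This sketch (names mine; raise rolls are simply chained at a cumulative -2, without special-casing bare successes on raise rolls) reproduces the figures above:

```python
def outcome_dist(bonus: int, difficulty: int):
    """Outcome probabilities for the proposed D10 system: D10 + bonus vs.
    difficulty, each solid success rolling again at a cumulative -2 per
    raise.  Returns (P(any failure), P(bare success), {k: P(exactly k raises)})."""
    p_face = lambda need: max(0, min(10, 11 - need)) / 10
    need = difficulty - bonus
    fail = 1 - p_face(need)                  # includes "missed by one"
    bare = p_face(need) - p_face(need + 1)   # rolled exactly what was needed
    raises, alive, k = {}, p_face(need + 1), 0
    while alive > 1e-12:
        p_next = p_face(difficulty - (bonus - 2 * (k + 1)))
        raises[k] = alive * (1 - p_next)     # chain stops here
        alive *= p_next
        k += 1
    return fail, bare, raises

fail, bare, raises = outcome_dist(6, 10)    # decent hacker, tough network
print(fail, bare, raises[0], raises[1])     # roughly 0.3, 0.1, 0.3, 0.21
```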

Critical failure is also a very useful concept, and here we do the same thing in reverse: if you fail solidly you roll again to succeed at +2 (cumulative), and so forth. The opposite of a “raise” we can call a “deuce”, and the more deuces the more horrible the failure. Since this is exactly the “dual” of critical success, we know that if we get the first mechanic to work just right, this one will too.

Next there’s the whole problem of open-ended die rolls. Consider for a moment the D6 skill attempting the difficulty 6 task. Success chance is 1/6. What’s the success chance against difficulty 7? Oh, it’s 1/6. What’s the success chance against difficulty 8? It’s 5/36 (a shade less than 1/6). And so on. If you draw the graph of this it’s pretty diabolical, and it means that if you’re shooting at a typical (difficulty 4) target while running (-2) you might as well use your off-hand (-4). If you want everyone to act like complete idiots — and you might — then this is exactly the resolution system you should use. But I’m guessing it’s not exactly what the designers intended. From a lot of points of view, open-ended die rolls are cute — they generate excitement, and extend the “long tail” of the resolution system, allowing people to do amazing things occasionally (hmm, not that occasionally). But it’s bad enough when there’s a 1/10 chance of zaniness every time you roll the dice (i.e. the D20 system where a 1 or 20 has special effects) — it’s going to be a lot worse if it’s 1/6.
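The plateau is easy to see if you tabulate the D6’s chances (same exact-fraction helper as before, my naming); the -2 running penalty and the -4 off-hand penalty end up less than 3% apart:

```python
from fractions import Fraction

def chance(die: int, difficulty: int) -> Fraction:
    """Chance an open-ended die totals `difficulty` or more."""
    p = Fraction(1)
    while difficulty > die:
        p /= die
        difficulty -= die
    return p * Fraction(die - difficulty + 1, die)

# A D6 skill vs. difficulties 4 through 9: note the flat spot at 6-7.
for diff in range(4, 10):
    print(diff, chance(6, diff))
```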

Returning to the D10 replacement rule — a simple option is to replace any D10 roll with a new roll at +5 — if the player wants it. (If you’ve already succeeded, there’s no point.) Since this has exactly a 50% chance of exceeding the original roll it guarantees that the long tail will thin out (albeit rather chunkily). So if you think of our (6) hacker, the success probabilities as the system gets harder from 12 are 50%, 40%, 30%, 20%, 10%, 5%, 4%, 3%, 2%, 1%, 0.5% and so on. I think you’ll agree that this is enormously preferable.
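The replacement rule is just as easy to compute exactly (a sketch, my naming; only a natural 10 may be replaced, recursively):

```python
from fractions import Fraction

def p_reach(need: int) -> Fraction:
    """Chance a D10 shows `need` or more, where a natural 10 may be
    replaced by a reroll at +5 (and further natural 10s recurse)."""
    if need <= 1:
        return Fraction(1)
    if need <= 10:
        return Fraction(11 - need, 10)
    return Fraction(1, 10) * p_reach(need - 5)   # must start from a natural 10

# The (6) hacker against difficulties 12, 13, ... 22:
print([float(p_reach(diff - 6)) for diff in range(12, 23)])
# i.e. 50%, 40%, 30%, 20%, 10%, 5%, 4%, 3%, 2%, 1%, 0.5%
```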

With this “live organ transplant” I think Savage Worlds, or something a lot like it, could be a very solid system. And you won’t need any of those stupid D4s, D8s, and D12s.

The way skills have been defined is a wee bit silly. E.g. one skill — shooting — covers all forms of ranged combat, while another — fighting — covers all forms of melee combat, but there are three vehicle skills (driving, boating, piloting) and riding. This means to be Death On Wheels you only need two skills, but to be The Driver you need four.

I’m also a bit put off by the choice of attributes. Basically you get Strength, Smarts, Agility — fine so far — then Vigor and Spirit. Vigor is essentially used for avoiding wounds — no skills are driven by it. Now, not dying is pretty important, but that’s kind of lame. Spirit seems to be Willpower (it drives guts, intimidation, and persuasion). OK, I’m fine with having a willpower attribute, but if you’re going to have five stats, I’d probably pick something other than “vigor” to be the fifth. I think I’d probably split combat skills into Unarmed, Melee, Shooting, and Archery.

I’d have four attributes: physique, coordination, intellect, and — ok, let’s call it spirit. Physique combines “strength” and “vigor”, Coordination combines “dexterity” and “agility”, Intellect combines “intelligence” and “perception”, and Spirit combines willpower and charm. Then you can use perks/edges to tweak the balance (make someone unusually strong vs. healthy, or willful vs. charming). There’s a symmetry to this: in essence you get two physical and two mental attributes, one for power and one for finesse in each pair, with the finesse attributes tending to drive skills and the power attributes tending to drive output (providing mana pools, keeping you from dying, etc.). This also further streamlines sketching out NPCs (now you only need to worry about four attributes).

Finally, I have a (fairly minor) quibble with the way experience points are spent: characters are given an overall Rank (ranging from Novice through to Legendary) based on how much experience they have, which is in essence their point cost. But because of the way attributes and skills interact, it’s possible that a character who bought skills early and attributes later will be less capable with a given amount of experience when compared with a character who bought attributes first and skills later. The obvious solution would be to refund skill points when an attribute is improved, or reduce the character’s “seniority” by the “lost” skill points. Either is kind of clumsy. If attributes simply limited skill advancement (e.g. you can’t raise a skill more than one rank beyond its attribute), as in ForeSight (or James Bond 007 from which the mechanic was taken), this problem would not occur.


I really like almost everything in Savage Worlds — if only the resolution system weren’t broken.  But at least it’s pointing in the right direction. I think I’ll try to design a simple RPG along the same lines and publish it as a short pamphlet (much in the spirit of the very first version of ForeSight).

Post Script

I’ve done a lot of thinking and experimenting with the “live organ transplant” described above and come to two important conclusions: rerolling at -2 (or any such scheme) for criticals produces an absolutely terrible distribution; and the kind of open-ended roll which leaves holes in the distribution (e.g. an exploding D10 can never total exactly 10) is The Devil.

So I’ve come up with a new open-ended D10 roll (it works, by extension, for any kind of die) — on a “natural 10” you reroll at +5 and replace the original roll if the new roll is higher (and this is recursive, so if you roll 10 and then 10 + 5 the second 10 is rerolled at +5 and replaced if higher and so on). 1s go in the opposite direction the same way. This produces a pretty nice distribution (e.g. the probability of rolling 9 is 10%; 10 is 5%; 11, 12, 13, and 14 are 1% each; 15 is 0.5%, and so forth — vs. Savage Worlds’s system which is: 10% for 9, 0 for 10, 1% for 11, and so on).
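Here’s the distribution computed exactly (Python sketch, names mine; the mirrored rule for natural 1s is left out for brevity):

```python
from fractions import Fraction
from collections import defaultdict

def dist_open_d10(depth: int = 5):
    """Exact distribution of the revised open-ended D10: a natural 10 is
    rerolled at a cumulative +5, and the new value replaces the old one
    only if it is higher.  Explosions capped at `depth` (plenty here)."""
    dist = defaultdict(Fraction)
    def walk(kept, bonus, p, d):
        for face in range(1, 11):
            value = max(kept, face + bonus)    # keep the better of old/new
            if face == 10 and d > 0:
                walk(value, bonus + 5, p / 10, d - 1)
            else:
                dist[value] += p / 10
    walk(0, 0, Fraction(1), depth)
    return dist

d = dist_open_d10()
print(d[9], d[10], d[11], d[15])   # 1/10, 1/20, 1/100, 1/200
```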

Because this distribution has no lumps or gaps in its probabilities, a “success by 5s” critical system doesn’t do anything horrible (indeed the probability of each extra level of success is one tenth the previous). In fact, the whole system appears to produce a nicer set of outcomes than James Bond 007‘s Ease Factor / Quality Rating system (of which ForeSight uses a modified form) and has the advantage of being completely open ended (so superheroes work).

The D6 version works just as nicely, but has the expected issue of being cruder in the middle. I’d suggest that open-ended dice (as described here) along with “success/failure by N” (where N is half the size of the die) is a very satisfactory resolution system for tabletop gaming, and you simply need to match the die size to the level of detail you desire, although — frankly — 3D6 is a nicer distribution if you don’t need a large or open-ended window for modifiers. The probability tails will always go 1/N, 1/(2N), 1/N², … at the edges, which is pretty darn nice.

As regards the D20 system, making D20s open-ended in this way, and then using “success by N” as the criterion for critical hits, would probably be a much better alternative to “any 20 that isn’t a miss is a crit”. (When fighting people who can barely hit you it’s not exactly comforting to know that if you do get hit it will be a crit.)

Dungeons & Dragons 4th Edition

Back when I designed the original edition of ForeSight I considered myself a “scholar” of role-playing game rules (in the rather strong sense that Gordon R. Dickson uses the word “scholar” in Tactics of Mistake). I’d pored through virtually every set of RPG rules ever published (such a thing was still possible then) and at least had a pretty good idea of how each game system “worked” … or, more frequently, failed to.

D&D's mechanics were originally based on TSR's "Chainmail"

The original role-playing game was Dungeons & Dragons. It shipped as three booklets in a small format box — I’m guessing because TSR (the publisher) was able to publish small-format booklets especially cheaply. The three booklets were called Men & Magic, Monsters & Treasure, and The Underworld & Wilderness Adventures. As I recall, the first book more-or-less covered character creation and game mechanics (most of which weren’t explained at all — you needed to be taught to play the game by someone who already knew, or get a copy of the legendary “Perrin Conventions”; it’s possible that someone who knew how to play Chainmail — the game it was based on — would have understood the rules better, but most of TSR’s rules were utterly incomprehensible, so this is doubtful). The second covered (as you’d expect) monsters and treasure, and the third book was basically useless. (Maybe it had random encounter tables or something.)

This is the cover of "Men & Magic", one of three booklets that made up the original D&D rules.

These books were followed by four expansions (thicker booklets in the same format). Greyhawk was the first and by far the most useful, as it expanded the rules to the point of usefulness: different weapons did different damage (in the original rules all weapons did d6 damage); virtually all the “classic” magic items (e.g. the “vorpal blade”) were introduced, along with the most popular character classes (e.g. the Paladin) and the best-known monsters (various kinds of dragon, beholders, and so on). Blackmoor was essentially two new character classes (monk and assassin) and a not-very-interesting dungeon, Eldritch Wizardry (with a naked woman on the cover) added druids, psionics, and demons, and finally Gods, Demigods, and Heroes added exactly what you’d expect. At this point — and assuming you could figure out how to play the game at all — D&D was pretty much complete, and everything since then has been reorganization and rewriting.

The Original "Dungeon Masters Guide" (the cover art looks pretty bad at this size; it's positively awful close up; of the first three books, only the "Players Handbook" had professional quality cover art)

Advanced Dungeons & Dragons started out as the Monster Manual — essentially an alphabetic reorganization of around 300 various “monsters” from the original rules and some other places (e.g. Dragon magazine). It was almost certainly not planned because a huge portion of what ought to have gone in the Monster Manual ended up as appendices of the Dungeon Masters Guide. The Players Handbook described all the core character classes, and Deities & Demigods completed AD&D’s replacement of the original rules. Eventually TSR realized that to make more money it would need to publish more rulebooks, and ideally these books should give anyone who owned them cooler stuff than people who didn’t own them got, so we got a bevy of new rulebooks featuring ridiculously overpowered character classes, stupidly conceived monsters, magic items, and so on.

AD&D was — if anything — more disorganized, self-contradictory, and incomprehensible than the original rules, and the huge array of add-ons only made it worse. Meanwhile, a rival group within TSR created a different series of Dungeons & Dragons rules (culminating in a version considered by many to be the best version of the rules ever). Then came Second Edition AD&D, D&D 3rd Edition, D&D Edition 3.5, and now D&D 4th Edition. There appears to be a pretty complete history on Wikipedia (of course).

Through all of these incarnations, D&D has more-or-less retained some core concepts, such as character class, level, armor class, d20 resolution, polyhedral dice, alignments, experience through killing, and stealing from the dead for income and upgrades. Some concepts have been added (feats, secondary skills), and some have been changed hither and thither (multi-classing), but D&D was and remains a game primarily inspired by Tolkien (and secondarily by Howard, Leiber, and Vance) where any character from Tolkien (or Howard, Leiber, or Vance) is impossible to legally represent or create. Indeed, even Gary Gygax’s own fictional characters won’t legally fit into D&D.

Oops, I’m getting ahead of myself.

What is a role-playing game?

A role-playing game is ostensibly a game in which one puts oneself in the place of an imaginary character in a story. The story is laid out by the “Dungeon Master” (or “DM”, or “Game Master” in any other game) and one or more players each take the role(s) of one or more characters. The idea is that the DM describes the situation that characters are in, the players — imagining themselves to be those characters — describe what they want to do, the DM then decides how the situation changes (using the rules or just making it up) and we repeat.

It follows as a corollary of this that a set of role-playing rules is — effectively — simulating the “physics” of the imaginary world the DM and players are sharing. So, to place oneself in the role of a character in that imaginary world, one must either assume the world acts much like some world we understand pretty well (the real world, or a familiar fictional world), or one must know the rules inside out and be able to imagine the world they simulate (whatever that happens to be).

The problem with D&D was and continues to be that the rules in no way, shape, or form simulate the world as we know it, or any well-known fictional setting (e.g. Middle Earth), and furthermore the rules are ridiculously convoluted (the word “complex” is not strictly applicable — they’re simple, but there are a buttload of them and they are highly prone to being contradictory and/or badly written), so divining what the world they simulate must be like is virtually impossible. What D&D tends to be, as a result, is a world that looks — superficially — like a fictional world we might know and love, but in which, given how the world actually works, no sane person would behave as the fiction expects — and yet they all do anyway.

But let us put all of this aside and ponder the question of whether D&D4 is actually a good game.


One of D&D’s core ideas is that of character classes. Every character is a member of a character class which essentially determines what that character can do. Originally this was enormously simplifying — each character class had a very limited and well-defined set of abilities, and there was little or no flexibility. As of AD&D 2nd Edition there was virtually no difference in capabilities between any two characters of a given class and level aside from their magic items.

The key problem with character classes is that players want to feel their character is more than a generic blob festooned with magic items. Worse, if you actually want to create a particular character from fiction or your imagination, the character you want to play is seldom well-represented by a character class.

Over the years (and editions) D&D has tried to address this in two ways — increasing the number of character classes to better match various stereotypes (e.g. the “ranger” class was added to better represent someone like “Aragorn”), and making character classes more flexible (e.g. allowing characters to learn skills and feats, multi-classing, two-classing, and mix-and-match classing). Indeed, many other game systems have tried to make character classes “work” (as in allow you to create the characters you want while keeping things simpler than pure skill-based systems), and they’ve all pretty much failed — skill-based systems like RuneQuest, Hero System, GURPS, and DragonQuest are (or were, in the case of DQ) enormously simpler and more flexible than D&D3 or later, Chivalry & Sorcery (any edition), or Rolemaster (and Rolemaster simply uses character classes to give you hit points and skill points). This is because all of them end up bolting one or more skill systems onto the side of a class system — guaranteeing extra complexity.

D&D4’s approach to classes is to provide a pretty large number (the core rules provide eight classes, and this doesn’t include monk, druid, illusionist, or bard) and to make them pretty flexible (e.g. you can obtain feats which allow you to mix-in abilities from other classes) but not as flexible or fiddly as D&D3’s mix-and-match classes (which didn’t work very well) or as flat out broken as multi-classing and two-classing were in earlier editions.

More than any earlier edition of D&D, all classes in D&D4 work pretty much exactly the same way. The difference between a fighter and a wizard is that a fighter casts “spells” that require a weapon while a wizard casts “spells” that require an “implement” (the designers of D&D4 have an absolute tin ear for naming).

For me, there’s a huge problem with the way the rules now work in that conceptually skills and spells have always been very different. If I — say — know how to bonk someone on the head with a shield it stands to reason that I can bonk as many heads with my shield as I like until I drop from exhaustion (or friends of my victims restrain me). But in D&D4, such a “skill” would be represented by a “spell”-like ability which could only be employed once per encounter or day. It’s pretty bad — one of the highest level abilities available to rogues (level 29, once per day) is essentially a hamstring attack. (Yup, at level 29 wizards get meteor swarm and rogues get a hamstring attack.) The only way to tell high level spells apart from high level abilities is that the “spells” are named “spell” or “prayer” or somesuch, and the “abilities” suck.

Hit Points & Healing Surges

One very simple and welcome improvement in D&D4 is that level has a much smaller (and non-random) effect on hit points. Your starting hit points are simply constitution + a value based on class (e.g. 12 for rogues and rangers). And you simply gain a (fairly small) fixed number of hit points when you go up a level (e.g. 5 for rogues and rangers). Damage output doesn’t tend to scale dramatically upward either, so — for example — meteor swarm does 8d6 + Int bonus damage (versus some stupendously higher amount in earlier editions).

Correction: monsters cannot as a rule use healing surges. The following paragraphs have been changed to eliminate incorrect comments based on my missing a vital paragraph in the rules.

One thing that outright shocked me looking at the sample adventure in the DM’s Guide was the hit point values for low level monsters. Level 1 kobolds with 29 hit points! If you look at a typical (shield-using) fighter, he/she will be doing d8 + 3 damage (maybe) on a hit, and hitting maybe 50% of the time. It’s going to take four hits to take down one kobold (an offensive fighter build will do better). (There are 1hp kobold “minions” to make hordes of easily killed kobolds a possibility — somewhat reminiscent of Bushido and Aftermath’s “extras”.)

“Healing Surges” are a new (as far as I know) concept in D&D4, and one of its most MMORPGish features. A typical character gets a sizable number of healing surges per day — 5-9 + Con bonus, with all kinds of extras from feats and the like. A healing surge basically restores 25% of your max hp, and you can use one per encounter (fight) — by taking a “second wind” standard action (which also confers +2 defense) — and as many as you like between fights.

So, it looks like D&D4 fights are going to be drawn out. This is perhaps good if you like wargaming fights (and D&D4 seems like a pretty solid wargame — more on this later) but not so great if you view fights as being essentially a necessary evil that interrupts plot advancement. But, I guess if you look at role-playing that way you probably aren’t interested in D&D.

Constitution used to have a huge impact on character hp totals. It now has very little effect — one point of Con gives you one hp (vs. level/2 hps). But the real advantage of Con is you get one extra healing surge (i.e. 25% of your hp total as self-healing) per 2 points of Con. This probably makes Con even more important than it was in the past — i.e. one point of Con is now worth +12.5% hps vs. level/2 hps.

Almost all healing has been redefined in terms of healing surges, e.g. clerics can hand out “free healing surges” pretty much like candy during fights. One of the interesting side effects of this redesign is that a low level healer can be of huge benefit to a high level character (a 10th level fighter is likely to have 90 or so hit points, so a healing surge is worth 22hp) — whereas in the past a low level healer would have been useless to a high level character.
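
The surge math is easy to verify — a sketch using the quantities described above (a surge restores a quarter of your maximum hit points, rounded down, which is where the 22 hp for a 90 hp fighter comes from; the function names are mine):

```python
def surge_value(max_hp: int) -> int:
    # a healing surge restores a quarter of your maximum hit points
    return max_hp // 4

def surges_per_day(class_base: int, con_modifier: int) -> int:
    # class_base runs roughly 5-9 depending on class, before feats and the like
    return class_base + con_modifier

print(surge_value(90))        # the 10th level fighter above → 22
print(surges_per_day(9, 2))   # a sturdy character with a +2 Con bonus → 11
```

Because the surge scales with the recipient's own hit points, the healer's level drops out of the equation entirely.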

On the other hand, it would seem that healers are much less necessary than they used to be (although clerics are so overpowered, why wouldn’t you want them anyway?). To begin with, any player character can regain full health in a day without healing (in the “good old days” it used to take months to recover full health without healers or potions) — this would make the implied D&D world very strange indeed. (Even trash mobs have one healing surge per day, so a mortally wounded nobody can be at full health in four days. Strange world. Maybe we assume that healing surges only exist for player characters and anyone near them… but then PCs could make good money just by visiting sick people…)

Note: the rules are not explicit on the use of healing surges outside of combat by NPCs and monsters, so it’s a bit of an open question (perhaps resolved somewhere I haven’t found) as to the effects of healing surges on the “implied universe”. However, it’s very easy to pick up the healing skill (any character can grab it as a feat at level 2), so we can assume that healers are common and the availability of healing surges to the general populace is high.

What D&D4 conspicuously fails to do is make higher level characters better at preserving their hit points. You still can’t parry or “actively resist” worth a damn. Oh well, maybe D&D5…

Attributes, Powers, “Spells”, Skills, Feats

D&D4 divides the things a character can do into — broadly speaking — four categories.

Attributes (Strength, Dexterity, etc.) are your traditional six numbers, which determine your bonuses. But, in a radical departure from earlier editions, instead of key capabilities (the ability to hit things, saving throws, etc.) being implemented as “virtual skills” determined by a character’s class(es) and level(s), all actions are attribute rolls which receive the attribute bonus ((score-10)/2) plus level/2.
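
In other words, every d20 roll reduces to one formula. (A sketch — the function name is mine, and I'm assuming the usual D&D convention of rounding the half-values down.)

```python
def check_bonus(score: int, level: int) -> int:
    """The universal D&D4 modifier: attribute bonus plus half your level."""
    return (score - 10) // 2 + level // 2

print(check_bonus(18, 1))   # Str 18 at level 1 → 4
print(check_bonus(18, 10))  # the same character at level 10 → 9
```

That level/2 term is doing most of the work by mid-game, which is exactly the problem with skills discussed below.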

(The way attributes work has been further modified to pair the attributes — Str with Con, Dex with Int, and Wis with Cha — so that each feeds into one of the three defenses — Fortitude, Reflex, and Will. Fortitude, Reflex, and Will work exactly like AC now, so hitting someone with a sword is more like hitting someone with a spell, making the character classes seem even more flavorless, and making defense even more passive. It also has the rather undesirable side-effect of penalizing classes — notably fighter-types — who need two attributes in one pair (Str + Con).)

Powers are things you can do by virtue of being a member of your character class. These seem to be fairly perfunctory on the whole, and indeed the designers seem to have gone to some lengths to come up with things they can call “powers” to put in those sections of the class descriptions. E.g. clerics get a power called “healing word” which confers a wisdom bonus on health restored by any spell with the word “healing” in its name. Surely it would have been simpler to put the bonus into the spell definitions. Now, arguably, someone attempting to use feats to mix-in cleric abilities to their class might now need an extra feat to obtain that bonus, but … it just seems unnecessarily convoluted to me.

Spells (note: I use the word “spells” facetiously — they’re referred to variously as “class features”, “exploits”, “prayers”, etc. based on class, but they all work the same way, which is pretty much the way “spells” worked in older editions of D&D) are the meat of each character class description. Instead of being levels 1 through 9, they’re now listed as the level at which they become available: 1, 2, 3, 5, 6, 7, 9, 10, 13, 15, 16, 17, 19, 22, 23, 25, 27, 29. (Note that this is much finer-grained than in “the good old days” — a Good Thing — and that the designers give out new abilities on odd-numbered levels to make up for new feats arriving on even-numbered levels, attribute bonuses arriving on levels which are multiples of 4, and (massive!) tier-bonuses on levels 11 and 21. So you get something nice every time you go up a level.)

Spells are available “at will” (you can just do them as often as you like), once per “encounter”, once after “resting” (a slightly more restrictive version of once per encounter), and once per “day”. Orthogonal to how often you can use spells, some are classified as “utility”, meaning they are picked from a different pool. The provision of “at will” spells is a huge improvement over earlier editions — yes, you can now be like “Tim the Enchanter” and just nuke stuff constantly — but you only ever get two and they don’t improve with level (an obvious omission — why not allow characters to use “encounter” abilities ten levels lower “at will”?). By level 10 a character will have 2 “at will”, 3 “encounter”, 3 “utility”, and 3 “daily” spells. (And again, note that this applies to every class, not just casters.)

Skills are broadly defined sets of abilities (much broader than in the past — think DragonQuest not RuneQuest) that characters can obtain based on class. E.g. the “Thievery” skill covers picking locks, disarming traps, and picking pockets, while “Athletics” covers jumping, swimming, and climbing (yes, at last, fighters can actually climb and sneak!). All abilities within a skill are driven by the same bonus, so all athletics tasks are driven by Strength (yup, weightlifters make the best climbers)… It’s pretty annoying that rogues can’t climb as well as fighters, and of course strength driving climbing is just stupid, but this isn’t GURPS — we left realism behind a long time ago so there’s no point picking nits. Sadly, skills get very short shrift from the rules — Diplomacy receives one paragraph and no concrete examples at all.

Instead of buying individual skills and skill levels (a promising trend in D&D3), skills are once again something you have or have not, and you simply make attribute checks to use them (receiving a +5 bonus for actually having the skill). This makes higher level characters pretty good at everything (I mean … ridiculously good at everything) even without the “Jack of All Trades” feat (+2 at all untrained skills). Much like Chivalry & Sorcery (at least, the last edition I looked at), there’s no point trying to be really good at anything unless you’re also ridiculously high level — an Int 18 chef with the cooking skill (if there were a cooking skill) will get +9 at level 1, while a level 20 character with Int 9 who has never seen the inside of a kitchen will get +10. This, of course, horribly undermines the premise that “even a level 1 character is heroic” compared to ordinary people.
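
The chef comparison falls straight out of the flat +5 training bonus. (A sketch — the function name is mine; I've given the high-level know-nothing Int 10 so the rounding of negative modifiers doesn't muddy the comparison, which doesn't change the conclusion.)

```python
def skill_bonus(score: int, level: int, trained: bool) -> int:
    # attribute bonus + half level, plus a flat +5 for having the skill at all
    return (score - 10) // 2 + level // 2 + (5 if trained else 0)

print(skill_bonus(18, 1, trained=True))    # the Int 18 level 1 master chef → 9
print(skill_bonus(10, 20, trained=False))  # a level 20 kitchen-illiterate → 10
```

Training and a maxed attribute together are worth nine points; ten levels of anything at all are worth five.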

Finally, feats are essentially masturbation with the game system. Each feat tends to allow you to diddle with the rules in some slightly beneficial way (a +1 modifier here or there, get an extra this or swap X for Y, or avoid this penalty in this situation) or allow you to do some cute trick that a more “realistic” game system lets pretty much anyone do.

If all of this sounds (a) really complicated, and (b) disturbingly similar to MMORPGs, I’d have to agree, and then point out the rules for respeccing (“retraining”) built right into the character advancement system. Each time you go up a level you’re able to swap out an ability slot (forget thievery, learn diplomacy). (Imagine explaining such a change in a story… “Well I used to know how to pick locks, but I’m afraid I completely forgot how when I decided to learn the intricacies of diplomacy upon ascending to level 8.”)

Levels & Tiers

D&D4 allows for thirty levels of character advancement divided — for no particularly good reason — into three “tiers” named “heroic” (levels 1-10 — that’s right, everyone’s a hero), “paragon” (levels 11-20), and “epic” (levels 21-30). This entire idea is at once central to the game rules (e.g. when you become a “paragon” you get to further specialize your character class, and most “at will” abilities get much better when you hit level 21) and something of an afterthought (a lot of “epic” abilities seem pretty humdrum — e.g. most level 29 fighter/ranger/rogue abilities seem like something any reasonably competent guy could do).

Armor Class

I’ve heard that there’s been a long standing desire on the part of D&D’s designers to fix the single stupidest rule in D&D (armor makes you harder to hit) but every time they try it, it gets rejected by players. D&D4 does continue the trend of favoring agility (wear light armor and dodge) over simply dressing head to foot in plate (you now not only get no Dex bonus if you’re wearing medium or heavier armor, but there are skill check penalties for wearing heavier kinds of armor). This is both cruder and less annoying than D&D3’s “Dex cap” (where each kind of armor had a maximum bonus for Dexterity — now you just get to pick whether you’re gonna dodge or not). Shields continue to have a negligible effect on AC (although — interestingly — they now have the same negligible effect on Reflex defense).

Overall, though, I’d have to say that turning saving throws into another form of Armor Class is something of a retrograde step — and it underlines what a silly mechanic Armor Class is in the first place.

Saving Throws

Fortitude, Reflex, and Will are the (saner) replacements for “save vs poison”, “save vs. wands”, and “save vs. spell” in the original D&D and AD&D rules. Characters now have a passive defense value (10 + Fortitude bonus for Fortitude) and can make saving throws in the same category. In general, you use the passive defense value when being attacked, and saving throws to get out of something you’re in, or shrug off an effect that’s already on you. It follows that there might be cases where you make an “Armor Check” (i.e. actively roll using your armor bonus) to resolve some things, and this does appear to be the case. The upshot is that AC, Fortitude, Reflex, and Will all work exactly the same way now. This may not be a Good Thing in terms of realism, but it does reduce the number of mechanics you need to learn, and the number of die rolls needed during fights.

The Wargame

If you accept D&D’s basic mechanics — in particular their limitations and lack of realism — for what they are, the combat sequence of play in D&D4 is essentially a cleaned up version of what they were trying to do in D&D3.x.

Each round every character gets a turn in initiative order (Dex rolls — highest goes first). Each turn comprises a standard action, a move action, and a minor action. A standard action may be converted into a move action, and a move action can be converted into a minor action (so you can instead perform two moves and a minor action, one move and two minor actions, a standard action and two minor actions, or three minor actions). A standard action is typically an attack or the use of a special ability of some kind (e.g. casting most spells). A move action is what you’d think. And a minor action is something like unsheathing a weapon. Moving away from or past an armed character exposes you to opportunity attacks.
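
The downgrade rules (standard → move, move → minor) generate exactly the five combinations listed above, which is easy to confirm with a quick sketch (the function and dictionary names are mine):

```python
# Each turn starts as one standard, one move, and one minor action.
# A standard action may be traded down to a move, and a move down to a minor.
DOWNGRADE = {"standard": "move", "move": "minor"}

def legal_turns(actions=("standard", "move", "minor")):
    """Return every distinct multiset of actions reachable by downgrading."""
    turns = {tuple(sorted(actions))}
    for i, action in enumerate(actions):
        if action in DOWNGRADE:
            swapped = list(actions)
            swapped[i] = DOWNGRADE[action]
            turns |= legal_turns(tuple(swapped))
    return turns

for turn in sorted(legal_turns()):
    print(turn)  # prints the five legal action combinations
```

Five possible turns, matching the enumeration above.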

Where D&D4 shines is in its solid definitions and explanations of pre-emptive and reactive interruption, triggers, and opportunity actions. It could probably be better, but it’s pretty darn good. This is a really solid sequence of play that can cope with delicate situations (e.g. hostages at gun … er crossbow … point).

D&D4 also handles cover, the distinction between line-of-sight and line-of-attack (e.g. you can blind fire through smoke, etc.), and rigidly defines everything in terms of an ugly-but-convenient 5′ grid. (I’d argue that ForeSight’s 1m grid is better for one important reason — it lets you represent the extra reach of long weapons better. Very few melee weapons — and the sarissa and lance aren’t sensible weapons for adventurers to use — let you strike someone 7.5-10′ away, but many let you strike at someone 1.5-2m away. Hex grids are obviously superior to square grids, but they’re also a pain in the ass.)

Perhaps my least favorite feature of the D&D combat system is that it abstracts facing out of the game. Your facing is simply ignored. I can see the argument that a character being attacked by two assailants on opposite sides won’t simply face one and not the other, but will whirl around and try to handle both. OK. But what if he/she regards one as being simply irrelevant and does choose to just deal with one (e.g. he/she may think one is just an illusion, or dismiss one as relatively harmless, or simply think it safer to concentrate on taking one down)?

It seems to me that D&D4’s combat system is still one notch too “abstract” for my taste, but — core mechanics aside — it’s a very solid combat system for what it is.

Game Balance

The classic game balance argument in D&D has been (a) which is more ridiculously overpowered, clerics or monks? and (b) just how f*cked are rogues/assassins?

Based purely on reading the rules, it seems to me pretty clear that clerics are as overpowered as ever (e.g. the two level 29 cleric “daily” attack spells are an AoE that does more damage than meteor swarm (the most powerful standard wizard attack) and a melee attack that does as much damage as the equivalent fighter attack — all this while being able to toss around heals like candy). I’m not sure I’m going to pay for the rulebook that contains monks to find out if they’ve made out as well.

As for rogues/assassins — most of their best abilities are now available to other character classes (indeed, via feats, they all are — you can make a backstabbing paladin if you want to). On the other hand, rogues can drive both their AC and attacks with Dexterity, which means a rogue will no longer (a) be unable to hit anything (as per pretty much every earlier edition of D&D) or (b) be very easy to hit. And the change in hit point formulae means that rogues will tend to have roughly 80% of the health of fighters.

The Dungeon Master’s Guide

When AD&D was published, of the three core books the DMG came out last. As a result it contained everything they forgot to put into the first two books. This meant that, among other things, huge amounts of information pretty much vital to players (e.g. notes on how pretty much every spell worked) were in a book forbidden to players.

A friend of mine once said that a set of role-playing rules comprises three kinds of things. Rules of representation (what does X in your imaginary world look like in terms of game mechanics?), rules of resolution (what happens when X tries to do Y?), and “vague waffle about role-play”. This was not to denigrate the third item, indeed some of the best RPG rules ever written have been notable in large part for their excellent vague waffle about role-play (I’m thinking of RuneQuest 2nd Edition and James Bond 007).

The latest DMG is very much what it ought to be — a book describing how to be a “Dungeon Master”. In other words, it’s almost entirely “vague waffle about role-play”. It includes some pretty neat stuff, including how to put together different kinds of challenging set pieces (e.g. how to handle a delicate negotiation, or a street chase), shortcuts for customizing monsters (e.g. making a “leader” version of a type of monster), and some reasonably well put together sample content (the D&D4 rules are conspicuously short of examples in general). This alone represents a huge advancement over the original rules (I’m not sure how big an advancement it is over D&D3, since I never paid much attention to the D&D3 DMG).

If you have a pretty good idea about how to DM (or GM) a role-playing game, you can probably make do without the DMG at all, although you may find it interesting even so.

The Basic Flaw, Revisited

The basic problem with D&D has always been that it’s virtually impossible to recreate characters from the source material (e.g. Aragorn from Lord of the Rings, or the Gray Mouser from Lankhmar) legally using the game rules. This despite the fact that Aragorn was clearly the inspiration for the “ranger” class and the Mouser was clearly the inspiration for the thief (now rogue) class. It seems like this problem has finally been addressed in D&D4 which, alone, seems like a major improvement. On the other hand, the amount of complexity D&D has added in order to allow character classes to be squeezed into some kind of useful shape seems like a bit of a disaster. As I read the D&D4 rules I was continually struck by how much simpler just ditching character classes and levels would make everything.

After D&D first appeared, the second major RPG to be published was RuneQuest. (The main game designer was Steve Perrin, author of the “Perrin conventions” which allowed people to make sense of the original D&D rules.) RuneQuest more-or-less kept D&D’s attributes, but dropped character classes, levels, and alignments and replaced them with skills, spells, and a detailed background (which included religions and shamanism).

Not only did RuneQuest’s approach allow you to easily represent any character you wanted to (especially once the setting was generalized out of existence) but it let you start playing the character you wanted to play pretty much straight away. (In D&D you start out playing a total wimp and can end up playing a ridiculously powerful caricature — whether you ever get to play the character you originally wanted to play is doubtful.)

One of the most obvious differences between RuneQuest and D&D is that while a more powerful character tends to be slightly tougher, slightly stronger, slightly more agile, and so forth than a weak character, the real difference lies in skill. Not only is a powerful character better at hitting things, he/she is better at not getting hit. This is a case of realism making things simpler — D&D has horrendous amounts of complexity in it purely to work around the core mechanic of “going up levels gives you more hit points but doesn’t really make you harder to hurt”. Similarly, RuneQuest tends to use realistic rules to prevent silliness (wearing metal armor makes it harder to sneak around) rather than arbitrary “because I said so” restrictions (“thieves may not wear heavy armor”).

How Much of This Crap Do I Need To Buy?

It’s worth mentioning that the Player’s Handbook contains pretty much all the rules you need to play D&D4, although for “monsters” you’ll need the Monster Manual.

One of the annoying aspects of D&D4 is that all the core rulebooks have been split into two parts. Player’s Handbook 2, for instance, contains the rules for Monks, Bards, Druids, etc. (three of my favorite character classes), and Monster Manual 2 contains many “standard” monsters (like frost giants). I’m used to the secondary books being full of useless half-assed crap that I can live without (the original Monster Manual 2 was utterly worthless, for example, and all of the extra PH-style books were pretty terrible). What’s next — a psionics rulebook that isn’t completely broken?

It follows that a complete set of core rules is going to set you back around $120-150 (much more outside the US, I expect). That’s before you fork out for miniatures and dungeon tiles (and the vastly tightened up combat rules virtually beg you to start using maps and miniatures).

What I’d Change (based on what I’ve seen so far)

Obviously, I’d eliminate alignments. To the extent that concepts like “a sword that can only be wielded by a good person” or “a ring that gradually makes its wearer evil” need support, some concrete mechanics would be necessary — but they’d be far preferable to having alignments.

I’d eliminate character classes. I might keep the idea of “levels”, but when you go up a level you’d simply get a pool of points to spend, and possibly access to perks and attribute bonuses.

Most feats would simply disappear, since they either give you abilities everyone should have or mess with idiotic game mechanics. Those remaining would either become “abilities” (along with “spells”, “powers”, and “skills”) or “perks” (as per Fallout). Abilities would have prerequisites and many would be presented as “trees”.

I’d eliminate the concepts of encounter and at will abilities and put in the concept of fatigue. Everything has a fatigue cost. You get fatigue back by resting. Some “daily” abilities would simply cost a lot of fatigue, others would be “exhausting”. (I’d have rules for getting exhausted and being exhausted.) I might add a separate “mana” pool, if nothing else to give magical abilities a different set of dependencies than mundane ones.

AC et al must be replaced with a more sensible system. Armor mitigates hits (i.e. reduces the damage you take from hits). Dexterity and skill make you harder to hit. Fighting styles/stances should confer offensive or defensive advantages and options.

I’d make weapons do more damage and give characters more ability to defend themselves. Parries, ripostes, dodges, and active resistance would be new actions available in the combat system.

And, finally, I have an idea for making d20 resolution work better in general, but more on that in another post.

Provisional Conclusion

I haven’t played D&D in a very long time, so the fact that I plan to play some D&D4 says a great deal in itself. (I went looking for miniatures today!) Based on what I’ve read thus far, D&D4 looks like a pretty good game. It’s not much of a role-playing game (but D&D never was), but it’s a pretty solid coop wargame (where your “opponent” isn’t especially interested in killing you) and of course there are all those great magic items to collect and abilities to figure out. The combat rules afford real opportunities to do cute things — there are even benefits to maneuvering in combat — and characters have an interesting (and pretty huge) assortment of abilities to play with, some of which look like fun.

In the sense of “rules as content” the D&D4 rules probably have more new doohickeys to play with than any other edition of D&D since Greyhawk. It’s a shame — given how radical a departure D&D4 is from earlier versions of D&D — that they didn’t have the guts to actually fix its stupidest flaws.

Dragon Age

I’ve been playing Dragon Age moderately obsessively for the last few days (since I found it selling for $40 at Target just after I finished Liberty City Stories).

Dragon Age is virtually a direct descendant of Dungeons & Dragons, which is a little sad because Bioware has been struggling to escape from the D&D vortex, on and off, for over ten years. Given that they’d prefer not to pay Wizards of the Coast royalties for a D&D license when most gamers buy stuff for the Bioware logo first and foremost, they have designed their own game rules — essentially an even more annoying variant of the Mass Effect game rules with a fantasy “skin”.

If you like Mass Effect (I did) you may like Dragon Age (I do) — although Dragon Age is considerably uglier in most respects than Mass Effect (I suspect that, at a low level, the current generation of texture compression schemas available for console programmers is not as sympathetic to gritty detailed “fantasy” textures as it is to the cleaner “Star Warsy” graphics in Mass Effect). Personally, I loved the look of Neverwinter Nights and Knights of the Old Republic and think that when Bioware jumped up to the next level of graphic quality (NWN2, et al) they made the mistake of going for too realistic a look, and have never recovered. So far, of the “current generation” Bioware games, only Mass Effect doesn’t look like ass. At least with Bethesda — their games have always looked like ass.

First of all, Dragon Age is pretty hard (and I’m only playing on “normal”) — in both the “challenging” sense (a lot of the bad guys can do a lot of area effect damage really fast, and there are lots of stunning and immobilization effects) and, unfortunately, the “annoying and tedious” sense. The combat system is very fast paced (it’s exactly the same system as in Mass Effect, and conceptually similar to everything Bioware has done since Baldur’s Gate), and I would frequently find myself pausing every second or two and going through entire fights character by character making sure everything was OK. For one fight — so far — I switched down to “casual” difficulty because it was such a pain to win (I think I made the mistake of attempting a specific side-quest way earlier than the designers intended). A few fights I’ve had to repeat half a dozen times to get through. One fight I’ve been unable to win and set aside for “later”. (Hint: it involves a dragon.) And it doesn’t help that many potentially deadly fights tend to be against humans, and it’s hard to tell whether the four thugs you’re fighting are a nuisance or Death Incarnate. Once you get past the early quests you really can’t afford to treat any fight casually.

Unfortunately, I’d say that much of the time the reason combat is difficult is that the UI is often infuriating (e.g. when you target a lot of spells you drop out of command mode — argh!), you can only control one of your four party members at a time, and the Artificial Stupidity is pretty damn strong (e.g. the AIs have absolutely no cognizance of AoE spells, and will cheerfully charge at enemies sitting in the middle of earthquakes and lightning storms; similarly, they will cheerfully walk into marked traps the moment a fight starts — which often means hitting “Load Game” immediately as half your party goes down in a fraction of a second). In essence, the only way I’ve found to win tough fights is to switch to “hold” and micro-manage everyone’s positioning, which slows fights to a crawl (even though in “real time” most fights are over in very short order — very much not like D&D). A micro-managed group is around 2-5x more effective than just letting your idiots fight on their own.

Unlike Mass Effect, money has so far been very tight, which means I can’t afford to deck my characters in cool equipment. That said, there seem to be far fewer gear upgrades than Mass Effect, so the endless shifting around of gear because you found a slightly better assault rifle has been substantially reduced (also, gear is much less interchangeable, so the fact that Bob got a new mace doesn’t tend to have so many ripple effects). And, yes, it’s another case of “try to save the world while cobbling together enough cash to buy healing poultices”. At least, in this case, the reason most people are selling you gear is that they don’t know the world needs saving.

One thing I really like is the idea that a character taken out during a fight is “injured” rather than “dead”. Once the fight is over, they get up again — somewhat the worse for wear. (They need special healing to recover — well you need to click an “injury kit”.) This avoids the conceptual morass of the guys who’ve been raised from the dead thirty times in the course of their careers — but would play better if the game really treated them as “injured” rather than dead (not representing them with a skull icon would be a good start). Frankly, I’d have preferred the fights to be a little bit easier, but the consequences of injury to be much worse (e.g. you might have to go to a special healer to get patched up). In one dungeon I ran out of injury kits and each injury became a serious problem (well, at least conceptually — I didn’t really detect any major downside to carrying injuries) — if the entire game felt like that, then fights could be hard without frequent wipes being nature’s way of telling you those guys were pretty tough.

Second, while Dragon Age is no more conceptually advanced than the original Fallout (a Black Isle game, but clearly the model here), the setting is — by fantasy standards — pretty original, the writing good, and the quests interestingly designed. The voice acting is merely OK, though. Long-distance travel in Dragon Age is handled exactly as in Fallout (enter world map, click on destination, dot moves across map — zoom in to small generic location for random encounters).

In terms of quest complexity and moral gray areas, Dragon Age is perhaps the worthiest successor to Fallout that I’ve played (including Fallout 3). For example… Spoiler Alert! (Select to read.) I’m currently less than half-way through (as far as I can tell) and I’m currently trying to get the dwarves to join my alliance, but to do this I need to solve their succession crisis (which appears to have at least two possible outcomes) — and I’ll need to figure out which guy I want to back and then how to back him. (I usually tend to be “goody two shoes” in RPGs, and I’m trying to play selfish and ruthless, but the dwarvish caste system is irking me so I may end up erring on the side of niceness yet again — although, interestingly, the guy most likely to tear down the caste system appears to be more of an asshole.) The point is, this is not a distinction with no difference, and the setting is engaging enough for me to care which path I pick. End Spoiler Alert.

Third, as I have already implied, Bioware have done themselves no favors in the game design department. As in Mass Effect it’s very hard to figure out exactly which skills are useful, and unlike Mass Effect there are way more of them (instead of having one skill with N levels which gives you special benefits at certain levels, you get sets of four distinct abilities which are thematically related, but each completely independent), so you tend to waste a lot of skill points. (And I don’t particularly want to read “guides”, use cheats/walkthroughs, or restore from save constantly.)

There’s a huge amount of repetition and flavorless redundancy in the spells and abilities (e.g. shield pummel vs. shield bash vs. overpower vs. assault — all basically the same thing with different cooldown timers — I might add that the abilities often seem to have effects that make no sense relative to their name, e.g. “riposte” is just another “whack + stun” ability, not a counter-attack following a parry). Often you’ll be motivated to get a new ability not because it adds anything new but simply because it’s just like some other ability you have, but on a different cooldown timer — which is just stupid since you’re already limited by stamina/mana and execution time. (Why is it faster to cast two lightning and two freeze spells than four of one or the other?)

And finally, unlike Mass Effect which had three orthogonal character classes (soldier, tech, and psy) and then three hybrid classes, Dragon Age has ditched the “hybrids” — you just get warrior, rogue, and mage (i.e. the same classes with a fantasy skin) — and you can specialize each class to resemble pretty much any typical fantasy cliché you like — rogues can be bards or assassins, mages can be healers or shapeshifters (no necromancers though), fighters can be paladins, berserkers, etc. I do like the fact that there’s one caster class that can be anywhere on the dps/buff/heal continuum you want, rather than treating the healer and mage as distinct and then giving them a huge overlap, but I don’t see why fighter and rogue couldn’t be similarly blurred (especially since it’s exactly what I will play in virtually any RPG when given a chance).

On the whole, I’d say that the original Fallout had the best game design (especially for character development) Bioware has done thus far. The problem with the system devised for Dragon Age is that it’s way too complex and non-orthogonal to grok given the time investment. (It’s not like an MMORPG where you’ll be playing the game for six months and (a) have the desire to figure out whether it’s better to spend a point on “slam” or “smash” or “butt-whack”, (b) probably have some mechanism for switching your points around if you change your mind, and (c) can rely on a bunch of game designers employed full-time to keep things balanced.) To provide a simple example: you will often have the choice of several different spells which all do single-target damage, but no clue as to which one does more damage, is harder to resist, has longer range, or stuns as a side-effect (and each opens up a new spell, which makes choosing even harder). For a more complex example: you will often have a choice between reducing stamina/mana costs, increasing stamina/mana regeneration, or getting a whole new ability with a different cooldown timer. And then there’s the “mode” system (you can be in one “mode” at a time), which makes everything even harder to analyze.

Contrast this with Fallout et al., where you could simply opt to “be tougher”, “shoot faster”, “shoot more accurately”, etc. (And there’s nothing remotely like Fallout’s “perks”, which were one of my favorite features.)

You do get a wide variety of NPCs to play with and can at least sample what’s possible — it seems to me that a lot of the replayability (if there is any) will come from a desire to create a character with a less fracked up skill tree the second time around. It is also annoying how specialized a character has to be — if you want your fighter to be a tank you’ll need to burn so many skill points on shield skills that you can’t switch to a two-handed sword and wreak havoc when the mood takes you. (At least not at level 12.)

As an aside, I’d have to say that the obsession with specialization in RPGs (it started in computer RPGs but has bled back into paper) really ticks me off. I’m sorry, but a good fantasy story doesn’t involve a guy who is so specialized in tanking that he can’t use a bow. Gandalf wore chainmail and wielded a sword (as did Turjan of Miir). Conan could sneak and climb walls. Fafhrd could dual wield and the Gray Mouser could cast spells. How did we get from this inspiration to guys who obsess over threat generation, mitigation, avoidance, and hit points? And it’s not even a game balance issue, since you can’t use your shield skills while wielding a two-handed sword, and when you’re sneaking or climbing it doesn’t really matter that you’re a kick-ass musician.

If you’re a computer RPG player the chances are you already know you will or won’t buy Dragon Age because you either do or don’t like Bioware’s stuff. So the bottom line is that — relative to other Bioware offerings and adjusting for time and technology — it’s up there with Fallout in terms of back story, writing, and plot, but the game mechanics are annoying and the graphics are meh.

Go (The Programming Language)


There’s a companion post to this piece in which I pretty much lay out my experience of programming languages. If you’d prefer to avoid reading through all that crap, I can summarize by repurposing one of my favorite lines from Barry Humphries (“I’m no orator, far from it, I’m an Australian politician.”) — I’m no systems programmer, far from it, I’m a web developer. Go is explicitly a systems programming language — a tool for implementing web servers, not web sites. That said, like most programmers who live mainly in the world of high level / low performance languages, I would love to program in a lower level / higher performance language, if I didn’t feel it would get in the way. At first blush, Go looks like it might be the “Holy Grail” of computer languages — easy on the eye and wrist, readable, safe, and fast. Fast to compile, and fast at runtime.

What’s my taste in languages? Something like JavaScript that compiles to native binary and has easy access to system APIs. Unity’s JavaScript is pretty damn close — except for the system APIs part (it’s also missing many of my favorite bits of JavaScript, and not just the stuff that’s hard to write compilers for). Actually, if I really had my druthers it would be JavaScript with strict declarations (var foo), optional strong typing (var foo: int), case-insensitive symbols, Pascal’s := for assignment, = as a binary comparison operator, syntactic sugar for classic single inheritance, and ditch the freaking semicolon.

Of course, I am not — as I have already said — a systems programmer, so things like the value of concurrency haven’t really been driven home to me, and this is — apparently — one of Go’s key features.

Pros
  • Clean syntax — it looks easy to learn, not too bad to read, and certainly easy to type. They seem to have managed to climb about as far out of the C swamp as is possible without alienating too many people (but I’m a lot less allergic to non-C syntaxes than most, so maybe not).
  • Explicit and implicit enforcement of Good Programming Style (explicitly via gofmt, and implicitly via clever syntactic tricks such as “uppercase identifiers are public, lowercase are private”).
  • The language is designed so that some obvious ways of doing things which are serious bugs in C turn out to be correct idioms in Go.
  • Safety is a design goal. (E.g. where in C you might pass a pointer to a buffer, in Go you pass an array “slice” of specified size. Loops use safely generated iterators, as in shiny but slow languages like Ruby.)
  • Very expressive (code is clear and concise, not much typing, but not too obscure — not as self-documenting as Obj-C though).
  • Orthogonality is a design goal.
  • Seems like a fairly small number of very powerful and general new “first class” concepts (e.g. the way interfaces work generalizes out both inheritance and (Java-style) interfaces, channels as a first class citizen are beautiful).
  • It really looks like fun — not hard work. Uncomplicated declarations mean less staring at a bunch of ampersands and asterisks to figure out WTF you’re looking at. Fast (and, I believe, simple) builds. It makes me want to start working on low level crap, like implementing decent regex… Makes me wonder how well Acumen would work if it were implemented as a custom web server, how much effort this would be, and how hard it would be to deploy…
  • What look like very fast compiles (in his demo video Pike recompiled a substantial amount of code in almost no time on a MacBook Air) — but then I’ve seen insanely fast C++ compiler demos that didn’t pan out to much in the Real World.
  • Really nice take on OO, especially interfaces (very elegant and powerful).
  • Almost all the things you want in a dynamic language — closures, reflection — in a compiled language.
  • Really, really nice support for concurrency and inter-process communication. Simple examples — that I could understand immediately — showed how you could break up tasks into huge numbers of goroutines, perform them out of order, and then obtain the results in the order you need to, without needing to perform mental gymnastics. (Lisp, etc., coders will of course point out that functional programming gives you these benefits as well in a simpler, cleaner way.) The 100k goroutine demo was pretty damn neat.
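The fan-out-and-collect pattern described in that last point can be sketched in a few lines. This is my own minimal example, not code from the demo: each job runs in its own goroutine, jobs finish in whatever order they like, and a channel carries every result back tagged with its index so the caller can reassemble them in order — no locks, no condition variables.

```go
package main

import "fmt"

// tagged pairs a job's index with its output so results can be
// reassembled in order even though jobs finish out of order.
type tagged struct {
	index int
	value int
}

// squaresOrdered fans each input out to its own goroutine, then
// collects the results back into their original positions.
func squaresOrdered(inputs []int) []int {
	results := make(chan tagged)
	for i, n := range inputs {
		// Pass i and n as arguments so each goroutine gets its own
		// copies rather than sharing the loop variables.
		go func(i, n int) {
			results <- tagged{i, n * n}
		}(i, n)
	}
	out := make([]int, len(inputs)) // a slice, not a raw pointer + length
	for range inputs {
		r := <-results // receive in completion order...
		out[r.index] = r.value // ...store in original order
	}
	return out
}

func main() {
	fmt.Println(squaresOrdered([]int{1, 2, 3, 4, 5})) // [1 4 9 16 25]
}
```

The channel does double duty here: it moves the data and it synchronizes the goroutines, which is exactly the mental-gymnastics-free quality the demos showed off.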

Cons
  • To quote a friend, “The syntax is crawling out of the C swamp”. So expect the usual nonsense — impenetrable code that people are proud of.
  • Concurrency is much better supported in the 6g/8g/5g compilers than in the gcc version (the former multiplexes goroutines onto OS threads; the latter uses one thread per goroutine).
  • They promise a radically improved GC, but right now it’s just a promise (and the gcc version has no GC at all yet).
  • Pointers are still around (although, to be fair, largely avoidable, and reasonably clearly declared).
  • No debugger (yet), obviously very immature in many ways (weak regex, and see the point about GC, above).
  • The syntax seems a bit uncomfortably “open” (nothing but spaces where you are accustomed to seeing a colon, or something), but it’s a matter of familiarity I suppose.
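On the pointer point: for what it’s worth, here’s a minimal sketch (mine, not anything official) of what pointer use actually looks like in Go — declarations read left to right, there’s no pointer arithmetic, and much of the time you can avoid pointers entirely.

```go
package main

import "fmt"

// increment takes a pointer to an int; the type reads left to right
// ("p is a pointer to int") rather than C's inside-out declarations.
func increment(p *int) {
	*p++ // dereference and add one; no pointer arithmetic exists
}

func main() {
	n := 41
	increment(&n) // pointers are still around, but clearly marked
	fmt.Println(n) // 42
}
```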

Unknowns (at least to me)

  • How well does it compare to evolutionary improvements, such as Obj-C 2.0 (which is garbage-collected) combined with Grand Central Dispatch (for concurrency support and hardware abstraction, but at library rather than language level) and LLVM?
  • Will GC actually work as advertised?
  • Is someone going to step up and write a good cross-platform GUI library? (It doesn’t need to be native — it just needs to not suck — something that has conspicuously escaped both Java and Adobe.)
  • Will we have a reasonable deployment situation within — say — six months? E.g. will I be able to ship a binary that runs on Mac / Linux / Windows from a single source code base?
  • Will it get traction? There’s a lot of other interesting stuff around, albeit not so much targeting this specific audience (and it seems to me that if Go delivers it will appeal to a far broader audience than the one they’re targeting, since it’s essentially very JavaScript / Python / PHP-like with — what I think is — obviously bad stuff removed, and with better performance).


I’m not quite ready to jump on the Go bandwagon — I have some actual projects that have to ship to worry about — but it does look very, very attractive. More than any other “low level” language, it actually looks like fun to work with. The other newish language I’m attracted to — for quite different reasons — is Clojure. Because it lives on the Java VM it’s a lot more practical now than Go is likely to be for some time, and because it’s essentially “yet another Lisp” it affords a chance for me to really build some competence with a fundamentally novel (to me) programming paradigm. In a sense, Go is more attractive because it’s closer to my comfort zone — which speaks well for its chances of success. Clojure, because of its conceptual novelty, is almost guaranteed to remain obscure — the latest Lisp variant that Lisp adherents point to when complaining about <insert name of language people actually use here>. Time will tell.

Olympus E-P1

At last, a small camera that takes DSLR quality images.

Olympus has finally announced its first micro 4/3 camera. Unlike the Panasonic GH-1 it “only” does 720p video (but the samples I’ve seen are gorgeous), has no viewfinder or built-in flash, has a low resolution (but large) fixed LCD, is actually quite small, and is priced competitively (MSRP $699 body only, $799 with a 14-42mm Zuiko lens).

One really intriguing feature of the camera is that in addition to being able to share lenses with all other micro 4/3 cameras (e.g. Panasonic’s 14-140mm and 7-14mm lenses, and various Leica lenses), it can use normal 4/3 lenses via an adapter, and it could potentially use pretty much any 35mm or APS-C (or even Leica M-series) lens via adapters (Olympus is providing an OM-system adapter for starters). This makes it almost the perfect “backup” camera for serious photographers since it could easily share lenses with any other camera one happens to use.

This camera is almost pocketable, offers similar low-light performance to the Canon T1i (i.e. slightly inferior to the D5000/D90, slightly better than the GH-1), has built-in image stabilization, dual control dials, full manual control, all-metal construction — indeed the same feature set as Olympus’s E-620 and E-30 DSLRs. And it looks lovely.

If it had a viewfinder, or even a higher resolution LCD, I think it would be ridiculously compelling. If, as many expect, its street price quickly drops below $400 it will be ridiculously compelling anyway. I, for one, don’t much care about the lack of flash, since I avoid flash (especially built-in flash) as much as possible.

Oh, and it’s perhaps the most physically beautiful digital camera I’ve seen.