I first remember André Braugher from his performance in Glory, where he played perhaps—low key—the most important role in the movie. He played the person with the most to lose and the least to gain by joining the army and fighting to end slavery (something the movie later acknowledges is pretty much a fool's errand). He plays the person we—the viewer comfortably separated from the events portrayed by circumstances, time, and knowledge of what will happen—should be but almost certainly won't be. (No more details: watch the movie if you haven't seen it.)

Most people will know him either from his role in Brooklyn Nine-Nine, a great sitcom of recent years, or Homicide: Life on the Street, the best police procedural ever made, based on a fantastic non-fiction book by David Simon. (I revised this paragraph after conferring with numerous colleagues and discovering that my daughters' opinion is widely held; I am outvoted!)

In Homicide he is again playing someone who stands for justice despite his own self-interest. He is the black man with obvious intellect and education who chooses to work as a Homicide detective when there are so many better options for him; it ruins his marriage, and it is killing him. He works within a corrupt and under-resourced system, and with colleagues he pretty much despises, trying to make the tiniest difference when and where he can, and usually to his own disadvantage.

And, despite its being a comedy, as Raymond Holt in Brooklyn Nine-Nine he somehow again plays someone in pretty much this situation, except that, now an older man and a captain, he has navigated an earlier phase of life in which all of… this… was much worse, and today is comfortable enough that the horribleness is purely (and not always darkly) comic.

Homicide is one of my favorite TV shows of all time. Brooklyn Nine-Nine is my daughter's favorite TV show of all time.

André Braugher is already missed.
\n","$updatedAt":"2024-06-05T09:10:30.273+00:00",path:"andr-braugher-rip",_created:"2024-07-09T20:28:30.241Z",id:"8106",_modified:"2024-07-09T20:28:30.241Z","$id":"8106",_path:"post/8106"},{date:"2023-12-02T14:24:54.000+00:00",summary:"What vector graphic editing software produces clean output with minimal control points, specifically for creating symmetrical shapes and performing boolean operations, as explored by the author who has experience with multiple applications including Affinity Designer 2, Sketch, Vectornator, Graphic, Inkscape and Adobe Illustrator?",keywords:["inkscape","svg","boolean operations","vector graphics","sketch","affinity designer","vectornator","graphic","babylonjs","xinjs"],author:"Tonio Loewald","$createdAt":"2024-06-05T09:10:30.285+00:00","$databaseId":"665dfafe000014726b3d",title:"Pi 5 follow-up","$collectionId":"665dfb040030d12ada24","$permissions":[],content:"\n\n\n\n\nI don't have a working version of Illustrator any more, but I strongly suspect Illustrator produces perfect output in this case. In the end, I hand edited the bezier curves in my logo, but it looks like I possibly could have saved time by using Inkscape to do the booleans (and then going back to Sketch to produce clean output).
\n","$updatedAt":"2024-06-05T09:10:30.285+00:00",path:"pi-5-follow-up",_created:"2024-07-09T20:28:30.928Z",id:"8088",_modified:"2024-07-09T20:28:30.928Z","$id":"8088",_path:"post/8088"},{date:"2023-12-01T23:40:30.000+00:00",summary:"What is my experience with setting up and using the Raspberry Pi 5 compared to the Meta Quest 3?",keywords:["raspberry pi 5","quest 3","nodejs","nwjs","chromium","babylon3d","electron","mathml","svg","raspberry pi"],author:"Tonio Loewald","$createdAt":"2024-06-05T09:10:30.273+00:00","$databaseId":"665dfafe000014726b3d",title:"Pi 5","$collectionId":"665dfb040030d12ada24","$permissions":[],content:"\n\n\n\n\nOne of the things I did early during the COVID shutdown was buy myself a Raspberry Pi 400 (the one built into a keyboard) along with the camera module and some lenses. I did not realize that the Pi 400 did not have the required hardware interface to work with the camera (if I recall, the 8GB Pi 4 was already sold out, because a lot of people decided to play with Raspberry Pi devices during the lockdown).
Anyway, I never got to play with the camera module and in any event I think I lost track of it during my move to Finland. Maybe it will show up.

The Pi 4 has been pretty much perpetually out of stock ever since, with scalpers reselling the device at steep markups on Amazon. But the Pi 5 seems to be easy to get, at least for the moment. As I type this, my microSD image is being verified…

When I got my previous Raspberry Pi, I was working at Google, which meant I was spending a lot of time using Linux, so messing around with the Pi was fun and easy. I got b8rjs working on it and played around. I've since tested xinjs on my old Raspberry Pi, and even found a bug (if I recall correctly, I assumed browsers supported MathML and the Pi's browser does not).
First thing, I received the Raspberry Pi 5 kit in a ridiculously large, nearly empty box that was mostly full of padding paper. Next, it was hard to open the white cardboard boxes without tearing them, so I just gave up.

The case doesn't include screws (which it seems designed for) or instructions, so I googled the instructions and they were a bit poor (e.g. they told me to make sure the fan was plugged into the socket marked "FAN" rather than providing a diagram; it's not in the obvious place, and it comes with a piece of plastic blocking it, so it doesn't look like a socket). Luckily I had a set of tools for mucking around with computers that includes a good set of tweezers.

Anyway, it assembles very easily (I think I slightly misaligned the heat sink… oh well).

The first nice surprise is that the keyboard is actually, like old wired Mac keyboards, a USB hub. And in fact it one-ups Apple by providing three extra USB sockets (although it loses points for having a mini-or-micro-USB socket vs. a type-C socket for the cable coming from the computer). Is that even allowed in the EU these days?

The first time I had to type an "@" symbol I had a "wow this is super spongy" reaction to the keyboard. It may be a nice USB hub but it's not a great keyboard.

I plugged it in and the Pi 5 immediately powered on (and the fan started spinning, so I'm relieved not to have to spend upwards of two minutes disassembling and reseating the connector). In a nice contrast to my Pi 400 experience, I had assumed that once I plugged in the monitor, keyboard, and mouse I'd need to reboot, because I seem to recall that my old Raspberry Pi didn't send a signal to the monitor if it didn't have a monitor plugged in during boot. But, no, as soon as the monitor was plugged in (still micro-HDMI sockets) everything Just Worked.
The Mac-like menubar at the top of the screen has three icons in the top-left corner: an app menu, a browser button, and a terminal button. Perfect.

Oh yeah, and when I created the image for the machine on my Mac it offered to copy my WiFi credentials onto the image (and triggered a security dialog when I said yes) and it Just Worked. This was a conspicuous pain point for the Quest, and I let it slide because I assumed that Meta must have had some issue with Apple's security stuff that stopped them smoothing it over. But, apparently, Raspberry Pi can do it (and their imager tool looks far more polished than the Meta support apps for the Quest 3 do).

I quickly got into my Google and Apple iCloud accounts thanks to the new Passkey stuff, which isn't an option for the Quest (and of course Meta hasn't put any effort into helping with this because it's the kind of thing anyone seriously using their product would quickly get frustrated by, and no-one internally seems to be using their product much).

So I was up-and-running much faster than with my Quest 3. Also the thing seems way faster than the Quest 3… it certainly dealt with iCloud Drive and Google Photos very nicely. So now I have a nice desktop picture. Super important.

My next step was to install NodeJS, and another nice surprise is that it runs nodejs 20.x (I also note that the preinstalled Chromium is v116.x, which is pretty recent). I imagine at some point I'll have to do a massive update (apt tells me there's a lot of stuff to update, and I can't be bothered right now). I'm looking forward to seeing if I can build out electron or nwjs apps.
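
The kind of smoke test I have in mind is nothing fancy; a minimal sketch would be something like the Electron main process below, just pointing a window at one of the xinjs demo sites. The Electron calls (`app`, `BrowserWindow`) are the library's real API; everything else here is just an assumed first experiment, not something I've actually run on the Pi yet.

```
// main.js — minimal Electron smoke test (a sketch, not a tested Pi setup)
const { app, BrowserWindow } = require('electron')

app.whenReady().then(() => {
  // open a plain window and load one of the demo sites mentioned above
  const win = new BrowserWindow({ width: 1024, height: 768 })
  win.loadURL('https://ui.xinjs.net')
})

// quit when the last window closes
app.on('window-all-closed', () => app.quit())
```
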
I do find the partially transparent Chromium window to be a bit nasty looking.

A quick dip into the Chromium inspector shows that MathML and SVG are there. ui.xinjs.net and b8rjs.com both load and run their most challenging demos pretty decently (the babylon3d demo on the b8rjs.com site is a bit sluggish, but reflections and shadows are working). Also timezones.xinjs.net runs very nicely (and that's a pretty gnarly collection of SVGs).
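
If you want to reproduce that check yourself, something along the lines of the sketch below, pasted into the console, will do it. The specific MathML test (rendering an `<mspace>` with a known width and measuring it) is just one common detection trick, not anything from the Pi's docs or from xinjs.

```
// Rough console check for SVG and MathML support (an illustrative sketch)
// SVG: does createElementNS hand back a real SVG element?
const svgOK = document.createElementNS('http://www.w3.org/2000/svg', 'svg') instanceof SVGElement

// MathML: render an <mspace> with an explicit width and see if the browser lays it out
const math = document.createElementNS('http://www.w3.org/1998/Math/MathML', 'math')
math.innerHTML = '<mspace width="20px"></mspace>'
document.body.append(math)
const mathOK = math.firstElementChild.getBoundingClientRect().width >= 19
math.remove()

console.log({ svgOK, mathOK })
```
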
b8rjs.com has stress tests which I ran and it seems to be about 25% as fast as my 2021 Macbook Pro M1 Max on the create and render 10k table rows (~1300ms vs. ~350ms), and 20% as fast at the create 100k rows with the virtual data-table test (~1800ms vs. ~360ms).
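
For context, there's nothing exotic about how numbers like these get produced: the pattern is roughly the sketch below, which times a render with `performance.now()` and forces a layout read so that work is included. The `renderRows` function here is a hypothetical stand-in, not the actual b8rjs stress test code.

```
// Minimal timing harness for a "create and render 10k rows" style test.
// renderRows() is a hypothetical stand-in for the real stress test.
function renderRows (count) {
  const table = document.createElement('table')
  const tbody = document.createElement('tbody')
  for (let i = 0; i < count; i++) {
    const tr = document.createElement('tr')
    tr.innerHTML = `<td>${i}</td><td>row ${i}</td>`
    tbody.append(tr)
  }
  table.append(tbody)
  document.body.append(table)
  return table
}

const start = performance.now()
const table = renderRows(10000)
table.offsetHeight // force a synchronous layout so it's included in the timing
console.log(`~${Math.round(performance.now() - start)}ms`)
```
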
So I'm about to hit the sack, but overall a much better initial experience than with the Quest 3, despite this being very much not a product for ordinary consumers. Not having to deal with Meta is a huge bonus, of course. Given how well all the Raspberry Pi stuff works, Meta's Quest team really should hang their heads in shame.
\n","$updatedAt":"2024-06-05T09:10:30.273+00:00",path:"pi-5",_created:"2024-07-09T20:28:31.644Z",id:"8085",_modified:"2024-07-09T20:28:31.644Z","$id":"8085",_path:"post/8085"}],blogDataTimestamp:"2025-01-17T23:20:21.221Z",blogVersion:4,"post/path=large-language-models-a-few-truths":{keywords:[],title:"Large Language Models — A Few Truths",path:"large-language-models-a-few-truths",content:"\n\nLLMs and derivative products (notably ChatGPT) continue to generate a combination\nof excitement, apprehensions, uncertainty, buzz, and hype. But there are a few\nthings that are becoming increasinfly clear, in the two-and-a-bit years since\nChatGPT 3 came out and, basically, aced the Turing Test \nwithout (as Turing had pointed out when he called \"the Imitation Game\") \nnecessarily thinking.\n\nThe Imitation Game asks if a machine can imitate the behavior of someone\nwho thinks (whatever that means) by reducing that to once specific behavior,\ncarrying on a conversation. \n\nJust as a Skinnerian\nbehaviorist might suggest that \"I can't tell if you're in pain, but you seem\nto be exhibiting pained behavior\", Turing basically said \"I can't tell if\na machine can think, but I can test whether it seems to exhibit a specific \nthinking behavior\"\n\n## LLMs aren't thinking, or reasoning, or doing anything like that\n\nYou can quickly determine that LLMs cannot as a class of object think by looking\nat how they handle iterative questioning. If you ask an LLM to do something\nit will often (a) fail to do it (even when it would be extremely easy for it\nto check whether it had done it) and then (b) tell you it has done exactly\nwhat it was asked to do (which shows it's very good at reframing a request as\nresponse).\n\nThis should be no surprise. The machines are simply manipulating words. They\nhave connection between, say, the process of writing code and seeing the results\nthan they do between Newton's equations of motion and the sensations one feels\nwhen sitting on a swing.\n\nSo when you say \"hey can you write a hello world application for me in Rust?\"\nthey can probably do it via analysis of many source texts, some of which \nquite specifically had to do with that exact task. But they might easily\nproduce code that's 95% correct but doesn't run at all because, not having any\nreal experience of coding they don't \"know\" that when you write a hello world\nprogram you then run it and see if it works, and if it doesn't you fix the\ncode and try again.\n\nThey are, however, perfectly capable of reproducing some mad libs version of\na response. So, they might tell you they wrote it, tested it, found a bug,\nfixed it, and here it is. And yet it doesn't work at all.\n\nAnd that's just simple stuff.\n\nAnd look, for stuff where \"there is no one correct answer\" they generally\nproduce stuff that is in some vague way more-or-less OK. If you ask them to\nread some text and summarize it, they tend to be fairly successful. 
But they\ndon't exhibit sound judgment in (for example) picking the most important\ninformation because, as noted, they have no concept of what any of the words\nthey are using actually mean.\n\nAll that said, when a process is essentially textual and there is a huge corpus\nof training data for that process, such as patient records with medical histories,\ntest results, and diagnoses, LLMs can quickly outperform humans, who lack both\nthe time and inclination to read every patient record and medical paper online.\n\n## There is no Moat\n\nThe purported value of LLMs and their uses are a chimera. No-one is going to\nsuddenly get rich because \"they have an LLM\". Companies will go out of business\nbecause they don't have an LLM or workflows that use LLMs.\n\nLLMs will doubtless have enormous economic impacts (probably further hollowing \nout what is left of the Middle Class, since most people have jobs where \"there is\nno one correct answer\" and all they have to do is produce stuff that is in\nsome way more-or-less OK… Just consider asking a question of a sales person in an\nelectronics store or a bureaucrat processing an application based on a form and\na body of regulations—do you think a human is better than the average AI? Given\na panel of test results and patient history, general purpose LLMs already\noutperform doctors at delivering accurate diagnoses in some studies).\n\nBut, UC Berkeley's Sky Computing Lab—UC Berkeley having already reduced the price of operating systems to zero \nby clean-room-cloning AT&T UNIX and then \nopen-sourcing it all… basically kickstarting the open source movement\njust released Sky-T1-32B-Preview\nan open source clone of not just ChatGPT but the entire \"reasoning pipeline\"\n(which OpenAI has not open-sourced) that it trained for $451.\n\nSo if you just invested, say, $150M training one iteration of your LLM six months\nago, the value of that investement has depreciated by about 99.999997%.\n\nAnd the sky-high valuations of nVidia are predicted mainly on the costs of\ntraining models, not running them. People don't need $1B render farms to\nrun these models. Very good versions of them can run on cell phones, and the\nfull versions can run on suitably beefed up laptops, which is to day, next\nyear's consumer laptops.\n\n\n\n",date:"2025-01-17T20:25:13.992Z",summary:"LLMs, like ChatGPT, excel at manipulating language but lack true understanding or reasoning capabilities. While they can produce acceptable responses for tasks with no single correct answer, their lack of real-world experience and understanding can lead to errors. Furthermore, the rapid pace of open-source development, exemplified by projects like Sky-T1-32B-Preview, suggests that the economic value of LLMs may be short-lived, as their capabilities can be replicated and distributed at a fraction of the initial investment.",author:"Tonio Loewald",_created:"2025-01-17T09:54:17.152Z",_modified:"2025-01-17T19:36:35.342Z",_path:"post/k1p7e2gn552k"},"post/path=adventures-with-chatgpt":{keywords:[],date:"2025-01-17T19:55:24.473Z",summary:"ChatGPT excels at mundane coding tasks, akin to a bright intern who reads documentation and StackOverflow but lacks creativity and testing. 

# Adventures with ChatGPT

## Or, how I learned to stop worrying and just teach ChatGPT to code with xinjs…

I've been slow to jump on the ChatGPT bandwagon in large part because a lot of the code I write is very hard and somewhat novel, and ChatGPT isn't really good at that kind of thing. What it's really good at is grokking lots of examples, including documentation, and then adapting it to a specific request, often with mistakes.

E.g. I have gotten various LLMs to write me code to do something very mundane, such as traverse all the documents in a directory and pass them through a function in Node.js. This isn't hard to do, but it's tedious and time-consuming, and in the end, the code I would have written won't be markedly superior to something any decent coder would write.

ChatGPT (et al.) excels at this kind of task. Think of it as a pretty bright intern who will read any amount of source code and documentation and delve into StackOverflow to figure out how to solve a problem, but won't come up with anything new and won't actually test their code at all before handing it to you.

Now, I'm sure there are workflows which you can fairly easily assemble that would get the resulting code, try to wrap it in tests, try to run it, and so forth, and then iterate until they get through the flow without exploding.

## OK, so I have an intern!

Yesterday I decided to play around with ChatGPT because it had been a minute.

The first thing I tried to do was get it to help me improve the TypeScript declaration of the xin proxy in xinjs by explaining what it was intended to do, and it seems to me that ChatGPT did a great job there, although the code it produced was a toy, and I need to spend some time figuring out if I can apply the strategy to the actual code.

In any event, a lot quicker and more effective than the time I've spent trying to spelunk the TypeScript documentation, miscellaneous blogs, or StackOverflow.

So, I got curious.

One of the suggested prompts was "make me a personal website" or something along those lines. So I tried it and it was… fine.

Then I decided to explain the way xinjs's elements proxy works, and asked it to write the code using elements. Here's what I told ChatGPT:

> I'm using a library that provides a proxy named `elements`. Its properties are element factories, so `elements.div()` produces an HTMLDivElement.
>
> The factories will take their arguments as children (strings are treated as text nodes) or bags of properties (e.g. `{class: 'foo bar', title: 'baz'}` will add those attributes).
>
> E.g. this Javascript
>
> ```
> import { elements } from 'xinjs'
> const { label, span, input } = elements
>
> document.body.append(label(span('Date'), input({ type: 'date' })))
> ```
>
> Would add the following to document.body
>
> ```
> <label>
>   <span>Date</span>
>   <input type="date">
> </label>
> ```
>
> Could you rewrite the HTML code using the elements proxy?

And ChatGPT immediately produced this:

```
import { elements } from 'xinjs';

// Destructure the element factories from the proxy
const { header, nav, h1, section, a, div, p, ul, li, img, footer } = elements;

// Create the HTML structure using element factories
const pageContent = div({ class: 'container' }, [
  section({ id: 'bio', class: 'bio' }, [
    h2('About Me'),
    p('I am a software developer with over thirty years of experience. My expertise spans full-stack development, with a career-long passion for usability and creating intuitive user experiences.')
  ]),
  section({ id: 'portfolio', class: 'portfolio' }, [
    h2('Portfolio'),
    h3('Technical Projects'),
    p('Here are some examples of my technical work:'),
    ul([
      li(a({ href: '#' }, 'Project 1: Description')),
      li(a({ href: '#' }, 'Project 2: Description')),
      li(a({ href: '#' }, 'Project 3: Description'))
    ]),
    h3('Artistic Projects'),
    p('Some of my artistic work:'),
    img({ src: 'example-artwork.jpg', alt: 'Example Artwork' })
  ]),
...
```

> I should note here that there's a major error in this fragment I didn't pick up until I started proof-reading this post, weeks after starting it. Code from LLMs is rife with idiotic errors that often aren't immediately apparent.

I got so excited I tried explaining how xinjs's Component class works to see if I could get it to produce web-components the xinjs way. This took a few attempts, and one thing I learned from this is that ChatGPT seems better at learning from simple, clear examples than more complex examples.

I started out by pointing it at an example in GitHub and asking it to build the components along the same lines. This was a disaster, especially in that it looked sorta, kinda right but wouldn't have worked at all.

I spent a bunch of time refining my prompts, and the most helpful prompts provided focused examples, e.g. I used this prompt after the third or fourth attempt to explain how `elementCreator` was a static method of `Component` that did x and y, and it produced incorrect code with incorrect explanations of what it was doing:

> No. Here's a simple example that should help.
>
> ```
> export class MyThing extends Component {
>   content = () => div('my thing is here')
> }
>
> export const myThing = MyThing.elementCreator({tag: 'my-thing'})
> ```

After a few prompts like this, ChatGPT provided pretty flawless results. But a lot of the time it really feels like you're just training a conscientious but eager fool to imitate examples. It's kind of like how you wished regexp worked, with the understanding that, just as with regexp, you really want to verify that it didn't blow up in some subtle but exciting way.

## Aside: "artificial intelligence"? it's not even artificial stupidity…

If you're just dealing with text, it can take a while using ChatGPT to realize that ChatGPT isn't "intelligent" at all. It is predictive text completion with an insane knowledge base, but it simply doesn't understand anything at all.

But if you start working with image generation, it's immediately obvious that (1) the image generator doesn't understand what it's being asked to do, and (2) ChatGPT doesn't understand the image it's been given.

If you look at this example (and this is literally its response to my initial request) it does a great job of transforming the prompt into a concept and then generating a prompt to produce that concept. The text portion of the pipeline is fabulous, and it really feels like it understands. The fact that the text engine is not even stupid doesn't become immediately apparent.

But the image is just horribly wrong in so many ways, and as I iterated on the prompt trying to improve the results it just got worse and worse. E.g. I never even suggested inserting a React Logo but it started putting them everywhere. (Like a lot of programming tools out there, ChatGPT just freaking loves React.)

So, the image generator just kind of free associates on the input text and produces something but it doesn't know that, for example, you can't push something up a hill if it's already at the top of the hill. It doesn't know that the stuff on the hill and the stuff on the boulder need to be different things and just puts anything you ask for in both places or neither.

# Apple Intelligence—Image Playground
The first bit of Apple Intelligence™ I've really played with is the new Image Playground (beta) that appeared on my Macbook Pro with the latest system update.

The first things that come to mind about it are:

This is the default "animation" style:

Here's the "illustration" style:

Right now, it's a really cute toy. But while it does produce consistently lovely results, they're basically all the same—pictures of a single person or animal, pretty much front-on, with some minor modifications allowed (e.g. "scarf"), some general atmosphere / background stuff ("sci fi", "mountains"), and either from head to toe, head and chest, or portrait.

I think this has potential if a lot more stuff is added. It's reliable and fast—it just doesn't do a whole lot.
\n",date:"2025-01-15T18:14:38.350Z",summary:"Apple's new Image Playground is focused, and easy to use. If you want to produce cute \"Pixar-style\" people and animals, it quickly churns out consistent, but very limited, results. My M3 Max rendered images in seconds, but right now it's more of a cute toy than a useful tool\n",author:"Tonio Loewald",_created:"2025-01-15T11:14:18.592Z",_modified:"2025-01-15T11:14:47.871Z",_path:"post/xohqdwdgjlxa"},"post/path=bambulabs-p1s-initial-review":{keywords:[],title:"Bambulabs P1S Initial Review",path:"bambulabs-p1s-initial-review",content:"\n\nThe BambuLabs P1S is, I think, the latest printer from Bambulabs, a Chinese 3D Printer\ncompany that is trying to be \"the Apple of 3D printing\". (Other previous would-be Apples of\n3D Printing would include Formlabs.)
\nPerhaps \"labs\" is a bad suffix for a company planning to be the Apple of anything.
\nThe printer itself is heavier than I expected (the box was 22kg according to Postti,\nand I believe them). There's a lot of packaging but not more than needed. Included\nin the main box were three spools of PLA (green, red, and glossy white \"support\").\nAll three are standard spools but with maybe 25% of a full load of material. (I\nbought a bunch of different materials at the same time and those are all much\nfuller).
\nWhile there is a \"quick start\" guide, visible (if not accessible) upon opening\nthe box, once you somehow tear it out multiple layers of padding and\nplastic it assumes you've gotten the printer out of the box, which is no mean feat.
\nAnyway, having pulled all the packing material out from around the printer and \nsqueezed my arms down around the package and found something I could hold without\nfeeling like I might break something off, I did get the printer out of the box\nand onto a table.
\nThe instructions for removing packing materials, which is similar to unpacking\na printer or scanner, which often comes with disposable shims and bits of foam\nhidden in its inner workings, are fine once you realize that there is, in fact,\na ridiculously long allen key in the box of stuff, and that it can remove the rather\ninconveniently placed screws you need to remove fairly easily.
\nI should note that 90% of my issues came from the fact that the quick start\nguide is, like the ones you get with Apple products these days, printed in minuscule\ntext and my eyesight isn't what it once was, so I didn't read the packing list\ncarefully. That would have saved me a few minutes.
\nEven so, assembling the printer requires access to its back and front and good light.
\nBut while it probably took me 30 minutes to have it ready to print, it might have\ntaken someone with a less messy office and better eyesight 15 minutes as advertised.
\nIf you have aspirations to be the \"Apple\" of anything, you may want to hire\nan actual UI designer and a proof reader.
Software is what lets the Bambu experience down. If you're going to be the Apple of anything you need to not suck at software, and the software for this thing is a mixture of great when it works, confusing as hell, half-assed, and perplexing.

To begin with, the quick start guide first gets you to download the iOS app and bind it to the printer. This did not work as advertised and requires you to go find numbers deep in the printer's crappy UI. (This is a recurring theme.)

First things first: because I couldn't get the app to sync with the printer using Bluetooth (it kept timing out), I figured out how to print one of the pre-loaded models (good old benchy) from the printer's small front-panel. (Installing that panel was the least well explained step in the setup process—luckily most electronic connectors are designed so that if you look hard at them and aren't a complete idiot you can usually guess the correct orientation—good luck actually plugging the damn things into each other though.)

In the end I think the big problem was that the app requires you to set up the WiFi for the printer on your phone using Bluetooth rather than allowing you to configure WiFi on the printer itself. (Probably because entering a typical WiFi password on the front panel would be torture, while doing it on your phone could be effortless but is merely nasty.) Because the phone app's dialog is poorly designed it tried to join the network without a password and then hung.

Meanwhile…

The benchy test print was both fast and pretty close to perfect. Given that the number of good or near perfect prints I've gotten from either of my two previous 3d printers is—generously speaking—three, that's a very good start.

Eventually I solved my software problems. The quick start guide wants you to print benchy from the desktop software but the screenshots look nothing like the software I downloaded and the instructions start once you've done stuff I hadn't done. If the software sucked less it wouldn't need instructions, but as it is, the instructions you get aren't adequate and the failure modes are horrific. (E.g. because I wasn't logged into the cloud service, I couldn't see my printer on the network. Adding the printer involved one of two processes, neither of which worked and both of which were poorly documented and involved navigating the printer's awful front-panel interface. In the end, the solution was to figure out how to log into the cloud at which point everything Just Worked.)
Oh, the first thing the software wanted me to do was detail my printer's configuration (which it apparently didn't know). What is the diameter of my hot end? I don't know.

Again, the software is the problem. First of all, the Mac version (which, judging from what I've seen, is actually the more popular choice for their target market) is missing a mysterious "repair" function for models. So the software refuses to print your model until it's repaired with the Windows version of the software or a third-party website.

Dudes. Fix this.

Second, by default supports are disabled and anything requiring supports won't print. Once you find the supports panel, the default supports are horrific (not the "tree" supports everyone uses these days) and that basically ruined my second print.

That said, once I figured these things out, every print I have made (I think I'm up to six, total) has turned out at least 90% perfect (and 100% is unattainable with FDM as far as I can tell).

The main reason I'm interested in 3d printers is tabletop games, and every printer I've tried has been a miserable failure until now.

The first serious thing I tried to print with the Bambulabs P1S was an interlocking hex tile. Of course I only printed one because I'm an idiot. So I printed it again. The goal was to figure out how much I needed to tweak the model to make them snap together. They snapped together perfectly first time.

So I printed four more in a different color. They all fitted perfectly. I am a very happy formerly frustrated game designer.
\n\n\nThe only \"failed\" print I've had so far is a space marine\nminiature that requires a crazy amount of support and where\nI had not realized the default supports are horrible. I haven't\ntried printing it again, yet, because the software is hazing me, \nbut that's anazing. Right now, 3D printing is still in WAY too primitive a state\nfor an \"Apple of 3d printing\" to even make sense. But this thing is so much better\nthan anything I've owned or used it's incredible.
\nI just sketched out a new game and I am so excited.
\nI bought the AMS bundle which includes a device for switching materials automatically.\nEven if you only print in one color at a time (printing multi-colored models is very\neasteful because each time the printer changes color it has to rewind the current\ncolor, then waste a certain amount of the new color, before continuing—even allowing\nfor the minor gain you might get from doing the next layer's portion of the current\ncolor before swapping, the difference in material consumption between one and two\ncolors is enormous).
\nThat said, the AMS is fantastic just for being able to swap from one material to\nanother between prints. Even so, it's not without issues. E.g. I believe it's supposed\nto automatically identify materials if I get them from Bambulabs, which I did, but it's\nnot working thus far. I definitely got it because I want to produce multi-colored\nmodels, but where possible I'm printing in a single color at a time and planning to\nsnap things together.
\nAnyway, best 3D printer ever.
\n",date:"2025-01-03T14:19:55.667Z",summary:"Unboxing a BambuLabs P1S 3D printer was a surprisingly involved process, and the journey didn't get much easier from there. This Chinese contender for the \"Apple of 3D printing\" boasts impressive hardware, but the software experience can be pretty frustrating. From fiddling with hidden allen keys to battling Bluetooth connectivity, the setup was a minor ordeal, and navigating the app was a test of patience. But the truly game-changing moment? Snapping together perfectly printed hex tiles on the first try.",author:"Tonio Loewald",_created:"2025-01-03T10:40:28.102Z",_modified:"2025-01-03T14:28:04.128Z",_path:"post/wl9ydyiwzvzl"},"post/path=design-for-effect":{keywords:[],title:"Design for effect",path:"design-for-effect",content:"\n\nI just stumbled across a video on YouTube commenting about the Star Citizen 1.0 announcement, and I\nwondered if I had stepped into a time warp and somehow Star Citizen was nearing an actual release.
I needn't have worried, it's just another vaporware announcement, but what's really concerning is what they've told us they're now aiming to do.

First of all, I'm not one of those people who had high hopes for Star Citizen. I certainly had hopes for it. It is a "science fiction" game in some loose sense. And it is, after all, the best-funded game development in history, that I know of… Who knows, GTA VI might actually be better-funded at this point since it is virtually guaranteed to make billions of dollars.

Still, the "genius" behind all this is the creator of the Wing Commander franchise, a series of basically bad space-fighter simulations that had ludicrously complicated controls (each game came with a keyboard overlay along the same lines as professional studio software like Avid Composer) and featured pretty much linear "groundhog day" stories with badly acted cut scenes.

Having left the game industry to pursue a career making mediocre movies, he returned with great fanfare to his true love—making overly complicated space flight sims. But now, instead of having a linear plot with clichéd dialog, he was switching to some variation of "simulate everything, it will all work out".

And so in around 2015 we got three of these things. Elite Dangerous, the heir of the OG space fighter simulation, with a procedurally generated universe and a simple economic system that let you make money from trade, bounty hunting, and piracy. Star Citizen, the heir of the OG do-a-space-fighter-mission, get-a-cut-scene, repeat empire. And No Man's Sky, a game based on the idea that "if we give players a spaceship, a crafting system, and a procedurally generated universe, it will be awesome."

And now, nearly ten years later:
What none of these is is a good game. Sure, Elite was a good game in 1983, but Elite Dangerous is actually quite a bit more tedious than Elite, and generally modern games are a lot less tedious than 1980s games.

Sure, No Man's Sky has more to do in it now than it did in 2015, but that's a low bar.

And now they have announced Star Citizen 1.0 and it seems to be "we're only going to simulate crafting and trade and somehow it will become a game". In fact, Star Citizen 1.0 sounds a lot like a simulation where you get to do real, boring work for fake money, and where your reward is spending real money on a really expensive PC and having it crash constantly.

How did we get here?

All of these people have forgotten a brilliant concept called "design for effect" that was used, and possibly coined, by the original designer of Squad Leader, which made it one of the best board wargames ever. (Advanced Squad Leader, its descendant, is way, way too complicated. Don't get confused.)
The basic idea of "design for effect" is that rather than try to simulate everything, you figure out what you want the result to be and then design a game whose mechanics give you that result.

Elite Dangerous comes closest to doing this of the "big three" space sims being discussed. The original Elite was about flying a tiny spaceship from planet to planet, trading to make money, fighting aliens and pirates (or being a pirate) to survive and make money, and occasionally undertaking a mission (usually to blow up a specific enemy ship).

So with these being the desired effects, Elite Dangerous simply built current tech implementations of the gameplay necessary to make things work. The economy doesn't need to be realistic, it just needs to create interesting, somewhat predictable, opportunities to make money. You don't need to simulate a society where piracy makes sense, you just spawn enough pirates to make life dangerous and interesting.

Now you have a game.
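
To make the contrast concrete, here's a toy sketch of what "design for effect" looks like in code (purely illustrative, not taken from any of these games): instead of simulating a society that produces pirates, you pick the encounter rate you want and tune a spawner toward it.

```
// Toy "design for effect" pirate spawner (illustrative only, not from any actual game).
// We don't simulate an economy that produces pirates; we pick the effect we want
// (roughly a dozen dangerous encounters per hour of play) and tune toward it.
const TARGET_ENCOUNTERS_PER_HOUR = 12

function shouldSpawnPirate (player, minutesSinceLastEncounter) {
  // base chance per simulated minute, derived directly from the desired rate…
  let chance = TARGET_ENCOUNTERS_PER_HOUR / 60
  // …then nudged by whatever makes the moment interesting, not by "realism"
  if (player.cargoValue > 10000) chance *= 2       // rich traders attract trouble
  if (minutesSinceLastEncounter > 15) chance *= 3  // don't let things get dull
  return Math.random() < Math.min(chance, 1)
}

// called once per simulated minute by the game loop (spawnPirateNear is hypothetical):
// if (shouldSpawnPirate(player, minutesSinceLastEncounter)) spawnPirateNear(player)
```
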
If you were to simulate a universe there's no guarantee anything interesting would happen in it. In No Man's Sky the universe is so big that people have almost no chance of bumping into each other. Even if the game supported player dogfighting (it might, for all I know), relying on it to generate your gameplay would be very sad. Oh, you won that fight? Great, now you can spend 200h preparing for the next one by mining and synthesizing fuel and ammo. Sounds like fun.

It's quite hard to build a really good system for procedurally generating planets with resources that are interesting to prospect for and exploit, and then a crafting system that simulates mining them, manufacturing bits out of them, and then making those pieces assemble into interesting objects one might care about. Making something like that which is detailed and realistic is even harder. The people who are really good at stuff like this have better things to do than make games. If you can simulate an economy really well, there are ways of making real money out of that skill.

Even if you do a great job of simulating all of this in some way, not only is there no guarantee that it will lead to interesting gameplay situations, it's not even likely.

Anyway, No Man's Sky is a very boring game to play, and Star Citizen has now announced its new plan is to release something in 2026 or 2027 that sounds a lot like No Man's Sky was in 2015. Given the money and talent on tap, if they were starting from scratch right now that might not be unrealistic. But turning the buggy pile of garbage they have and the expectations of an existing player-base into that seems impossible, and to my mind it's not even a desirable outcome.
\n",date:"2024-12-31T10:09:36.000Z",summary:"Star Citizen's 1.0 announcement has me scratching my head. The creators seem hellbent on simulating everything, ignoring a fundamental truth about game design—simulating reality doesn't guarantee good gameplay.\n",author:"Tonio Loewald",_created:"2024-12-31T10:07:42.634Z",_modified:"2024-12-31T10:09:39.153Z",_path:"post/x4ut4tag0iil"},"post/path=taste-is-the-secret-sauce":{keywords:[],title:"Taste is the secret sauce",path:"taste-is-the-secret-sauce",content:"\n\nIn 1984 I had my first direct experience of the Mac. One of my takes was that one of the skills I thought\nset me apart from other people, my artistic talent, had suddenly become a lot less valuable. Sure, MacPaint\ndidn't give you the ability to draw, but just doing things like drawing tidy diagrams or lettering (people \nwould ask me to design protest signs and posters just because I could do large lettering) was something\nthis computer let anyone do effortlessly.\n\nI needn't have worried. It turned out that the Mac actually was part of a trend that made artistic talent\nmuch more valuable. The Mac let people with taste produce tasteful stuff much more easily. And it let\nthose without taste produce monstrosities effortlessly.\n\nTo be sure, there are some skills that the Mac made obsolete. People who typeset musical scores were unemployed within a few years. Mathematical typesetters were displaced by the Mac and TEX. Most tech support tasks became irrelevant \nfor Mac users, and people whose livings depended on the inability of ordinary people to use their \ncomputers for writing or creating simple graphics were—rightly—terrified of the Mac, and successfully\nkept it out of most large organizations for decades.\n\nAnd it seems to me that AI is a lot like the Mac.\n\nAI is very good at producing bad poetry, bad art, and bad code. Learning how to hand-hold AI into \nproducing better output, and recognizing when it is conspicuously failing takes taste.\n\n",date:"2024-12-30T18:35:28.600Z",summary:"The Mac, in its early days, seemed to diminish the value of artistic skill. But the reality was more nuanced. It amplified *taste*, making beautiful things achievable for those with taste, and terrible things easy for everyone else.\n",author:"Tonio Loewald",_created:"2024-12-30T18:33:17.167Z",_modified:"2024-12-30T18:35:41.719Z",_path:"post/3y2c5bcp8tyl"}}