# André Braugher, RIP
I first remember André Braugher from his performance in Glory where he played perhaps—low key—the most important role in the movie. He played the person with the most to lose and the least to gain by joining the army and fighting to end slavery (something the movie later acknowledges is pretty much a fool's errand). He plays the person we—the viewer comfortably separated from the events portrayed by circumstances, time, and knowledge of what will happen—should be but almost certainly won't be. (No more details: watch the movie if you haven't seen it.)
Most people will know him either from his role in Brooklyn Nine-Nine, a great sitcom of recent years, or Homicide: Life on the Street, the best police procedural ever made, based on a fantastic non-fiction book by David Simon. (I revised this paragraph after conferring with numerous colleagues and discovering that my daughters' opinion is widely held; I am outvoted!)
In Homicide he is again playing someone who stands for justice despite his own self-interest. He is the black man of obvious intellect and education who chooses to work as a homicide detective when there are so many better options open to him, even though it ruins his marriage and is killing him. He works within a corrupt and under-resourced system, and with colleagues he pretty much despises, trying to make the tiniest difference when and where he can, and usually to his own disadvantage.
And, despite its being a comedy, as Raymond Holt in Brooklyn Nine-Nine he again plays someone in pretty much this situation, except that, now an older man and a captain, he has somehow navigated an earlier phase of life in which all of… this… was much worse, and today is comfortable enough that the horribleness is purely, and not always darkly, comic.
Homicide is one of my favorite TV shows of all time. Brooklyn Nine-Nine is my daughter's favorite TV show of all time.
André Braugher is already missed.
# Pi 5 follow-up

I don't have a working version of Illustrator any more, but I strongly suspect Illustrator produces perfect output in this case. In the end, I hand edited the bezier curves in my logo, but it looks like I possibly could have saved time by using Inkscape to do the booleans (and then going back to Sketch to produce clean output).
# Contrary to popular belief, AI may be peaking

On the one hand, they're running out of human-generated text to train on. On the other hand, they're thinking of mining the text in online videos and using "synthetic content" to get more data.

In 2005 if you showed someone who hadn't used Google how good Google was and said, "and it will only get better over time", they would have been blown away and nodded numbly in agreement.

Instead we got SEO-hacking and Google AdWords.

As with most things tech, including the flaws, I'm an early adopter. Back in the late 90s, when my father was diagnosed with prostate cancer, I searched the internet for information on prognoses and cures. Literally 90% of the results mentioned "proton beam therapy", and they all seemed pretty reputable. I think Altavista found something like 50,000 web pages on it.

Proton beam therapy is a real thing, and while there are pages devoted to it on most big health sites, it's not even explicitly mentioned in the Mayo Clinic's page on prostate cancer treatment. A lot of the time, prostate cancer doesn't really need to be treated at all.

## If it's valuable, it's worth perverting

The reason 2005 or so was "peak google" was that Google started out as an end-run around the sort of SEO-hacking that had been successful before Google existed. Originally, search engines tended to trust that websites were about what they said they were about. Then they got clever and started looking at the text on the site.

So, in the late 90s, a search engine assumed that if the keywords for your web page (which you created) included "best", "prostate", "cancer", "treatment" then, to quote Stefon, "look no further".

A more sophisticated search engine might actually check whether the text on the page also contained those words. And that was about it. It was common practice in the mid aughts to include click-bait text in invisible divs on pages, which tricked most search engines.

Google, on the other hand, basically tried to figure out if sites that seemed to be about X pointed to your page, which suggested your page, too, was about X.
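That link-based idea is the one PageRank made famous. Here's a toy sketch of it (my gross simplification for illustration, not Google's actual code): a page's score is, roughly, the chance a random link-follower ends up there.

```
// Toy PageRank: rank pages by the links pointing at them.
// `links` maps each page to the pages it links to.
const links = {
  a: ['b', 'c'],
  b: ['c'],
  c: ['a'],
}

const pages = Object.keys(links)
const damping = 0.85
let rank = Object.fromEntries(pages.map((p) => [p, 1 / pages.length]))

// Power iteration: repeatedly pass each page's rank to the pages it links to
for (let i = 0; i < 50; i++) {
  const next = Object.fromEntries(
    pages.map((p) => [p, (1 - damping) / pages.length])
  )
  for (const [page, outlinks] of Object.entries(links)) {
    for (const target of outlinks) {
      next[target] += (damping * rank[page]) / outlinks.length
    }
  }
  rank = next
}

console.log(rank) // pages with more (and better-ranked) inbound links score higher
```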
And then the link-farming and other hacks began, and, combined with the business model Google chose to adopt (inserting sponsored search results, assigned by auction, above actual search results), Google got worse and worse.

## AI is just as hackable, and it's already happening

Ignore the placeholder image! This video is directly on point (and discusses the article "Adobe's Generative AI Jumps The Shark, Adds Bitcoin to Bird Photo").

AI generative tools used to work pretty well most of the time, but they're starting to hallucinate. In the linked example, a photographer deleted an ugly highlight and Photoshop inserted a bitcoin logo. Maybe this is intentional content-flooding by someone, or maybe it's unintentional. The fact is that AI-generated content produced with ill intent has infected the AI that Adobe presumably pretty carefully husbands.

I see evidence of unintentional hacking all the time. When software platforms get successful, there's a network effect. If you want your rank on StackOverflow to go up (for some reason), answer questions about the dominant framework. If you want to burnish your web-dev cred on Medium, write articles about it, and so on. As a result, the vast majority of front-end code on the public web is React-related, often tacitly assuming React imports, declarations, and patterns are being used. This is so pernicious that it's almost impossible to convince ChatGPT not to write code like this. I ask it to write vanilla js, it writes React. I ask it to read my documentation, it writes React.

And it's not good React code either. (Hilariously, I've heard OpenAI has just opened up a ton of ReactJS front-end dev positions.)

Stanford conducted a study showing that ChatGPT outperformed doctors in performing diagnoses based on patient histories and test results. Even if doctors have healthy skepticism about this result, it's been widely reported, and people are going to use ChatGPT given these stories, since they cheerfully used WebMD for years when everyone told them not to.

Convincing doctors to change their prescribing habits is difficult and expensive. In most civilized countries (including the US, until Big Pharma found a loophole in the late 90s / early 2000s) it is illegal for drug companies to advertise prescription drugs to consumers. Pharma sends reps to doctors, hospitals, and pharmacists to convince them to prescribe or dispense their drugs, using means both fair and foul. It's a big business. Just writing software to help those reps market a few drugs for a single company in Australia paid me handsomely for a decade.

Now imagine you can just generate content that is designed and A|B-tested to influence AIs to recommend your drug. Would you do it? More importantly, would a multi-billion dollar company do it? A company that already spends buckets of money on pens, post-it notes, cardboard signs, "medical conferences" in tropical paradises, cruises, politicians, 24-year-old communications grads who look incredible in stilettos and pencil skirts, TV ads, mentions in movies and medical dramas. I mean, they have some ethical standards, I'm sure. We just haven't found them yet.

As easy as it is to dumb down the populace by destroying the education system, inventing reality television and social media, etc. etc., AIs can do it faster and in real time.

## Curation is a Partial Fix

I work for a company that processes invoices using custom AI models. The training data is highly curated, so everything basically just tends to get better over time. Find a mistake? It probably originated in the training dataset, so we go fix the training data and then see if the retrained AI still makes the mistake. But we (a) control the data and (b) know what the correct answer is. This is very much not how LLMs work.
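In code terms, the loop looks something like this. A minimal, hypothetical sketch, not our actual system: the memorizing "model" stands in for a real training pipeline, and every curated example doubles as a permanent regression test.

```
// Hypothetical stand-ins for a real pipeline: a "model" that just memorizes
// its curated training data, so the regression check below actually runs.
const trainModel = async (data) => ({ data })
const evaluate = async (model, input) =>
  model.data.find((example) => example.input === input)?.label

// The curation loop: fix the data, retrain, verify the mistake is gone.
async function curateAndRetrain(trainingData, badExample, correctedLabel) {
  // 1. The mistake probably originated in the training set, so fix it there
  const fixed = trainingData.map((example) =>
    example.id === badExample.id ? { ...example, label: correctedLabel } : example
  )
  // 2. Retrain on the corrected data
  const model = await trainModel(fixed)
  // 3. Check that the retrained model no longer makes the original mistake
  const prediction = await evaluate(model, badExample.input)
  if (prediction !== correctedLabel) {
    throw new Error('Regression: the model still makes the curated-away mistake')
  }
  return model
}
```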
The people who build LLMs kind of control the data, but their goal is quantity first, quality second, and they have no idea what the question is, let alone what the correct answer is. The answer you'll get is just some kind of statistically generated likely answer that the random contributors to the dataset might give.

This isn't just a problem with generative AI, although it seems to be an especially bad problem for generative AI. When it comes to front-end coding, "AI" is currently "artificial stupidity". This is what happens when curation fails at scale.

All of this doesn't fully justify my clickbait title. Obviously, AI (and software as a whole) will continue to progress. But the thing that's currently generating all the hype—LLMs and generative AIs powered by all the content they can scrape—those may be peaking as they themselves are used to pollute, at scale, the very resource they depend on.

# Large Language Models — A Few Truths

LLMs and derivative products (notably ChatGPT) continue to generate a combination of excitement, apprehension, uncertainty, buzz, and hype. But a few things are becoming increasingly clear in the two-and-a-bit years since ChatGPT 3 came out and, basically, aced the Turing Test without (as Turing pointed out when he called it "the Imitation Game") necessarily thinking.

The Imitation Game asks if a machine can imitate the behavior of someone who thinks (whatever that means) by reducing that to one specific behavior: carrying on a conversation.

Just as a Skinnerian behaviorist might suggest that "I can't tell if you're in pain, but you seem to be exhibiting pained behavior", Turing basically said "I can't tell if a machine can think, but I can test whether it seems to exhibit a specific thinking behavior".

## LLMs aren't thinking, or reasoning, or doing anything like that

You can quickly determine that LLMs, as a class of object, cannot think by looking at how they handle iterative questioning. If you ask an LLM to do something, it will often (a) fail to do it (even when it would be extremely easy for it to check whether it had done it) and then (b) tell you it has done exactly what it was asked to do (which shows it's very good at reframing a request as a response).

This should be no surprise. The machines are simply manipulating words. They have no more connection between, say, the process of writing code and seeing the results than they do between Newton's equations of motion and the sensations one feels when sitting on a swing.

So when you say "hey, can you write a hello world application for me in Rust?" they can probably do it via analysis of many source texts, some of which quite specifically had to do with that exact task. But they might easily produce code that's 95% correct but doesn't run at all because, not having any real experience of coding, they don't "know" that when you write a hello world program you then run it and see if it works, and if it doesn't you fix the code and try again.

They are, however, perfectly capable of reproducing some mad-libs version of a response. So they might tell you they wrote it, tested it, found a bug, fixed it, and here it is. And yet it doesn't work at all.

And that's just simple stuff.

And look, for stuff where "there is no one correct answer" they generally produce stuff that is in some vague way more-or-less OK.
If you ask them to read some text and summarize it, they tend to be fairly successful. But they don't exhibit sound judgment in (for example) picking the most important information because, as noted, they have no concept of what any of the words they are using actually mean.

All that said, when a process is essentially textual and there is a huge corpus of training data for that process, such as patient records with medical histories, test results, and diagnoses, LLMs can quickly outperform humans, who lack both the time and inclination to read every patient record and medical paper online.

## There is no Moat

The purported value of LLMs and their uses is a chimera. No-one is going to suddenly get rich because "they have an LLM". Companies will go out of business because they don't have an LLM or workflows that use LLMs.

LLMs will doubtless have enormous economic impacts (probably further hollowing out what is left of the Middle Class, since most people have jobs where "there is no one correct answer" and all they have to do is produce stuff that is in some way more-or-less OK… Just consider asking a question of a salesperson in an electronics store, or a bureaucrat processing an application based on a form and a body of regulations—do you think a human is better than the average AI? Given a panel of test results and patient history, general-purpose LLMs already outperform doctors at delivering accurate diagnoses in some studies).

But UC Berkeley's Sky Computing Lab (UC Berkeley having already reduced the price of operating systems to zero by clean-room-cloning AT&T UNIX and then open-sourcing it all, basically kickstarting the open source movement) just released Sky-T1-32B-Preview, an open source clone of not just ChatGPT but the entire "reasoning pipeline" (which OpenAI has not open-sourced), which it trained for $451.

So if you just invested, say, $150M training one iteration of your LLM six months ago, the value of that investment has depreciated by about 99.9997%.

And the sky-high valuations of nVidia are predicated mainly on the costs of training models, not running them. People don't need $1B render farms to run these models. Very good versions of them can run on cell phones, and the full versions can run on suitably beefed-up laptops, which is to say, next year's consumer laptops.

And I'm talking about humans using AIs to corrupt other AIs, testing whether it's working, and then doing the stuff that works more and faster and harder. This is what SEO did to Google.
I should add that while Sky-T1-32B-Preview allegedly outperformed ChatGPT-o1 on a battery of tests, I played with a 10GB quantized version on my laptop just now, and it produced far worse results, far more slowly, than Meta Llama 7B.

# Adventures with ChatGPT

## Or, how I learned to stop worrying and just teach ChatGPT to code with xinjs…
I've been slow to jump on the ChatGPT bandwagon, in large part because a lot of the code I write is very hard and somewhat novel, and ChatGPT isn't really good at that kind of thing. What it's really good at is grokking lots of examples, including documentation, and then adapting them to a specific request, often with mistakes.

E.g. I have gotten various LLMs to write me code to do something very mundane, such as traverse all the documents in a directory and pass them through a function in Node.js. This isn't hard to do, but it's tedious and time-consuming, and in the end the code I would have written won't be markedly superior to something any decent coder would write.
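For the record, here's a hand-written sketch of the sort of mundane boilerplate I mean (nothing here is from an LLM, and the `./docs` path and logging callback are just for illustration):

```
import { readdir } from 'node:fs/promises'
import { join } from 'node:path'

// Recursively visit every file under `dir`, passing each path to `fn`
async function traverse(dir, fn) {
  for (const entry of await readdir(dir, { withFileTypes: true })) {
    const path = join(dir, entry.name)
    if (entry.isDirectory()) {
      await traverse(path, fn)
    } else {
      await fn(path)
    }
  }
}

// e.g. pass every document in ./docs through some processing function
await traverse('./docs', async (path) => console.log(path))
```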
ChatGPT (et al.) excels at this kind of task. Think of it as a pretty bright intern who will read any amount of source code and documentation and delve into StackOverflow to figure out how to solve a problem, but won't come up with anything new and won't actually test their code at all before handing it to you.

Now, I'm sure there are workflows you could fairly easily assemble that would take the resulting code, try to wrap it in tests, try to run it, and so forth, and then iterate until they get through the flow without exploding.

## OK, so I have an intern!

Yesterday I decided to play around with ChatGPT because it had been a minute.

The first thing I tried to do was get it to help me improve the TypeScript declaration of the xin proxy in xinjs by explaining what it was intended to do, and it seems to me that ChatGPT did a great job there, although the code it produced was a toy, and I need to spend some time figuring out if I can apply the strategy to the actual code.

In any event, it was a lot quicker and more effective than the time I've spent trying to spelunk the TypeScript documentation, miscellaneous blogs, or StackOverflow.

So, I got curious.

One of the suggested prompts was "make me a personal website" or something along those lines. So I tried it and it was… fine.

Then I decided to explain the way xinjs's elements proxy works, and asked it to write the code using elements. Here's what I told ChatGPT:

> I'm using a library that provides a proxy named `elements`. Its properties are element factories, so `elements.div()` produces an HTMLDivElement. The factories will take their arguments as children (strings are treated as text nodes) or bags of properties (e.g. `{class: 'foo bar', title: 'baz'}` will add those attributes).
>
> E.g. this Javascript
>
> ```
> import { elements } from 'xinjs'
> const { label, span, input } = elements
>
> document.body.append(label(span('Date'), input({ type: 'date' })))
> ```
>
> Would add the following to document.body
>
> ```
> <label>
>   <span>Date</span>
>   <input type="date">
> </label>
> ```
>
> Could you rewrite the HTML code using the elements proxy?

And ChatGPT immediately produced this:

```
import { elements } from 'xinjs';

// Destructure the element factories from the proxy
const { header, nav, h1, section, a, div, p, ul, li, img, footer } = elements;

// Create the HTML structure using element factories
const pageContent = div({ class: 'container' }, [
  section({ id: 'bio', class: 'bio' }, [
    h2('About Me'),
    p('I am a software developer with over thirty years of experience. My expertise spans full-stack development, with a career-long passion for usability and creating intuitive user experiences.')
  ]),
  section({ id: 'portfolio', class: 'portfolio' }, [
    h2('Portfolio'),
    h3('Technical Projects'),
    p('Here are some examples of my technical work:'),
    ul([
      li(a({ href: '#' }, 'Project 1: Description')),
      li(a({ href: '#' }, 'Project 2: Description')),
      li(a({ href: '#' }, 'Project 3: Description'))
    ]),
    h3('Artistic Projects'),
    p('Some of my artistic work:'),
    img({ src: 'example-artwork.jpg', alt: 'Example Artwork' })
  ]),
...
```

> I should note here that there's a major error in this fragment that I didn't pick up until I started proof-reading this post, weeks after starting it. Code from LLMs is rife with idiotic errors that often aren't immediately apparent.

I got so excited I tried explaining how xinjs's Component class works, to see if I could get it to produce web components the xinjs way. This took a few attempts, and one thing I learned from this is that ChatGPT seems better at learning from simple, clear examples than from more complex ones.

I started out by pointing it at an example on GitHub and asking it to build the components along the same lines. This was a disaster, especially in that the result looked sorta, kinda right but wouldn't have worked at all.

I spent a bunch of time refining my prompts, and the most helpful prompts provided focused examples. E.g. I used this prompt after the third or fourth attempt to explain how `elementCreator` was a static method of `Component` that did x and y had produced incorrect code, with incorrect explanations of what it was doing:

> No. Here's a simple example that should help.
>
> ```
> export class MyThing extends Component {
>   content = () => div('my thing is here')
> }
>
> export const myThing = MyThing.elementCreator({tag: 'my-thing'})
> ```

After a few prompts like this, ChatGPT provided pretty flawless results. But a lot of the time it really feels like you're just training a conscientious but eager fool to imitate examples. It's kind of like how you wished regexp worked, with the understanding that, just as with regexp, you really want to verify that it didn't blow up in some subtle but exciting way.

## Aside: "artificial intelligence"? it's not even artificial stupidity…

If you're just dealing with text, it can take a while using ChatGPT to realize that ChatGPT isn't "intelligent" at all. It is predictive text completion with an insane knowledge base, but it simply doesn't understand anything at all.

But if you start working with image generation, it's immediately obvious that (1) the image generator doesn't understand what it's being asked to do, and (2) ChatGPT doesn't understand the image it's been given.

If you look at this example (and this is literally its response to my initial request), it does a great job of transforming the prompt into a concept and then generating a prompt to produce that concept. The text portion of the pipeline is fabulous, and it really feels like it understands. The fact that the text engine is not even stupid doesn't become immediately apparent.

But the image is just horribly wrong in so many ways, and as I iterated on the prompt trying to improve the results it just got worse and worse. E.g. I never even suggested inserting a React logo, but it started putting them everywhere. (Like a lot of programming tools out there, ChatGPT just freaking loves React.)

So, the image generator just kind of free-associates on the input text and produces something, but it doesn't know that, for example, you can't push something up a hill if it's already at the top of the hill. It doesn't know that the stuff on the hill and the stuff on the boulder need to be different things, and just puts anything you ask for in both places or neither.

# Apple Intelligence—Image Playground
The first bit of Apple Intelligence™ I've really played with is the new Image Playground (beta) that appeared on my MacBook Pro with the latest system update.

The first things that come to mind about it are:

This is the default "animation" style:

Here's the "illustration" style:

Right now, it's a really cute toy. But while it does produce consistently lovely results, they're basically all the same—pictures of a single person or animal, pretty much front-on, with some minor modifications allowed (e.g. "scarf"), some general atmosphere / background stuff ("sci fi", "mountains"), and either from head to toe, head and chest, or portrait.

I think this has potential if a lot more stuff is added. It's reliable and fast—it just doesn't do a whole lot.
# Bambulabs P1S Initial Review

The BambuLabs P1S is, I think, the latest printer from BambuLabs, a Chinese 3D printer company that is trying to be "the Apple of 3D printing". (Previous would-be Apples of 3D printing include Formlabs.)
Perhaps "labs" is a bad suffix for a company planning to be the Apple of anything.

The printer itself is heavier than I expected (the box was 22kg according to Posti, and I believe them). There's a lot of packaging, but not more than needed. Included in the main box were three spools of PLA (green, red, and glossy white "support"). All three are standard spools but with maybe 25% of a full load of material. (I bought a bunch of different materials at the same time and those are all much fuller.)

While there is a "quick start" guide, visible (if not accessible) upon opening the box, once you somehow tear it out of multiple layers of padding and plastic it assumes you've gotten the printer out of the box, which is no mean feat.

Anyway, having pulled all the packing material out from around the printer, squeezed my arms down around the package, and found something I could hold without feeling like I might break something off, I did get the printer out of the box and onto a table.

Removing the packing materials is similar to unpacking a printer or scanner, which often come with disposable shims and bits of foam hidden in their inner workings. The instructions are fine once you realize that there is, in fact, a ridiculously long allen key in the box of stuff, and that it can remove the rather inconveniently placed screws fairly easily.

I should note that 90% of my issues came from the fact that the quick start guide is, like the ones you get with Apple products these days, printed in minuscule text, and my eyesight isn't what it once was, so I didn't read the packing list carefully. Doing so would have saved me a few minutes.
Even so, assembling the printer requires access to its back and front, and good light.

But while it probably took me 30 minutes to have it ready to print, it might have taken someone with a less messy office and better eyesight 15 minutes, as advertised.

If you have aspirations to be the "Apple" of anything, you may want to hire an actual UI designer and a proofreader.

Software is what lets the Bambu experience down. If you're going to be the Apple of anything you need to not suck at software, and the software for this thing is a mixture of great when it works, confusing as hell, half-assed, and perplexing.

To begin with, the quick start guide first gets you to download the iOS app and bind it to the printer. This did not work as advertised, and required me to go find numbers deep in the printer's crappy UI. (This is a recurring theme.)
First things first: because I couldn't get the app to sync with the printer using Bluetooth (it kept timing out), I figured out how to print one of the pre-loaded models (good old benchy) from the printer's small front panel. (Installing that panel was the least well explained step in the setup process—luckily most electronic connectors are designed so that if you look hard at them and aren't a complete idiot you can usually guess the correct orientation—good luck actually plugging the damn things into each other, though.)

In the end I think the big problem was that the app requires you to set up the printer's wifi from your phone over Bluetooth, rather than allowing you to configure wifi on the printer itself. (Probably because entering a typical wifi password on the front panel would be torture, while doing it on your phone could be effortless but is merely nasty.) Because the phone app's dialog is poorly designed, it tried to join the network without a password and then hung.

Meanwhile…

The benchy test print was both fast and pretty close to perfect. Given that the number of good or near-perfect prints I've gotten from my two previous 3D printers is—generously speaking—three, that's a very good start.

Eventually I solved my software problems. The quick start guide wants you to print benchy from the desktop software, but the screenshots look nothing like the software I downloaded, and the instructions start once you've done stuff I hadn't done. If the software sucked less it wouldn't need instructions, but as it is, the instructions you get aren't adequate and the failure modes are horrific. (E.g. because I wasn't logged into the cloud service, I couldn't see my printer on the network. Adding the printer involved one of two processes, neither of which worked and both of which were poorly documented and involved navigating the printer's awful front-panel interface. In the end, the solution was to figure out how to log into the cloud, at which point everything Just Worked.)
Oh, the first thing the software wanted me to do was detail my printer's configuration (which it apparently didn't know). What is the diameter of my hot end? I don't know.

Again, the software is the problem. First of all, the Mac version (which, judging from what I've seen, is actually the more popular choice for their target market) is missing a mysterious "repair" function for models. So the software refuses to print your model until it's repaired with the Windows version of the software or a third-party website.

Dudes. Fix this.

Second, by default supports are disabled and anything requiring supports won't print. Once you find the supports panel, the default supports are horrific (not the "tree" supports everyone uses these days) and that basically ruined my second print.

That said, once I figured these things out, every print I have made (I think I'm up to six, total) has turned out at least 90% perfect (and 100% is unattainable with FDM as far as I can tell).
The main reason I'm interested in 3D printers is tabletop games, and every printer I've tried has been a miserable failure until now.

The first serious thing I tried to print with the BambuLabs P1S was an interlocking hex tile. Of course I only printed one, because I'm an idiot. So I printed it again. The goal was to figure out how much I needed to tweak the model to make them snap together. They snapped together perfectly the first time.

So I printed four more in a different color. They all fitted perfectly. I am a very happy, formerly frustrated, game designer.

The only "failed" print I've had so far is a space marine miniature that requires a crazy amount of support, and where I had not realized the default supports are horrible. I haven't tried printing it again yet, because the software is hazing me, but that's amazing. Right now, 3D printing is still in WAY too primitive a state for an "Apple of 3D printing" to even make sense. But this thing is so much better than anything I've owned or used that it's incredible.

I just sketched out a new game and I am so excited.
I bought the AMS bundle, which includes a device for switching materials automatically. Printing multi-colored models is very wasteful, because each time the printer changes color it has to rewind the current color, then waste a certain amount of the new color, before continuing—even allowing for the minor gain you might get from doing the next layer's portion of the current color before swapping, the difference in material consumption between one and two colors is enormous.

That said, the AMS is fantastic just for being able to swap from one material to another between prints, even if you only print in one color at a time. Even so, it's not without issues. E.g. I believe it's supposed to automatically identify materials if I get them from BambuLabs, which I did, but it's not working thus far. I definitely got it because I want to produce multi-colored models, but where possible I'm printing in a single color at a time and planning to snap things together.

Anyway, best 3D printer ever.
# Design for effect

I just stumbled across a video on YouTube commenting on the Star Citizen 1.0 announcement, and I wondered if I had stepped into a time warp and somehow Star Citizen was nearing an actual release.
I needn't have worried, it's just another vaporware announcement, but what's really concerning is what they've told us they're now aiming to do.

First of all, I'm not one of those people who had high hopes for Star Citizen. I certainly had hopes for it. It is a "science fiction" game in some loose sense. And it is, after all, the best-funded game development in history that I know of… Who knows, GTA VI might actually be better-funded at this point, since it is virtually guaranteed to make billions of dollars.

Still, the "genius" behind all this is the creator of the Wing Commander franchise, a series of basically bad space-fighter simulations that had ludicrously complicated controls (each game came with a keyboard overlay, along the same lines as professional studio software like Avid Media Composer) and featured pretty much linear "groundhog day" stories with badly acted cut scenes.

Having left the game industry to pursue a career making mediocre movies, he returned with great fanfare to his true love—making overly complicated space flight sims. But now, instead of a linear plot with clichéd dialog, he was switching to some variation of "simulate everything, it will all work out".

And so in around 2015 we got three of these things. Elite Dangerous, the heir of the OG space-fighter simulation, with a procedurally generated universe and a simple economic system that let you make money from trade, bounty hunting, and piracy. Star Citizen, the heir of the OG "do a space-fighter mission, get a cut scene, repeat" empire. And No Man's Sky, a game based on the idea that "if we give players a spaceship, a crafting system, and a procedurally generated universe, it will be awesome."
And now, nearly ten years later:

What none of these is, is a good game. Sure, Elite was a good game in 1983, but Elite Dangerous is actually quite a bit more tedious than Elite, and modern games are generally a lot less tedious than 1980s games.

Sure, No Man's Sky has more to do in it now than it did in 2015, but that's a low bar.

And now they have announced Star Citizen 1.0, and it seems to be "we're only going to simulate crafting and trade and somehow it will become a game". In fact, Star Citizen 1.0 sounds a lot like a simulation where you get to do real, boring work for fake money, and where your reward is spending real money on a really expensive PC and having it crash constantly.

How did we get here?

All of these people have forgotten a brilliant concept called "design for effect" that was used, and possibly coined, by the original designer of Squad Leader, and it's what made that game one of the best board wargames ever. (Advanced Squad Leader, its descendant, is way, way too complicated. Don't get confused.)
The basic idea of "design for effect" is that rather than trying to simulate everything, you figure out what you want the result to be and then design a game whose mechanics give you that result.

Of the "big three" space sims being discussed, Elite Dangerous comes closest to doing this. The original Elite was about flying a tiny spaceship from planet to planet, trading to make money, fighting aliens and pirates (or being a pirate) to survive and make money, and occasionally undertaking a mission (usually to blow up a specific enemy ship).

So, with these being the desired effects, Elite Dangerous simply built current-tech implementations of the gameplay necessary to make things work. The economy doesn't need to be realistic, it just needs to create interesting, somewhat predictable, opportunities to make money. You don't need to simulate a society where piracy makes sense, you just spawn enough pirates to make life dangerous and interesting.
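In code terms, design for effect is the difference between simulating a pirate economy and just tuning the encounter rate. A sketch for illustration only (the helper, the rate, and the `threatRating` field are all made up):

```
// Hypothetical helper, made up for illustration
const spawnPirates = ({ count, near, strength }) =>
  Array.from({ length: count }, () => ({ position: near, strength }))

// Design for effect: don't simulate a pirate society; pick the outcome you
// want (roughly one dangerous encounter every ten minutes) and tune to it.
const ENCOUNTERS_PER_MINUTE = 1 / 10

function maybeSpawnPirates(deltaMinutes, player) {
  // Chance of an encounter this tick matches the desired rate
  if (Math.random() < ENCOUNTERS_PER_MINUTE * deltaMinutes) {
    return spawnPirates({
      count: 1 + Math.floor(Math.random() * 3), // small, survivable groups
      near: player.position,
      strength: player.threatRating, // scale to keep fights tense but winnable
    })
  }
  return null
}
```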
Now you have a game.
If you were to simulate a universe, there's no guarantee anything interesting would happen in it. In No Man's Sky the universe is so big that people have almost no chance of bumping into each other. Even if the game supported player dogfighting (it might, for all I know), relying on it to generate your gameplay would be very sad. Oh, you won that fight? Great, now you can spend 200 hours preparing for the next one by mining and synthesizing fuel and ammo. Sounds like fun.

It's quite hard to build a really good system for procedurally generating planets with resources that are interesting to prospect for and exploit, and then a crafting system that simulates mining them, manufacturing bits out of them, and then making those pieces assemble into interesting objects one might care about. Making something like that which is detailed and realistic is even harder. The people who are really good at stuff like this have better things to do than make games. If you can simulate an economy really well, there are ways of making real money out of that skill.

Even if you do a great job of simulating all of this in some way, there's not only no guarantee that it will lead to interesting gameplay situations, it's not even likely.

Anyway, No Man's Sky is a very boring game to play, and Star Citizen has now announced that its new plan is to release something in 2026 or 2027 that sounds a lot like No Man's Sky was in 2015. Given the money and talent on tap, if they were starting from scratch right now that might not be unrealistic. But turning the buggy pile of garbage they have, and the expectations of an existing player base, into that seems impossible, and to my mind it's not even a desirable outcome.