I first remember André Braugher from his performance in Glory where he played perhaps—low key—the most important role in the movie. He played the person with the most to lose and the least to gain by joining the army and fighting to end slavery (something the movie later acknowledges is pretty much a fool's errand). He plays the person we—the viewer comfortably separated from the events portrayed by circumstances, time, and knowledge of what will happen—should be but almost certainly won't be. (No more details: watch the movie if you haven't seen it.)
In Homicide he again plays someone who stands for justice against his own self-interest. He is a black man of obvious intellect and education who chooses to work as a homicide detective when there are so many better options open to him, even though it ruins his marriage and is killing him. He works within a corrupt and under-resourced system, alongside colleagues he pretty much despises, trying to make the tiniest difference when and where he can, usually to his own disadvantage.
And, despite its being a comedy, as Raymond Holt in Brooklyn Nine-Nine he somehow again plays someone pretty much in this situation except that, now an older man and a captain, he has somehow navigated an earlier phase of life in which all of… this… was much worse, and today is comfortable enough that the horribleness is purely and not always darkly comic.
Homicide is one of my favorite TV shows of all time. Brooklyn Nine-Nine is my daughter's favorite TV show of all time.
André Braugher is already missed.
# The future's so bright… I want to wear AR glasses
The collapse of democracy, loss of trust in institutions, the corruption of social norms, the rise of ignorant bigotry and wilful stupidity. All of this sucks.

It also sucks that the dictator of Russia thinks he's some kind of historic tsar destined to have "the Great" after his name, if he can only commit enough pointless atrocities, or at least keep his population distracted from his own abject incompetence as a leader. Imagine if he'd spent all that oil money on actually fixing the Russian economy.

It also sucks that the US electorate has chosen to elect a felon, fraudster, and sex predator to its highest office, aided by corrupt justices in its highest court and a bunch of weak-minded, weak-willed fools in its Congress.

It's all a huge shame, since technology is making incredible strides and it's incredibly exciting. Sure, we don't have Jetsons-style aircars, but here's a list of stuff we do have that's frankly mind-blowing.
## Tricorders, Datasticks, Wrist Computers

The iPhone, Apple Watch, and Android-based equivalents are better and more useful than pretty much any SF device of the same kind. An iPhone strapped to your wrist would be a superior device (ignoring ridiculous magic) to any similar device shown in SF. Even futuristic "post-iPhone" devices shown in SF (e.g. the glass slates in The Expanse) don't really match modern smartphones.

The communicators in Star Trek: The Next Generation didn't even have call waiting, which was something we had on regular phones at the time.
## Solar Power

I don't know if you've been paying attention, but the price of solar power is now so low that the only thing stopping it from simply replacing all other power sources is the fact that it isn't sunny everywhere all the time. If we just had a grid that could send power from where it's sunny to other places (and it's being built), and could conveniently store and recharge power on things that need to move, there'd really be no reason to generate power any other way.

Yes, climate change is bad, but if we're not actively stupid we already have all the technology we need to solve the problem, both in the near term (build more modern nuclear power plants) and the long term (create a worldwide solar grid).

Oh yeah, and we seem to have solved the population crisis so thoroughly that we might need to encourage folks to have more kids at some point. Maybe not treat women (or biologically female persons, if you prefer) who have kids as third-class citizens? I dunno.
## Nuclear Power… Fusion Power?

We can make nuclear power plants that are safer and have a smaller environmental impact than any other form of power generation. Right now it's not cheap, but nuclear power is exactly the kind of technology that would get super cheap if we just scaled it.

Never mind: fusion seems to have gone from being "fifty years away" to actually getting serious private investment. Someone might actually get it to work!
## Battery Technology

Electric cars are actually the least interesting thing we've gotten from improvements in battery technology. Moore's Law predicted we'd have cheap supercomputers that could fit on our wrists by now, but it didn't say they'd be able to run all day on battery and have a fast wireless network connection.

For quite a while, Moore's Law actually seemed to be saying the opposite. A modern gaming PC consumes an embarrassing amount of power compared to, say, the very expensive graphics workstation I had custom-made for myself back in 2001.
## Insanely Good Cellular Networks

We have a very fast cable connection here in Oulu, Finland (two hours south of the Arctic Circle by car…). But when it wobbles and I switch to sharing my iPhone's unlimited data plan, the network gets better.
## AI (sort of)

OK, it's not really intelligent, and AGI may be a big con, but the fact is we've now got something that pretty much aces the Turing Test and can run locally on your laptop or phone.

Can it think? No. Can it conduct an "intelligent" conversation? Sure, it may make shit up or hallucinate, but find me a person who doesn't sometimes do that.

And if you're a programmer, the interface for these things is trivial. You pass it a bunch of text and you get back a bunch of text. For advanced cases you can add images and so on. It's so much easier than, say, displaying a window with "hello world" in it was on a 1984 Mac.

To give an example, I have an AI-powered tool for my blog to create summaries of my posts, and the API is not much more than:

```
const post = ... // get the post source
const response = await ai(`Here is a blog post I've written in markdown format, please write a blurb for it or extract a key paragraph. ${post}`)
const summary = response.text
```

Sure, it doesn't write very well, but it was super easy to implement.

Oh, and I can right-click and have an AI proofread anything.
## AR and VR headsets

The Meta Quest 3 is amazing and really quite cheap. The Vision Pro seems even more amazing, but not cheap. But both cost peanuts compared to even mediocre VR headsets from 10 years ago.
## 3D Printers

You can 3D print in plastic, metal, or wood. You can print subtractively or additively. The precision of 3D prints is such that you can print parts that should fit together perfectly, and they fit together perfectly.

And this is no longer just for toys. Even with inexpensive printers, you can print car and motorcycle parts or entire drones (aside from the circuits and motors).

And the devices to do this can be bought for the price of a high-end power tool. It's nuts.
## Video Conferencing

You can do business from anywhere via video conferencing on laptops, phones, and TVs. In fact, it's so common that we've seen several generations of social norms evolve (e.g. today there's an unspoken threshold at which almost everyone turns off their camera).
## Shared Office Space

WeWork may have been a huge scam, but the idea remains brilliant, and there are plenty of people offering shared office space sustainably today.

The business I founded in 2001 had to commit to a multi-year AUD $8000/month lease. If we'd had shared office space, we'd have had $100k/year more to spend on anything we needed… or just salaries.

And don't even talk about the horrific price we paid for a 56k ISDN connection.
## Cube Sats

Getting to be an astronaut has never been a reasonable goal for most people. But for the price of a new Porsche 911, BMW M5, or Tesla Cybertruck, you can launch your own satellite.

You can get fully functional computers like the Arduino or Raspberry Pi for almost nothing, and program them to do AI, drive displays, take pictures, you name it.

So, if you want to build your own satellite, it's not that big a deal to hook up a radio to a tiny computer, a fancy camera, solar cells, batteries, etc., and you've got your own spy satellite. And quite a few startups have done basically this.
## Very Fast Trains

You might not know this if you live in the US, but in many countries outside the US, trains are so cheap and convenient that it's hard to justify driving long distances. In Finland the intercity trains can reach 200km/h—you simply can't get places as fast in a car—and the cities tend to be highly walkable. It doesn't hurt that gas is highly taxed, but if you had the choice of driving, or getting there quicker and being able to work and visit the restaurant car while zooming through the landscape at 150km/h, would you drive?

It would be great if the WiFi on the trains were a little better. I mean, sometimes I have to wait a few minutes for the network to be quick enough for me to push a PR.
## Genetic Engineering

You can literally buy a kit for very little money that allows kids to use gene shears to make a plant glow in the dark. This isn't even all that recent.

More recently, there was this worldwide pandemic. It was in the news.

Very shortly after COVID-19 was first detected, scientists in China were able to sequence the virus's genome and publish it. Other scientists at Moderna downloaded the genome and tailored a highly effective vaccine against it over a weekend. While it needed about a year to pass safety testing, the team themselves were sufficiently convinced of their technology that they all inoculated themselves immediately. It seems highly likely to me that a lot of Trump's bravado about the pandemic came from him and those close to him having early access to these vaccines.
## Software Development is Crazy Easy

And no, I'm not talking about using AI to write a to-do list manager.

I literally wrote a utility program yesterday using `npx create-xinjs-blueprint` in less time than it would have taken me to do the same thing using REALbasic.

I actually have a stack that lets you build web apps top-to-bottom, desktop apps, and tiny utilities as easily as I used to be able to in the 1990s. You can't draw the user interface (which is the final missing piece) but everything else is as easy or easier—I just need to wrap some convenience tooling around it and write documentation. So much documentation.
## Air Cars

I lied. You can actually buy air cars now.
# Contrary to popular belief, AI may be peaking

On the one hand, they're running out of human-generated text to train on. On the other hand, they're thinking of mining the text in online videos and using "synthetic content" to get more data.

In 2005, if you showed someone who hadn't used Google how good Google was and said, "and it will only get better over time", they would have been blown away and nodded numbly in agreement.

Instead we got SEO-hacking and Google AdWords.

As with most things tech, including the flaws, I'm an early adopter. Back in the late 90s, when my father was diagnosed with prostate cancer, I searched the internet for information on prognoses and cures. Literally 90% of the results mentioned "proton beam therapy", and they all seemed pretty reputable. I think AltaVista found something like 50,000 web pages on it.

Proton beam therapy is a real thing, and while there are pages devoted to it on most big health sites, it's not even explicitly mentioned in the Mayo Clinic's page on prostate cancer treatment. A lot of the time, prostate cancer doesn't really need to be treated at all.

## If it's valuable, it's worth perverting

The reason 2005 or so was "peak Google" was that Google started out as an end-run around the sort of SEO-hacking that had been successful before Google existed. Originally, search engines tended to trust that websites were about what they said they were about.
Then they got clever and started looking at the text on the site.

So, in the late 90s, a search engine assumed that if the keywords for your web page (which you created) included "best", "prostate", "cancer", "treatment", then, to quote Stefon, "look no further".

A more sophisticated search engine might actually check whether the words on the page also contained those words. And that was about it. It was common practice in the mid-aughts to include click-bait text in invisible divs on pages, which tricked most search engines.

Google, on the other hand, basically tried to figure out whether sites that seemed to be about X pointed to your page, which suggested your page, too, was about X. And then the link-farming and other hacks began, and combined with the business model Google chose to adopt (inserting sponsored search results assigned by auction above actual search results), Google got worse and worse.

## AI is just as hackable, and it's already happening

Ignore the placeholder image! This video is directly on point (and discusses the article "Adobe's Generative AI Jumps The Shark, Adds Bitcoin to Bird Photo").

AI's generative tools used to work pretty well most of the time, but they're starting to hallucinate. In the linked example, a photographer deleted an ugly highlight and Photoshop inserted a bitcoin logo. Maybe this is intentional content-flooding by someone, or maybe it's unintentional. The fact is that AI-generated content produced with ill intent has infected the AI Adobe presumably pretty carefully husbands.

I see evidence of unintentional hacking all the time. When software platforms get successful, there's a network effect. If you want your rank on StackOverflow to go up (for some reason), then answer questions on this framework. If you want to burnish your web-dev cred on Medium, write articles on it, and so on.
As a result of this, the vast majority of front-end code on the public web is React-related, often tacitly assuming React imports, declarations, and patterns are being used. This is so pernicious that it's almost impossible to convince ChatGPT not to write code like this. I ask it to write vanilla JS, it writes React. I ask it to read my documentation, it writes React.

And it's not good React code either. (Hilariously, I've heard OpenAI has just opened up a ton of ReactJS front-end dev positions.)

Stanford conducted a study showing that ChatGPT outperformed doctors at making diagnoses based on patient histories and test results. Even if doctors have healthy skepticism about this result, it's been widely reported, and people are going to use ChatGPT (given these stories), given that they cheerfully used WebMD for years when everyone told them not to.

Convincing doctors to change their prescribing habits is difficult and expensive. In most civilized countries (including the US until the late 90s / early 2000s, when Big Pharma found a loophole) it is illegal for drug companies to advertise prescription drugs to consumers. Pharma sends reps to doctors, hospitals, and pharmacists to convince them to prescribe or dispense their drugs using means both fair and foul. It's a big business. Just writing software to help those reps market a few drugs for a single company in Australia paid me handsomely for a decade.

Now imagine you can just generate content that is designed and A/B-tested to influence AIs to recommend your drug. Would you do it? More importantly, would a multi-billion-dollar company do it? A company that already spends buckets of money on pens, post-it notes, cardboard signs, "medical conferences" in tropical paradises, cruises, politicians, 24-year-old communications grads who look incredible in stilettos and pencil skirts, TV ads, mentions in movies and medical dramas.
I mean, they have some ethical standards, I'm sure. We just haven't found them yet.

As easy as it is to dumb down the populace by destroying the education system, inventing reality television and social media, etc., AIs can do it faster and in real time.

And I'm talking about humans using AIs to corrupt other AIs, testing whether it's working, and then doing the stuff that works more and faster and harder. This is what SEO did to Google.
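To make the SEO comparison concrete, here's a toy sketch of the naive keyword-matching "relevance" scoring that pre-Google search engines used, and that keyword stuffing trivially gamed. This is entirely made up for illustration: `naiveScore` and the sample pages are hypothetical, not any real engine's code.

```javascript
// Toy relevance scoring: trust the page's self-declared meta keywords
// and count raw keyword occurrences in the body text.
function naiveScore(page, queryTerms) {
  const metaKeywords = (page.keywords || []).map((k) => k.toLowerCase())
  const words = page.text.toLowerCase().split(/\W+/)
  let score = 0
  for (const term of queryTerms.map((t) => t.toLowerCase())) {
    // The page gets a big bonus just for *claiming* to be about the term…
    if (metaKeywords.includes(term)) score += 10
    // …plus a point for every occurrence in the body, however it got there.
    score += words.filter((w) => w === term).length
  }
  return score
}

const honest = { keywords: ['gardening'], text: 'A short guide to pruning roses.' }
const spam = {
  keywords: ['best', 'prostate', 'cancer', 'treatment'],
  text: 'best prostate cancer treatment '.repeat(50),
}

console.log(naiveScore(honest, ['prostate', 'cancer'])) // 0
console.log(naiveScore(spam, ['prostate', 'cancer'])) // 120
```

Google's original end-run was to score pages by who linked to them rather than by what they said about themselves, an external signal this toy function completely lacks.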
## Curation is a Partial Fix

I work for a company that processes invoices using custom AI models. The training data is highly curated, so everything basically just tends to get better over time. Find a mistake? It probably originated in the training dataset, so we go fix the training data and then see whether the retrained AI still makes the mistake. But we (a) control the data and (b) know what the correct answer is. This is very much not how LLMs work.

The people who build LLMs kind of control the data, but their goal is quantity first, quality second, and they have no idea what the question is, let alone what the correct answer is. The answer you'll get is just some kind of statistically generated likely answer that the random contributors to the dataset might give.

This isn't just a problem with generative AI, although it seems to be an especially bad problem for generative AI. When it comes to front-end coding, "AI" is currently "artificial stupidity". This is what happens when curation fails at scale.

All of this doesn't fully justify my clickbait title. Obviously, AI (and software as a whole) will continue to progress. But the thing that's currently generating all the hype—LLMs and generative AIs powered by all the content they can scrape—those may be peaking as they themselves are used to pollute the very resource they depend on at scale.
# Large Language Models — A Few Truths

LLMs and derivative products (notably ChatGPT) continue to generate a combination of excitement, apprehension, uncertainty, buzz, and hype. But a few things have become increasingly clear in the two-and-a-bit years since ChatGPT 3 came out and, basically, aced the Turing Test without (as Turing had pointed out when he proposed "the Imitation Game") necessarily thinking.

The Imitation Game asks whether a machine can imitate the behavior of someone who thinks (whatever that means) by reducing that to one specific behavior: carrying on a conversation.

Just as a Skinnerian behaviorist might suggest that "I can't tell if you're in pain, but you seem to be exhibiting pained behavior", Turing basically said "I can't tell if a machine can think, but I can test whether it seems to exhibit a specific thinking behavior." He's engaging in classic analysis: I can't solve the general problem, but I can identify a simpler, more tractable problem and then we can try to solve that. But almost everyone misses the step where Turing said the general problem is worse than difficult—it's ill-defined.

## LLMs aren't thinking, or reasoning, or doing anything like that

You can quickly determine that LLMs cannot, as a class of object, think by looking at how they handle iterative questioning.
If you ask an LLM to do something, it will often (a) fail to do it (even when it would be extremely easy for it to check whether it had done it) and then (b) tell you it has done exactly what it was asked to do (which shows it's very good at reframing a request as a response).

This should be no surprise. The machines are simply manipulating words. They have no more understanding of the connection between, say, the process of writing code and seeing the results than they do of the connection between Newton's equations of motion and the sensations one feels when sitting on a swing.

So when you say "hey, can you write a hello world application for me in Rust?" they can probably do it via analysis of many source texts, some of which quite specifically had to do with that exact task. But they might easily produce code that's 95% correct yet doesn't run at all because, not having any real experience of coding, they don't "know" that when you write a hello world program you then run it and see if it works, and if it doesn't, you fix the code and try again.

They are, however, perfectly capable of reproducing some mad-libs version of a response. So they might tell you they wrote it, tested it, found a bug, fixed it, and here it is. And yet it doesn't work at all.

And that's just simple stuff.

And look, for stuff where "there is no one correct answer" they generally produce stuff that is in some vague way more-or-less OK. If you ask them to read some text and summarize it, they tend to be fairly successful.
But they don't exhibit sound judgment in (for example) picking the most important information because, as noted, they have no concept of what any of the words they are using actually mean.

All that said, when a process is essentially textual and there is a huge corpus of training data for that process, such as patient records with medical histories, test results, and diagnoses, LLMs can quickly outperform humans, who lack both the time and the inclination to read every patient record and medical paper online.

## There is no Moat

The purported value of LLMs and their uses is a chimera. No one is going to suddenly get rich because "they have an LLM". Companies will go out of business because they don't have an LLM or workflows that use LLMs.

LLMs will doubtless have enormous economic impacts (probably further hollowing out what is left of the middle class, since most people have jobs where "there is no one correct answer" and all they have to do is produce stuff that is in some way more-or-less OK… Just consider asking a question of a salesperson in an electronics store, or a bureaucrat processing an application based on a form and a body of regulations—do you think a human is better than the average AI?
Given a panel of test results and a patient history, general-purpose LLMs already outperform doctors at delivering accurate diagnoses in some studies).

But UC Berkeley's Sky Computing Lab—UC Berkeley having already reduced the price of operating systems to zero by clean-room-cloning AT&T UNIX and then open-sourcing it all, basically kickstarting the open source movement—just released Sky-T1-32B-Preview, an open source clone of not just ChatGPT but the entire "reasoning pipeline" (which OpenAI has not open-sourced), that it trained for $451.

So if you just invested, say, $150M training one iteration of your LLM six months ago, the value of that investment has depreciated by about 99.9997%.

And the sky-high valuations of Nvidia are predicated mainly on the costs of training models, not running them. People don't need $1B render farms to run these models. Very good versions of them can run on cell phones, and the full versions can run on suitably beefed-up laptops, which is to say, next year's consumer laptops.
> I should add that while Sky-T1-32B-Preview allegedly outperformed ChatGPT o1 on a battery of tests, I played with a 10GB quantized version on my laptop just now, and it produced far worse results, far more slowly, than Meta's Llama 7B.
# Adventures with ChatGPT

## Or, how I learned to stop worrying and just teach ChatGPT to code with xinjs…

I've been slow to jump on the ChatGPT bandwagon, in large part because a lot of the code I write is very hard and somewhat novel, and ChatGPT isn't really good at that kind of thing. What it's really good at is grokking lots of examples, including documentation, and then adapting them to a specific request, often with mistakes.

E.g. I have gotten various LLMs to write me code to do something very mundane, such as traversing all the documents in a directory and passing them through a function in Node.js. This isn't hard to do, but it's tedious and time-consuming, and in the end the code I would have written wouldn't be markedly superior to something any decent coder would write.

ChatGPT (et al.)
excels at this kind of task. Think of it as a pretty bright intern who will read any amount of source code and documentation and delve into StackOverflow to figure out how to solve a problem, but won't come up with anything new and won't actually test their code at all before handing it to you.

Now, I'm sure there are workflows you could fairly easily assemble that would take the resulting code, wrap it in tests, try to run it, and so forth, then iterate until it gets through the flow without exploding.

## OK, so I have an intern!

Yesterday I decided to play around with ChatGPT because it had been a minute.

The first thing I tried to do was get it to help me improve the TypeScript declaration of the xin proxy in xinjs by explaining what it was intended to do, and it seems to me that ChatGPT did a great job there, although the code it produced was a toy, and I need to spend some time figuring out whether I can apply the strategy to the actual code.

In any event, it was a lot quicker and more effective than the time I've spent trying to spelunk the TypeScript documentation, miscellaneous blogs, or StackOverflow.

So, I got curious.

One of the suggested prompts was "make me a personal website" or something along those lines. So I tried it and it was… fine.

Then I decided to explain the way xinjs's elements proxy works, and asked it to write the code using elements. Here's what I told ChatGPT:
> I'm using a library that provides a proxy named elements. Its properties are element factories, so elements.div() produces an HTMLDivElement.
>
> The factories will take their arguments as children (strings are treated as text nodes) or bags of properties (e.g. {class: 'foo bar', title: 'baz'} will add those attributes).
>
> E.g. this Javascript:
>
> ```
> import { elements } from 'xinjs'
> const { label, span, input } = elements
> ```
>
> Could you rewrite the HTML code using the elements proxy?
And ChatGPT immediately produced this:

```
import { elements } from 'xinjs';

// Destructure the element factories from the proxy
const { header, nav, h1, section, a, div, p, ul, li, img, footer } = elements;

// Create the HTML structure using element factories
const pageContent = div({ class: 'container' }, [
  section({ id: 'bio', class: 'bio' }, [
    h2('About Me'),
    p('I am a software developer with over thirty years of experience. My expertise spans full-stack development, with a career-long passion for usability and creating intuitive user experiences.')
  ]),
  section({ id: 'portfolio', class: 'portfolio' }, [
    h2('Portfolio'),
    h3('Technical Projects'),
    p('Here are some examples of my technical work:'),
    ul([
      li(a({ href: '#' }, 'Project 1: Description')),
      li(a({ href: '#' }, 'Project 2: Description')),
      li(a({ href: '#' }, 'Project 3: Description'))
    ]),
    h3('Artistic Projects'),
    p('Some of my artistic work:'),
    img({ src: 'example-artwork.jpg', alt: 'Example Artwork' })
  ]),
...
```

> I should note here that there's a major error in this fragment that I didn't pick up on until I started proof-reading this post, weeks after starting it. Code from LLMs is rife with idiotic errors that often aren't immediately apparent.

I got so excited I tried explaining how xinjs's Component class works, to see if I could get it to produce web components the xinjs way. This took a few attempts, and one thing I learned from this is that ChatGPT seems better at learning from simple, clear examples than from more complex ones.

I started out by pointing it at an example in GitHub and asking it to build the components along the same lines. This was a disaster, especially in that the result looked sorta, kinda right but wouldn't have worked at all.

I spent a bunch of time refining my prompts, and the most helpful prompts provided focused examples, e.g.
I used this prompt after the third or fourth attempt to explain how elementCreator was a static method of Component that did x and y, which had produced incorrect code with incorrect explanations of what it was doing:

> No. Here's a simple example that should help.
After a few prompts like this, ChatGPT provided pretty flawless results. But a lot of the time it really feels like you're just training a conscientious but eager fool to imitate examples. It's kind of like how you wished regexp worked, with the understanding that, just as with regexp, you really want to verify that it didn't blow up in some subtle but exciting way.

## Aside: "artificial intelligence"? It's not even artificial stupidity…

If you're just dealing with text, it can take a while using ChatGPT to realize that ChatGPT isn't "intelligent" at all. It is predictive text completion with an insane knowledge base, but it simply doesn't understand anything at all.

But if you start working with image generation, it's immediately obvious that (1) the image generator doesn't understand what it's being asked to do, and (2) ChatGPT doesn't understand the image it's been given.

If you look at this example (and this is literally its response to my initial request), it does a great job of transforming the prompt into a concept and then generating a prompt to produce that concept. The text portion of the pipeline is fabulous, and it really feels like it understands. The fact that the text engine is "not even stupid" doesn't become immediately apparent.

But the image is just horribly wrong in so many ways, and as I iterated on the prompt trying to improve the results, it just got worse and worse. E.g. I never even suggested inserting a React logo, but it started putting them everywhere. (Like a lot of programming tools out there, ChatGPT just freaking loves React.)

So, the image generator just kind of free-associates on the input text and produces something, but it doesn't know that, for example, you can't push something up a hill if it's already at the top of the hill.
It doesn't know\nthat the stuff on the hill and the stuff on the boulder need to be different\nthings and just puts anything you ask for in both places or neither.\n\n",_path:"post/ui4rtvaqo6vv"},"post/path=apple-intelligence-image-playground":{keywords:[],title:"Apple Intelligence—Image Playground",path:"apple-intelligence-image-playground",content:"\n\n
The first bit of Apple Intelligence™ I've really played with is the new\nImage Playground (beta) that appeared on my MacBook Pro with the latest\nsystem update.
\n
The first things that come to mind about it are:
\n\n
- it's laser-focused on producing pictures of people and [bipedal] animals (furries?) and basing\nthem on photos from your Photo library. ChatGPT's DALL-E, by contrast, refuses\nto play with photos of people (in general), assuming that you might be producing\ndeep fakes of politicians or whatever.
\n
- if it doesn't understand a word, it just ignores it. I wanted to have my\nspace weasel smoking a cigar. Nope. No cigar. Sleeping alligator? It's\nan alligator.
\n
- it has been trained on corpuses of, I assume, commissioned and art-directed\nimages. The \"animation\" style seems very Pixar-like, while the\n\"illustration\" style seems somewhere between 1980s comic books and\nRoy Lichtenstein. I don't care for it.
\n
Apple has \"dumbed down\" the UI to the point where it's very easy to use…
\n
- …and pretty much impossible to get bad results.
\n
- It's pretty fast (at least on my rather nicely specced M3 Max).
\n\n
## Example Output
\n
This is the default \"animation\" style:
\n\n\n
Here's the \"illustration\" style:
\n\n\n
## Conclusions
\n
Right now, it's a really cute toy. But while it does produce consistently\nlovely results, they're basically all the same—pictures of a single person\nor animal, pretty much front-on, with some minor modifications allowed\n(e.g. \"scarf\"), some general atmosphere / background stuff (\"sci fi\", \"mountains\"),\nand framed either head to toe, head and chest, or as a portrait.
\n
I think this has potential if a lot more stuff is added. It's reliable \nand fast—it just doesn't do a whole lot.
\n",date:"2025-01-15T18:14:38.350Z",summary:"Apple's new Image Playground is focused, and easy to use. If you want to produce cute \"Pixar-style\" people and animals, it quickly churns out consistent, but very limited, results. My M3 Max rendered images in seconds, but right now it's more of a cute toy than a useful tool\n",author:"Tonio Loewald",_created:"2025-01-15T11:14:18.592Z",_modified:"2025-01-15T11:14:47.871Z",_path:"post/xohqdwdgjlxa"},"post/path=bambulabs-p1s-initial-review":{keywords:[],title:"Bambulabs P1S Initial Review",path:"bambulabs-p1s-initial-review",content:"\n\nThe BambuLabs P1S is, I think, the latest printer from Bambulabs, a Chinese 3D Printer\ncompany that is trying to be \"the Apple of 3D printing\". (Other previous would-be Apples of\n3D Printing would include Formlabs.)\n\nPerhaps \"labs\" is a bad suffix for a company planning to be the Apple of anything.\n\n## Unboxing\n\n\n\nThe printer itself is heavier than I expected (the box was 22kg according to Postti,\nand I believe them). There's a lot of packaging but not more than needed. Included\nin the main box were three spools of PLA (green, red, and glossy white \"support\").\nAll three are standard spools but with maybe 25% of a full load of material. 
(I\nbought a bunch of different materials at the same time and those are all much\nfuller.)\n\nWhile there is a \"quick start\" guide, visible (if not accessible) upon opening\nthe box, once you somehow tear it out from under multiple layers of padding and\nplastic, it assumes you've gotten the printer out of the box, which is no mean feat.\n\nAnyway, having pulled all the packing material out from around the printer,\nsqueezed my arms down around the package, and found something I could hold without\nfeeling like I might break something off, I did get the printer out of the box\nand onto a table.\n\nThe instructions for removing packing materials, which are similar to those for unpacking\na printer or scanner (which often comes with disposable shims and bits of foam\nhidden in its inner workings), are fine once you realize that there is, in fact,\na ridiculously long allen key in the box of stuff, and that it can fairly easily remove the rather\ninconveniently placed screws.\n\nI should note that 90% of my issues came from the fact that the quick start\nguide is, like the ones you get with Apple products these days, printed in minuscule\ntext and my eyesight isn't what it once was, so I didn't read the packing list\ncarefully. Doing so would have saved me a few minutes.\n\nEven so, assembling the printer requires access to its back and front, and good light.\n\nBut while it probably took me 30 minutes to have it ready to print, it might have\ntaken someone with a less messy office and better eyesight the advertised 15 minutes.\n\n## Software\n\n\n\nIf you have aspirations to be the \"Apple\" of anything, you may want to hire\nan actual UI designer and a proofreader.\n\nSoftware is what lets the Bambu experience down. 
If you're going to be the Apple\nof anything you need to not suck at software, and the software for this thing\nis a mixture of great when it works, confusing as hell, half-assed, and perplexing.\n\nTo begin with, the quick start guide first gets you to download the iOS app and\nbind it to the printer. This did not work as advertised and required me to go\nfind numbers deep in the printer's crappy UI. (This is a recurring theme.)\n\nFirst things first: because I couldn't get the app to sync with the printer using\nBluetooth (it kept timing out), I figured out how to print one of the pre-loaded\nmodels (good old benchy) from the printer's small front panel. (Installing that\npanel was the least well explained step in the setup process—luckily most electronic\nconnectors are designed so that if you look hard at them and aren't a complete idiot\nyou can usually guess the correct orientation—good luck actually plugging the damn\nthings into each other though.)\n\nIn the end I think the big problem was that the app requires you to set up the\nwifi for the printer on your phone using Bluetooth rather than allowing you\nto configure wifi on the printer itself. (Probably because entering a typical\nwifi password on the front panel would be torture, while doing it on your phone\ncould be effortless but is merely nasty.) Because the phone app's dialog is poorly\ndesigned, it tried to join the network without a password and then hung.\n\nMeanwhile…\n\nThe benchy test print was both fast and pretty close to perfect. Given that the number\nof good or near-perfect prints I've gotten from my two previous 3D printers\nis—generously speaking—three, that's a very good start.\n\n## Output\n\n\n\nEventually I solved my software problems. The quick start guide wants you\nto print benchy from the desktop software, but the screenshots look nothing like\nthe software I downloaded and the instructions start once you've done stuff I hadn't\ndone. 
If the software sucked less it wouldn't need instructions, but as it is, the\ninstructions you get aren't adequate and the failure modes are horrific. (E.g.\nbecause I wasn't logged into the cloud service, I couldn't see my printer on the\nnetwork. Adding the printer involved one of two processes, neither of which worked\nand both of which were poorly documented and involved navigating the printer's\nawful front-panel interface. In the end, the solution was to figure out how to log\ninto the cloud, at which point everything Just Worked.)\n\nOh, and the first thing the software wanted me to do was detail my printer's configuration\n(which it apparently didn't know). What is the diameter of my hot end? I don't know.\n\n1. Downloading highly-rated models and printing them works fantastically well.\n2. Loading your own models to print is a shit show.\n\nAgain, the software is the problem. First of all, the Mac version (which, judging from\nwhat I've seen, is actually the more popular choice for their target market) is missing\na mysterious \"repair\" function for models. So the software refuses to print your model\nuntil it's been repaired with the Windows version of the software or a third-party website.\n\nDudes. Fix this.\n\nSecond, by default supports are disabled and anything requiring supports won't print.\nOnce you find the supports panel, the default supports are horrific (not the \"tree\"\nsupports everyone uses these days), and they basically ruined my second print.\n\nThat said, once I figured these things out, every print I have made (I think I'm up to\nsix, total) has turned out at least 90% perfect (and 100% is unattainable with FDM as\nfar as I can tell).\n\n## Bottom Line\n\nThe main reason I'm interested in 3D printers is tabletop games, and every printer\nI've tried has been a miserable failure until now.\n\nThe first serious thing I tried to print with the BambuLabs P1S was an interlocking\nhex tile. Of course I only printed one because I'm an idiot. 
So I printed it again.\nThe goal was to figure out how much I needed to tweak the model to make the tiles snap\ntogether. They snapped together perfectly the first time.\n\nSo I printed four more in a different color. They all fitted perfectly. I am a very\nhappy, formerly frustrated game designer.\n\n\n\nThe only \"failed\" print I've had so far is a space marine\nminiature that requires a crazy amount of support, and\nI had not realized the default supports are horrible. I haven't\ntried printing it again yet, because the software is hazing me,\nbut that's annoying. Right now, 3D printing is still in WAY too primitive a state\nfor an \"Apple of 3D printing\" to even make sense. But this thing is so much better\nthan anything I've owned or used that it's incredible.\n\nI just sketched out a new game and I am so excited.\n\n## Final Note\n\nI bought the AMS bundle, which includes a device for switching materials automatically.\nIt's worth having even if you only print in one color at a time. (Printing multi-colored models is very\nwasteful because each time the printer changes color it has to rewind the current\ncolor, then waste a certain amount of the new color, before continuing—even allowing\nfor the minor gain you might get from doing the next layer's portion of the current\ncolor before swapping, the difference in material consumption between one and two\ncolors is enormous.)\n\nThat said, the AMS is fantastic just for being able to swap from one material to\nanother between prints. Even so, it's not without issues. E.g. I believe it's supposed\nto automatically identify materials if I get them from Bambulabs, which I did, but it's\nnot working thus far. 
I definitely got it because I want to produce multi-colored\nmodels, but where possible I'm printing in a single color at a time and planning to\nsnap things together.\n\nAnyway, best 3D printer ever.\n\n",date:"2025-01-03T14:19:55.667Z",summary:"Unboxing a BambuLabs P1S 3D printer was a surprisingly involved process, and the journey didn't get much easier from there. This Chinese contender for the \"Apple of 3D printing\" boasts impressive hardware, but the software experience can be pretty frustrating. From fiddling with hidden allen keys to battling Bluetooth connectivity, the setup was a minor ordeal, and navigating the app was a test of patience. But the truly game-changing moment? Snapping together perfectly printed hex tiles on the first try.",author:"Tonio Loewald",_created:"2025-01-03T10:40:28.102Z",_modified:"2025-01-23T19:18:17.360Z",_path:"post/path=bambulabs-p1s-initial-review"},"page/path=blog":{path:"blog",css:"",imageUrl:"",prefetch:[{regexp:"^\\/(([\\w\\d]+\\/)*)([\\w-]+)\\/?$",path:"post/path=[3]"}],tags:["public"],source:"",title:"",description:"",_path:"page/path=blog"}}