` web-component.",date:"2025-06-03T22:24:57.708Z",keywords:[],summary:"This article introduces tosijs, a highly opinionated front-end framework designed to radically simplify complex web and desktop application development. Unlike frameworks that complicate simple tasks or rely on inefficient virtual DOMs, tosijs leverages native Web Components, direct DOM manipulation, and a unique proxy-based state management system to minimize boilerplate, boost performance, and enhance maintainability, allowing developers to achieve more with less code.",author:"Tonio Loewald",_created:"2025-05-31T07:01:44.486Z",_modified:"2025-09-13T16:53:51.602Z",format:"markdown",_path:"post/s51blf4tsx65"},{keywords:[],title:"Squircles",path:"squirrels",content:"\n \n \n\nProducing squircles is not especially easy. There's a number of Youtube tutorials on the subject.\n\nThing is, once you've seen a squircle, it's hard to go back to rounded rectangles… they just look so much better because of the way the curve straightens.\n\n \n\nI should note that in the diagram above the two figures have the same \"bevel radius\". In practice with a squircel you'd use a larger bevel radius.\n\nAnyway, **Amadine** doesn't do them perfectly, but it does offer elliptical curved corners as a beveling option and it's damn close.",date:"2025-05-05T16:14:46.879Z",summary:"Tired of those boring rounded rectangles? Squircles are the next evolution in smooth, visually appealing shapes, and while perfect execution is tricky, Amadine's does a nice job with them.",author:"Tonio Loewald",_created:"2025-05-05T12:15:00.202Z",_modified:"2025-06-03T20:51:23.168Z",format:"markdown",_path:"post/nh3jt0g2b6wl"},{keywords:[],title:"Blender Gets Real",path:"blender-gets-real",content:" \n\nThe Oscar for best animated movie went to Flow , a 3d animated movie made with Blender by director [Gints Zilbalodis](https://www.imdb.com/name/nm6478535/?ref_=tt_ov_dr_1).\n\nNot only is a tiny team beating out the likes of Pixar and Dreamworks an incredible achievment on its own, \nit's also a big win for open source and anyone with a great idea and limited access to financing. \nThis is the first Oscar-winning animated movie to be made without dedicated engineering support \n(e.g. Pixar uses tons of custom software and has two engineers for every artist). These folks had a total budget $4M.\n",date:"2025-03-26T15:04:54.659Z",summary:"Flow, the Blender-animated film, took home the Oscar for Best Animated Feature. But it's more than just a win for a small team; it's a monumental victory for open-source software and anyone with a vision and a limited budget.\n",author:"Tonio Loewald",_created:"2025-03-26T11:54:52.605Z",_modified:"2025-03-26T15:05:04.371Z",_path:"post/5lxsjwunwnjb"},{keywords:[],title:"The future's so bright… I want to wear AR glasses",path:"futures-so-bright-i-want-to-wear-ar-glasses",content:" \n\nThe collapse of Democracy, loss of trust in institutions,\nthe corruption of social norms, the rise of ignorant bigotry and wilful \nstupidity. All of this sucks.
\nIt also sucks that the dictator of Russia thinks he's some kind of historic\ntsar destined to have \"the great\" after his name, if he can only commit enough\npointless atrocities, or at least keep his population distracted from his own \nabject incompetence as a leader. Imagine if he'd spent all that oil money\non actually fixing the Russian economy?
\nIt also sucks that the US electorate has chosen to elect a felon, fraudster, and sex predator to its highest office, aided by corrupt justices in its highest court, and a bunch of weak-minded, weak-willed fools in its Congress.
It's all a huge shame, since technology is making incredible strides\nand it's incredibly exciting. Sure, we don't have Jetsons-style aircars, but \nhere's a list of stuff we do have that's frankly mind-blowing.
\nTricorders, Datasticks, Wrist Computers \nThe iPhone, Apple Watch, and Android-based equivalents are better and more \nuseful than pretty much any SF device of the same kind. An iPhone strapped to\nyour wrist would be a superior device (ignoring ridiculous magic) to any\nsimilar device shown in SF. Even futuristic \"post iPhone\" devices shown in\nSF (e.g. the glass slates in The Expanse) don't really match modern smart\nphones.
\nThe communicators in Star Trek: The Next Generation didn't even have call waiting,\nwhich was something we had on regular phones at the time.
\nSolar Power \nI don't know if you've been paying attention, but the price of solar power is\nnow so low that the only thing stopping it from simply replacing all other \npower sources is the fact it isn't sunny everywhere all the time. If we just \nhad a grid that could send power from where it's sunny to other places (and\nit's being built) and conveniently store and recharge power on things that\nneed to move, there'd really be no reason to generate power any other way.
\nYes, climate change is bad, but if we're not actively stupid we already have\nall the technology we need to solve the problem both in the near term (build\nmore modern nuclear power plants) and the long term (create a worldwide solar grid).
\nOh yeah, and we seem to have solved the population crisis so thoroughly that we might need\nto encourage folks to have more kids at some point. Maybe not treat women (or\nbiologically female persons if you prefer) who have kids as third class \ncitizens? I dunno.
\nNuclear Power… Fusion Power? \nWe can make nuclear power plants that are safer and have a smaller environmental\nimpact than any other form of power generation. Right now, it's not cheap, but\nnuclear power is exactly the kind of technology that would get super cheap if\nwe just scaled it.
\nNever mind, fusion seems to have gone from being \"fifty years away\" to actually \ngetting serious private investment. Someone might actually get it to work!
\nBattery Technology \nElectric cars are actually the least interesting thing that we've gotten from\nimprovements in battery technology. Moore's Law predicted we'd have cheap\nsupercomputers that could fit on our wrists by now, but it didn't say they'd\nbe able to run all day on battery and have a fast wireless network connection.
\nFor quite a while, Moore's Law actually seemed to be saying the opposite. A\nmodern gaming PC consumes an embarrassing amount of power compared to, say,\nthe very expensive graphics workstation I had custom made for myself back in\n2001.
\nInsanely Good Cellular Networks \nWe have a very fast cable connection here in Oulu, Finland (2h south of the\nArctic Circle by car…). But when it wobbles and I switch to sharing my iPhone's\nunlimited data plan, the network gets better.
\nAI (sort of) \nOK, it's not really intelligent, but it does pretty much ace the Turing Test,\nso while AGI may be a big con, the fact is we've now got something that passes\nthe Turing Test and can run locally on your laptop or phone.
\nCan it think? No. Can it conduct an \"intelligent\" conversation? Sure, it may\nmake shit up or hallucinate, but find me a person who doesn't sometimes do\nthat.
\nAnd if you're a programmer, the interface for these things is trivial. \nYou pass it a bunch of text and you get back a bunch of text. For advanced\ncases you can add images and so on. It's so much easier than, say, displaying\na window with \"hello world\" in it was on a 1984 Mac.
\nTo give an example, I have an AI-powered tool for my blog to create summaries\nof my posts, and the API is not much more than:
\nconst post = ... // get the post source\nconst response = await ai(`Here is a blog post I've written in markdown format, please write a blurb for it or extract a key paragraph. ${post}`)\nconst summary = response.text\n
\n\nSure, it doesn't write very well, but it was super easy to implement.
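\nFor the curious, an ai() helper like the one above doesn't need to be anything exotic. Here's a minimal sketch of how such a helper might be implemented, assuming an OpenAI-style chat-completions endpoint (the endpoint, model name, and environment variable here are illustrative assumptions, not necessarily what my blog actually uses):\n\n```\n// hypothetical sketch of an ai() helper like the one used above\nasync function ai(prompt) {\n  const response = await fetch('https://api.openai.com/v1/chat/completions', {\n    method: 'POST',\n    headers: {\n      'Content-Type': 'application/json',\n      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,\n    },\n    body: JSON.stringify({\n      model: 'gpt-4o-mini',\n      messages: [{ role: 'user', content: prompt }],\n    }),\n  })\n  const data = await response.json()\n  // normalize the reply to the { text } shape the snippet above expects\n  return { text: data.choices[0].message.content }\n}\n```\n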
\nOh, and I can right-click and have an AI proofread anything.
\nAR and VR headsets \nThe Meta Quest 3 is amazing and really quite cheap. The Vision Pro seems even\nmore amazing, but not cheap. But both cost peanuts compared to even mediocre \nVR headsets from 10 years ago.
\n3D Printers \nYou can 3D print in plastic, metal, or wood. You can print subtractively\nor additively. The precision of 3D prints is such that you can print parts that\nshould fit together perfectly, and they fit together perfectly.
\nAnd this is no longer for toys. Even with inexpensive printers, you can print \ncar and motorcycle parts or entire drones (aside from the circuits and motors).
\nAnd the devices to do this can be bought for the price of a high-end power tool.\nIt's nuts.
\nVideo Conferencing \nYou can do business from anywhere via video conferencing on laptops, phones,\nand TVs. In fact it's so common, we've seen several generations of social norms\nevolve (e.g. today, there's an unspoken threshold at which almost everyone turns\noff their camera).
\nShared Office Space \nWeWork may have been a huge scam, but the idea remains brilliant and there are\nplenty of people offering shared office space sustainably today.
\nThe business I founded in 2001 had to commit to a multi-year AUD $8000/month \nlease. If we had shared office space we'd have had $100k/year more to spend on \nanything we needed… or just salaries.
\nAnd don't even talk about the horrific price we paid for a 56kB ISDN connection.
\nCube Sats \nGetting to be an astronaut has never been a reasonable goal for most people. But\nfor the price of a new Porsche 911, BMW M5, or Tesla Cybertruck, you can launch \nyour own satellite.
\nTiny, Customizable, Powerful, Embeddable Computers \nYou can get fully functional computers like the Arduino or Raspberry Pi for \nalmost nothing, and program them to do AI, drive displays, take pictures, \nyou name it.
\nSo, if you want to build your own satellite, it's not that big a deal to hook\nup a radio to a tiny computer and a fancy camera, solar cells, batteries, etc.\nand you've got your own spy satellite. And quite a few startups have done\nbasically this.
\nVery Fast Trains \nYou might not know this if you live in the US, but in many countries outside the\nUS, trains are so cheap and convenient that it's hard to justify driving long\ndistance by car. In Finland the intercity trains can reach 200km/h—you simply\ncan't get places as fast in a car—and the cities tend to be highly walkable.\nIt doesn't hurt that gas is highly taxed, but if you had a choice of driving or\ngetting there quicker and being able to work and go to a restaurant while \nzooming through the landscape at 150km/h, would you drive?
\nIt would be great if the WiFi on the trains were a little better. I mean, \nsometimes I have to wait a few minutes for the network to be quick enough for\nme to push a PR.
\nGenetic Engineering \nYou can literally buy a kit for very little money that allows kids to use\ngene shears to make a plant glow in the dark. This isn't even all that recent.
\nMore recently, there was this worldwide pandemic. It was in the news.
\nVery shortly after COVID-19 was first detected, scientists in China were able \nto sequence the virus's genome and publish it. Other scientists at Moderna \ndownloaded the genome and tailored a highly effective vaccine against it over \na weekend. While it needed about a year to pass safety testing, the team \nthemselves were sufficiently\nconvinced of their technology that they all inoculated themselves immediately.\nIt seems highly likely to me that a lot of Trump's bravado about the pandemic\ncame from him and those close to him having early access to these vaccines.
\nSoftware Development is Crazy Easy \nAnd no, I'm not talking about using AI to write a to do list manager.
\nI literally wrote a utility program yesterday using npx create-xinjs-blueprint
\nin less time than it would have taken me to do the same thing using \nRealbasic.
\nI actually have a stack that lets you build web-apps top-to-bottom, desktop \napps, and tiny utilities as easily as I used to be able to in the 1990s. You \ncan't draw the user interface (which is the final missing piece) but everything \nelse is as easy or easier—I just need to wrap some convenience tooling around\nit and write documentation. So much documentation.
\nAir Cars \nI lied. You can actually buy air cars now.
\n",date:"2025-02-04T17:57:55.814Z",summary:"So much bad news right now…\n\nIt's all a huge shame, since technology is making incredible strides and it's incredibly exciting. Sure, we don't have Jetsons-style aircars, but \nhere's a list of stuff we do have that's frankly mind-blowing.",author:"Tonio Loewald",_created:"2025-02-04T17:49:54.240Z",_modified:"2025-02-04T18:07:11.527Z",_path:"post/2n8p4qfong3r"},{keywords:[],title:"Contrary to popular belief, AI may be peaking",path:"peak-ai",content:" \n\nOn the one hand, they're running out of human-generated text to train on.\nOn the other hand, they're thinking of mining the text in online videos and\nusing \"synthetic content\" to get more data.\n\nIn 2005, if you showed someone who hadn't used Google how good Google was and \nsaid, \"and it will only get better over time\", they would have been blown away \nand nodded numbly in agreement.\n\nInstead we got SEO-hacking and Google AdWords.\n\nAs with most things tech, including the flaws, I'm an early adopter. Back in\nthe late 90s when my father was diagnosed with prostate cancer, I searched the\ninternet for information of prognoses and cures. Literally 90% of the results\nmentioned \"proton beam therapy\", and they all seemed pretty reputable. I think\nAltavista found something like 50,000 web pages on it.\n\nProton beam therapy is a real thing, and while there are\npages devoted to it on most big health sites, it's not even explicitly \nmentioned in the Mayo Clinic's page of prostate cancer treatment ).\nA lot of the time, it doesn't really need to be treated.\n\n## If it's valuable, it's worth perverting\n\nThe reason 2005 or so was \"peak google\" was that Google started out as an end-run\naround the sort of \"SEO-hacking\" that had been successful before Google existed.\nOriginally, search engines tended to trust that websites were about what they\nsaid they were about. Then they got clever and started looking at the text\non the site.\n\nSo, in the late 90s, a search engine assumed that if the keywords for your \nweb page (which you created) included \"best\" \"prostate\" \"cancer\" \"treatment\"\nthen, to quote Stefon , \"look no further\".\n\nA more sophisticated search engine might actually decide to see if the words\non the page also contained those words. And that was about it. It was common\npractice in the mid aughts to include click-bait text in invisible divs on pages\nwhich tricked most search engines.\n\nGoogle, on the other hand, basically tried to figure out if sites that seemed\nto be about X pointed to your page, which suggested your page, too, was about X.\nAnd then the link-farming and other hacks began, and combined with the business\nmodel Google chose to adopt (inserting sponsored search results assigned by\nauction above actual search results), Google got worse and worse.\n\n## AI is just as hackable, and it's already happening\n\nIgnore the placeholder image! This video is directly on point (and discusses \nthe article Adobe’s Generative AI Jumps The Shark, Adds Bitcoin to Bird Photo ).\n\nVIDEO \n\nAI's generative tools used to work pretty well most of the time, but they're\nstarting to hallucinate. In the linked example, a photographer deleted an ugly\nhighlight and Photoshop inserted a bitcoin logo. Maybe this is intentional\ncontent-flooding by someone, or maybe it's unintentional. 
The fact is that \nAI-generated content produced with ill intent has infected the AI Adobe \npresumably pretty carefully husbands.\n\nI see evidence of unintentional hacking all the time. When software platforms\nget successful, there's a network effect. If you want your rank on StackOverflow\nto go up (for some reason) then answer questions about that framework. If you want\nto burnish your web dev cred on Medium, write articles on it, and so on. As a\nresult of this, the vast majority of front-end code on the public web is\nReact-related, often tacitly assuming React imports, declarations, and patterns \nare being used. This is so pernicious that it's almost impossible to convince\nChatGPT not to write code like this. I ask it to write vanilla js, it writes\nReact. I ask it to read my documentation, it writes React.\n\nAnd it's not good React code either. (Hilariously, I've heard OpenAI has just opened up\na ton of ReactJS front-end dev positions.)\n\nStanford \nconducted a study showing that ChatGPT outperformed doctors in \nmaking diagnoses based on patient histories and test results. Even if \ndoctors have healthy skepticism \nabout this result, it's been widely reported, \nand, given these stories, people are going to use ChatGPT, just as they \ncheerfully used WebMD for years when everyone told them not to.\n\nConvincing doctors to change their prescribing habits is difficult and expensive.\nIn most civilized countries (including the US until the late 90s / early 2000s, \nwhen Big Pharma found a loophole) it is illegal for drug companies to advertise\nprescription drugs to consumers. Pharma sends reps to doctors, hospitals, and\npharmacists to convince them to prescribe or dispense their drugs using means\nboth fair and foul. It's a big business. Just writing software to help those \nreps market a few drugs for a single company in Australia paid me handsomely for\na decade.\n\nNow imagine you can just generate content that is designed and A/B-tested to\ninfluence AIs to recommend your drug. Would you do it? More importantly, would\na multi-billion dollar company do it? A company that already spends buckets of money \non pens, post-it notes, cardboard signs, \"medical conferences\" in tropical \nparadises, cruises, politicians, 24-year-old communications grads who look \nincredible in stilettos and pencil skirts, TV ads, mentions in movies and \nmedical dramas. I mean, they have some ethical standards, I'm sure. We just \nhaven't found them yet.\n\nIt's easy to dumb down the populace by destroying the education system, \ninventing reality television and social media, etc. etc., but AIs can do it faster\nand in real time.\n\n\nAnd I'm talking about humans using AIs to corrupt other AIs, test whether it's working, and then doing the stuff that works more and faster and harder. This is what SEO did to Google.
\n\n## Curation is a Partial Fix\n\nI work for a company that processes invoices using custom AI models. The training\ndata is highly curated and so everything basically just tends to get better over\ntime. Find a mistake? It probably originated in the training dataset, so we go\nfix the training data and then see if the retrained AI still makes the mistake.\nBut we (a) control the data and (b) know what the correct answer is. This is\nvery much not how LLMs work. \n\nThe people who build LLMs kind of control the data, but their goal is\nquantity first, quality second, and they have no idea what the question is, let alone what the correct answer is. The answer you'll get is just some kind of statistically generated likely answer that the random contributors to the dataset might give.\n\nThis isn't just a problem with generative AI, although it seems to be an \nespecially bad problem for generative AI. When it comes to front-end coding,\n\"AI\" is currently \"artificial stupidity\". This is what happens when curation \nfails at scale.\n\nAll of this doesn't fully justify my clickbait title. Obviously, AI (and software as a whole) will continue to progress. But the thing that's currently generating all the hype—LLMs and generative AIs powered by all the content they can scrape—those may be peaking as they themselves are used to pollute the very resource they depend on at scale.\n\n",date:"2025-01-21T19:47:46.193Z",summary:"Is artificial intelligence actually getting *smarter*, or just more easily manipulated? This post delves into the surprising ways AI systems can be tricked, revealing a disturbing parallel to the SEO shenanigans of the early 2000s. From generating dodgy medical advice to subtly pushing specific products, the potential for AI to be used for nefarious purposes is not only real but the effects are already visible.",author:"Tonio Loewald",_created:"2025-01-21T08:45:58.238Z",_modified:"2025-02-03T10:30:18.922Z",_path:"post/pcczhmzc23hk"},{keywords:[],title:"Large Language Models — A Few Truths",path:"large-language-models-a-few-truths",content:" \n\nLLMs and derivative products (notably ChatGPT) continue to generate a combination of excitement, apprehension, uncertainty, buzz, and hype. But there are a few things that are becoming increasingly clear in the two-and-a-bit years since ChatGPT 3 came out and, basically, aced the Turing Test without (as Turing had pointed out when he proposed \"the Imitation Game\") necessarily thinking.\n\nThe Imitation Game asks if a machine can imitate the behavior of someone who thinks (whatever that means) by reducing that to one specific behavior, carrying on a conversation.\n\nJust as a Skinnerian behaviorist might suggest that \"I can't tell if you're in pain, but you seem to be exhibiting pained behavior\", Turing basically said \"I can't tell if a machine can think, but I can test whether it seems to exhibit a specific thinking behavior.\" He's engaging in classic analysis: I can't solve the general problem, but I can identify a simpler, more tractable problem and then we can try to solve that. But almost everyone misses the step where Turing said the general problem is worse than difficult—it's ill-defined.\n\n## LLMs aren't thinking, or reasoning, or doing anything like that\n\nYou can quickly determine that LLMs cannot, as a class of object, think by looking at how they handle iterative questioning. 
If you ask an LLM to do something, it will often (a) fail to do it (even when it would be extremely easy for it to check whether it had done it) and then (b) tell you it has done exactly what it was asked to do (which shows it's very good at reframing a request as a response).\n\nThis should be no surprise. The machines are simply manipulating words. They have no more understanding of the connection between, say, the process of writing code and seeing the results than they do of the connection between Newton's equations of motion and the sensations one feels when sitting on a swing.\n\nSo when you say \"hey can you write a hello world application for me in Rust?\" they can probably do it via analysis of many source texts, some of which quite specifically had to do with that exact task. But they might easily produce code that's 95% correct but doesn't run at all because, not having any real experience of coding, they don't \"know\" that when you write a hello world program you then run it and see if it works, and if it doesn't you fix the code and try again.\n\nThey are, however, perfectly capable of reproducing some mad libs version of a response. So, they might tell you they wrote it, tested it, found a bug, fixed it, and here it is. And yet it doesn't work at all.\n\nAnd that's just simple stuff.\n\nAnd look, for stuff where \"there is no one correct answer\" they generally produce stuff that is in some vague way more-or-less OK. If you ask them to read some text and summarize it, they tend to be fairly successful. But they don't exhibit sound judgment in (for example) picking the most important information because, as noted, they have no concept of what any of the words they are using actually mean.\n\nAll that said, when a process is essentially textual and there is a huge corpus of training data for that process, such as patient records with medical histories, test results, and diagnoses, LLMs can quickly outperform humans, who lack both the time and inclination to read every patient record and medical paper online.\n\n## There is no Moat\n\nThe purported value of LLMs and their uses is a chimera. No-one is going to suddenly get rich because \"they have an LLM\". Companies will go out of business because they don't have an LLM or workflows that use LLMs.\n\nLLMs will doubtless have enormous economic impacts (probably further hollowing out what is left of the Middle Class, since most people have jobs where \"there is no one correct answer\" and all they have to do is produce stuff that is in some way more-or-less OK… Just consider asking a question of a sales person in an electronics store or a bureaucrat processing an application based on a form and a body of regulations—do you think a human is better than the average AI? 
Given a panel of test results and patient history, general purpose LLMs already outperform doctors at delivering accurate diagnoses in some studies).\n\nBut UC Berkeley's Sky Computing Lab—UC Berkeley having already reduced the price of operating systems to zero by clean-room-cloning AT&T UNIX and then open-sourcing it all, basically kickstarting the open source movement—just released Sky-T1-32B-Preview, an open source clone of not just ChatGPT but the entire \"reasoning pipeline\" (which OpenAI has not open-sourced), which it trained for $451.\n\nSo if you just invested, say, $150M training one iteration of your LLM six months ago, the value of that investment has depreciated by about 99.999997%.\n\nAnd the sky-high valuations of nVidia are predicated mainly on the costs of training models, not running them. People don't need $1B render farms to run these models. Very good versions of them can run on cell phones, and the full versions can run on suitably beefed up laptops, which is to say, next year's consumer laptops.\n\n> I should add that while Sky-T1-32B-Preview allegedly outperformed ChatGPT-o1 on a battery of tests, I played with a 10GB quantized version on my laptop just now, and it produced far worse results, far more slowly than Meta Llama 7B.",date:"2025-01-17T20:25:13.992Z",summary:"LLMs, like ChatGPT, excel at manipulating language but lack true understanding or reasoning capabilities. While they can produce acceptable responses for tasks with no single correct answer, their lack of real-world experience and understanding can lead to errors. Furthermore, the rapid pace of open-source development, exemplified by projects like Sky-T1-32B-Preview, suggests that the economic value of LLMs may be short-lived, as their capabilities can be replicated and distributed at a fraction of the initial investment.",author:"Tonio Loewald",_created:"2025-01-17T09:54:17.152Z",_modified:"2025-06-26T13:31:50.273Z",format:"markdown",_path:"post/k1p7e2gn552k"},{keywords:[],date:"2025-01-17T19:55:24.473Z",summary:"ChatGPT excels at mundane coding tasks, akin to a bright intern who reads documentation and StackOverflow but lacks creativity and testing. While useful for automating repetitive tasks, its code requires refinement and testing.",path:"adventures-with-chatgpt",author:"Tonio Loewald",_created:"2024-08-27T07:57:31.670Z",title:"Adventures with ChatGPT",_modified:"2025-01-17T10:14:49.289Z",content:"## Or, how I learned to stop worrying and just teach ChatGPT to code with xinjs
…\n\n \n\nI've been slow to jump on the ChatGPT bandwagon in large part because a lot of the code I write is very hard and somewhat novel, and ChatGPT isn't really good at that kind of thing. What it's really good at is grokking lots of examples, including documentation, and then adapting them to a specific request, often with mistakes.\n\nE.g. I have gotten various LLMs to write me code to do something very mundane, such as traverse all the documents in a directory and pass them through a function in Node.js. This isn't hard to do, but it's tedious and time-consuming, and in the end, the code I would have written wouldn't be markedly superior to something any decent coder would write.\n\nChatGPT (et al.) excels at this kind of task. Think of it as a pretty bright intern who will read any amount of source code and documentation and delve into StackOverflow to figure out how to solve a problem, but won't come up with anything new and won't actually test their code at all before handing it to you.\n\nNow, I'm sure there are workflows which you can fairly easily assemble that would get the resulting code, try to wrap it in tests, try to run it, and so forth, and then iterate until they get through the flow without exploding.\n\n## OK, so I have an intern!\n\nYesterday I decided to play around with ChatGPT because it had been a minute.\n\nThe first thing I tried to do was get it to help me improve the TypeScript declaration of the xin proxy in xinjs by explaining what it was intended to do, and it seems to me that ChatGPT did a great job there, although the code it produced was a toy, and I need to spend some time figuring out if I can apply the strategy to the actual code.\n\nIn any event, a lot quicker and more effective than the time I've spent trying to spelunk the TypeScript documentation, miscellaneous blogs, or StackOverflow.\n\nSo, I got curious.\n\nOne of the suggested prompts was \"make me a personal website\" or something along those lines. So I tried it and it was… fine.\n\nThen I decided to explain the way xinjs's elements proxy works, and asked it to write the code using elements. Here's what I told ChatGPT:\n\n\nI'm using a library that provides a proxy named elements
. Its properties are element factories, so elements.div()
produces an HTMLDivElement.
\nThe factories will take their arguments as children (strings are treated as text nodes) or bags of properties (e.g. {class: 'foo bar', title: 'baz'}
will add those attributes.
\nE.g. this JavaScript
\nimport { elements } from 'xinjs'\nconst { label, span, input } = elements\n\n
document.body.append(label(span('Date'), input({ type: 'date' })))\n
\nWould add the following to document.body
\n<label>\n <span>Date</span>\n <input type=\"date\">\n</label>\n\n
Could you rewrite the HTML code using the elements proxy?\n
\n \n\nAnd ChatGPT immediately produced this:\n\n```\nimport { elements } from 'xinjs';\n\n// Destructure the element factories from the proxy\nconst { header, nav, h1, section, a, div, p, ul, li, img, footer } = elements;\n\n// Create the HTML structure using element factories\nconst pageContent = div({ class: 'container' }, [\n section({ id: 'bio', class: 'bio' }, [\n h2('About Me'),\n p('I am a software developer with over thirty years of experience. My expertise spans full-stack development, with a career-long passion for usability and creating intuitive user experiences.')\n ]),\n section({ id: 'portfolio', class: 'portfolio' }, [\n h2('Portfolio'),\n h3('Technical Projects'),\n p('Here are some examples of my technical work:'),\n ul([\n li(a({ href: '#' }, 'Project 1: Description')),\n li(a({ href: '#' }, 'Project 2: Description')),\n li(a({ href: '#' }, 'Project 3: Description'))\n ]),\n h3('Artistic Projects'),\n p('Some of my artistic work:'),\n img({ src: 'example-artwork.jpg', alt: 'Example Artwork' })\n]),\n...\n```\n\n> I should note here that there's a major error in this fragment I didn't pick \n> until I started proof-reading this post, weeks after starting it. Code from\n> LLMs is rife with idiotic errors that often aren't immediately apparent.\n\nI got so excited I tried explaining how xinjs's Component class works to see if I could get it to produce web-components the xinjs way. This took a few attempts, and one thing I learned from this is that ChatGPT seems better at learning from simple, clear examples than more complex examples.\n\nI started out by pointing it at an example in GitHub and asking it to build the components along the same lines. This was a disaster, especially in that it looked sorta, kinda right but wouldn't have worked at all.\n\nI spent a bunch of time refining my prompts, and the most helpful prompts provided focused examples, e.g. I used this prompt after the third or fourth attempt to explain how elementCreator
was a static method of Component
that did x and y, and it produced incorrect code with incorrect explanations of what it was doing.\n\n\n No. Here's a simple example that should help.\n\n export class MyThing extends Component {\n content = () => div('my thing is here')\n }\n\n export const myThing = MyThing.elementCreator({tag: 'my-thing'})\n
\n \n\nAfter a few prompts like this, ChatGPT provided pretty flawless results. But a\nlot of the time it really feels like you're just training a conscientious but\neager fool to imitate examples. It's kind of like how you wished regexp worked,\nwith the understanding that, just as with regexp, you really want to verify that\nit didn't blow up in some subtle but exciting way.\n\n## Aside: \"artificial intelligence\"? it's not even artificial stupidity…\n\nIf you're just dealing with text, it can take a while using ChatGPT to realize \nthat ChatGPT isn't \"intelligent\" at all. It is predictive text completion with\nan insane knowledge base, but it simply doesn't understand anything at all.\n\nBut if you start working with image generation, it's immediately obvious that\n(1) the image generator doesn't understand what it's being asked to do, and\n(2) ChatGPT doesn't understand the image it's been given.\n\n \n\nIf you look at this example (and this is literally its response to my initial\nrequest) it does a great job of transforming the prompt into a concept and\nthen generating a prompt to produce that concept. The text portion of the \npipeline is fabulous, and it really feels like it understands. The fact that the\ntext engine is not even stupid doesn't become immediately apparent.\n\nBut the image is just horribly wrong in so many ways, and as I iterated on\nthe prompt trying to improve the results it just got worse and worse. E.g.\nI never even suggested inserting a React Logo but it started putting them\neverywhere. (Like a lot of programming tools out there, ChatGPT just freaking\nloves React.)\n\nSo, the image generator just kind of free associates on the input text and\nproduces something but it doesn't know that, for example, you can't push\nsomething up a hill if it's already at the top of the hill. It doesn't know\nthat the stuff on the hill and the stuff on the boulder need to be different\nthings and just puts anything you ask for in both places or neither.\n\n",_path:"post/ui4rtvaqo6vv"},{keywords:[],title:"Apple Intelligence—Image Playground",path:"apple-intelligence-image-playground",content:" \n\nThe first bit of Apple Intelligence™ I've really played with is the new\nImage Playground (beta) that appeared on my MacBook Pro with the latest\nsystem update.
\nThe first things that come to mind about it are:
\n\n- it's laser-focused on producing pictures of people and [bipedal] animals (furries?) and basing\nthem on photos from your Photo library. ChatGPT's DALL-E, by contrast, refuses\nto play with photos of people (in general), assuming that you might be producing\ndeep fakes of politicians or whatever.\n- if it doesn't understand a word, it just ignores it. I wanted to have my\nspace weasel smoking a cigar. Nope. No cigar. Sleeping alligator? It's \nan alligator.\n- it has been trained on corpuses of, I assume, commissioned and art-directed\nimages. The \"animation\" style seems very Pixar-like, while the\n\"illustration\" style seems somewhere between 1980s comic books and\nRoy Lichtenstein. I don't care for it.\n- Apple has \"dumbed down\" the UI to the point where it's very easy to use…\n- …and pretty much impossible to get bad results.\n- It's pretty fast (at least on my rather nicely specced M3 Max).\n\nExample Output \nThis is the default \"animation\" style:
\n \n\nHere's the \"illustration\" style:
\n \n\nConclusions \nRight now, it's a really cute toy. But while it does produce consistently\nlovely results, they're basically all the same—pictures of a single person\nor animal, pretty much front-on, with some minor modifications allowed\n(e.g. \"scarf\"), some general atmosphere / background stuff (\"sci fi\", \"mountains\"),\nand either from head to toe, head and chest, or portrait.
\nI think this has potential if a lot more stuff is added. It's reliable \nand fast—it just doesn't do a whole lot.
\n",date:"2025-01-15T18:14:38.350Z",summary:"Apple's new Image Playground is focused, and easy to use. If you want to produce cute \"Pixar-style\" people and animals, it quickly churns out consistent, but very limited, results. My M3 Max rendered images in seconds, but right now it's more of a cute toy than a useful tool\n",author:"Tonio Loewald",_created:"2025-01-15T11:14:18.592Z",_modified:"2025-01-15T11:14:47.871Z",_path:"post/xohqdwdgjlxa"},{keywords:[],title:"Bambulabs P1S Initial Review",path:"bambulabs-p1s-initial-review",content:" \n\nThe BambuLabs P1S is, I think, the latest printer from Bambulabs, a Chinese 3D Printer\ncompany that is trying to be \"the Apple of 3D printing\". (Other previous would-be Apples of\n3D Printing would include Formlabs.)\n\nPerhaps \"labs\" is a bad suffix for a company planning to be the Apple of anything.\n\n## Unboxing\n\n \n\nThe printer itself is heavier than I expected (the box was 22kg according to Postti,\nand I believe them). There's a lot of packaging but not more than needed. Included\nin the main box were three spools of PLA (green, red, and glossy white \"support\").\nAll three are standard spools but with maybe 25% of a full load of material. (I\nbought a bunch of different materials at the same time and those are all much\nfuller).\n\nWhile there is a \"quick start\" guide, visible (if not accessible) upon opening\nthe box, once you somehow tear it out multiple layers of padding and\nplastic it assumes you've gotten the printer out of the box, which is no mean feat.\n\nAnyway, having pulled all the packing material out from around the printer and \nsqueezed my arms down around the package and found something I could hold without\nfeeling like I might break something off, I did get the printer out of the box\nand onto a table.\n\nThe instructions for removing packing materials, which is similar to unpacking\na printer or scanner, which often comes with disposable shims and bits of foam\nhidden in its inner workings, are fine once you realize that there is, in fact,\na ridiculously long allen key in the box of stuff, and that it can remove the rather\ninconveniently placed screws you need to remove fairly easily.\n\nI should note that 90% of my issues came from the fact that the quick start\nguide is, like the ones you get with Apple products these days, printed in minuscule\ntext and my eyesight isn't what it once was, so I didn't read the packing list\ncarefully. That would have saved me a few minutes.\n\nEven so, assembling the printer requires access to its back and front and good light.\n\nBut while it probably took me 30 minutes to have it ready to print, it might have\ntaken someone with a less messy office and better eyesight 15 minutes as advertised.\n\n## Software\n\n \n\nIf you have aspirations to be the \"Apple\" of anything, you may want to hire\nan actual UI designer and a proof reader.\n\nSoftware is what lets the Bambu experience down. if you're going to be the Apple\nof anything you need to not suck at software, and the software for this thing \nis a mixture of great when it works, confusing as hell, half-assed, and perplexing.\n\nTo begin with the quick start guide firsts gets you to download the iOS app and\nbind it to the printer. This did not work as advertised and requires you to go\nfind numbers deep in the printer's crappy UI. 
(This is a recurring theme.)\n\nFirst things first, because I couldn't get the App to sync with the printer using\nBluetooth (it kept timing out), I figured out how to print one of the pre-loaded\nmodels (good old benchy) from the printer's small front-panel. (Installing that\npanel was the least well explained step in the setup process—luckily most electronic\nconnectors are designed so that if you look hard at them and aren't a complete idiot\nyou can usually guess the correct orientation—good luck actually plugging the damn\nthings into each other though.)\n\nIn the end I think the big problem was that the app requires you to set up the\nWiFi for the printer on your phone using Bluetooth rather than allowing you \nto configure WiFi on the printer itself. (Probably because entering a typical\nWiFi password on the front panel would be torture, while doing it on your phone\ncould be effortless but is merely nasty.) Because the phone app's dialog is poorly\ndesigned it tried to join the network without a password and then hung.\n\nMeanwhile…\n\nThe benchy test print was both fast and pretty close to perfect. Given that the number\nof good or near perfect prints I've gotten from either of my two previous 3d printers \nis—generously speaking—three, that's a very good start.\n\n## Output\n\n \n\nEventually I solved my software problems. The quick start guide wants you\nto print benchy from the desktop software but the screenshots look nothing like\nthe software I downloaded and the instructions start once you've done stuff I hadn't\ndone. If the software sucked less it wouldn't need instructions, but as it is, the\ninstructions you get aren't adequate and the failure modes are horrific. (E.g.\nbecause I wasn't logged into the cloud service, I couldn't see my printer on the \nnetwork. Adding the printer involved one of two processes, neither of which worked\nand both of which were poorly documented and involved navigating the printer's\nawful front-panel interface. In the end, the solution was to figure out how to log\ninto the cloud, at which point everything Just Worked.)\n\nOh, the first thing the software wanted me to do was detail my printer's configuration\n(which it apparently didn't know). What is the diameter of my hot end? I don't know.\n\n1. Downloading highly-rated models and printing them works fantastically well.\n2. Loading your own models to print is a shit show.\n\nAgain, the software is the problem. First of all, the Mac version (which, judging from\nwhat I've seen, is actually the more popular choice for their target market) is missing\na mysterious \"repair\" function for models. So the software refuses to print your model\nuntil it's repaired with the Windows version of the software or a third-party website.\n\nDudes. Fix this.\n\nSecond, by default supports are disabled and anything requiring supports won't print.\nOnce you find the supports panel, the default supports are horrific (not the \"tree\"\nsupports everyone uses these days) and that basically ruined my second print.\n\nThat said, once I figured these things out, every print I have made (I think I'm up to\nsix, total) has turned out at least 90% perfect (and 100% is unattainable with FDM as\nfar as I can tell).\n\n## Bottom Line\n\nThe main reason I'm interested in 3d printers is tabletop games, and every printer\nI've tried has been a miserable failure until now.\n\nThe first serious thing I tried to print with the Bambulabs P1S was an interlocking\nhex tile. Of course I only printed one because I'm an idiot. 
So I printed it again.\nThe goal was to figure out how much I needed to tweak the model to make them snap\ntogether. They snapped together perfectly the first time.\n\nSo I printed four more in a different color. They all fitted perfectly. I am a very\nhappy formerly frustrated game designer.\n\nThe only \"failed\" print I've had so far is a space marine\nminiature that requires a crazy amount of support and where\nI had not realized the default supports are horrible. I haven't\ntried printing it again, yet, because the software is hazing me, \nbut that's annoying. Right now, 3D printing is still in WAY too primitive a state\nfor an \"Apple of 3d printing\" to even make sense. But this thing is so much better\nthan anything I've owned or used it's incredible.\n\nI just sketched out a new game and I am so excited.\n\n## Final Note\n\nI bought the AMS bundle, which includes a device for switching materials automatically.\nIt's worth having even if you only print in one color at a time (printing multi-colored models is very\nwasteful because each time the printer changes color it has to rewind the current\ncolor, then waste a certain amount of the new color, before continuing—even allowing\nfor the minor gain you might get from doing the next layer's portion of the current\ncolor before swapping, the difference in material consumption between one and two\ncolors is enormous).\n\nThat said, the AMS is fantastic just for being able to swap from one material to\nanother between prints. Even so, it's not without issues. E.g. I believe it's supposed\nto automatically identify materials if I get them from Bambulabs, which I did, but it's\nnot working thus far. I definitely got it because I want to produce multi-colored\nmodels, but where possible I'm printing in a single color at a time and planning to\nsnap things together.\n\nAnyway, best 3D printer ever.\n\n",date:"2025-01-03T14:19:55.667Z",summary:"Unboxing a BambuLabs P1S 3D printer was a surprisingly involved process, and the journey didn't get much easier from there. This Chinese contender for the \"Apple of 3D printing\" boasts impressive hardware, but the software experience can be pretty frustrating. From fiddling with hidden allen keys to battling Bluetooth connectivity, the setup was a minor ordeal, and navigating the app was a test of patience. But the truly game-changing moment? Snapping together perfectly printed hex tiles on the first try.",author:"Tonio Loewald",_created:"2025-01-03T10:40:28.102Z",_modified:"2025-03-09T16:55:15.540Z",_path:"post/wl9ydyiwzvzl"},{keywords:[],title:"Design for effect",path:"design-for-effect",content:" 
\nI needn't have worried, it's just another vaporware announcement, but what's really concerning is\nwhat they've told us what they're now aiming to do.
\nPrepare for Disappointment \nFirst of all, I'm not one of those people who had high hopes for Star Citizen. I certainly had\nhopes for it. It is a \"science fiction\" game in some loose sense. And it is, after all, the \nbest-funded game development in history, that I know of… Who knows, GTA VI might actually be \nbetter-funded at this point since it is virtually guaranteed to make billions of dollar.
\nStill, the \"genius\" behind all this is the creator of the Wing Commander franchise, a series of\nbasically bad space-fighter simulations that had ludicrously complicated controls (each game came\nwith a keyboard overlay along the same lines as professional studio software like Avid Composer)\nand featured pretty much linear \"groundhog day\" stories with badly acted cut scenes.
\nHaving left the game industry to pursue a career making mediocre movies, he returned with\ngreat fanfare to his true love—making overly complicated space flight sims. But now, instead of\nhaving a linear plot which cliched dialog he was switching to some variation of \"simulate \neverything, it will all work out\".
\nAnd so in around 2015 we got three of these things. Elite Dangerous the heir of the OG space\nfighter simulation with a procedural-generated universe and a simple economic system that let\nyou make money from trade, bounty hunting, and piracy. Star Citizen the heir of the OG\ndo a space fighter mission, get a cut scene, repeat empire. And No Man's Sky , a game based\non the idea thar \"if we give players a spaceship, a crafting system, and a procedurally generated \nuniverse, it will be awesome.\"
\nAnd now, nearly ten years later:
\n\nElite Dangerous is a pretty good updated version of Elite \nNo Man's Sky is a pretty good minecraft in space with some missions somewhere. \nStar Citizen is a rudderless pile of bugs and tech demos. \n \nWhat none of these is is a good game. Sure Elite was a good game in 1983, but Elite Dangerous \nis actually quite a bit more tedious than Elite and generally modern games are\na lot less tedious than 1980s games.
\nSure No Man's Sky has more to do in it now than it did in 2015, but that's a\nlow bar.
\nAmd now they have announced Star Citizen 1.0 and it seems to be \"we're only going to\nsimulate crafting and trade and somehow it will become a game\". In fact, Star Citizen 1.0 \nsounds a lot like a simulation where you get to do real, boring work for fake money, and\nwhere your reward is spending real money on a really expensive PC and having it crash constantly.
\nHow did we get here?
\nDesign for Effect \nAll of these people have forgotten a brilliant concept called \"design for effect\" that\nwas used, and possibly coined, by the original designer of Squad Leader which made\nit one of the best board wargames ever. (Advanced Squad Leader , its descendant, is way,\nway too complicated. Don't get confused.)
\nThe basic idea of \"design for effect\" is that rather than try to simulate everything,\nyou figure out what you want the result to be and then design a game whose mechanics\ngive you that result.
\nElite Dangerous comes closest to doing this of the \"big three\" space sims being discussed.\nThe original Elite was about flying a tiny spaceship from planet to planet, trading\nto make money, fighting aliens and pirates (or being a pirate) to survive and make money, and\noccasionally undertaking a mission (usually to blow up a specific enemy ship).
\nSo with these being the desired effects, Elite Dangerous simply built current tech\nimplementations of the gameplay necessary to make things work. The economy doesn't need\nto be realistic, it just needs to create interesting, somewhat predictable, opportunities\nto make money. You don't need to simulate a society where piracy makes sense, you just\nspawn enough pirates to make life dangerous and interesting.
\nNow you have a game.
\nIf you were to simulate a universe there's no guarantee anything interesting will happen\nin it. In No Man's Sky the universe is so big that people have almost no chance of\nbumping into each other. Even if the game supported player dogfighting (it might, for all\nI know) relying on it to generate your gameplay would be very sad. Oh, you won that fight?\nGreat, now you can spend 200h preparing for the next one by mining and synthesizing fuel\nand ammo. Sounds like fun.
\nIt's quite hard to build a really good system for procedurally generating planets with\nresources that are interesting to prospect for and exploit, and then a crafting system\nthat simulates mining them, manufacturing bits out of them, and then making those\npieces assemble into interesting objects one might care about. Making something like\nthat which is detailed and realistic is even harder. The people who are really good\nat stuff like this have better things to do than make games. If you can simulate an\neconomy really well, there are ways of making real money out of that skill.
\nEven if you do a great job of simulating all of this in some way, there's not only\nno guarantee that this will lead to interesting gameplay situations. It's not even\nlikely.
\nAnyway, No Man's Sky is a very boring game to play, and Star Citizen has now\nannounced its new plan it to release something in 2026 or 2027 that sounds a lot like No Man's Sky was in 2015. Given the money and talent on tap, if they were starting from scratch right now\nthat might not be unrealistic. But turning the buggy pile of garbage they have\nand the expectations of an existing player-base into that seems impossible, and to my mind it's\nnot even a desirable outcome.
\n",date:"2024-12-31T10:09:36.000Z",summary:"Star Citizen's 1.0 announcement has me scratching my head. The creators seem hellbent on simulating everything, ignoring a fundamental truth about game design—simulating reality doesn't guarantee good gameplay.\n",author:"Tonio Loewald",_created:"2024-12-31T10:07:42.634Z",_modified:"2024-12-31T10:09:39.153Z",_path:"post/x4ut4tag0iil"},{keywords:[],title:"Taste is the secret sauce",path:"taste-is-the-secret-sauce",content:" \n\nIn 1984 I had my first direct experience of the Mac. One of my takes was that one of the skills I thought\nset me apart from other people, my artistic talent, had suddenly become a lot less valuable. Sure, MacPaint\ndidn't give you the ability to draw, but just doing things like drawing tidy diagrams or lettering (people \nwould ask me to design protest signs and posters just because I could do large lettering) was something\nthis computer let anyone do effortlessly.\n\nI needn't have worried. It turned out that the Mac actually was part of a trend that made artistic talent\nmuch more valuable . The Mac let people with taste produce tasteful stuff much more easily. And it let\nthose without taste produce monstrosities effortlessly.\n\nTo be sure, there are some skills that the Mac made obsolete. People who typeset musical scores were unemployed within a few years. Mathematical typesetters were displaced by the Mac and TEX. Most tech support tasks became irrelevant \nfor Mac users, and people whose livings depended on the inability of ordinary people to use their \ncomputers for writing or creating simple graphics were—rightly—terrified of the Mac, and successfully\nkept it out of most large organizations for decades.\n\nAnd it seems to me that AI is a lot like the Mac.\n\nAI is very good at producing bad poetry, bad art, and bad code. Learning how to hand-hold AI into \nproducing better output, and recognizing when it is conspicuously failing takes taste .\n\n",date:"2024-12-30T18:35:28.600Z",summary:"The Mac, in its early days, seemed to diminish the value of artistic skill. But the reality was more nuanced. It amplified *taste*, making beautiful things achievable for those with taste, and terrible things easy for everyone else.\n",author:"Tonio Loewald",_created:"2024-12-30T18:33:17.167Z",_modified:"2024-12-30T18:35:41.719Z",_path:"post/3y2c5bcp8tyl"},{keywords:[],title:"Acorn 8 is out and it's fine",path:"acorn8-is-out-and-its-fine",content:" \n\nAcorn's latest version has dropped and it's $20 (this is an introductory price ; thanks to the App Store's lack\nof support for upgrades, it's a whole new app and you need to delete the old\none). Still $20 (or even $40) is dirt cheap for my preferred Photoshop \nreplacement.\n\nBut, does it blend?\n\nThe home page describes its new features.\n\n- AI Subject Detection: from my quick experiments it seems about as good as\nthe magic wand, i.e. not great (open the image in a new tab and see how bad a job it selecting the crane).\n- On Canvas Ruler: um, OK it's nice I guess.\n- Live Text Tool: this is surfacing functionality we get in preview.\n- JPEG-XL support: this plays to Acorn's strong suit. It has already replaced\nGraphicConverter as my tool of choice for converting files between image\nformats (notably for creating .webp files).\n- Data Driven Graphics: haven't tried this. 
Maybe useful?\n- LUT Support: this is nice but the included set of LUTs isn't great.\n- Improved export functionality: again, plays to Acorn's strengths.\n- Animated GIF and PNG export: convert layers to frames, which is fine, but\nthe main thing we all want is to handle video and there are dedicated tools\nfor this that Acorn isn't going to compete with.\n\nA couple of other observations.\n\nThe text-on-path tools are very powerful (I like the dedicated circle text tool\nsince that's a very common use case) but the user interface for them\nis quite fiddly. E.g. figuring out how to actually type some text on a path\ntook some doing, and I'm not sure I remember how I did it.\n\nAcorn continues to improve its vector tools, in some ways better than dedicated vector editors such as Graphic and Amadine. I found myself\nworking on a vector project I started in Amadine —\nwhich I reviewed recently —\nbecause Amadine's text tools aren't nearly as good and its\nbooleans turn out to have issues too. Both of these things Acorn 8 did\nwith aplomb.\n\n## Conclusions\n\nAll the stuff that already made Acorn the best of breed remains as good or \nbetter than ever. E.g. the way its non-destructive CoreImage-powered filters\nwork (in terms of speed, usability, and non-destructiveness) just leaves\neverything else in the dust and it's still the only tool other than Photoshop\nwith deep automation capabilities.\n\nI'm really not that impressed by the new features, but it doesn't matter.\nAcorn continues to be the best all-round bitmap wrangling tool on Mac (or\nWindows, for that matter) and if they want to release a new version every few\nyears and sell it for peanuts, I for one will continue to upgrade.\n\n",date:"2024-12-18T18:51:07.199Z",summary:"Acorn 8, a $20 Photoshop replacement, is out! While the new AI subject selection is not fantastic, other features like JPEG-XL and vector tools are solid, though the UI for some elements feels a bit clunky. But ultimately, Acorn continues to reign supreme as my go-to bitmap editor, and I'll happily pay its price—and upgrade.\n",author:"Tonio Loewald",_created:"2024-12-18T12:53:32.624Z",_modified:"2024-12-30T17:17:13.179Z",_path:"post/ucc7x3oj34mk"},{keywords:[],title:"Signing and Notarizing Tauri Apps",path:"signing-and-notarizing-tauri-apps",content:" \n\nIf you're building an app (for Apple platforms) with Tauri \n(or pretty much any similar tool) then you're going to need to set up developer credentials.\n\nThe Tauri website provides instructions \nbut they're kind of ambiguous and needlessly complicated, so having gone\nthrough the process, I thought I'd try to explain a simpler option.\n\n## You'll need XCode and an Apple Developer account\n\nAll of this assumes you're a registered developer and you have XCode installed.\nIf not, then go do that. Tauri's instructions similarly require you to be a\nregistered developer, and you can't really build a Tauri app without installing \nXCode.\n\n## Signing Identity\n\n \n\nRun this command in a terminal:\n\nbilling-report-viewer % security find-identity -v -p codesigning\n
\n\nIf you get something like this, you're good to go.\n\n1) XXXXXXXXXXXXXXXXXXXXXXXXXXX \"Developer ID Application: ...\"\n
\n\n \n\nIf not, in Xcode's Settings dialog, go to Accounts and click Manage Certificates and\nuse the + button to install all the stuff you don't already have (and if you\nhave stuff that's borked, just install new ones).\n\nOnce you've done all this you should be able to run the preceding command.\n\nThe \"XXXXXX\" bit goes inside tauri.conf.json
as\nbundle.macOS.signingIdentity
.\n\n## Notarization\n\nIn the Apple dev portal… \n\n\nNote : you may need to request API access along the way (I did)
\n \n\nGo to Users and Access > Integrations > Team Keys \n\nClick the + button, give your key a name and pick Developer access.\n\nNow download the key and put it somewhere safe on your machine and set\nup the following environment variables:\n\nAPPLE_API_ISSUER=the value under Issuer ID (above the word Active)\nAPPLE_API_KEY=the KEY ID in the table for the file you downloaded\nAPPLE_API_KEY_PATH=the path to the file you downloaded\n
\n\nI just create a package.json
command to export these variables.\n\n",date:"2024-12-18T18:30:52.372Z",summary:"Frustrated with Tauri's convoluted developer credentials setup? This post breaks down the process into bite-sized chunks, using simple commands and visual guides to walk you through the often-confusing world of signing identities and notarization. From finding your correct signing identity to configuring environment variables within your `package.json`, here's a simplified process with illustrations which should help you distribute your installers.\n",author:"Tonio Loewald",_created:"2024-12-18T11:31:03.329Z",_modified:"2024-12-22T17:35:23.948Z",_path:"post/o4ewuf8xvxpt"},{keywords:[],title:"Dogfooding the xinjs stack",path:"dogfooding-the-xinjs-stack",content:" \n\nI'm writing this post on my iPhone 14 Pro\n(i.e. not a big screen) on a train between \nOulu and Helsinki.\n\nFor the last decade or more my blog was hosted\non WordPress installes in a shared server. \nIt cost me about $140 per year (it supported\nmore than just my blog) and ran the standard\nLAMP stack.\n\nAnyway, the annual subscription came due just\nas the hosting provider's servers started\nmisbehaving quite badly and I decided to move\nmy main site over to Google Firebase and\nrebuild it using the stack I was planning to\nrelease as xinie. (There's a placeholder site\nup on the url.)\n\nAnyway, this blog is now my testbed for xinie\nwhich is basically a full stack based on\nxinjs, that incorporates lessons learned\nfrom my work at Airtime, Uber, Google, \nParallel Learning, No No No, and textreceipts.\n\nAnyway, I've been going through recent blog\nposts, finding mistakes, and quixkly fixing \nthem, and it's all bwen so painless that I\ndecided fo try writing a poat from start to\nfinish.\n\nNow if's not quite as slick as the Wordpress\niOS App but then this is aomething I've built\nin my spare time and supporting mobile\nuse was not my top priority.\n\nI'll call it a win \n\n",date:"2024-12-12T18:53:55.832Z",summary:"My WordPress blog, hosted on a dodgy shared server for over a decade, finally bit the dust. So I rebuilt it on Google Firebase using my own full-stack framework, xinie. This post is a test run, straight from my iPhone 14 Pro, and a chance to see how xinie performs in the field. Expect a few typos, but maybe, just maybe, some elegant code snippets too.\n",author:"Tonio Loewald",_created:"2024-12-12T18:45:16.800Z",_modified:"2024-12-12T18:57:18.883Z",_path:"post/242twlcv1jzs"},{keywords:[],title:"Tauri Revisited",path:"tauri-revisited",content:" \n\nElectron has gotten so bloated (and it was pretty \nbloated to start with) and they keep changing the API so every time I try to \nbuild a new version of an old Electron project it feels like I have to do \neverything again from scratch, only now it's more complicated. Ugh.\n\nBefore I used Electron I used nw.js , but the last time\nleft a bad taste in my mouth, and it's just as bloated as Electron. I've also\ntried Flutter , but I don't want to learn Dart and a \nnew UI library. React Native is, of course, React.\n\nAnyway, I tried playing with Tauri again. Tauri is a \nrust-based alternative to Electron that uses the platform-native web client \ninstead of a copy of Chromium, and native code instead of nodejs and javascript)\nFor whatever reason, last time I gave up before getting very far.\n\nIf you don't know what Electron is, it's a framework for turning web-apps or\napps with web-tech user interfaces, into standalone \"native\" apps. 
Under the\nhood, it's Chromium and a web-server running on nodejs. You've almost certainly\nused Electron: VS Code, Slack, and Discord are all built with Electron. (Atom,\nthe precursor to VS Code, was the reason Electron was created, hence the naming\nconvention.)\n\nHello, World as an Electron App is something on the order of 200MB. Slack is\nnearly 500MB, Discord is over 400MB. I'd tell you how big VS Code is, but I'm\ncurrently living without it and using Zed instead.\n\nZed (which is a native app) is about 200MB. Panic Nova, a far richer app written\nusing Apple's own tooling, is 165MB.\n\nAnyway, on Friday, faced with a simple issue of wanting to deploy an internal\nweb-demo without having to set up a new deployment (and explain how to access\nit), I decided to do a quick revisit of Tauri.\n\nFirst thing to note: first class support for [bun](https://bun.sh). Yes!\n\nAnyway, I got what I needed working so fast I was hooked. This is how Electron\nand NW.js used to be before they become horribly complex and \"security\" focused\n(which is a necessity when you're fundamentally shipping a browser and nodejs\nin your package).\n\nSo I decided to figure out if I could call shell commands from the javascript\nside (which is what I need to be able to make my RAW manager app a reality).\n\nI do not know anything about Rust, and the code is pretty ugly to my eyes. But\nI have ChatGPT and I'm not afraid to try it out…\n\nIt took a few hours but I can run processes, wrap them in Javascript \ngenerator functions, and kill them. It's beautiful.\n\nBut, does it blend…?\n\n \n\nIf you're not impressed by a sub 12MB bundle size then I don't know what\nto tell you. I haven't seen executables built with high-level libraries that \nsmall since the early 2000s. The only apps on my machine that are smaller than\nthat are Apple-provided built-in apps that are thin wrappers around OS functions.\n\nThe Calendar app is bigger.\n\n",date:"2024-11-30T14:05:11.368Z",summary:"Tired of Electron's bloat and endless API changes? I dove back into Tauri, a surprisingly lightweight Rust-based alternative, and was blown away. Tauri lets you build apps smaller than a lot of built-in Mac apps, lets you build snappy, secure executables, all while keeping things simple. Plus, the bundle size? Seriously impressive. Read on to find out how I got shell commands working with Javascript, and why this might just be the web-to-native solution you've been waiting for.\n",author:"Tonio Loewald",_created:"2024-11-30T14:05:18.175Z",_modified:"2024-12-12T18:33:29.120Z",_path:"post/tiayw0anhy08"},{keywords:[],title:"Further Thoughts on the Meta Quest 3 and Vision Pro",path:"further-thoughts-on-the-meta-quest-3-and-vision-pro",content:" \n\nI still haven't seen an actual Vision Pro.\n\nPart of the reason is that, thanks to Finnish Migri, I am confined to Finland \nfor the time being. There are no Apple Stores in Finland.\n\nBut, I have had three different Oculus devices, including the Quest 3, and\nit seems like the Quest 3 basically does most of what the Vision Pro does, only\nnot as well.\n\nE.g. text entry on the Oculus is pretty torturous. In the last year there have\nbeen at least three system updates that changed the behavior of the on-screen\nkeyboard. 
Right now, the keyboard is quite nice, except that it can get stuck\nin weird locations.\n\nI've got a folding bluetooth keyboard/trackpad paired with my Quest 3, and \nthis works quite well in the sense that (a) it pairs, and (b) when I hit a key\nthen the signal reaches the device. Unfortunately it's a Finnish (or German)\nQWERTZ keyboard (with dedicated ä and ö keys and a bunch of very common\nnon-alphabetic characters—such as hyphen, slash, and the different kinds\nof parentheses—in unfamiliar locations). Also, it doesn't hold its unfolded\nposition, so while it works on a desk, it does not work in one's lap.\n\nWhile both the Quest 3 and the Vision Pro work very well (as far as I know)\nwhen used as a Mac or PC display, I would like them to work as standalone\ndevices (the Vision Pro's battery is an issue here but, then, I don't have\none). As I understand it, the Vision Pro has the horsepower to run a complete\ndeveloper stack but it doesn't and is basically an XR iPad. The third-party\napp La Terminal is\nan SSH client. That's all well and good but if I need to SSH to another \ndevice I might as well carry a laptop or use a fast internet connection.\n\nBut living web-native is a lot less restrictive now than it was when the\niPhone or iPad were released. Tools like Sandbox basically give you what you need\nfor building web apps in a web browser, and deploying near-native experiences\nin web apps is pretty viable.\n\n \n\nIt's not perfect. But it will do for now.\n\n",date:"2024-11-28T18:41:20.430Z",summary:"XR devices like the Quest 3 and Vision Pro promise a revolutionary way to work and create, but I've found a frustrating reality. They're surprisingly limited in their standalone capabilities, forcing developers to settle for compromises and workarounds. This post dives into the specific pain points I've encountered, from clunky text entry to the limitations of web-native development stacks, and how the current state of XR development stacks falls short of the potential.\n",author:"Tonio Loewald",_created:"2024-11-28T12:41:33.265Z",_modified:"2024-12-12T18:29:47.632Z",_path:"post/efj4nj5y4ebl"},{keywords:[],title:"Amadine is my new favorite vector graphics editor",path:"amadine-is-my-new-favorite-vector-graphics-editor",content:" \n\n> Note that I've since discovered some pretty serious issues with Amadine's\n> booleans and text-on-path tools, so it's back to my old favorites for a while.\n> Hopefully, Amadine will sort out these issues.\n\nIt's not perfect (if you look carefully at the screenshot, you'll see a few \nunnecessary vertices in this boolean) but right now Amadine \nseems to be the best lightweight vector graphics tool in the Mac App Store.\n\nAnd it has a lovely icon!\n\nSince Graphic seems \nto be abandonware, and never quite lost its rough edges,\nAffinity Designer \nfeels bloated, and Sketch \nadded a subscription model and stopped adding features, I've been \nlooking for a new tool for editing UI elements.\n\nAmadine passed my \"create the xinjs XR logo\" test, which all of these apps\n(and several besides) comprehensively failed the last time I looked at the\nlandscape.\n\n## Post Script\n\nAmadine may not be perfect, but the stray vertices aren't its fault. Turns\nout they're in the subtracted object.\n\nAnyway, I went back to the constituents of the boolean and tried to \nrecreate them more cleanly, and Amadine introduces a similar number of \nspurious vertices to the original \"best in class\". 
\n\nSo, while this doesn't make Amadine worse than any of its competitors, \nit does make its superiority more marginal. \n\nStill, Amadine wins on functionality by a whisker, plus it beats\nAffinity Designer on bloat, Sketch on not having a subscription model, and\nGraphic on still being under development.\n\n",date:"2024-11-18T17:04:56.213Z",summary:"Tired of clunky vector graphics apps? Amadine, a new Mac App Store gem, might just be the lightweight, elegant solution you've been searching for. While not flawless, it surprisingly nailed a design challenge others consistently missed. Is this the vector editor we've been waiting for? Find out more in my full review.\n",author:"Tonio Loewald",_created:"2024-11-18T17:04:32.719Z",_modified:"2024-12-18T12:44:31.941Z",_path:"post/1bmtchcg6uxo"},{keywords:[],title:"Note to self—preventing images from triggering layout shifts",path:"note-to-self-preventing-images-from-triggering-layout-shifts",content:" \n\nIt took me a minute to figure this out, so I think it's worth a quick note to my\nfuture self and who knows, it may save someone a minute or two one day.
\nThe question is, how to set an image's size so that it displays at the \nlargest size that makes sense, but doesn't trigger a layout shift when it loads.
\nAssuming you're width constrained, the answer I came up with is:
\n\n- max-width: 100% so it doesn't overflow the available space\n- width: (image.naturalWidth)px so it doesn't exceed its actual width\n- aspect-ratio: width / height so it keeps the right aspect ratio\n\nIf you're height-constrained then just set the max-height
and height
\ninstead.
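\nConcretely, for a width-constrained image this works out to something like the following (a sketch; the numbers are made up):\n\nimg.style.maxWidth = '100%'\nimg.style.width = '800px' // the image's naturalWidth\nimg.style.aspectRatio = '800 / 533' // naturalWidth / naturalHeight\n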
\n\nNote that if you set both width
and height
the max-width
squashes the image,\nand if you use object-fit
it ends up cropped or with empty \"bars\".
\n \nThe way this works in my blog editor is that when it generates markdown it also\ngoes through all the <img>
elements in the wysiwyg editor and if they're loaded\nit sets the attributes of the element.
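\nIn code, that pass looks something like this (a sketch of the idea rather than the blog editor's actual code; editor here is whatever element wraps the wysiwyg content):\n\n// for each loaded image, bake its natural size into the element\nfor (const img of editor.querySelectorAll('img')) {\n if (img.complete && img.naturalWidth) {\n img.setAttribute('width', String(img.naturalWidth))\n img.setAttribute('height', String(img.naturalHeight))\n img.style.maxWidth = '100%'\n img.style.aspectRatio = `${img.naturalWidth} / ${img.naturalHeight}`\n }\n}\n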
\nIt's simple and it works, and now I'm not getting dinged by Lighthouse for layout\nshifts.
\n",date:"2024-11-15T22:09:25.673Z",summary:"A reminder to myself on how to use `max-width`, `width`, and `aspect-ratio` to get nicely sized images that load without triggering unnecessary layout shifts.\n",author:"Tonio Loewald",_created:"2024-11-15T21:54:23.713Z",_modified:"2024-12-12T19:04:06.518Z",_path:"post/vn370g9b38ao"},{keywords:[],date:"2024-11-15T21:10:18.071Z",summary:"CSV. It's a mess. A truly awful format for data, used either through ignorance or stupidity. It is to data storage as the Imperial System is to weights and measures. Why do we keep using it? \n",path:"why-csv",author:"Tonio Loewald",_created:"2024-09-05T15:41:46.007Z",title:"Why CSV?",_modified:"2024-11-28T09:05:32.330Z",content:" \n\nCSV is a terrible formst for data. TSV is\nsuperior in every way , and XML is far more robust (but horribly verbose).\nSomehow, CSV remains popular.\nSoftware engineers seem completely resigned to\nit, often supporting CSV and not even allowing\nTSV, let alone subtly suggesting it by, say,\ndefaulting to it in export options.\n\n## What's wrong with CSV?\n\nthe basic problem with CSV is that it uses\nfairly common characters (comma and the\ninch symbol commonly used as a quotation\nmark) to separate data fields that may include\nboth commas and quotation marks.\n\nThis means that, despite its apparent simplicitly,\nCSV requires a stateful parser to ingest\nand in fact cannot be handled perfectly. \n\nThere is in fact no standatd for CSV.\n\nA quick search for the \"best csv parser for javascript\"\nyields lots of articles listing alternatives. fast-csv
\nhas a bundle-size of 57kB (16kB gzipped).\n\nIt's ridiculous.\n\n## What's great about TSV\n\nTSV uses the ASCII control characters (tab and\nnewline) that were literally designed for the express\npurpose of delimiting tabular data. If you need to\ninclude tab and newline characters within the data,\nthere's a simple standard way to do it.\n\nNot only are there dedicated keys on every keyboard for dealing\nwith TSV, many text editors and word-processors will do a pretty\ngood job of displaying the content of TSVs nicely, and the files\nwill tend to be slightly smaller.\n\nParsing TSVs is so simple that it doesn't really deserve the word \"parsing\". You don't need a library,\nin TypeScript / Javascript it's basically one line of code…\n\nfunction parseTsv(source: string): string[][] {\n return source.split('\\n').map(line => line.split('\\t'))\n}\n
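\nGoing the other way is just as trivial (a sketch; like the parser above, it assumes the fields contain no literal tabs or newlines):\n\nfunction stringifyTsv(rows: string[][]): string {\n return rows.map(row => row.join('\\t')).join('\\n')\n}\n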
\n\nPlease, please stop using CSV.\n\n### Post Script\n\nAt a meeting yesterday I discussed this issue with several of my non-technical (Finnish) colleagues\nand they did not know TSV was even an option, but pointed out that they have huge issues with European\ndecimal points (i.e. commas) in spreadsheet output. So it's even worse in Europe.\n",_path:"post/cuyd49oxcl1z"},{keywords:[],title:"xinjs blueprints",path:"xinjs-blueprints",content:"There's this really cool annoying feature of Chrome's Lighthouse\ntool that nags you about unused code in a project. Here's the report I just\ngenerated from ui.xinjs.net —a site, hosted on Github, that Lighthouse \ngives a score of 100 (desktop) and 89 (mobile).
\n \n\nNow a lot of this stuff is kind of wrong (the fact that a piece of code hasn't\nbeen used yet doesn't mean it won't be used, and it's inefficient to load\neverything in pieces), but underneath it all, when you're looking at real\nprojects (vs. demo sites intended to exercise pretty much everything in a\npackage), dead code stripping works really badly and there are deep issues\nwith the way packaging code handles dependencies.
\nIf you write a package, foo, that uses some stuff from another package named\nbar then there's a pretty good chance most or all of whichever version of bar\nyou used will end up embedded in the distributable version of foo.
\nThis is fine—possibly even good—for small things. E.g. if you use someone's\nleftpad \nfunction in your library and it gets broken upstream in a way that would destroy your \ncode, your code will be fine until you update it, and then presumably your \ntests will discover the issue and you can fix it before you ship a new version.
\nBut when you're writing very popular foundational code, like, say,\nReactComponent
it could mean that in a project that uses many differently sourced\nsubclasses (e.g. UI components built at different times by different people)\nyou may get many copies of the same code (inefficient) or even different versions (scary)\nin the same codebase.
\nBoth b8rjs and xinjs \nhave an additional problem with this because they rely on the \nSingleton pattern \nto manage application state. The basic idea is that if you look at app.user.name
\nyou'd prefer it to be the only app.user.name
. And if you've somehow spun\nup multiple registries, you no longer have a single source of truth .\nThis is both inefficient (waste of memory etc.) and bad (bugs!).
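\nSchematically, the failure mode looks something like this (a pseudo-code sketch; the names are illustrative, and the point is the duplicated module instance, not the exact API):\n\n// copy A of the library, bundled with your app\nconst registryA = stateRegistry()\n// copy B, accidentally bundled inside a third-party component:\n// the same code, but a separate module instance with its own registry\nconst registryB = stateRegistry()\n\nregistryA.app.user.name = 'Jane'\n// anything bound through registryB never hears about the change\n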
\nThe problem manifests in React in a more subtle, but actually far worse, way.\nIf there are breaking changes in React then components built on\none version will not play nicely with one another, in either obvious or (worse)\nsubtle ways. This leads to React projects getting \"stuck\" on certain weird \ncollections of dependencies.
\n## xinjs blueprints\n\nAs xinjs-ui has evolved, it has grown quite big, and \ndead code stripping just doesn't seem to work properly, so if you want to use \none or two components from xinjs-ui
you end up with all of it in your\npackaged code. The more I've looked into it, the gnarlier it seems to get.
\nSo, I wanted to allow components to be built in as lightweight a fashion as\npossible, and the solution I came up with was to turn the class definition\ninside out. Instead of importing the base Component
class and subclassing\nit, export a function that expects to be passed the base Component
class\nand anything else that seems handy (since none of it costs anything). Then\nyou just import the function, pass it to makeComponent
and voila!
\nBut this idea turned out to be even better. E.g. if you wanted to use\ntwo components with the same name (because how many different ways are there\nto name a popup-menu component?) you could take the function and pick a name\nfor it.
\nOld way (without blueprints) \nimport { Component, elements } from 'xinjs' // congratulations, you just got a copy of xinjs!\n\nexport class MyComponent extends Component {\n content = () => elements.div('hello, world')\n}
\n
export const myComponent = MyComponent.elementCreator({tagName: 'my-component'})\n
\n\nNew way \nimport { XinBlueprint } from 'xinjs' // just a type declaration!\n\nconst myBlueprint: XinBlueprint = (tag, factory) => {\n const { Component, elements } = factory
\n class MyComponent extends Component {\n content = () => elements.div('hello, world')\n }
\n
return {\n type: MyComponent\n }\n}\n\nexport default myBlueprint\n
\n\nAt the cost of a little boilerplate (sigh) you get a component that has zero\ndependencies and whose tagName is chosen by the consumer. But, it gets better…
\n## Consuming Blueprints\n\nIf you just want to bundle everything together, you can do the usual stuff.
\nimport { makeComponent, elements } from 'xinjs'\nimport myBlueprint from 'my-blueprint'\n\nconst { creator } = makeComponent('foo-bar', myBlueprint)
\n
document.body.append(creator())\n// same as document.body.append(elements.fooBar())\n
\n\nBut if you want to load a component from a cdn or a service endpoint on \ndemand, you can use the new xin-loader
and xin-blueprint
components:
\n<xin-loader>\n <xin-blueprint \n tag=\"my-container\" \n src=\"path/to/my-container/blueprint.js\"\n ></xin-blueprint>\n <xin-blueprint \n tag=\"my-widget\" \n src=\"path/to/my-widget/blueprint.js\"\n ></xin-blueprint>\n</xin-loader>\n<my-container>\n <my-widget></my-widget>\n</my-container>\n
\n\nAll the blueprints inside a loader will be loaded asynchronously in parallel,\nand of course you can generate them in code:
\nimport { blueprint, blueprintLoader, myContainer, myWidget } from 'xinjs'\ndocument.append(\n blueprintLoader(\n blueprint({tag: 'my-container}, src: '.../my-container.blueprint.js'})\n blueprint({tag: 'my-widget}, src: '.../my-widget.blueprint.js'})\n ),\n myContainer(myWidget())\n)\n
\n\n## create-xinjs-blueprint\n\n \n\nAnd to wrap all this in a nice developer experience, there's \ncreate-xinjs-blueprint.
\nThe goal here starts with wanting to break xinjs-ui \ninto a collection of easily integrable lightweight blueprints, which means\nI want to be able to:
\n\n- minimize friction to ship a component (done)\n- self-hosted demo (done)\n- self-hosted tests in the browser (done)\n- playwright support (done)\n- provide self-hosted documentation with live examples etc. (which, basically,\nbecomes trivial once xinjs-ui
is turned into a collection of blueprints) \n \nWhile playwright is a fantastic tool for end-to-end testing, having tests run\nright in front of you on refresh during development is simply amazing, and this\nwas accomplished by turning the test component I use internally for xinjs
\ndevelopment into a standalone blueprint.
\n \n\nIt's also incredibly liberating to be able to just build something cool without\nworrying if it is going to add fat to a component library. (There used to be\na bunch of babylonjs-based 3d components in xinjs-ui
but I removed them \nbecause no-one needed them and they bloated the library.)
\nThe first non-trivial blueprint I built was based on the \nSVG clock demo from b8rjs .\nThing is, this demo lives inside b8rjs, but it isn't exported to prevent bloat.\nBut here's the xin-clock component, \nand here's the live demo .
\n \n\n## The Bottom Line\n\nYou're currently looking at the xinie
project, my current side-project.\nAmong other things, it replaced WordPress as my blog engine. I'm dogfooding it.
\nxinie
is a brand new soup-to-nuts, service compris application framework \n(initially built on Google Firebase) and it supports code-as-a-service \nout of the box, and blueprints are just code, of course.
\nAlready, you can import modules directly into xinie and then embed them in \na page, but ultimately you'll be able to create and edit them, add tests,\nand so forth. And all of it lives inside the fine-grained access framework\nbaked into xinie
.
\n",date:"2024-11-08T19:25:38.999Z",summary:"Tired of Lighthouse nagging you about unused code? This blog post dives deep into the surprisingly complex issue of dead code stripping in JavaScript packages, especially when dealing with dependencies like React components. Learn how a new \"blueprint\" system in the `xinjs` framework dramatically improves modularity and efficiency, allowing you to create reusable components without the bloat of unnecessary dependencies. Get ready to streamline your projects and finally silence those nagging Lighthouse warnings.\n",author:"Tonio Loewald",_created:"2024-11-08T08:04:16.770Z",_modified:"2024-11-15T19:31:01.951Z",_path:"post/paul1rckydws"},{keywords:[],date:"2024-10-01T18:44:59.962Z",summary:"Ever wish CSS had a built-in variable for the current color? Turns out, it does! Learn how `currentcolor` lets you dynamically style SVG icons and more, but discover a frustrating limitation along the way. A surprisingly useful and surprisingly obscure CSS feature, perfect for those times you want a monochromatic svg icon to match the text color.",path:"currentcolor",author:"Tonio Loewald",_created:"2024-10-01T09:46:02.560Z",title:"currentcolor",_modified:"2024-11-15T19:43:13.139Z",content:" \n\n/* Keyword values */\ncolor: currentcolor;\n\n/* <named-color> values */\ncolor: red;\ncolor: orange;\ncolor: tan;\ncolor: rebeccapurple;
\n
/* <hex-color> values */\ncolor: #090;\ncolor: #009900;\n...\n
\n\nEvery so often I discover that CSS has some feature I've kind of wanted for\nyears and didn't realize was there. Today I learned about currentcolor
.\nThis feature is mentioned here \nand it links you to here ,\nbut you'll need to go find blog posts that explain just how darn useful it is.\n\nWhat is it? It's basically a special variable that contains the currently set\ncolor. It's pretty much exactly what you want for, say, monochromatic svg icons\nalmost all of the time:\n\nsvg {\n fill: currentcolor;\n}\n
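\nTo see why that's handy, here's a sketch (it assumes the rule above is in effect, and a hypothetical button containing a monochrome inline svg icon):\n\nconst button = document.querySelector('button.warning')\nbutton.style.color = 'darkorange'\n// both the button's label text and its svg icon are now darkorange,\n// with no icon-specific styling required\n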
\n\nNow, wouldn't it be nice if there were a similar value for background-color
?\nYou can set background-color
to currentcolor
(which is not helpful) but no\nthere is not.\n\nDoes this seem like a hack? You bet. Do I wish I'd known about this, I dunno,\nsome time between 2010 when it seems like it was implemented in webkit? Um,\nyeah.\n\nGod damn it.\n\n",_path:"post/ukw6zqkqjtti"},{keywords:[],date:"2024-09-20T17:33:48.145Z",summary:"These are the notes I took while watching Peter Evans' excellent YouTube video with the same title.",path:"15-elements-of-composition",author:"Tonio Loewald",_created:"2024-09-20T11:33:32.658Z",title:"15 Elements of Composition",_modified:"2024-11-15T21:13:22.459Z",content:" \n\nI came across a nice video entitled 15 Elements of Composition in Photography \nby Peter Evans, and I really liked it. It's free of fluff and has really nice examples.\n\nAnyway, if you don't want to watch the video, here are my notes, but the video is well worth a look.\n\nThe photograph above is one of mine, so don't blame Peter for that!\n\n## Vital\n\n- Focal Point (hook)—can be more than one, but one will be primary\n- Framing (as much in camera as possible, or crop in post; ignore aspect ratio, just make it look good)\n- Decide what you want to see and not to see (decide before you bring the camera to your eye)\n- Avoid background clutter\n- Resist temptation to “get everything in”\n\n## Non-Vital\n\n- Rule of Thirds (definitely not vital, but don’t get in the habit of centering your primary point of interest)\n- Leads-ins (usually diagnonal lines or curves that create depth) and foreground objects (don’t need to be obvious)\n- Diagonals, triangles, and curves (can be implied)\n- Keep horizons level\n- Give living creatures (people or animals) “looking/moving space” (also facing). This can even apply to inanimate objects like flowers and vehicles.\n- Rules of threes (or odds)—people weirdly prefer looking at three things or, failing that, odd numbers of things. \n- Positive & Negative Space (subjects of importance vs everything else)—try for a pleasing balance\n- Color, Shape, and Form\n- Strong Color (if related to the point of interest) can help, otherwise it is likely distracting\n- Frame within a frame (overused)\n",_path:"post/q6pw2t0a4y5j"},{keywords:[],date:"2024-09-10T14:27:03.999Z",summary:"Apple's announcements weren't terribly exciting, but the hearing aid feature of the AirPods Pro and the Sleep Apnea diagnosis feature for the new and recent Apple Watch models are very nice. The iPhone 16 Pro's new ultra wide fusion camera is also very welcome.",path:"apple-event-september-2024",author:"Tonio Loewald",_created:"2024-09-10T09:30:56.867Z",title:"Between the lines of Apple's September 2024 announcements",_modified:"2024-11-16T13:19:46.575Z",content:" \n\nI watched the announcements last night, Finnish time, and I have a few thoughts.\n\n## A bad day to own a Hearing Aid company\n\nPrescription hearing aids cost ~$1500 at Costco. Cheap hearing aids cost ~$80.\nBut now you can buy Apple Airpods Pro and get the diagnosis and the solution for\none low fee, and it's also possible this will remove some of the stigma of \nhearing loss.\n\nMy hearing isn't what it once was, and I'd rather buy Airpods Pro and get both\na really nice set of Airpods and get my hearing corrected than go to a doctor.\n\n## Also, maybe not a great day to own a Sleep Lab\n\nGetting diagnosed with Sleep Apnea used to be kind of inconvenient. Assuming\nyou had coverage, you'd have to sleep overnight in a lab. Again, an Apple Watch\nSeries 9/10 is definitely more convenient, likely cheaper, and you get a watch. 
\nNice.\n\n## iPhone 16 [Plus] / 16 Pro [Max]\n\nDespite the change in nomenclature (A16 vs A17 in the iPhones 15, A18 Bionic\nvs A18 Pro in the iPhones 16) and the purported focus on AI, the new phones\nbasically just seem to be ~15% faster than last year's models. That's fine,\nI only upgrade every 2-3 years and I've been drooling over the 100mm telephoto.\nNow, by waiting the extra year, I also get a 48MP fusion camera for ultrawide\nand macro photography.\n\n(It should be noted that Apple seldom talks about bytes or GHz in its consumer\nproduct introductions, so it may be that the A18 has substantially more RAM\nthan the A16/A17 and earlier; we'll have to wait until developers get to kick\nsome tires.)\n\nFrom the perspective of my wallet, it's very bad. I feel like the Airpods,\nthe iPhone 16 Pro, and the Watch Series 10 are all must buys. Only the EU's\ninsistence on crippling Apple might temper my desires.\n\n## Apple Watch Series 10 and Ultra\n\nThe Apple Watch Ultra is amazingly popular in Finland. I see them everywhere,\nespecially in Winter. I think it's very popular with cross-country skiers and\nthere are a lot of cross-country skiers here.\n\nThe new Series 10 seems to bring a lot of the Ultra's advantages down to the\nmain watch, which suits me as the Ultra looks crazy big on my wrist. First\nand foremost, the screen is actually larger than the Ultra's in a slimmer\ncase.\n\nThat said, the Ultra 2 is now a year old, and the only things they added were\nthe black option (which does look sick) and titanium straps (which also look\nsick).\n\n## Airpods Standard\n\nIt seems to me that the lack of active noise cancellation in the entry-level\nAirPods may be pure software, but perhaps not. Anyway, the mid-range AirPods\nare now AirPods Pro aside from interchangeable tips. Very nice. But the \nHearing Aid features of the AirPods Pro are impressive.\n\n \n\n## Vision Pro—down but not out\n\nThe Vision Pro is still being mentioned in dispatches and appearing in the\nfeature infographics. Apple hasn't abandoned it, but it's definitely not\nfront-and-center right now. Might we see a new model in 2025?\n\n## Apple's Neural Hardware & Training\n\nI still don't have a solid handle on how good Apple's neural processors are\nfor training models. The headline teraflops numbers look great, but are they\ncrippled in some subtle way? As I understand it, the way Apple is planning to\nuse AI is somewhere between using RAG \n(which is essentially using data to enhance prompts with a pre-trained LLM)\nand creating custom models from your personal data on device (possibly to\nbe used in conjunction with pre-trained models).\n\nE.g. you might have a model to do OCR on text in images, and a model trained \non your photos designed to recognize your friends, and both might be used to \nhelp drive image search.\n\nBut it doesn't seem like there's a particularly good reason to train on your\ndevice vs. on a \"private cloud\". Your photos need to be uploaded anyway (for\nbackup, etc.) so they're already going to be in the cloud.\n\n## Final Thoughts\n\nNothing terribly exciting, and yet I think the new products are each quite\ncompelling, especially the Watch Series 10 and the new Airpods Pro. 
The \niPhone 16 Pro wouldn't be compelling if you already have the iPhone 15 Pro,\nbut it's quite compelling to me as it covers all the most important focal\nranges I use a camera for except longer telephoto, which means when I do have\nmy camera with me, I don't need to swap lenses as often.\n\n",_path:"post/3kse5yg6kd75"},{keywords:[],date:"2024-09-08T18:53:59.317Z",summary:"I've been playing with LLMs a lot over the last few weeks, including going back to generating images with Dall-E (and similar tools), and I see common patterns of errors, but the nice thing about generating images is that you can literally see the problems immediately (in most cases) in ways that are much harder to spot with text and code.",path:"what-generating-images-wth-chatgpt-and-dalle-tells-me-about-llms",author:"Tonio Loewald",_created:"2024-09-08T18:55:10.790Z",title:"What creating images with ChatGPT and Dall-E tells me about LLMs",_modified:"2024-12-11T23:21:32.945Z",content:" \n\nDespite all the \"progress\" on LLMs, I see consistent patterns of failure when using LLMs to \nattempt useful work. All of this makes perfect sense when you realize that all they are, at\nbottom, is a really, really impressive probabilistic autocomplete.\n\n1. LLMs do not reflect on their output. A person might reply to a question, and,\nrealize half-way through their response that it is unclear, or getting off topic,\nor that they themselves don't know the answer. ChatGPT might seem to do this,\nbut if so it will only be because sometimes the material they were trained with\ncontained sources that did it.
\n\n2. A lot of the \"progress\" consists of using the same technology to try to improve the\nprompt for you automatically, and to clean it up after it's created. We see this\nvery transparently with Dall-E where sometimes it actually loses the plot and just\ntells you what prompt it generated from your input.
\n\n3. If ChatGPT tells you what it's done, it doesn't in fact mean that it's done\nit, but just that a lot of source material contains such post summaries.
\n\nI've seen various AI pundits talking about how \"hallucinations have gotten so much\nless common\". I see them constantly, and they're often masquerading as self-assessment.\n\nBut the really interesting thing is that the Dall-E image generator seems to \n\"hallucinate\" results in a way that is a visual analog of what's going on with the\ntext in ChatGPT.\n\n1. I ask for an image.
\n\n2. ChatGPT embellishes the description (often rather well) and then either\nrenders that, or offers to render it.
\n\n3. The image usually fails to meet the specification in really obvious ways.
\n\n4. ChatGPT tells me that here is the image which has everything I asked for\n(restated, usually accurately, from the prompt, but not matching the\nactual picture).
\n\nThere's usually two options from here. One is to ask for the image to be tweaked\nin some specific way. This typically results in a completely different image\nthat is wrong in new and different ways and doesn't fix the problem.\n\n \n\nBasically images are generated probabilistically from input. There's no\nconcept of \"adjusting\" an image, merely blowing it away and doing it over based\non additional content text and without any way of reflecting on the previously\ngenerated image.\n\nThe other option (this is in the iOS app, and doesn't work in the Mac desktop\napp—I haven't tried other platforms) is to scribble over the problem area\nand tell it what you want done. E.g. you could scribble over the face of\na character and say \"make this guy more cheerful\".\n\nThis is hit-and-miss, but the damage is restricted to the area you scribbled over.\n\n \n\nI tried this repeatedly, including specifying the new content, and the billboards/screens \nstayed stubbornly blank.\n\n## Generalizing\n\nI don't think image generation is a special case. I just think it makes it\neasier to immediately see that things have gone wrong and what's going wrong.\n\nWorking with code is much harder, because it's not obvious from inspection what\nnon-trivial code does or whether it has subtle errors hidden inside it.\n\nWhen I ask ChatGPT (or Cursor) to do something with code, sometimes it actually\nworks, but more usually it's very similar to what happens with an image request.\n\n1. I ask for a coding task to be completed.
\n\n2. ChatGPT explains that of course it can do this, and elaborates on what\nI ask for, usually very well.
\n\n3. ChatGPT then produces a bunch of code that (a) probably won't actually\ncompile or run, and (b) doesn't do what it said it would.
\n\n4. ChatGPT explains how this code does some fairly accurate restatement\nof what it said it was going to do.
\n\nNow, you can basically do the same things at this point. You can tell it to\nmake a slight change and it will probably not do everything over again, because\nits earlier output is in fact part of the prompt, but it might.\n\nOr you can, say, copy-and-paste a specific problematic piece of code and ask\nit to fix that.\n\nYou can also tell it the errors its code is throwing, and it might fix them.\nWith text output, the text output is part of the prompt, so the probabilistic \ncompletion includes the conversation thus far (windows permitting), but this isn't\nthe same as actually knowing what the hell it's doing.\n\n",_path:"post/2xirh9mxpkg8"},{keywords:[],date:"2024-09-04T15:38:19.675Z",summary:"Tired of wrestling with Git's command line and clunky GUIs? I've finally found a Git client that lets you *edit your history like a god*. Retcon, a surprisingly powerful tool, allowed me to rebuild a complex pull request – fixing scattered config file modifications – in a fraction of the time it would have taken using my usual workflow. Read on to see how this $50/year subscription might just change how you manage your repos forever.\n",path:"retcon-first-impressions",author:"Tonio Loewald",_created:"2024-09-04T12:37:30.403Z",title:"Retcon—First Impressions",_modified:"2024-11-16T08:23:03.278Z",content:" \n\nIt's been a while since a new tool for managing repo state has made an impression\non me. Basically my history with git
has been:\n\n1. git
command line (ugh)\n2. GitBox, which is actually\ndefunct (the website gets you to a domain squatter and it has disappeared\nfrom the App Store) but it was once a pretty solid GUI client for git.\n3. Tower was cool for a while.\n4. Github Desktop eventually got good enough \n(and handled sign-in to GitHub well) that there didn't seem much point in \nusing anything else.\n5. Merge, from the developers of Sublime, \nallowed you to do some really nice things like going back and editing commit \ncomments easily.\n\nFor the last few years, that's where things stood. I used Github Desktop to\nhandle authentication, and would pop into Merge to clean up messes that were\nimpossible to deal with in Github.\n\nMore recently, Panic's Nova integrates Git right into the \nIDE while providing similar levels of functionality to Merge. Unfortunately, \nNova doesn't quite cut it for me, so I use it for hobby projects.\n\nBut a recent note on Daringfireball mentioned a\nGit client called Retcon that allows you to just click on your history and\nedit it like a God. I filed it away and thought I'd take a look at it later.\n\nWell, today I found myself trying to fix a PR that had modified config files\nhere and there during its history and it was just beyond my ability to fix it\nin either Sublime Merge or Git. I was on the verge of googling the git command\nline arcana necessary to fix the problem when I remembered the article on\nDaringfireball…\n\nAnyway, it costs $50/year (after the first year discount). I don't love \nsubscription pricing, but…\n\nI was able to rebuild my PR exactly the way I wanted to in a few minutes.\n\nLike a God.\n\n",date:"2024-08-27T04:38:30.977Z",summary:"Hooks have settled in to the React world, so let's see how maintainable the code looks.",path:"react-developer-experience-revisited",author:"Tonio Loewald",_created:"2024-08-26T09:30:13.970Z",title:"React Developer Experience Revisited",_modified:"2024-11-16T13:20:38.121Z",content:" \n\nI'm returning to the world of React development after a hiatus of several years.\nWhen I last did any serious React development, hooks were the shiny new thing\non the block and now they've well-and-truly settled in.\n\nThe good news is that hooks seem to have replaced redux as the state \nmanagement approach du jour, which means that how state gets from x to y is\nat least borderline comprehensible, but the down side is that the virtual\nDOM hierarchy, which was already pretty terrible, has gotten crazy.\n\n \n\nThe preceding image is of the React home page with React Dev Tools open. The\narrow shows the SelectionProvider wrapped around the top-left-most widget\non the screen.\n\nI picked this because (a) it's not my employer's highly confidential code and\n(b) I wouldn't want to be accused of cherry-picking a particularly gnarly piece\nof code written by non-genius-level-React-engineers.\n\nSo this is the code that the React team chooses to show off to the world\nand it's a page which doesn't do a whole heck of a lot.\n\nEach of those things in the virtual DOM represents code that someone has to \nwrite, test, and maintain. And when they maintain that code they have to find, \nread, and grok the code around it. When you look at this, you're looking at\nwasted development effort up-front and compounded into the future.\n\n \n\nAbove, by comparison, is this blog displayed in the elements panel. There's\nno virtual DOM because with xinjs
the DOM is the DOM is the DOM.\n\nI don't know how long hooks-based apps last. In my experience Redux-based apps\nbecome incomprehensible to maintainers within three years. Maybe hooks apps\nlast a bit longer.\n\n## Navigating a Codebase\n\nOne of the problems with React development has always been a high\nnoise-to-signal ratio in developer tools. E.g. in the debugger you tend to\ndiscover your functions are deeply nested in anonymous wrappers that do \nthings I for one don't understand. (But hey, at least it's not Angular,\namirite?!)\n\nSimilarly, your DOM nodes are wrapped in lots of DOM nodes that tend to do\nfairly comprehensible things (e.g. I assume a SelectionProvider holds\nselection state for a widget, which is totally awesome and not something you\nwouldn't need to bother with if you just used the DOM APIs). So while it's\nnot quite as bad as trying to figure out the execution stack of an event\nhandler it's not ideal that just figuring out where in this huge mess the\nthing you're pointing at is and then finding your way to the thing you're\nactually interested in reminds me of trying to find traps and treasure in\nold school CRPGs by waving the mouse around and waiting for the cursor to \nflicker.\n\nGenerally, when you're trying to fix an issue in the user interface, either\nyou know exactly where the code is or you don't. If you don't you might\nsearch for some unusual text in the UI. Failing that you might look for\n(say) a class-name or a component-name as a clue.\n\nJust look at this garbage (here I can't use the React example because their\nsymbols have been intentionally obfuscated):\n\n \n\nThere is virtually no useful information in this sea of genericity. I find\nit hilarious that the React Dev Tools hierarchy has minimized the indentation\nlevel just to avoid having the hierarchy totally disappear to the right.\n\nAnd what is it we're looking at here? We're looking at a small part of the\nlanguage select widget that allows the user to pick the UI language for this\napp.\n\n \n\nLast Thursday evening, after seeing a minor problem in the language menu of\nthe app I was working on, I decided to see how hard it would be to implement\nall of this nicely in xinjs-ui
, so I did. The resulting widget is shown\nin action above.\n\nNow, I haven't fully disclosed the hierarchy inside this example, because \nI don't have to. The stuff I disclosed in React Dev Tools was the bare\nminimum needed to find what I was looking for.\n\n## But the code is nice, right?\n\nNo. No it isn't. The elements you're seeing in the xinjs-ui
example directly\nmap onto the code that creates them:\n\nfooBar(bazLuhrmann())\n
\n\nGenerates:\n\n<foo-bar>\n <baz-luhrmann></baz-luhrmann>\n</foo-bar>\n
\n\nWhen I try to find the source code for Locale
it's helpfully in one of\ngod-knows-how-many files named index.tsx
.\n\nWe already know that xinjs
lets you write dramatically more compact code\nthan React
, but that if you want to code in React
then react-xinjs
lets\nyou write more compact and maintainable code than React
does (by making\nhooks far more powerful). But I hadn't seen this effect compounded \nexponentially in mature production code.\n\nAs I am quite fond of saying, I've never known something that is complex for\nsimple cases become simple for complex cases. There are some examples in Math,\ne.g. using log tables to do multiplication and division, but it's very easy to\ndemonstrate the simplification.\n\nBack in the glory days of redux, I once spent two days trying to fix a typo\nin a user-interface, with considerable help from members of the five-person\nteam that was responsible for the app. Sure, it's tough getting up to speed\non a new code-base, but I never actually fixed the typo. At least with this\ncodebase I was able to solve the problem in just an hour or so.\n\nSo, in my opinion, the React-with-hooks developer experience is better than \nReact-with-redux, but it's still pretty bad.\n\n",_path:"post/e8yplo1gxd8o"},{date:"2024-08-24T13:31:51.660Z",summary:"A working example of a CSS-to-javascript (specifically xinjs XinStyleSheet) converter.",path:"converting-css-to-javascript",keywords:[],author:"Tonio Loewald",_created:"2024-08-24T13:31:56.729Z",title:"Converting CSS to Javascript",_modified:"2024-08-26T14:39:32.475Z",content:"\n\n`xinjs` does a really nice job of generating CSS from Typescript / Javascript code, but\nI've written a lot of CSS both before I started working on `xinjs` and while I was working\non it but before the `css` module was mature. Notably, `xinjs-ui`'s styles were in css.\n\nConverting CSS into `xinjs`'s `XinStyleSheet` form is a bit of a pain, and it should\nbe easy to automate, so I knocked up this conversion tool as a \"live example\" in the\n`xinjs-ui` documentation.\n\nI thought it would be a good test of live examples in the `xinie` context, so here it is.\n\n```js\nconst tabs = preview.querySelector('xin-tabs')\nconst [css, js] = preview.querySelectorAll('xin-code')\nconst convertButton = preview.querySelector('button')\n\nfunction quoteTrim(s, symbol = false) {\n s = s.trim()\n if (s.match(/[^\\w_]/) || !symbol) {\n s = s.replace(/'/g, \"\\\\'\")\n return `'${s}'`\n } else {\n return s\n }\n}\n\nfunction kebabToCamel(s) {\n s = s.replace(/--/, '_')\n return s.replace(/\\-(\\w)/g, (_, c) => c.toLocaleUpperCase())\n}\n\nfunction css2js () {\n const source = css.value\n const lines = source.split('\\n')\n const output = ['{']\n let rule = ''\n for(const line of lines) {\n if (!line.trim()) {\n continue\n }\n try {\n rule = rule ? 
rule + ' ' + line.trim() : line\n if (rule.match(/@import .*;/)) {\n const [,url] = rule.match(/@import url\\(['\"](.*)['\"]\\);/)\n output.push(`'@import': ${quoteTrim(url)},`)\n rule = ''\n } else if (rule.match(/\\{\\s*$/)) {\n const [,whitespace, selector] = rule.match(/(\\s*)([^\\s].*)\\{/)\n output.push(`${whitespace}${quoteTrim(selector, true)}: {`)\n rule = ''\n } else if (line.match(/[^\\s]*\\}\\s*$/)) {\n output.push(line + ',')\n rule = ''\n } else if (rule.match(/.*:.*;/)) {\n let [,whitespace, prop, value] = rule.match(/(\\s*)(.*):(.*);/)\n prop = kebabToCamel(prop)\n output.push(`${whitespace}${quoteTrim(prop, true)}: ${quoteTrim(value)},`)\n rule = ''\n }\n } catch(e) {\n console.error(e, line)\n }\n }\n output.push('}')\n let code = output.join('\\n')\n code = code.replace(/'var\\(--([^)]*)\\)'/g, (_,v) => `vars.${kebabToCamel(v)}`)\n\n js.value = `import { vars } from 'xinjs'\\n\\nexport const styleSpec = ${code}`\n}\n\nconvertButton.addEventListener('click', () => {\n css2js()\n tabs.value = 1\n})\n```\n```html\n\nConvert \n\n@import url('https://fonts.googleapis.com/css2?family=Aleo:ital,wght@0,100..900;1,100..900&famiSpline+Sans+Mono:ital,wght@0,300..700;1,300..700&display=swap');\n\ntr:nth-child(2n) {\n background: var(--background-shaded);\n}\n\nth,\ntd {\n padding: calc(var(--spacing) * 0.5) var(--spacing);\n}\n\n@keyframes fade-in {\n 0% {\n opacity: 0;\n }\n 100% {\n opacity: 1;\n }\n}\n\nheader xin-locale-picker xin-select button {\n --brand-color: var(--brand-text-color);\n background: transparent;\n gap: 2px;\n}\n\nheader xin-locale-picker xin-select button svg {\n fill: var(--brand-text-color) !important;\n}\n \n \n \n```\n```css\n.preview xin-tabs {\n background: var(--panel-bg);\n}\n.preview button {\n margin: 4px;\n}\n.preview xin-tabs, .preview textarea, .preview xin-code {\n width: 100%;\n height: 100%;\n resize: none;\n}\n```",_path:"post/ox185071n7gx"},{date:"2024-08-21T06:38:45.161Z",summary:"Netflix.com now runs as it should on the Quest3 (instead of misidentifying its browser\nas Internet Explorer and complaining about Silverlight the way it used to).\n\nYay.",path:"netflix-now-works-on-the-quest3",keywords:[],author:"Tonio Loewald",_created:"2024-08-21T06:38:51.733Z",title:"Netflix now works on the Quest 3",_modified:"2024-08-21T15:59:03.498Z",content:"\n\nNetflix has had a dedicated (but horrible) app for the Quest for some time (it's now \n[deprecated](https://www.techradar.com/computing/virtual-reality-augmented-reality/the-meta-quest-3s-netflix-app-has-been-shut-for-good-but-the-alternative-is-better-than-it-ever-was#:~:text=The%20Meta%20Quest%203%20Netflix,get%20a%20playback%20error%20instead.)).\nMeanwhile, last time I checked, the Netflix website was misrecognizing the \nQuest's Chromium-based browser and telling me that Silverlight support had been\ndropped, or somesuch.\n\nAnyway, I was happy to discover that [Netflix.com has been fixed and works\nperfectly on the Quest3](https://www.theverge.com/2024/6/4/24171322/meta-quest-3-netflix-browser). 
\nWatching a video on a jumbo-sized screen (*much* bigger than shown in the screenshot) is actually\npretty great, and it struck me how awesome the experience would be on a Vision\nPro, because you wouldn't need to use or keep track of the controllers.\n\n(Yes, I know the Quest 3 *tries* to do hand-tracking.)\n\nAnyway, thank you Netflix for finally taking the ten minutes to fix front-end\ncode to correctly recognize that the Quest3 isn't running Internet Explorer.",_path:"post/b2b3mk66u40s"}],blogDataTimestamp:"2025-10-22T10:07:36.756Z",blogVersion:4,"post/path=the-owl-has-landed":{title:"The Owl has Landed",path:"the-owl-has-landed",content:"\n \n \n\nTwo years ago I decided that instead of porting [b8rjs](https://b8rjs.com) to Typescript, I'd build a completely new library with the same ambitions but encapsulating lessons learned in the preceding eight years of (a) using b8rjs to build several products, and (b) working at Google, whose engineering requirements b8rjs did not satisfy (in particular, b8rjs could not survive *minification with variable renaming*, which Google considered an absolute security requirement).\n\n[tosijs](https://tosijs.net) (originally xinjs) is the result. In addition to being written in 100% typescript, it is more tightly focused than b8rjs (which included a bunch of utility libraries and a collection of user interface elements) and offers the following key capabilities:\n\n- slicker UI creation than JSX, but it's just javascript\n- a Component base class for creating web-components, with or without a shadow DOM\n- improved tooling for modern CSS\n- powerful and largely automatic state management\n- performant rendering of massive datasets\n\n> ### The Benefits and Costs of using library code\n>\n> `tosijs` does all this with no transitive dependencies and minimal creeping lock-in (what a colleague referred to as \"cancerous code\").\n>\n> E.g. the color math used by `tosijs`'s CSS functions was built from scratch because all the libraries I could find that offered the needed functionality were, on their own, at least half as large as `tosijs` altogether. And as [native support for color math](https://caniuse.com/css-relative-colors) is added to modern browsers, this code can be gradually removed.\n>\n> Creeping lock-in is when using a library forces you to write or rewrite your existing code in a way that locks you in.\n>\n> I think all software libraries should be considered not only in terms of the benefit they deliver, but the costs (present and future) they impose, which I call a **disposal tax**. Once you've written a bunch of code using this library, how hard is it to switch to a different library? In the case of redux, rxjs, React, Angular, et al, you pretty much need to throw away everything and start over. In the case of `tosijs`, just wrap your existing library in a proxy and you're good to go.\n\n## Building UIs with code\n\nIn React, JSX lets you build out the UI using an XHTML-like DSL which is almost as nice as just writing HTML. E.g.\n\n```\nimport React from 'react' // JSX transform required\n\nexport default ({ caption, value }) => <label>\n <span>{caption}</span>\n <input value={value}/>\n</label>\n```\n\nPretty cool, but the thing the function returns is actually a sack of stuff for \"internal use only\" by React, which you'd best not mess with. And all this requires specific tooling to work. 
And the language you write the DOM nodes in is kind of a kludge.\nIn `tosijs` you would write:\n\n```\n// elements is a proxy\nimport { elements } from 'tosijs'\n\n// elements creates functions that produce HTML elements\nconst { label, span, input } = elements \n\nexport default ({ caption, value }) => label(\n span( caption ),\n input({ value })\n)\n```\n\nThere's no magic needed in the tooling. You're actually writing standard Javascript (or Typescript) and there's less typing.\n\nAnd the exported function returns actual DOM nodes that behave and can be reasoned about and manipulated exactly as you'd expect, versus buckets of undocumented office-use-only properties. Instead of a virtual DOM, we use the *actual* DOM.\n\n> `tosijs` also provides similar proxies for SVG and MathML elements.\n\n## Web Components\n\n`tosijs` provides a `Component` base class that makes it easy to define custom-elements both with and without a [shadow DOM](https://developer.mozilla.org/en-US/docs/Web/API/Web_components/Using_shadow_DOM). The former are easier to style and lighter weight, while the latter allow you to build protected formatting contexts (e.g. for graphs or embedded word-processors or structural layout elements with no actual intrinsic appearance).\n\nE.g. React only supports having child elements in one place. Anything beyond that involves clumsily passing factory functions or children as props. web-components support multiple named `slot` elements, and `tosijs` extends this by supporting slots (via the `xin-slot` custom-element) even in web-components that don't use a shadow DOM.\n\n`tosijs` also detects tag-name clashes (e.g. if you mix in components from other libraries) and automatically renames its components if necessary.\n\nFinally, `tosijs` supports the concept of Component `blueprint` functions that can be hydrated by any compatible version of tosijs into a web-component on-the-fly, and without creating dependencies on a specific version of tosijs.\n\n> ### Interoperability\n>\n> It's quite common to want to use components from different sources. But what happens if you use two different components that want the tagName `labeled-input`? Components built with the `Component` base class will detect name collision, pick a new unique name, and rewrite CSS in their helper sheet where needed to use the new selector.\n>\n> Similarly, it's quite common to use components built on top of different versions of the same core library. This can lead to present or future conflicts. The usual solution to this is \"peer dependencies\". But that only goes so far: component blueprints allow components to be loaded dynamically and hydrated using any compatible version of `tosijs`. 
(In essence, components are hydrated via dependency injection.)\n\n## Leveraging CSS and CSS-variables\n\nModern CSS is very powerful and most of the libraries and tools intended to address shortcomings in CSS target old world CSS before CSS-variables were broadly supported and don't work with the shadow DOM of web-components.\n\n`tosijs` provides syntax sugar and tools for leveraging modern CSS.\n\n- define CSS variables using an underscore\n- refer to CSS variables using the `vars` proxy\n- use scaled CSS variable dimensions using the `vars` proxy\n- use modified CSS variable colors using the `vars` proxy\n- use CSS variables with defaults using the `varDefault` proxy\n\nFor dimensions, this is more powerful and versatile than defining a bunch of sizes and then trying to remember whether your large radius is called \"huge\" or \"xxl\". E.g. you can define a base `_fontSize: '16px'`, then base `_spacing: vars.fontSize75` on that size for most cases, and then reduce `_fontSize` when needed, automatically scaling `_spacing`, but tighten up spacing without reducing font size as much for very tight fits, all without defining numerous idiosyncratically named classes that mostly don't get used or have weird side effects.\n\n> When converting `XinStyleRule` objects to CSS, the `css` module treats leading `_` characters as syntax sugar for creating CSS-variables (`_fooBar` becomes `--foo-bar`). The `vars` proxy emits variable references, so `vars.fooBar` becomes `var(--foo-bar)`. `varDefault` emits functions that add default values, so `varDefault.fooBar('10px')` produces `var(--foo-bar, 10px)`.\n\nIn particular, you can retain semantic class names and make the classes simpler and more flexible using CSS variables.\n\nFor colors, this means that instead of defining hundreds of color ramps because you might need a 40% brighter version of `var(--brand-color)`, you can just define `_brandColor` at the :root level and then write `vars.brandColor40b`. And if you want to have multiple or dynamic themes, you can directly use the included `Color` class, which is used to make all this work, to manipulate colors on-the-fly.\n\nE.g.\n\n```\nimport { StyleSheet, vars, varDefault } from 'tosijs'\n\nStyleSheet('demo', {\n ':root': {\n _fontSize: '16px',\n _textColor: '#222',\n }\n \n body: {\n fontFamily: varDefault.bodyFont('Helvetica Neue, Arial, Sans-serif'),\n fontSize: vars.fontSize,\n color: vars.textColor,\n lineHeight: vars.fontSize150\n }\n \n ':disabled': {\n color: vars.textColor50o\n }\n})\n```\n\nbecomes:\n\n```\n\n```\n\nA typical \"modern\" stack with CSS pre-processing does things like generates thousands of style definitions just in case you use five of them, and then uses all kinds of build magic to throw away the definitions that don't actually get used. This just lets you use exactly what you need.\n\n## State Management\n\nSomething like 70% of the code in typical web applications is simply updating the UI because state has changed or updating state because the user or the back end has changed something. 
Sometimes the code is doing something incredibly clever, but usually it isn't, and `tosijs` (like `b8rjs` before it) aims to automate the dumb code out of existence, while making the clever code easier to write.\n\n```\nimport { tosi } from 'tosijs'\n\nconst { app } = tosi({\n app: {\n name: 'My App',\n docs: [\n {\n name: 'Autobiography',\n content: 'In the beginning there was the semicolon.'\n },\n {\n name: 'Shopping List',\n content: '- eggs\\n- milk'\n }\n ]\n }\n})\n```\n\n`tosi()` returns a proxy of application state. Anything you pass to it is assigned to application state, so `app` now contains a proxy wrapped around the `app` property of the object passed to `tosi`. This is mostly \"invisible\" at runtime except for strings, booleans, and numbers, which are returned as a proxy wrapped around an object wrapper of the value (e.g. `app.name` will be a proxy wrapped around `new String('My App')`).\n\nThe proxy does one useful thing: it tracks changes to the wrapped state and transparently updates elements built around that state.\n\nE.g.\n\n```\nimport { elements } from 'tosijs'\n\n// assume we are using app from before...\nelements.div(app.name)\nelements.input({ bindValue: app.name })\n```\n\nProduces a `<div>` element containing 'My App', but it will be updated if `app.name` is assigned a new value, and an `<input>` element that is initialized to 'My App' but, if edited by the user, will automatically update `app.name` and this in turn will update the text in the div. \n\nAll this is done for you and you don't need to worry about the `<input>` losing track of its internal state or being changed unnecessarily. So you can use native `<input>` elements when that makes sense and get all the validation and accessibility functionality for free.\n\nAlso note that you can wire the app state directly to the relevant elements. You don't need to pipe them through a hierarchy, so you don't end up paralyzed by early decisions about user interface layout. You don't need to worry about nested providers. Get data direct from application state and set application state directly.\n\nHere's a live demo of this:\n\n```js\nimport { tosi, elements } from 'tosijs'\n\nconst { liveAppDemo } = tosi({\n liveAppDemo: {\n name: 'My App',\n docs: [\n {\n name: 'Autobiography',\n content: 'In the beginning there was the semicolon.'\n },\n {\n name: 'Shopping List',\n content: '- eggs\\n- milk'\n }\n ]\n }\n})\n\nconst { label, input, span, h3, div, template, details, summary, textarea } = elements\n\npreview.append(\n h3(liveAppDemo.name),\n label(\n span('Edit the app name'),\n input({ bindValue: liveAppDemo.name })\n ),\n div(\n {\n bindList: {\n value: liveAppDemo.docs,\n }\n },\n template(\n details(\n summary(\n input({ bindValue: '^.name', placeholder: 'enter document name' })\n ),\n textarea({ bindValue: '^.content' })\n )\n )\n )\n)\n```\n\n(In the `template`, the `^.` prefix binds a path relative to each list item.)\n\nTypescript makes it a little tricky to write simple assignment statements since `app.name` is declared as a `String` with benefits, even though you can assign strings to it, so `tosijs` provides some syntax sugar. Any of its proxied values has a `xinValue` property that allows getting and setting the underlying property, so you can write Typescript:\n\n> Note that old naming conventions (such as `xinValue`) remain supported. They will ultimately be replaced or streamlined, but this is work-in-progress.\n\n```\napp.name.xinValue = 'new value'\n```\n\nWhich is almost as clean as Javascript:\n\n```\napp.name = 'new value'\n```\n\nFinally, if you need to touch the underlying data directly (behind `tosi`'s back), `tosijs` provides a `touch()` function that simply lets you touch the proxy or its path to trigger needed updates.\n\nE.g.\n\n```\nimport { touch } from 'tosijs'\n\napp.xinValue.name = 'another value' // changes the underlying object directly\ntouch(app.name) // tells the state observer that app.name was changed\ntouch('app.name') // same thing\n``` \n\n## Virtualized Lists\n\nRendering small amounts of data has long been trivial for user interfaces, but all user interfaces, native or web-based, reach a breaking point when faced with huge amounts of data.\n\nBut, typically, the problem isn't what's visible, but what's scrolled out of view. It's common for web-applications to either \"page\" large amounts of data or produce incredibly long lists that have to be scrolled through. This presents both a performance problem (it tends to use a lot of memory and CPU) and a usability problem (e.g. it sucks to not be able to see a table's headers when you've scrolled down through its content).\n\n`tosijs` provides virtualization support out-of-the-box. 
You just need to tell it that you want a list of things to be virtualized and how big they are.\n\n```js\nimport { elements, tosi } from 'tosijs'\nconst { div, template } = elements\nconst list = Array.from({ length: 10000 }, (_, id) => ({ id }))\n\nconst { bigBindTest } = tosi({\n bigBindTest: { list }\n})\n\npreview.append(\n div(\n {\n class: 'virtual-grid-example',\n bindList: {\n value: bigBindTest.list,\n idPath: 'id',\n virtual: {\n height: 40,\n visibleColumns: 7,\n rowChunkSize: 2,\n }\n },\n },\n template(\n div({\n class: 'cell',\n bindText: '^.id'\n })\n )\n )\n)\n```\n```css\n.virtual-grid-example {\n height: 100%;\n width: 100%;\n overflow-y: auto;\n display: grid;\n grid-template-columns: 14% 14% 14% 14% 14% 14% 14%;\n}\n\n.virtual-grid-example .virtual-list-padding {\n grid-column: 1 / 8;\n}\n\n.virtual-grid-example .cell {\n height: 40px;\n line-height: 40px;\n text-align: center;\n}\n\n.virtual-grid-example .cell:nth-child(14n+2),\n.virtual-grid-example .cell:nth-child(14n+3),\n.virtual-grid-example .cell:nth-child(14n+4),\n.virtual-grid-example .cell:nth-child(14n+5),\n.virtual-grid-example .cell:nth-child(14n+6),\n.virtual-grid-example .cell:nth-child(14n+7),\n.virtual-grid-example .cell:nth-child(14n+8) {\n background: #0001;\n}\n```\n\nIf you look at the above demo in the **Elements** inspector of your browser, you'll see that only the minimum number of elements are actually in the DOM at any given time. The `tosijs-ui` [table](https://ui.tosijs.net/?data-table.ts) component uses this to provide highly performant data-tables with live column resizing, etc., and the [filter](https://ui.tosijs.net/?filter-builder.ts) component lets you build compound filters.\n\n\n \n ",format:"markdown",date:"2025-10-21T18:51:35.405Z",keywords:[],summary:"tosijs is a from-the-ground rebuild of b8rjs that matches its ambition but is written in Typescript, meets more rigorous engineering standards, and has a simpler and more powerful API.\n\nThis summarizes the five core capabilities and benefits tosijs brings to the table, allowing you to just write javascript but get more done with fewer dependencies than any other framework.",author:"Tonio Loewald",_created:"2025-10-18T12:04:33.304Z",_modified:"2025-10-22T09:41:49.107Z",_path:"post/uvgjmiqp2n3p"},"post/path=css-registerproperty-and-property-considered-harmful":{title:"CSS.registerProperty and @property considered harmful",path:"css-registerproperty-and-property-considered-harmful",content:" \n\nI recently discovered what looked like a cool new feature in CSS. After working on leveraging it to make CSS variables work even better with [xinjs's css library](https://xinjs.net/?css.ts) I realized it's actually a complete disaster and must be avoided at all costs.\n\nOops.\n\nHere's what I've written covering this matter in the `xinjs` documentation:\n\n> ## @property and CSS.registerProperty() considered harmful\n>\n> This [new CSS feature](https://developer.mozilla.org/en-US/docs/Web/CSS/@property) \n> is well-intentioned but ill-considered. I advise\n> against using it yourself until its serious flaws are addressed. The problem\n> is that if someone registers a variable you're using or you register\n> a variable someone else is using then your CSS may be broken. And\n> you can't re-register a variable either. \n> \n> This is a bit like the problem\n> that xinjs `Component` works around with tagNames, but in practice far more\n> difficult to solve. It is impossible to tell if a given instance of \n> a given variable name is an intentional reuse (probably) or a new separate variable.\n> No-one intentionally defines two different components with the same tag.
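\n\nTo see the problem in action, here's a minimal sketch you can paste into a console (the property name is illustrative):\n\n```js\n// library A registers a property\nCSS.registerProperty({\n name: '--brand-color',\n syntax: '<color>',\n inherits: true,\n initialValue: 'rebeccapurple'\n})\n\n// library B innocently picks the same name; this doesn't merge or\n// override, it just throws an InvalidModificationError\nCSS.registerProperty({\n name: '--brand-color',\n syntax: '<color>',\n inherits: false,\n initialValue: 'black'\n})\n```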
\n\nIn a nutshell, instead of making CSS variables more dependable, it makes them inherently fragile and there's no real workaround except for generating all properties via a single javascript library that throws fits if anyone does the wrong thing. You can't know if a third party library will break your code or you will break its. It's horrible.",format:"markdown",date:"2025-06-27T17:07:10.275Z",keywords:[],summary:"I love CSS variables, but this new feature of CSS makes them fragile and there's no obvious workaround. Don't use it.\n",author:"Tonio Loewald",_created:"2025-06-27T17:04:29.087Z",_modified:"2025-06-29T13:58:30.597Z",_path:"post/0cs6kyt7zm9r"},"post/path=what-should-a-front-end-framework-do":{title:"What should a front-end framework do?",path:"what-should-a-front-end-framework-do",content:"## Why `tosijs` is the way it is\n\n> tosijs is the new name of xinjs. Updating the documentation is a work in progress.\n\n```html\n\n \n \n \n \n```\n```css\nxin-icon {\n margin: 10px;\n display: block;\n height: 240px;\n}\n\nxin-carousel::part(pager) {\n scrollbar-color: transparent transparent;\n}\n```\n\nMost front-end programming today is done using a framework—a library that makes it easier to create chunks of user interface and allows developers to avoid the nitty-gritty details of the DOM APIs. While many frameworks simplify aspects of web development, `tosijs` is designed to provide a robust, highly sophisticated foundation for building the most demanding and complex web applications, and even desktop apps, with unparalleled efficiency and maintainability.\n\n`tosijs`'s original design was spurred by a simple observation: you don't make complex things simpler by making simple things complex. Our aim isn't just to slightly reduce boilerplate for trivial examples like a to-do list—though tosijs certainly does that. Instead, our true focus is on compounding these simplifications across an entire, large-scale project, and on getting out of the way of things that work, rather than wrapping them in layers of abstraction that reduce performance and mask functionality.\n\n> What do we mean by \"wrapping things that work in layers of abstraction that reduce performance and mask functionality?\"\n> \n> Take the native HTML `<input>` element. It offers mature functionality, built-in accessibility, and sophisticated, customizable validation support that is often obfuscated or absent from framework-specific \"wrapper\" components. 
These wrappers are \"necessary\" because frameworks like Angular and React are prone to re-rendering the element at the slightest provocation, losing crucial state (like what you selected or where the caret is).\n\nBy consistently minimizing overhead, reducing lines of code, and leveraging native browser capabilities where it makes sense, `tosijs` aims to make fundamentally complex architectures manageable, debuggable, and performant in a way that other frameworks often fail to achieve either initially, or as they evolve and scale.\n\nIn my opinion, the main goal of a **framework is to minimize the amount of code you need to write to accomplish your goal**, but not at the expense of functionality, performance, maintainability, or scalability.\n\nIn my experience, for building user interfaces on a web stack, this involves:\n\n* writing reusable _composable_ components\n* managing state\n* creating DOM elements\n * binding data to those elements\n * binding event handlers to those elements\n * keeping those elements up-to-date\n* using specialized libraries for specific tasks, like rendering charts\n* communicating with services, file systems, etc.\n\n## Reusable _Composable_ Components\n\nCreating reusable components used to be quite difficult, and a number of frameworks provided systems for doing this, notably React and Angular, but after a bad start—the initial design was not fully thought out and wasn't consistent across browsers—[web-components](https://developer.mozilla.org/en-US/docs/Web/API/Web_components) (a.k.a. custom elements) do this very well, work consistently across browsers, and offer capabilities that React and Angular don't match. Web-components:\n\n* can compose their children into multiple \"slots\"\n* function as drop-in replacements for HTML elements (e.g. you can write your own `<select>` replacement that can literally be swapped for the standard one with no code changes).\n* don't depend on a specific version of a specific library, so you can mix web-components from different libraries. `tosijs` even allows you to provide component blueprints that can be created using one version of `tosijs` and used with another.\n\nE.g. you can target different \"slots\" within a web-component using the `slot` attribute:\n\n```html\n<my-component>\n <div slot=\"nav\">Navigation Stuff</div>\n <h1>Hello world</h1>\n <p>This is inside the default slot</p>\n</my-component>\n```\n\nHow hard is this kind of thing to do? Here's a simple example implemented using `tosijs`'s `Component` base class.\n \n```js\nconst { Component, elements } = tosijs\n\nconst { slot, div } = elements\n\nclass Slotted extends Component {\n static styleSpec = {\n ':host': {\n display: 'inline-flex',\n flexDirection: 'column',\n padding: 10,\n alignItems: 'center',\n boxShadow: 'inset 0 0 0 2px #aaa'\n }\n }\n \n content = [\n slot({name: 'above'}),\n slot(),\n slot({name: 'below'})\n ]\n}\n\nconst slotted = Slotted.elementCreator({tag: 'my-slotted'})\n\npreview.append(\n slotted(\n div('I am in the middle'),\n div({slot: 'below'}, 'I end up at the bottom'),\n div({slot: 'above'}, 'I end up on top'),\n )\n)\n```\n\n> **Note** if you provide a `static styleSpec` in a Component subclass, it gains a shadowDOM.\n> If you provide a styleSpec in the `elementCreator()` call, this becomes a stylesheet in the\n> outer DOM, `:host` is changed to the component's tagName (which might get changed to avoid\n> collisions), and `<slot>` elements are replaced with `<xin-slot>` elements that have the same\n> compositing behavior.
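\n\nTo illustrate the second case, here's a minimal sketch of the light-DOM variant, assuming the `styleSpec` option to `elementCreator()` works as described in the note (the tag name is illustrative):\n\n```js\nconst { Component, elements } = tosijs\n\nconst { slot } = elements\n\n// no static styleSpec, so no shadow DOM; the slots become xin-slot elements\nclass LightSlotted extends Component {\n content = [\n slot({name: 'above'}),\n slot(),\n slot({name: 'below'})\n ]\n}\n\n// this styleSpec becomes a stylesheet in the outer DOM, with ':host'\n// rewritten to the component's registered tag-name\nconst lightSlotted = LightSlotted.elementCreator({\n tag: 'my-light-slotted',\n styleSpec: {\n ':host': {\n display: 'inline-flex',\n flexDirection: 'column'\n }\n }\n})\n```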
\n\nIn React you can implement this using a combination of children and props—which works—but it's clumsy.\n\nAnd beyond this, web-components:\n\n* isolate their internal DOM (but this is _optional_), allowing things like embedded word-processors that don't get \"polluted\" by surrounding style rules.\n* are _asynchronous_ (e.g. you can server-side render the elements) and hydrate them when the implementing code arrives.\n\nEven if you don't use the shadowDOM, web-components leverage custom HTML tag-names to make scoping styles very easy, without resorting to a proliferation of `class` identifiers. Using a `class` to specify a component built out of `<div>` elements causes problems with managing CSS on large projects because a class overrides a tag, whereas a custom tag-name does not and effectively creates its own namespace.\n\nThis last bullet is quite significant. A React component with server-side rendering needs to run on the server _and_ on the client: you end up hydrating data into HTML, sending the rendered HTML, and then sending code across that knows how to render the HTML (again) to handle updates, and all this wastes bandwidth and CPU cycles, and the user may experience a performance \"thunk\".\n\n## Creating Custom Elements\n\n`tosijs` provides a convenient abstract base class—`Component`—that lets you build robust components that even handle things like name collisions (e.g. if you use web-components from two different libraries that register the same tag-name, subclasses of `Component` will automatically register themselves under a unique name if needed).\n\n> ### Registration tagName Collision Handling\n> \n> How does `tosijs` handle tagName-collision? It automatically picks new names until the collision disappears. `tosijs` provides extensive support for web-components that don't use the shadowDOM (which makes them more performant and easier to style).\n>\n> If you write (light DOM) style rules for a web-component using ':host' these will be rewritten using the component's registered tag-name. So if you ask for the tag `my-component` and it ends up being renamed to `my-component2` then its styles will still be targeted correctly.\n\n```typescript\nimport { Component, elements, PartsMap } from 'tosijs'\n\ninterface MyComponentParts extends PartsMap {\n input: HTMLInputElement,\n caption: HTMLElement\n}\n\nconst { label, span, input } = elements\n\nexport class MyComponent extends Component<MyComponentParts>
{\n disabled = false\n caption = ''\n value = ''\n \n constructor() {\n super()\n \n this.initAttributes('disabled', 'caption')\n }\n \n handleChange = () => {\n this.value = this.parts.input.value\n }\n \n content = () => label(\n span({\n part: 'caption'\n }),\n input({\n part: 'input',\n onInput: this.handleChange\n })\n )\n \n render() {\n super.render()\n \n this.parts.caption.textContent = this.caption\n this.parts.input.disabled = this.disabled\n if (this.parts.input.value !== this.value) {\n this.parts.input.value = this.value\n }\n }\n}\n\nexport const myComponent = MyComponent.elementCreator({\n tag: 'my-component'\n})\n```\n\nA few notes on this:\n\n* `initAttributes` tells Component to track those attributes and trigger `render` if they change. It will also track the type of values (so a `number` attribute will stay a `number`, even if initialized with a `string`, and `boolean` attributes will behave as expected: ` ` will do the correct thing. This allows the element to operate nicely as a drop-in replacement for an ` `.\n* Note that `value` isn't defined as an attribute, but if value is initialized to anything other than `undefined` then the `Component` class will automatically give it attribute-like qualities, e.g. changing it will trigger the `Component`'s `queueRender(true)`.\n* `queueRender()` is a Component method that uses `requestAnimationFrame` to trigger the `render()` method and, if passed `true` dispatches 'change' event on the element.\n\n`tosijs` also supports `Component` \"blueprint\" functions that can be hydrated by any compatible version of `tosijs` at run-time (so you don't run into issues where two components created with different versions of the same library can't co-exist).\n\n```typescript\nimport { PartsMap, XinBlueprint } from 'tosijs'\n\ninterface MyComponentParts extends PartsMap {\n input: HTMLInputElement,\n caption: HTMLElement\n}\nconst blueprint: XinBlueprint = ({ elements, Component }) => {\n const { label, span, input } = elements\n class MyComponent extends Component {\n // same contents as before\n }\n return {\n type: MyComponent\n }\n}\n``` \n\n## Managing State\n\nA huge amount of user-facing code is devoted to managing application state and synchronising it to the DOM. All of this code needs to be written, tested, maintained, and so on. Ideally this code should be as minimal and robust as possible. Throughout my career, I have constantly strived to make this code trivial or unnecessary to write, test, or maintain.\n\nA very simple bit of binding code might synchronise `app.user.name` with an ` ` field:\n\n```javascript\nconst app = {\n user: {\n name: 'Anne Example'\n }\n}\n\nconst nameField = document.createElement('input')\nnameField.value = app.user.name\nnameField.addEventListener('input', () => {\n app.user.name = nameField.value\n})\n``` \n\nBut what happens if `app.user.name` is changed somewhere else?\n\nIn `React` you'd either handle this using a hook or a custom effect or pass the value down through the virtual DOM using something like Redux.\n\nIn `tosijs` you'd use a `boxedProxy`:\n\n```javascript\nimport { boxedProxy } from 'tosijs'\nconst { app } = boxedProxy({\n app: {\n user: {\n name: 'Anne Example'\n }\n }\n})\n```\n\n`boxedProxy` passes back a `xin` observer proxy wrapped around the value you passed to it. Any change made through the proxy triggers updates as needed. 
So, this code:\n\n```javascript\nimport { elements } from 'tosijs'\nconst { input } = elements\n\ndocument.body.append(input({ bindValue: app.user.name }))\n```\n\nWill create an `<input>` that updates `app.user.name` when the user edits it, but will also be updated if `app.user.name` is changed elsewhere.\n\n> How does the `xin` observer proxy work? It wraps an object with a proxy that remembers where the value it contains came from. If the value isn't an `object` (e.g. a `string`, `number`, or `boolean`) it wraps it in the corresponding Javascript object (e.g. `String`, `Number`, `Boolean`) and wraps _that_ in a proxy.\n> \n> So, `app.user.name` is a proxy wrapped around `new String('Anne Example')` and has a `xinPath` ('app.user.name') and a `xinValue` ('Anne Example').\n> \n> The root object, `xin`, is a singleton. And the `xinPath` is basically the Javascript path to the value within `xin`, but it also supports the concept of \"id-paths\" which allow unique and stable references to objects in arrays, e.g. imagine `app` contains an array of `messages` which each look like: `{id: '12345', title: 'a message',...}`. You could refer to a message as `app.messages[id=12345]` rather than `app.messages[2]`.\n> \n> Debugging issues with `xin` state is usually straightforward. A common trick is to simply make `xin` (the singleton state object) available as a global and interrogate it. If `xin.foo.bar` contains the right value, the problem is in the binding. If it isn't, you haven't put the data where you think you did.
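\n\nTo make id-paths concrete, here's a small sketch (the `messages` data is hypothetical, and it assumes `touch()` accepts the same id-path syntax):\n\n```js\nimport { boxedProxy, touch } from 'tosijs'\n\nconst { chat } = boxedProxy({\n chat: {\n messages: [\n { id: '12345', title: 'a message' },\n { id: '67890', title: 'another message' }\n ]\n }\n})\n\n// mutate the underlying array directly, then notify the observer using an\n// id-path, which stays stable even if the array is re-sorted or filtered\nchat.messages.xinValue.find(m => m.id === '12345').title = 'renamed'\ntouch('chat.messages[id=12345]')\n```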
\n\n## Creating DOM Elements\n\nOne of the most popular aspects of React is JSX, a [domain-specific language](https://en.wikipedia.org/wiki/Domain-specific_language#:~:text=A%20domain%2Dspecific%20language%20\\(DSL,is%20broadly%20applicable%20across%20domains.) that combines XHTML and Javascript while being neither, allowing a developer to write something that _looks like_ HTML to produce code that ultimately generates HTML that somewhat resembles the source code.\n\n```tsx\ninterface MyProps {\n caption: string\n}\n\nexport default (props: MyProps) => <label>{props.caption}</label>\n```\n\nThis produces a function that returns a sack of properties that then can be inserted in a \"virtual DOM\" and ultimately turned into the `<label>` you first thought of.\n\nNote that if you import this function as `MyComponent` you'll end up with a virtual DOM element named `<MyComponent>` that contains the actual (and virtual) `<label>`. Typically there will be more purely virtual DOM elements (with no actual corresponding DOM element) compared to real ones, and the most important elements for purposes of debugging will be those components which cannot be found in the DOM and require using the custom React debugging extension, which is surprisingly crude despite over a decade of development.\n\nAlso, every real DOM element (which is implemented in highly optimized native code) will have multiple virtual DOM elements (implemented in far less optimized Javascript) that occupy memory and consume processor cycles.\n\nIt's also worth noting that JSX is intrinsically tied to the concept of a virtual DOM. It's easy to write your own JSX implementation, but it's impossible to divorce it from the concept of the \"virtual DOM\". So, even if you think having to use a DSL is just great, there's that.\n\nFinally, if you care about this, LLMs tend to have a huge amount of mediocre to bad React code in their training data and will hallucinate React solutions to problems given the slightest provocation (this goes beyond LLMs—a lot of lint tools will tell you that a web-component's `render` method is wrong because it doesn't look like a `React` component's `render` method). JSX is supported by the Angular tech stack these days, and I imagine Angular developers are driven nuts by LLMs mis-identifying their code as React code and helpfully suggesting horribly incorrect changes (on top of getting confused between the two Angular trees…)\n\nIn my opinion, a good framework doesn't define new and unnecessary DSLs, even if they didn't have the issues JSX has.\n\n**The best way to produce DOM nodes, in my opinion, is either HTML or pure Javascript**. The former option benefits from decades of careful optimization, but suffers from weaker code-quality support (HTML does not have the kind of code quality tooling Typescript and Javascript have) while the latter can be clumsy. (See the vanilla js example under _Managing State_, above.)\n\n`tosijs` provides `elements`, a [proxy](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy) that produces `ElementCreator` functions. This is easiest to explain by example:\n\nIn React you can produce a list component like this:\n\n```tsx\ninterface MyProps {\n list: string[]\n}\n\nexport default (props: MyProps) => <ul>\n {props.list.map((s: string) => <li>{s}</li>)}\n</ul>\n```\n\nWith `elements` you can write this:\n\n```typescript\nconst { ul, li } = elements\n\nexport default (list: string[]) => ul(...list.map((s: string) => li(s)))\n```\n\n* `elements` lets you build DOM elements using pure Typescript / Javascript, not a DSL\n* the resulting code is more compact (e.g. `<ul>...</ul>` becomes `ul(...)`).\n* `elements` code produces DOM nodes directly. There's no virtual DOM.\n* an `ElementCreator`—such as `li()`—treats string parameters as `Text` nodes, object parameters as attribute/property maps, and `Element` parameters as children.\n\nSo, without preprocessing or transpilation you get to build the DOM with less code, the code you write traffics in actual DOM elements with _standard_ APIs, and that code is written in Typescript / Javascript, not a DSL.
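\n\nTo make those parameter conventions concrete, here's a tiny sketch (the class name and handler are illustrative):\n\n```js\nimport { elements } from 'tosijs'\n\nconst { button, span } = elements\n\n// object parameters become attributes/properties and event handlers,\n// string parameters become Text nodes, elements become children\nconst save = button(\n { class: 'primary', onClick: () => console.log('saved') },\n span('Save'),\n ' now'\n)\n\ndocument.body.append(save)\n```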
\n\n## Synergies\n\nBecause all these things work together there are some powerful synergies.\n\nE.g. if you write this vanillaJS:\n\n```javascript\nconst app = {\n user: {\n name: 'Anne Example'\n }\n}\n\nconst input = document.createElement('input')\ninput.value = app.user.name\ndocument.body.append(input)\n```\n\nYou've simply created an `<input>` element with the correct initial value.\n\nAdding code to sync the state to the DOM is quite a bit of extra work to handle user edits. But to handle cases where the state is changed directly and now the DOM needs to be updated, things get really nasty.
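\n\nFor contrast, here's roughly what that hand-rolled two-way wiring looks like (a sketch; the pub/sub plumbing is illustrative):\n\n```javascript\nconst app = {\n user: {\n name: 'Anne Example'\n }\n}\n\n// every element bound to app.user.name has to be tracked by hand\nconst subscribers = []\nconst setUserName = (name) => {\n app.user.name = name\n subscribers.forEach(fn => fn(name))\n}\n\nconst nameField = document.createElement('input')\nnameField.value = app.user.name\nnameField.addEventListener('input', () => setUserName(nameField.value))\nsubscribers.push(name => {\n // guard against clobbering the caret while the user is typing\n if (nameField.value !== name) nameField.value = name\n})\ndocument.body.append(nameField)\n```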
\n\nIf you stick `app.user.name` in some places in the DOM, and then it is changed, how do you make sure what's on the screen is updated? React literally just renders everything again from scratch, with some optimizations to avoid rendering some things unnecessarily, and having to explicitly work around pathologies in the behavior of weird and unusual things like `<input>` fields. Oh right, those aren't weird and unusual, they're absolutely everywhere and pretty much the whole reason you need to manage state in applications.\n\nBy contrast, in `tosijs` if you write:\n\n```javascript\nconst { app } = boxedProxy({\n app: {\n user: {\n name: 'Anne Example'\n }\n }\n})\n\ndocument.body.append(input({ bindValue: app.user.name }))\n```\n\n`tosijs` will automatically bind the proxy to the `value` of the `<input>`. That's the really hard problem solved straight out of the box with no extra code. In fact, for complex operations, that's usually all you want because storing changed state may involve a round-trip to a server, failure handling, etc.\n\nNow, if you write:\n\n```javascript\napp.user.name = 'Some other example'\n```\n\nThe `<input>
` will be updated.\n\nHere's all of this in action:\n\n```js\nconst { elements, boxedProxy } = tosijs\n\nconst { exampleApp } = boxedProxy({\n exampleApp: {\n user: {\n name: 'Anne Example'\n }\n }\n})\n\nconst { h1, label, input, div, pre, span } = elements\n\npreview.append(\n h1(\n 'Hello ',\n span({bindText: exampleApp.user.name})\n ),\n div(\n label(\n span('Edit name'),\n input({bindValue: exampleApp.user.name})\n )\n ),\n pre(\n`Try opening the developer console and interacting with exampleApp.\nE.g. you can type exampleApp.user.name = \"some other example\"`\n )\n)\n\nwindow.exampleApp = exampleApp\n```\n\nAnd this is pin-point efficient. Only things bound to `app.user.name` get updated. When you bind data to a DOM element with `tosijs` it tracks exactly what data is bound where. Try using the Chrome debugger tools and turn on Rendering > Paint flashing.\n\n## Performance\n\n`tosijs` is designed from the ground up to avoid worst-case performance issues.\n\n* there is no virtual DOM\n* DOM elements are created from pure javascript and cloned when practical\n* updates to the DOM are performed only when strictly necessary\n* typically simple `tosijs` code is shorter and simpler than equivalent React (or—shudder—Angular) code. And this effect tends to multiply across more complex cases.\n* `tosijs` leans heavily into modern CSS, especially CSS variables. These have the advantage of being natively supported on all major browsers, penetrating the shadowDOM of custom elements, and offering the benefits of frameworks like TailwindCSS in a more flexible, efficient, and maintainable way.\n* CSS created on top of css-variables tends to be more compact than CSS compiled using things like SCSS and Less, and still supports semantic classes (unlike TailwindCSS).\n* the `xin` proxy has almost no overhead compared with (for example) passing data through the React virtual DOM. And if you want to avoid even that overhead, you can simply modify the underlying object directly (e.g. `foo.bar.xinValue.baz = [...]`) and then simply notify `tosijs` using `touch()` when you want the DOM updated.\n\n### Virtualized List Binding\n\n`tosijs` supports virtual list binding out of the box. As long as you know the height (or minimum height) of an item within a list and the container you put the list in has a set size, `tosijs` will virtualize the list.\n\nE.g. imagine you have a list of all the currently defined emoji (or some recent subset thereof) and you want to render an emoji picker.\n\n\nThis gives you a scrolling list of thousands of emoji (each of which is quite expensive to render) but only the _visible_ DOM elements are rendered. 
This example does exactly that.\n\n```js\nconst { elements, boxedProxy, throttle, touch } = tosijs\n\nconst request = await fetch(\n 'https://raw.githubusercontent.com/tonioloewald/emoji-metadata/master/emoji-metadata.json'\n)\nconst list = await request.json()\n\nconst { emoji } = boxedProxy({ emoji: {\n filter: '',\n list\n} })\n\nconst filterEmoji = throttle(\n () => {\n const needle = emoji.filter.trim().toLocaleLowerCase()\n console.log(needle)\n const useFilter = !!needle\n for(const item of emoji.list.valueOf()) {\n item.hidden = useFilter && !(item.name.toLocaleLowerCase().includes(needle))\n }\n touch(emoji.list)\n }\n)\n\nconst { input, div, span, template } = elements\n\npreview.append(\n input({ \n placeholder: 'filter emoji', \n bindValue: emoji.filter,\n onInput: filterEmoji\n }),\n div(\n {\n style: {\n height: '250px',\n overflowY: 'auto'\n },\n bindList: {\n hiddenProp: 'hidden',\n idPath: 'name',\n value: emoji.list,\n virtual: {\n height: 30\n }\n }\n },\n template(\n div(\n { class: 'row' },\n span({bindText: '^.chars'}),\n span({bindText: '^.name'}),\n )\n )\n )\n)\n```\n```css\n.preview .row {\n display: grid;\n grid-template-columns: 80px auto;\n}\n```\n\n## Using Specialized Libraries\n\n```html\n\n
\n```\n```css\n.preview xin-map {\n width: 100%;\n height: 100%;\n}\n```\n\n\nTypically, complex libraries are shipped as vanilla javascript, and need to be wrapped using host components. `tosijs` typically can use non-rendered libraries directly, and simply wrap rendered libraries in a very thin wrapper, allowing full and direct access to the underlying API.\n\nE.g. `tosijs-ui` (a component library provided for use with `tosijs`) provides lightweight wrappers for `mapboxjs`, `babylonjs`, and `bodymovin` that provide complete access to the underlying API with no overhead.\n\nAnd in cases—like mapbox or webrtc tools—where a given library is large, isn't always needed, and/or using it requires the user is online, you can load a **blueprint** asynchronously, and that blueprint can itself load its dependencies asynchronously, as needed.\n\n
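Here's a rough sketch of what lazily loading a blueprint might look like (the module path and default export are hypothetical; `elementCreator()` is the mechanism described earlier):\n\n```js\nimport * as tosijs from 'tosijs'\n\n// a blueprint is a function of the form ({ elements, Component }) => ...,\n// so a host app can hydrate it with its own copy of tosijs\nconst loadMap = async () => {\n const { default: blueprint } = await import('./map-blueprint.js')\n const { type: MapComponent } = blueprint(tosijs)\n return MapComponent.elementCreator({ tag: 'xin-map' })\n}\n\nloadMap().then(xinMap => document.body.append(xinMap()))\n```\n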
\n \n \n\n## Radically Simplifying Complex Apps: The `tosijs` Advantage\n\nEven if your current workflow revolves around React and JSX, `tosijs` offers a fundamentally simpler approach to managing and mutating application state. Instead of relying on manual \"effect\" handlers or cumbersome prop drilling through partial DOM hierarchies, `tosijs`'s **`boxedProxy` system** provides automatic, pin-point DOM updates. You can simply `useXin()` to create hooks to `tosijs` data-paths, letting `tosijs` handle the synchronization efficiently and precisely.\n\n> `useXin` is part of the tiny [react-tosijs](https://www.npmjs.com/package/react-tosijs) library that allows you to create a hook for any data-path. You can just shove data into `xin` and pipe it anywhere it's needed.\n\nBeyond state management, `tosijs` provides a genuine path to escape the confines of \"invasive\" frameworks that can lead to tangled dependencies and escalating complexity. We've all experienced the \"trauma\" of maintaining sprawling and convoluted Redux codebases or navigating an intricate web of interacting React Hooks. With `tosijs`, you can seamlessly integrate vanilla JavaScript libraries, leveraging `boxedProxy` to pipe data and track changes, allowing for **clean, side-by-side operation with existing legacy functionality**. Over time, this enables a gradual, strategic migration towards a codebase that's cleaner, more maintainable, and free from burdensome framework-specific dependencies.\n\n```js\nconst { elements, boxedProxy, touch } = tosijs\n\nconst { h1, div, input, span, template, button } = elements\n\nclass Todo {\n items = [] // { done: boolean, task: string }\n \n constructor(tasks = []) {\n this.items = tasks\n }\n \n addTask(task) {\n if (task) {\n this.items.push({\n done: false,\n task\n })\n }\n }\n}\n\nconst todo = new Todo([\n {\n task: 'Get milk',\n done: false\n },\n {\n task: 'Feed cat',\n done: false\n }\n])\n\nconst { todoProxy } = boxedProxy({\n todoProxy: {\n items: todo.items,\n newItem: ''\n },\n})\n\npreview.append(\n h1('To Do'),\n div(\n {\n bindList: {\n value: todoProxy.items\n }\n },\n template(\n div(\n { class: 'todo-item' },\n input({type: 'checkbox', bindValue: '^.done'}),\n span({bindText: '^.task'})\n )\n )\n ),\n div(\n input({ \n placeholder: 'add item', \n bindValue: todoProxy.newItem, \n onKeyup(event) {\n if (event.key === 'Enter') {\n event.target.nextElementSibling.click()\n }\n }\n }),\n button(\n { \n onClick() {\n todo.addTask(todoProxy.newItem.valueOf())\n todoProxy.newItem = ''\n touch(todoProxy.items)\n },\n bindEnabled: todoProxy.newItem\n }, \n 'Add Todo Item'\n )\n )\n)\n```\n```css\n.todo-item {\n display: grid;\n grid-template-columns: 44px auto;\n}\n\n.todo-item input[type=checkbox] {\n height: 16px;\n}\n\nbutton[disabled] {\n pointer-events: none;\n opacity: 0.5;\n}\n```\n\n\nFor those building fresh, `tosijs` empowers you to use your favorite **Web Component libraries alongside standards-driven HTML**. You can effortlessly drop in custom Web Components as direct, superior replacements or enhancements for often-substandard native HTML elements like `
` or `
`, significantly improving user experience without compromising on standard web practices.\n\nOne of the most common performance bottlenecks in web applications is handling large tabular data. `tosijs` tackles this head-on by providing **nearly trivial support for virtualized list bindings**. This means you no longer have to devise complex strategies to break up large datasets to maintain a smooth user experience. Instead, you can focus on simply ensuring you're not loading unnecessarily massive amounts of data in the first place.\n\nWhile `tosijs` might initially remind you of focused, lightweight libraries like Svelte, SolidJS, or Vue.js, there's a crucial distinction. Those libraries primarily concentrate on simplifying the development of _simple applications_. `tosijs`, by contrast, is laser-focused on **radically simplifying the development of rich and genuinely complex applications**. Making simple apps even simpler is a welcome side-effect, but it's not our ultimate goal.\n\n`tosijs` represents a significant evolution in front-end frameworks. Our objective is to deliver **native-like usability, performance, and robustness** on the web stack. This allows developers to accomplish more with **less code written, fewer dependencies, smaller package sizes, reduced bandwidth consumption, and significantly easier maintenance**. It's about a fundamental shift: write less code, build less code, read less code, fix less code, and—ultimately—get far more done.\n\n> ### Colophon\n> \n> This website replaces my old Wordpress-based blog because I was sick of paying for lackluster PHP hosting, didn't like with Wordpress did to my text, was sick of plugins breaking, and wanted to see how quickly I could build a functional blog site using the `tosijs` stack. \n>\n> After a couple of weeks of coding, the site has been stable and I honestly put about as much work into maintaining it as I used to put into fixing issues with Wordpress. (Obviously, if I paid for wordpress hosting specifically it would involve almost zero effort.)\n>\n> Blog posts are written in [markdown](https://en.wikipedia.org/wiki/Markdown), edited using the [Ace Editor](https://ace.c9.io/), rendered using [marked](https://marked.js.org/), and the blog itself is hosted on [Google Firebase](https://firebase.google.com/). All the glue and UI is done using [tosijs](https://tosijs.net), [tosijs-ui](https://ui.tosijs.net), and [xinie](https://xinie.net). The latter is a work-in-progress and this is the testbed for it.\n>\n> Finally, all the interactive examples are themselves embedded using `tosijs-ui`'s `` web-component.",date:"2025-06-03T22:24:57.708Z",keywords:[],summary:"This article introduces tosijs, a highly opinionated front-end framework designed to radically simplify complex web and desktop application development. Unlike frameworks that complicate simple tasks or rely on inefficient virtual DOMs, tosijs leverages native Web Components, direct DOM manipulation, and a unique proxy-based state management system to minimize boilerplate, boost performance, and enhance maintainability, allowing developers to achieve more with less code.",author:"Tonio Loewald",_created:"2025-05-31T07:01:44.486Z",_modified:"2025-09-13T16:53:51.602Z",format:"markdown",_path:"post/s51blf4tsx65"},"post/path=squirrels":{keywords:[],title:"Squircles",path:"squirrels",content:"\n \n \n\nProducing squircles is not especially easy. 
There are a number of YouTube tutorials on the subject.\n\nThing is, once you've seen a squircle, it's hard to go back to rounded rectangles… they just look so much better because of the way the curve straightens.\n\n \n\nI should note that in the diagram above the two figures have the same \"bevel radius\". In practice with a squircle you'd use a larger bevel radius.\n\nAnyway, **Amadine** doesn't do them perfectly, but it does offer elliptical curved corners as a beveling option and it's damn close.",date:"2025-05-05T16:14:46.879Z",summary:"Tired of those boring rounded rectangles? Squircles are the next evolution in smooth, visually appealing shapes, and while perfect execution is tricky, Amadine does a nice job with them.",author:"Tonio Loewald",_created:"2025-05-05T12:15:00.202Z",_modified:"2025-06-03T20:51:23.168Z",format:"markdown",_path:"post/nh3jt0g2b6wl"},"post/path=blender-gets-real":{keywords:[],title:"Blender Gets Real",path:"blender-gets-real",content:" \n\nThe Oscar for best animated movie went to Flow, a 3d animated movie made with Blender by director [Gints Zilbalodis](https://www.imdb.com/name/nm6478535/?ref_=tt_ov_dr_1).\n\nNot only is a tiny team beating out the likes of Pixar and Dreamworks an incredible achievement on its own, \nit's also a big win for open source and anyone with a great idea and limited access to financing. \nThis is the first Oscar-winning animated movie to be made without dedicated engineering support \n(e.g. Pixar uses tons of custom software and has two engineers for every artist). These folks had a total budget of $4M.\n",date:"2025-03-26T15:04:54.659Z",summary:"Flow, the Blender-animated film, took home the Oscar for Best Animated Feature. But it's more than just a win for a small team; it's a monumental victory for open-source software and anyone with a vision and a limited budget.\n",author:"Tonio Loewald",_created:"2025-03-26T11:54:52.605Z",_modified:"2025-03-26T15:05:04.371Z",_path:"post/5lxsjwunwnjb"},"post/path=futures-so-bright-i-want-to-wear-ar-glasses":{keywords:[],title:"The future's so bright… I want to wear AR glasses",path:"futures-so-bright-i-want-to-wear-ar-glasses",content:" \n\nThe collapse of Democracy, loss of trust in institutions,\nthe corruption of social norms, the rise of ignorant bigotry and wilful \nstupidity. All of this sucks.
\nIt also sucks that the dictator of Russia thinks he's some kind of historic\ntsar destined to have \"the great\" after his name, if he can only commit enough\npointless atrocities, or at least keep his population distracted from his own \nabject incompetence as a leader. Imagine if he'd spent all that oil money\non actually fixing the Russian economy?
\nIt also sucks that the US electorate has chosen to elect a felon, fraudster, and sex predator to its highest office, aided by corrupt justices in its highest court, and a bunch of weak-minded, weak-willed fools in its Congress.
It's all a huge shame, since technology is making incredible strides\nand it's incredibly exciting. Sure, we don't have Jetsons-style aircars, but \nhere's a list of stuff we do have that's frankly mind-blowing.
\nTricorders, Datasticks, Wrist Computers \nThe iPhone, Apple Watch, and Android-based equivalents are better and more \nuseful than pretty much any SF device of the same kind. An iPhone strapped to\nyour wrist would be a superior device (ignoring ridiculous magic) to any\nsimilar device shown in SF. Even futuristic \"post iPhone\" devices shown in\nSF (e.g. the glass slates in The Expanse) don't really match modern smart\nphones.
\nThe communicators in Star Trek: Next Generation didn't even have call waiting,\nwhich was something we had on regular phones at the time .
\nSolar Power \nI don't know if you've been paying attention, but the price of solar power is\nnow so low that the only thing stopping it from simply replacing all other \npower sources is the fact it isn't sunny everywhere all the time. If we just \nhad a grid that could send power from where it's sunny to other places (and\nit's being built) and conveniently store and recharge power on things that\nneed to move, there'd really be no reason to generate power any other way.
\nYes, climate change is bad, but if we're not actively stupid we already have\nall the technology we need to solve the problem both in the near term (build\nmore modern nuclear power plants) and the long (create a worldwide solar grid).
\nOh yeah, and we seem to have solved the population crisis so thoroughly that we might need\nto encourage folks to have more kids at some point. Maybe not treat women (or\nbiologically female persons if you prefer) who have kids as third class \ncitizens? I dunno.
\nNuclear Power… Fusion Power? \nWe can make nuclear power plants that are safer and have a smaller environmental\nimpact than any other form of power generation. Right now, it's not cheap, but\nnuclear power is exactly the kind of technology that would get super cheap if\nwe just scaled it.
\nNever mind, fusion seems to have gone from being \"fifty years away\" to actually \ngetting serious private investment. Someone might actually get it to work!
\nBattery Technology \nElectric cars are actually the least interesting thing that we've gotten from\nimprovements in battery technology. Moore's Law predicted we'd have cheap\nsupercomputers that could fit on our wrists by now, but it didn't say they'd\nbe able to run all day on battery and have a fast wireless network connection.
\nFor quite a while, Moore's Law actually seemed to be saying the opposite. A\nmodern gaming PC consumes an embarrassing amount of power compared to, say,\nthe very expensive graphics workstation I had custom made for myself back in\n2001.
\nInsanely Good Cellular Networks \nWe have a very fast cable connection here in Oulu, Finland (2h south of the\nArctic Circle by car…). But when it wobbles and I switch to sharing my iPhone's\nunlimited data plan, the network gets better.
\nAI (sort of) \nOK, it's not really intelligent, but it does pretty much ace the Turing Test,\nso while AGI may be a big con, the fact is we've now got something that passes\nthe Turing Test and can run locally on your laptop or phone.
\nCan it think? No. Can it conduct an \"intelligent\" conversation? Sure, it may\nmake shit up or hallucinate, but find me a person who doesn't sometimes do\nthat.
\nAnd if you're a programmer, the interface for these things is trivial. \nYou pass it a bunch of text and you get back a bunch of text. For advanced\ncases you can add images and so on. It's so much easier than, say, displaying\na window with \"hello world\" in it was on a 1984 Mac.
\nTo give an example, I have an AI-powered tool for my blog to create summaries\nof my posts, and the API is not much more than:
\nconst post = ... // get the post source\nconst response = await ai(`Here is a blog post I've written in markdown format, please write a blurb for it or extract a key paragraph. ${post}`)\nconst summary = response.text\n
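\nFor the curious, a helper like ai() can be a very thin wrapper around whatever model API you use. This sketch assumes a hypothetical HTTP endpoint and response shape, not the actual implementation:
\nasync function ai(prompt) {\n const response = await fetch('https://example.com/v1/complete', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ prompt })\n })\n return response.json() // assumed to be { text: '...' }\n}\n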
\n\nSure, it doesn't write very well, but it was super easy to implement.
\nOh, and I can right-click and have an AI proofread anything.
\nAR and VR headsets \nThe Meta Quest 3 is amazing and really quite cheap. The Vision Pro seems even\nmore amazing, but not cheap. But both cost peanuts compared to even mediocre \nVR headsets from 10 years ago.
\n3D Printers \nYou can 3D print in plastic, metal, or wood. You can print subtractively\nor additively. The precision of 3D prints is such that you can print parts that\nshould fit together perfectly, and they fit together perfectly.
\nAnd this is no longer for toys. Even with inexpensive printers, you can print \ncar and motorcycle parts or entire drones (aside from the circuits and motors).
\nAnd the devices to do this can be bought for the price of a high-end power tool.\nIt's nuts.
\nVideo Conferencing \nYou can do business from anywhere via video conferencing on laptops, phones,\nand TVs. In fact it's so common, we've seen several generations of social norms\nevolve (e.g. today, there's an unspoken threshold at which almost everyone turns\noff their camera).
\nShared Office Space \nWe Inc may have been a huge scam, but the idea remains brilliant and there are\nplenty of people offering shared office space sustainably today.
\nThe business I founded in 2001 had to commit to a multi-year AUD $8000/month \nlease. If we had shared office space we'd have had $100k/year more to spend on \nanything we needed… or just salaries.
\nAnd don't even talk about the horrific price we paid for a 56kB ISDN connection.
\nCube Sats \nGetting to be an astronaut has never been a reasonable goal for most people. But\nfor the price of a new Porsche 911, BMW M5, or Tesla Cybertruck, you can launch \nyour own satellite.
\nTiny, Customizable, Powerful, Embeddable Computers \nYou can get fully functional computers like the Arduino or Raspberry Pi for \nalmost nothing, and program them to do AI, drive displays, take pictures, \nyou name it.
\nSo, if you want to build your own satellite, it's not that big a deal to hook\nup a radio to a tiny computer and a fancy camera, solar cells, batteries, etc.\nand you've got your own spy satellite. And quite a few startups have done\nbasically this.
\nVery Fast Trains \nYou might not know this if you live in the US, but in many countries outside the\nUS, trains are so cheap and convenient that it's hard to justify driving long\ndistance by car. In Finland the intercity trains can reach 200km/h—you simply\ncan't get places as fast in a car—and the cities tend to be highly walkable.\nIt doesn't hurt that gas is highly taxed, but if you had a choice of driving or\ngetting there quicker and being able to work and go to a restaurant while \nzooming through the landscape at 150km/h, would you drive?
\nIt would be great if the WiFi on the trains were a little better. I mean, \nsometimes I have to wait a few minutes for the network to be quick enough for\nme to push a PR.
\nGenetic Engineering \nYou can literally buy a kit for very little money that allows kids to use\ngene shears to make a plant glow in the dark. This isn't even all that recent.
\nMore recently, there was this worldwide pandemic. It was in the news.
\nVery shortly after COVID-19 was first detected, scientists in China were able \nto sequence the virus's genome and publish it. Other scientists at Moderna \ndownloaded the genome and tailored a highly effective vaccine against it over \na weekend. While it needed about a year to pass safety testing, the team \nthemselves were sufficiently\nconvinced of their technology that they all inoculated themselves immediately.\nIt seems highly likely to me that a lot of Trump's bravado about the pandemic\ncame from him and those close to him having early access to these vaccines.
\nSoftware Development is Crazy Easy \nAnd no, I'm not talking about using AI to write a to do list manager.
\nI literally wrote a utility program yesterday using npx create-xinjs-blueprint
\nin less time than it would have taken me to do the same thing using \nRealbasic.
\nI actually have a stack that lets you build web-apps top-to-bottom, desktop \napps, and tiny utilities as easily as I used to be able to in the 1990s. You \ncan't draw the user interface (which is the final missing piece) but everything \nelse is as easy or easier—I just need to wrap some convenience tooling around\nit and write documentation. So much documentation.
\nAir Cars \nI lied. You can actually buy air cars now.
\n",date:"2025-02-04T17:57:55.814Z",summary:"So much bad news right now…\n\nIt's all a huge shame, since technology is making incredible strides and it's incredibly exciting. Sure, we don't have Jetsons-style aircars, but \nhere's a list of stuff we do have that's frankly mind-blowing.",author:"Tonio Loewald",_created:"2025-02-04T17:49:54.240Z",_modified:"2025-02-04T18:07:11.527Z",_path:"post/2n8p4qfong3r"},"post/path=how-i-learned-to-stop-worrying-and-love-blender":{date:"2009-05-08T21:23:52.000+00:00",summary:"What is my perspective on using Blender for 3D animation and modeling after switching from Cheetah 3D?",keywords:["blender","3d animation","cheetah 3d","rigging","animation","character animation","cinetools","unity","unity development","fbx exporter"],author:"Tonio Loewald","$createdAt":"2024-06-05T09:47:38.234+00:00","$databaseId":"665dfafe000014726b3d",title:"How I learned to stop worrying and love Blender","$collectionId":"665dfb040030d12ada24","$permissions":[],content:" \n\nA lot of software today tends to be virtually modeless when used for common purposes. Modes were recognized as a bad thing in user interface design when Apple produced the original Human Interface Guidelines for the Mac.\n\nEven so, the early Mac programs were very modal. For example, if you pressed command-F in MacWrite a \"Find Dialog\" appeared which was modal. You couldn't, for example, select some text in the underlying document and paste it into the text field. If you were \"finding\" stuff and saw a typo you had to leave \"Find Mode\" to correct it, and then resume \"Find Mode\". In MacPaint when you zoomed in you were in \"fat bits\" mode and the only tools you could use were the pencil and eraser. Much of the complexity in modern software has resulted from eliminating modes -- in essence making all things possible all the time.\n\nIt's not acceptable for a modern word processor not to offer all its formatting functionality in headers and footers when, once upon a time, you essentially entered a crippled mini word-processor to modify your header or footer.\n\nThe worst kinds of modes are hidden or implicit modes -- i.e. modes you can be in without being able to tell. A classic example of such a mode is selecting stuff in Photoshop. If you \"make a selection\" that's all you're doing -- making a selection. You aren't actually selecting anything. If you then \"lift\" part of the underlying image (i.e. selecting that part of the layer underneath your selection) the user interface doesn't change even though you're now in a different mode. Understanding this odd behavior was, for me, the big aha moment in learning to use Barneyscan Photoshop. Before I figured that out, Photoshop just seemed perversely difficult to use.\n\nAnd, in fact, Photoshop is perversely difficult to use. MacPaint didn't do things the way Photoshop did, and Deluxe Paint (and later Studio/8 and Studio/32) managed to extend MacPaint's abilities without breaking MacPaint's paradigm to the extent that they allowed you to do everything Photoshop managed with its \"selections as entities in and of themselves\" without requiring users to unlearn their existing habits. No-one cares any more because every graphics pro knows Photoshop and has committed all the shortcuts for manipulating selections to muscle memory, but it's easy to forget how perverse Photoshop seemed to people who knew how to use other graphics programs back in 1990.\nBlender Revealed \nTo my mind, figuring out Blender requires at least two major realizations. 
Perhaps by documenting them I may help someone else \"get over the hump\" faster than I have.\n\nThe first realization is that Blender is built around the concept of a 3d cursor . You need to learn how to work with the cursor. (Shift-C snaps the cursor back to the origin, and learn to use the Shift-S menu -- it's wonderful.) The 3d cursor is a very powerful concept -- analogous to a text cursor -- but other 3d programs make do without such a concept, so realizing that Blender has this concept and is to a considerable extent built around it is analogous to understanding Photoshop's concept of a selection as an entity of itself -- once you realize that Photoshop is all about selections and learn how to work with them everything becomes second nature.\nAside : one thing I tried early with Blender was switching its mouse behavior over so that left-click selects and right-click sets the cursor. Unfortunately, Blender becomes somewhat broken with this option. In my opinion Blender would be a much easier program to use (and master) if left-click selected, and if -- indeed -- major thought were put into making more common actions left-clicks and less common actions right-clicks. This may be addressed (at least by preferences) in 2.50, but don't hold your breath.
\n\nThe second realization is that Blender is full of modes, that most of these modes are hidden, that some modes stick to specific objects while others affect the entire program, and that in some cases which mode you can enter is itself a modality. This isn't as easy to sum up in a nutshell -- but it's probably helpful to know that if Blender is acting in a weird or apparently unpredictable way, you've probably managed to trip over a hidden mode.\nBlender is difficult, but most of that difficulty is necessary \nWhen I say Blender is full of modes, these are modes in every sense -- menus change, the way things are displayed changes, what you can select and do changes, etc. etc. etc.\n\nAnd when I say hidden, I mean that how the user interface behaves can be completely changed by a setting on a panel you can't see, or by the mode you previously got into when you had something else selected.\n\nHere's a particularly hairy example: there are two distinct animation modes in Blender, \"Actions\" and \"NLA\" (Non-Linear Animation).\n\nThe idea is that you animate a figure's basic reusable behaviors (e.g. walking, jumping, waving) as individual \"Actions\" and then build longer animations using the NLA window, which treats actions as layers (much like video layers in After Effects or Final Cut Pro).\n\nWhether you're in NLA mode (where what you see happening and what you are modifying is happening in the non-linear animation context) or Action mode (where what you see and do affect a specific action) is not necessarily visible or apparent at any time. Oh, and each action you've created is, itself, a different mode (and which one you're in is not apparent unless you happen to have the right display visible and the right hierarchy disclosed and scrolled into view).\n\nNow, let's suppose you're editing vertex weights in the course of \"rigging\" a model.\nWhat does this mean? Well, you're trying to choose how the vertices in a mesh are affected by the movement of the bones in its animation rig -- a virtual \"skeleton\" made of \"bones\". The model is the mesh, the rig is the skeleton, and to say a model is rigged is essentially to say that it has had a skeleton built for it and has been attached to that skeleton using vertex weights.
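Under the hood these modes are explicit, per-object, scriptable state, which is partly why it's so easy to trip over one without noticing. A minimal sketch of the pose-mode/weight-paint setup described next, again assuming modern bpy and using hypothetical object names:

```python
import bpy

rig = bpy.data.objects['Armature']  # hypothetical names -- yours will differ
mesh = bpy.data.objects['Weasel']

# Put the armature into Pose mode (one of an armature's three modes)...
bpy.context.view_layer.objects.active = rig
bpy.ops.object.mode_set(mode='POSE')

# ...then make the mesh active and switch it to Weight Paint mode. With
# the armature left in Pose mode, you can pose the model as you paint.
bpy.context.view_layer.objects.active = mesh
bpy.ops.object.mode_set(mode='WEIGHT_PAINT')

# The current (otherwise easily-hidden) mode is inspectable state:
print(bpy.context.mode)  # 'PAINT_WEIGHT'
```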
\n\nSounds pretty complicated, right? Well it is, and so it's kind of stupid to expect the user interface for this to be simple or particularly straightforward. I think it's fair to say that no similarly conceptually complex task is explicitly supported in, say, Photoshop or Word or Excel, each of which has a pretty complex user interface in its own right.\n\nThe usual way of showing bone weights is a \"heat map\" indicating how tightly bound the vertices in a mesh are to a given (selected) bone. This is how the mind-bogglingly complex Blender does it. This is how the beautifully simple Cheetah 3d does it. But in Blender, to select a bone you ... select the bone. In other words, you right-click (because selection is done by right-clicking) on the bone and voila. Pretty neat, huh? In Cheetah 3d you look for the skeleton deformer modifier tag in the hierarchy view, click on it, and then find the bone in a scrolling list in the properties browser.\n\nNow, in Blender it's important to make sure that the armature (or \"skeleton\") of the model you're working on is in \"Pose mode\" (armatures can be in one of three modes) and then to select the mesh and set it to \"Weight Paint Mode\". If the armature is in the wrong mode, the mesh won't be posed while you're finessing vertex weights, which means you won't be able to see whether what you're doing is working. Also, you may not be able to directly select the bones you're interested in. This is all horribly complex, involves hidden modes, and is utterly counter-intuitive. But the benefits (you can paint weights on a posed model and see the changes in real time, and you can select bones directly) are absolutely huge.\n[weasel_walk.mov](https://loewald.com/gallery/3d/weasel_walk.mov)\nMy first successful character animation in Blender. (Rendered in Cheetah 3d.) Getting there was much harder than in Cheetah 3d, but I can actually reuse and tweak the rigging and the animation itself, which is impossible or very difficult in Cheetah 3d.\nFlaws vs. Missing Features. \n
Eating My Own Words? \nI've ridiculed Blender's lack of usability in the past, and probably will in the future. Even some of my praise for Blender in this post probably seems close to ridicule. It's a ridiculously -- and often unnecessarily -- difficult program to get into. Almost all of my complaints (and then some) are as true now as ever. There are so many simple things that could easily be fixed in Blender to make it easier for new users that it's not even funny. The same was true of Photoshop when it came out, and -- over time -- Adobe changed some things to be less non-standard, while other Photoshop quirks eventually became the standard themselves. Even today I'm pretty sure any Deluxe Paint or Studio/32 veteran thinks that Photoshop's keyboard shortcuts are idiotic compared to those of their cherished bitmap editors. I know I do.\n\nThe fact is that Blender is an incredibly capable, rock-solid piece of software (I can't remember when it last crashed on me). Its user interface foibles are stupid and ugly, but you can get past them, and in the great scheme of things it's certainly worth never having to pay for another 3D Studio Max upgrade or figure out how to replace that burned-out ElectricImage dongle.\n\nWho knows? Maybe in five years I'll be writing a similar article about my newfound love of GIMP.\nAddendum: Blender Hints \nSimple things that may make Blender easier to use.\n\n- Oddly enough, Preferences are accessed by \"dragging down the menu bar\". WTF? Well, that's how it works.\n- Having opened up preferences, you probably want to go to Edit Methods, find Add new objects: (on the left), and turn off Switch to Edit Mode and Aligned to View, which may be on by default (because they used not to be options). Aligning new objects to your view is hardly ever what you want, and many users have the shortcuts for (a) switching out of edit mode, (b) clearing a newly created object's rotation, and (optionally) (c) returning to edit mode burned into muscle memory.\n- You'll also probably want to change your default directories to minimize the amount of navigation you have to do in Blender's (horrible) file requester.\n- Once you've learned how to create a nice setup in Blender, you may want to build a good default scene and File > Save Default Settings.\n- Often, especially when going through tutorials, there'll be a reference to a command that I just can't locate in a menu. If the command has a hotkey you can find it using the Hotkey and Mouse Action Reference (Help menu), which has a search command. Then you can at least find the keyboard shortcut, even if you still won't know where the heck to find it in the menus.\n- The Essential Blender (book) is really very good, although it does contain quite a few errors (so if something just isn't working, maybe the command's name or the keyboard shortcut is incorrect).\n- If a bunch of things suddenly disappear or become uneditable, you may have inadvertently messed with the scene layer settings (which the standard number keys are shortcuts for, so it's easy to do). If you don't use layers deliberately, the chances are pressing 1 (above the Q, not on the numeric keypad) will restore everything. If you do, then you'll already know how to deal with this.\n- Armatures and bones have their own independent layer system, so be prepared for completely orthogonal confusion when working with rigs.\n- In many cases, Blender reuses key commands in different contexts. E.g. 
G (grab) is used to move objects in 3d views, keyframes in the Action editor, and action layers in the NLA window. N (transform properties) brings up a numeric transform floater in 3d views, and similarly useful floaters in (some) other views. A (select all/nothing) selects all or nothing where appropriate; in the hierarchy view it collapses/expands everything.\n- It's well worth learning \"root\" shortcuts which bring up menus. E.g. Shift-S (\"snap\") brings up a menu; the next number you type selects the corresponding command (in order) from that menu. In some cases (e.g. G and R) you can press X, Y, or Z to constrain your action to an axis.\n- Keyboard commands in Blender apply to the panel the mouse is over, which can be very confusing (especially since each pane has its own menu bar and hence its own keyboard shortcuts).\n- Selecting things is an action (which you can undo). Changing a view is not an action (and you can't undo it) -- unless you moved a camera.\n- Blender's camera controls are pretty annoying (absent?), but if you manipulate a camera object in a 3d view with the transform orientation set to \"Normal\" or \"Local\" you can dolly the camera around in a relatively intuitive fashion. (If there's a command to, for example, create a camera from the current user view, I haven't found it. Shift-F lets you fly a camera around, but when you leave Fly mode your camera is back where it started.) Edit: a reader notes that Ctrl + Alt + 0 snaps a selected camera to the current view, which is very handy.\n- I use Blender on a (Mac) laptop quite a bit, so it's worth knowing the view navigation controls for folks with no MMB (middle mouse button): option-drag to orbit, option-shift-drag to pan, and the scroll-wheel (which Mac laptops do just great) zooms.\n- Blender supports \"virtual screens\" internally -- three complete viewport layouts (more if you want) that can be accessed via control-left-arrow and control-right-arrow. Very handy given how cluttered things can get.\n- The render buffer (F11) can be zoomed and panned using the usual keyboard shortcuts. There are actually two render buffers (containing the current render and the previous render) and you can toggle between them with the J key (very useful for checking what effect that last tiny change had). Click and hold in a render view for an RGBA value.\n- Note especially for Unity developers: the FBX exporter \"bakes\" subdiv modifiers (at \"render\" level) into meshes on export, so beware!\n- And there's a ridiculous number of useful tips and gotchas here.\n ","$updatedAt":"2024-06-05T09:47:38.234+00:00",path:"how-i-learned-to-stop-worrying-and-love-blender",_created:"2024-07-09T20:32:30.323Z",id:"1090",_modified:"2024-07-09T20:32:30.323Z","$id":"1090",_path:"post/path=how-i-learned-to-stop-worrying-and-love-blender"},"page/path=blog":{path:"blog",css:"",imageUrl:"",prefetch:[{regexp:"^\\/(([\\w\\d]+\\/)*)([\\w-]+)\\/?$",path:"post/path=[3]"}],tags:["public"],source:" ",title:"",description:"",_path:"page/path=blog"}}