\n\n\n\n
I first remember André Braugher from his performance in Glory where he played perhaps—low key—the most important role in the movie. He played the person with the most to lose and the least to gain by joining the army and fighting to end slavery (something the movie later acknowledges is pretty much a fool's errand). He plays the person we—the viewer comfortably separated from the events portrayed by circumstances, time, and knowledge of what will happen—should be but almost certainly won't be. (No more details: watch the movie if you haven't seen it.)
\n\n\n\nMost people will know him either from his role in Brooklyn Nine-Nine, an above average… no, a great sitcom of recent years, or Homicide: Life on the Street, the best police procedural ever made, based on a fantastic non-fiction book by David Simon. (I revised this paragraph after conferring with numerous colleagues and discovering that my daughters' opinion is widely held; I am outvoted!)
In Homicide he is again playing someone who stands for justice despite his own self-interest. He is the black man of obvious intellect and education who chooses to work as a Homicide detective when there are so many better options for him; it ruins his marriage, and it is killing him. He works within a corrupt and under-resourced system, with colleagues he pretty much despises, trying to make the tiniest difference when and where he can, usually to his own disadvantage.
\n\n\n\nAnd, despite its being a comedy, as Raymond Holt in Brooklyn Nine-Nine he again plays someone in pretty much this situation except that, now an older man and a captain, he has somehow navigated an earlier phase of life in which all of… this… was much worse, and today is comfortable enough that the horribleness is purely, if not always darkly, comic.
\n\n\n\nHomicide is one of my favorite TV shows of all time. Brooklyn Nine-Nine is my daughter's favorite TV show of all time.
\n\n\n\nAndré Braugher is already missed.
\n","$updatedAt":"2024-06-05T09:10:30.273+00:00",path:"andr-braugher-rip",_created:"2024-07-09T20:28:30.241Z",id:"8106",_modified:"2024-07-09T20:28:30.241Z","$id":"8106",_path:"post/8106"}],blogDataTimestamp:"2025-02-06T17:05:43.129Z",blogVersion:4,"post/path=futures-so-bright-i-want-to-wear-ar-glasses":{keywords:[],title:"The future's so bright… I want to wear AR glasses",path:"futures-so-bright-i-want-to-wear-ar-glasses",content:"The collapse of Democracy, loss of trust in institutions,\nthe corruption of social norms, the rise of ignorant bigotry and wilful \nstupidity. All of this sucks.
\nIt also sucks that the dictator of Russia thinks he's some kind of historic\ntsar destined to have \"the great\" after his name, if he can only commit enough\npointless atrocities, or at least keep his population distracted from his own \nabject incompetence as a leader. Imagine if he'd spent all that oil money\non actually fixing the Russian economy?
\nIt also sucks that the US electorate has chosen to elect a felon, fraudster, and sex predator to its highest office, aided by corrupt justices in its highest court, and a bunch of weak-minded, weak-willed fools in its Congress.
It's all a huge shame, since technology is making incredible strides\nand it's incredibly exciting. Sure, we don't have Jetsons-style aircars, but \nhere's a list of stuff we do have that's frankly mind-blowing.
The iPhone, Apple Watch, and Android-based equivalents are better and more \nuseful than pretty much any SF device of the same kind. An iPhone strapped to\nyour wrist would be a superior device (ignoring ridiculous magic) than any\nsimilar device shown in SF. Even futuristic \"post iPhone\" devices shown in\nSF (e.g. the glass slates in The Expanse) don't really match modern smart\nphones.
\nThe communicators in Star Trek: The Next Generation didn't even have call waiting,\nwhich was something we had on regular phones at the time.
\nI don't know if you've been paying attention, but the price of solar power is\nnow so low that the only thing stopping it from simply replacing all other \npower sources is the fact it isn't sunny everywhere all the time. If we just \nhad a grid that could send power from where it's sunny to other places (and\nit's being built) and conveniently store and recharge power on things that\nneed to move, there'd really be no reason to generate power any other way.
\nYes, climate change is bad, but if we're not actively stupid we already have\nall the technology we need to solve the problem both in the near term (build\nmore modern nuclear power plants) and the long term (create a worldwide solar grid).
\nOh yeah, and we seem to have solved the population crisis so thoroughly that we\nmight need to encourage folks to have more kids at some point. Maybe not treat women (or\nbiologically female persons if you prefer) who have kids as third-class \ncitizens? I dunno.
\nWe can make nuclear power plants that are safer and have a smaller environmental\nimpact than any other form of power generation. Right now, it's not cheap, but\nnuclear power is exactly the kind of technology that would get super cheap if\nwe just scaled it.
\nNever mind, fusion seems to have gone from being \"fifty years away\" to actually \ngetting serious private investment. Someone might actually get it to work!
\nElectric cars are actually the least interesting thing that we've gotten from\nimprovements in battery technology. Moore's Law predicted we'd have cheap\nsupercomputers that could fit on our wrists by now, but it didn't say they'd\nbe able to run all day on battery and have a fast wireless network connection.
\nFor quite a while, Moore's Law actually seemed to be saying the opposite. A\nmodern gaming PC consumes an embarrassing amount of power compared to, say,\nthe very expensive graphics workstation I had custom made for myself back in\n2001.
\nWe have a very fast cable connection here in Oulu, Finland (2h south of the\nArctic Circle by car…). But when it wobbles and I switch to sharing my iPhone's\nunlimited data plan, the network gets better.
\nOK, AI isn't really intelligent, but it does pretty much ace the Turing Test,\nso while AGI may be a big con, the fact is we've now got something that passes\nit and can run locally on your laptop or phone.
\nCan it think? No. Can it conduct an \"intelligent\" conversation? Sure, it may\nmake shit up or hallucinate, but find me a person who doesn't sometimes do\nthat.
\nAnd if you're a programmer, the interface for these things is trivial. \nYou pass it a bunch of text and you get back a bunch of text. For advanced\ncases you can add images and so on. It's so much easier than, say, displaying\na window with \"hello world\" in it was on a 1984 Mac.
\nTo give an example, I have an AI-powered tool for my blog to create summaries\nof my posts, and the API is not much more than:
\nconst post = ... // get the post source\nconst response = await ai(`Here is a blog post I've written in markdown format, please write a blurb for it or extract a key paragraph. ${post}`)\nconst summary = response.text\n
\n\nSure, it doesn't write very well, but it was super easy to implement.
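To be clear, the `ai()` helper above is itself only a few lines. Here's a minimal sketch of what such a helper can look like, assuming an OpenAI-style chat-completions service; the endpoint, model name, and environment variables are illustrative stand-ins, not my actual implementation:

```javascript
// Hypothetical sketch of an ai() helper; endpoint, model, and env vars
// are assumptions, not the blog's real code.

// Build the summary prompt (kept separate so it's easy to tweak and test)
function blurbPrompt(post) {
  return `Here is a blog post I've written in markdown format, please write a blurb for it or extract a key paragraph. ${post}`
}

async function ai(prompt) {
  // One HTTP request with an OpenAI-style chat payload
  const response = await fetch(process.env.AI_ENDPOINT, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.AI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: prompt }],
    }),
  })
  const data = await response.json()
  // Normalize to the { text } shape used in the snippet above
  return { text: data.choices[0].message.content }
}
```

The point stands: the whole "integration" is a template string and one HTTP request.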
\nOh, and I can right-click and have an AI proofread anything.
\nThe Meta Quest 3 is amazing and really quite cheap. The Vision Pro seems even\nmore amazing, but not cheap. But both cost peanuts compared to even mediocre \nVR headsets from 10 years ago.
\nYou can 3D print in plastic, metal, or wood. You can print subtractively\nor additively. The precision of 3D prints is such that you can print parts that\nshould fit together perfectly, and they fit together perfectly.
\nAnd this is no longer for toys. Even with inexpensive printers, you can print \ncar and motorcycle parts or entire drones (aside from the circuits and motors).
\nAnd the devices to do this can be bought for the price of a high-end power tool.\nIt's nuts.
\nYou can do business from anywhere via video conferencing on laptops, phones,\nand TVs. In fact it's so common, we've seen several generations of social norms\nevolve (e.g. today, there's an unspoken threshold at which almost everyone turns\noff their camera).
\nWeWork may have been a huge scam, but the idea remains brilliant and there are\nplenty of people offering shared office space sustainably today.
\nThe business I founded in 2001 had to commit to a multi-year AUD $8000/month \nlease. If we had shared office space we'd have had $100k/year more to spend on \nanything we needed… or just salaries.
\nAnd don't even talk about the horrific price we paid for a 56kB ISDN connection.
\nGetting to be an astronaut has never been a reasonable goal for most people. But\nfor the price of a new Porsche 911, BMW M5, or Tesla Cybertruck, you can launch \nyour own satellite.
\nYou can get fully functional computers like the Arduino or Raspberry Pi for \nalmost nothing, and program them to do AI, drive displays, take pictures, \nyou name it.
\nSo, if you want to build your own satellite, it's not that big a deal to hook\nup a radio to a tiny computer and a fancy camera, solar cells, batteries, etc.\nand you've got your own spy satellite. And quite a few startups have done\nbasically this.
\nYou might not know this if you live in the US, but in many countries outside the\nUS, trains are so cheap and convenient that it's hard to justify driving long\ndistance by car. In Finland the intercity trains can reach 200km/h—you simply\ncan't get places as fast in a car—and the cities tend to be highly walkable.\nIt doesn't hurt that gas is highly taxed, but if you had a choice of driving or\ngetting there quicker and being able to work and go to a restaurant while \nzooming through the landscape at 150km/h, would you drive?
\nIt would be great if the WiFi on the trains were a little better. I mean, \nsometimes I have to wait a few minutes for the network to be quick enough for\nme to push a PR.
\nYou can literally buy a kit for very little money that allows kids to use\ngene shears to make a plant glow in the dark. This isn't even all that recent.
\nMore recently, there was this worldwide pandemic. It was in the news.
\nVery shortly after COVID-19 was first detected, scientists in China were able \nto sequence the virus's genome and publish it. Other scientists at Moderna \ndownloaded the genome and tailored a highly effective vaccine against it over \na weekend. While it needed about a year to pass safety testing, the team \nthemselves were sufficiently\nconvinced of their technology that they all inoculated themselves immediately.\nIt seems highly likely to me that a lot of Trump's bravado about the pandemic\ncame from him and those close to him having early access to these vaccines.
\nAnd no, I'm not talking about using AI to write a to-do list manager.
\nI literally wrote a utility program yesterday using npx create-xinjs-blueprint
\nin less time than it would have taken me to do the same thing using \nRealbasic.
I actually have a stack that lets you build web-apps top-to-bottom, desktop \napps, and tiny utilities as easily as I used to be able to in the 1990s. You \ncan't draw the user interface (which is the final missing piece) but everything \nelse is as easy or easier—I just need to wrap some convenience tooling around\nit and write documentation. So much documentation.
\nI lied. You can actually buy air cars now.
\n",date:"2025-02-04T17:57:55.814Z",summary:"So much bad news right now…\n\nIt's all a huge shame, since technology is making incredible strides and it's incredibly exciting. Sure, we don't have Jetsons-style aircars, but \nhere's a list of stuff we do have that's frankly mind-blowing.",author:"Tonio Loewald",_created:"2025-02-04T17:49:54.240Z",_modified:"2025-02-04T18:07:11.527Z",_path:"post/2n8p4qfong3r"},"post/path=peak-ai":{keywords:[],title:"Contrary to popular belief, AI may be peaking",path:"peak-ai",content:"\n\n\n## Curation is a Partial Fix\n\nI work for a company that processes invoices using custom AI models. The training\ndata is highly curated and so everything basically just tends to get better over\ntime. Find a mistake? It probably originated in the training dataset so we go\nfix the training data and then see if the retrained AI still makes the mistake.\nBut we (a) control the data and (b) know what the correct answer is. This is\nvery much not how LLMs work. \n\nThe people who build LLMs kind of control the data, but their goal is\nquantity first, quality second, and they have no idea what the question is, let alone what the correct answer is. The answer you'll get is just some kind of statistically generated likely answer that the random contributors to the dataset might give.\n\nThis isn't just a problem with generative AI, although it seems to be an \nespecially bad problem for generative AI. When it comes to front-end coding,\n\"AI\" is currently \"artificial stupidity\". This is what happens when curation \nfails at scale.\n\nAll of this doesn't fully justify my clickbait title. Obviously, AI (and software as a whole) will continue to progress. 
But the thing that's currently generating all the hype—LLMs and generative AIs powered by all the content they can scrape— those may be peaking as they themselves are used to pollute the very resource they depend on at scale.\n\n",date:"2025-01-21T19:47:46.193Z",summary:"Is artificial intelligence actually getting *smarter*, or just more easily manipulated? This post delves into the surprising ways AI systems can be tricked, revealing a disturbing parallel to the SEO shenanigans of the early 2000s. From generating dodgy medical advice to subtly pushing specific products, the potential for AI to be used for nefarious purposes is not only real but the effects are already visible.",author:"Tonio Loewald",_created:"2025-01-21T08:45:58.238Z",_modified:"2025-02-03T10:30:18.922Z",_path:"post/pcczhmzc23hk"},"post/path=large-language-models-a-few-truths":{keywords:[],title:"Large Language Models — A Few Truths",path:"large-language-models-a-few-truths",content:"And I'm talking about humans using AIs to corrupt other AIs, test whether it's working, and then doing the stuff that works more and faster and harder. This is what SEO did to Google.
\n\n",date:"2025-01-17T20:25:13.992Z",summary:"LLMs, like ChatGPT, excel at manipulating language but lack true understanding or reasoning capabilities. While they can produce acceptable responses for tasks with no single correct answer, their lack of real-world experience and understanding can lead to errors. Furthermore, the rapid pace of open-source development, exemplified by projects like Sky-T1-32B-Preview, suggests that the economic value of LLMs may be short-lived, as their capabilities can be replicated and distributed at a fraction of the initial investment.",author:"Tonio Loewald",_created:"2025-01-17T09:54:17.152Z",_modified:"2025-01-22T12:09:30.679Z",_path:"post/k1p7e2gn552k"},"post/path=adventures-with-chatgpt":{keywords:[],date:"2025-01-17T19:55:24.473Z",summary:"ChatGPT excels at mundane coding tasks, akin to a bright intern who reads documentation and StackOverflow but lacks creativity and testing. While useful for automating repetitive tasks, its code requires refinement and testing.",path:"adventures-with-chatgpt",author:"Tonio Loewald",_created:"2024-08-27T07:57:31.670Z",title:"Adventures with ChatGPT",_modified:"2025-01-17T10:14:49.289Z",content:"## Or, how I learned to stop worrying and just teach ChatGPT to code withI should add that while Sky-T1-32B-Preview allegedly outperformed ChatGPT-o1 on a battery of tests, I played with a 10GB quantized version on my laptop just now, and it produced far worse results, far more slowly than Meta Llama 7B.
\n
xinjs
…\n\n\n\n\nAnd ChatGPT immediately produced this:\n\n```\nimport { elements } from 'xinjs';\n\n// Destructure the element factories from the proxy\nconst { header, nav, h1, section, a, div, p, ul, li, img, footer } = elements;\n\n// Create the HTML structure using element factories\nconst pageContent = div({ class: 'container' }, [\n section({ id: 'bio', class: 'bio' }, [\n h2('About Me'),\n p('I am a software developer with over thirty years of experience. My expertise spans full-stack development, with a career-long passion for usability and creating intuitive user experiences.')\n ]),\n section({ id: 'portfolio', class: 'portfolio' }, [\n h2('Portfolio'),\n h3('Technical Projects'),\n p('Here are some examples of my technical work:'),\n ul([\n li(a({ href: '#' }, 'Project 1: Description')),\n li(a({ href: '#' }, 'Project 2: Description')),\n li(a({ href: '#' }, 'Project 3: Description'))\n ]),\n h3('Artistic Projects'),\n p('Some of my artistic work:'),\n img({ src: 'example-artwork.jpg', alt: 'Example Artwork' })\n]),\n...\n```\n\n> I should note here that there's a major error in this fragment I didn't pick \n> until I started proof-reading this post, weeks after starting it. Code from\n> LLMs is rife with idiotic errors that often aren't immediately apparent.\n\nI got so excited I tried explaining how xinjs's Component class works to see if I could get it to produce web-components the xinjs way. This took a few attempts, and one thing I learned from this is that ChatGPT seems better at learning from simple, clear examples than more complex examples.\n\nI started out by pointing it at an example in GitHub and asking it to build the components along the same lines. This was a disaster, especially in that it looked sorta, kinda right but wouldn't have worked at all.\n\nI spent a bunch of time refining my prompts, and the most helpful prompts provided focused examples, e.g. 
I used this prompt after the third or fourth attempt to explain howI'm using a library that provides a proxy named
\nelements
. Its properties are element factories, so elements.div()
produces an HTMLDivElement. The factories will take their arguments as children (strings are treated as text nodes) or bags of properties (e.g.
\n{class: 'foo bar', title: 'baz'}
will add those attributes. E.g. this JavaScript
\n\nimport { elements } from 'xinjs'\nconst { label, span, input } = elements\n\n
document.body.append(label(span('Date'), input({ type: 'date' })))\n
Would add the following to document.body
\n\n<label>\n <span>Date</span>\n <input type=\"date\">\n</label>\n\n
Could you rewrite the HTML code using the elements proxy?\n
elementCreator
was a static method of Component
that did x and y, and it produced incorrect code with incorrect explanations of what it was doing.\n\n\n No. Here's a simple example that should help.\n\n\n\nAfter a few prompts like this, ChatGPT provided pretty flawless results. But a\nlot of the time it really feels like you're just training a conscientious but\neager fool to imitate examples. It's kind of like how you wished regexp worked,\nwith the understanding that, just as with regexp, you really want to verify that\nit didn't blow up in some subtle but exciting way.\n\n## Aside: \"artificial intelligence\"? it's not even artificial stupidity…\n\nIf you're just dealing with text, it can take a while using ChatGPT to realize \nthat ChatGPT isn't \"intelligent\" at all. It is predictive text completion with\nan insane knowledge base, but it simply doesn't understand anything at all.\n\nBut if you start working with image generation, it's immediately obvious that\n(1) the image generator doesn't understand what it's being asked to do, and\n(2) ChatGPT doesn't understand the image it's been given.\n\n\nexport class MyThing extends Component {\n content = () => div('my thing is here')\n }\n\n export const myThing = MyThing.elementCreator({tag: 'my-thing'})\n
The first bit of Apple Intelligence™ I've really played with is the new\nImage Playground (beta) that appeared on my MacBook Pro with the latest\nsystem update.
\nThe first things that come to mind about it are:
\nThis is the default \"animation\" style:
\nHere's the \"illustration\" style:
\nRight now, it's a really cute toy. But while it does produce consistently\nlovely results, they're basically all the same—pictures of a single person\nor animal, pretty much front-on, with some minor modifications allowed\n(e.g. \"scarf\"), some general atmosphere / background stuff (\"sci fi\", \"mountains\"),\nand either from head to toe, head and chest, or portrait.
\nI think this has potential if a lot more stuff is added. It's reliable \nand fast—it just doesn't do a whole lot.
\n",date:"2025-01-15T18:14:38.350Z",summary:"Apple's new Image Playground is focused, and easy to use. If you want to produce cute \"Pixar-style\" people and animals, it quickly churns out consistent, but very limited, results. My M3 Max rendered images in seconds, but right now it's more of a cute toy than a useful tool\n",author:"Tonio Loewald",_created:"2025-01-15T11:14:18.592Z",_modified:"2025-01-15T11:14:47.871Z",_path:"post/xohqdwdgjlxa"},"post/path=bambulabs-p1s-initial-review":{keywords:[],title:"Bambulabs P1S Initial Review",path:"bambulabs-p1s-initial-review",content:"Most user input on the web (e.g. login pages) is done using forms even though HTML forms are not fit for use and need to be prevented from doing their default thing to work properly since \"Web 2.0\". (Basically, if you hit enter or push a button inside a form its default behavior is to reload the page and lose everything. This is weirdly not what anyone wants but made sense kind of before web pages started communicating with servers directly. This is sad, but one of the reasons the web has been so insanely successful is that it is very forgiving and doesn't break old stuff gladly.) xinjs-ui
provides a simple reusable form wrapper that does all the \"usual things\" people want forms to do and stops bad things from happening while trying to leave everything as much alone as possible. So it lets you use <input type="date">
elements to display and modify date values in a robust and standard way. Guess what flat out doesn't work at all on the Quest's built-in browser?
The built-in interactive demos on the site let me actually quickly test a bare <input type="date">
alongside the \"wrapped\" version that was failing to verify that it's not my code that's the problem. You simply can't enter dates via a date input. So, good luck scheduling calendar appointments or booking airfares on any site that uses standard widgets. (Contrast this with mobile Safari which not only supports such things but goes out of its way to provide native experiences around things like auto-complete.)
I should note that the Quest browser does a great job with <select>
elements. This isn't a failure of engineering, this is a failure of emphasis. Clearly no-one cares if you can get work done using this thing. There's no-one coming into the office in the morning and trying to work using their Quest headset for as long as possible until they reach a blocker and then raging out, writing a bug report, and telling the manager of the team responsible to fix their shit.
Interestingly, the Quest 3 offers beta support for desktop sharing out of the box. I actually paid for a third-party solution for this for my Quest 2, which I was planning to try out on the Quest 3 once I sort out the Quest 3 being attached to the wrong account. Anyway, this looks promising.
\n\n\n\n(Addendum: both the free beta of Remote Display and the commercial Virtual Desktop app are discussed in more detail in the follow-up article.)
\n\n\n\nCapturing Video is pretty easy (meta-right-trigger to start and stop video capture), except that by default it won't capture your mic, and I'd rather narrate my experience than capture silent video and then overdub it. After all, don't you want to know what my user name means in the language I made up?
\n\n\n\nYou can capture mic input by using the \"camera\" app to trigger video capture and manually switching the mic on for that capture, but by default it is always off. (I hoped turning it on for one video might change some underlying setting—it does not—or at least that next time I used that dialog it would default to the previous choice—and no, it doesn't do that either.) AFAICT there's no way to turn it on mid-way.
\n\n\n\nIronically, streaming your experience is also possible via the camera app and here the default is to include mic audio. Just in case you thought Meta suddenly cared about your privacy.
\n\n\n\nAnyway, I haven't figured out a way to conveniently capture video with mic audio nor have I got stuff syncing to my computer yet.
\n\n\n\nIf you put yourself in the shoes of a usability tester at Meta, consider just how little of a damn they must give about you to make doing all this stuff so messed up. Personally, were I on the team building this stuff, I'd be frustrated by my own inability to quickly capture examples of bugs or other issues, share them, and fix things for my own convenience.
\n\n\n\nThe depth of indifference to usability I read into all of this is mind-blowing. But, never ascribe to malice…
\n\n\n\nAt least one of the emoji used in my previous blog post (and likely this one too) does not render on the Quest 3. Apparently Meta isn't even keeping up with Emoji (and it's not like I'm using super modern obscure ones).
\n\n\n\nAs an Apple shareholder I suppose I should be thrilled that the company in the second-best position to make inroads into the VR / XR / AR space is so clueless, but I really wanted to love the Quest 3. As I said to my girlfriend, when Apple made the iPhone they had Eric Schmidt doing industrial espionage for Google on their board. He went back to Google and told the Android team to stop working on their new Sidekick and instead steal Apple's ideas. Despite this, Apple has maintained a durable technical and usability advantage in the smart phone space for fifteen years. How dominant might they be in the VR / XR / AR space when their competition is this clueless?
\n\n\n\nBack during the mass layoffs in Silicon Valley in 2022, Zuckerberg was supposedly furious that there were a ton of people working on the Oculus project who weren't using, or were only grudgingly using, the product. Dogfooding is crucial for any consumer product, and your goal needs to be a product you use all the time in preference to alternatives, and probably in preference to things that aren't even seen as alternatives.
\n\n\n\nI'm sure the Apple Watch team has people who use their Watch instead of their phone as much as they can. They probably have \"leave your phone at home\" days. I'm sure the iPad team has people who use iPads for things other folks use their Macs and iPhones for. I'm sure there are Vision Pro team members who don't have any monitors, who code on their Vision Pros when they can, who attend meetings with them, and when they run into problems they fix them.
\n\n\n\nAs soon as you internalize the idea that the product you're building is for \"other people\" that you are imagining, you are fucked.
\n\n\n\nThe fact that most Facebook employees avoid Facebook outside of work and won't let their kids use it says a lot about it.
\n\n\n\nAnd yes, I worked for Facebook and no I didn't like it and it didn't like me. And yes, I bought Oculus products post FB-buyout and held my nose despite all of this.
\n\n\n\nMore to come once I pair a keyboard and install Opera and/or Chrome.
\n","$updatedAt":"2024-06-05T09:10:30.268+00:00",path:"meta-quest-3-part-two",_created:"2024-07-09T20:28:32.367Z",id:"7986",_modified:"2024-07-09T20:28:32.367Z","$id":"7986",_path:"post/path=meta-quest-3-part-two"},"page/path=blog":{path:"blog",css:"",imageUrl:"",prefetch:[{regexp:"^\\/(([\\w\\d]+\\/)*)([\\w-]+)\\/?$",path:"post/path=[3]"}],tags:["public"],source:"