<?xml version='1.0' encoding='UTF-8'?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-US">
  <id>https://jmthornton.net/blog/</id>
  <title>Blog by Jade Michael Thornton</title>
  <updated>2026-04-07T09:00:00-06:00</updated>
  <author>
    <name>Jade Michael Thornton</name>
  </author>
  <link href="https://jmthornton.net/blog/feed.xml" rel="self"/>
  <link href="https://jmthornton.net/blog/" rel="alternate" type="text/html"/>
  <generator uri="https://lkiesow.github.io/python-feedgen" version="1.0.0">python-feedgen</generator>
  <entry>
    <id>https://jmthornton.net/blog/p/dodd-road</id>
    <title>Why is Dodd Road Discontiguous?</title>
    <updated>2025-06-04T09:00:00-06:00</updated>
    <content type="html">&lt;figure&gt;
      &lt;img alt="Map of Dodd Road and Dodd Boulevard snaking through northern Dakota county with many breaks" src="/blog/assets/images/dodd-road-blvd.png" loading="lazy" /&gt;
      &lt;figcaption&gt;The path of Dodd Road (and Dodd Boulevard) through the southern Twin Cities metro. The portion in orange is signed as Dodd Road but does not follow the original path.&lt;/figcaption&gt;
      &lt;/figure&gt;

      &lt;p&gt;I first noticed Dodd Road when I saw sections in Eagan, MN sitting near each other yet not connecting. Then, a seemingly useless 50-meter break between the otherwise-aligned Dodd Road and Dodd Boulevard on the eastern edge of Lebanon Hills Park looked so peculiar that I suspected a historical reason. An easy way to begin is to explore the road's namesake.&lt;/p&gt;

      &lt;p&gt;Captain William Bigelow Dodd was a ferry operator turned land speculator and militia captain. An energetic entrepreneur, Dodd staked roughly 500 acres on both banks of the Minnesota River at his road-head, laid out the village of Rock Bend (renamed St. Peter in 1854), and even ran "Dodd's Ferry" to carry traffic across the river. He later fell defending New Ulm in the Dakota War of 1862 (for more on that conflict and its broader impact, see my post on &lt;a href="/blog/p/walden-and-bdote"&gt;Walden and Bdote&lt;/a&gt;) and is buried behind St. Peter's Episcopal Church beneath a stone simply reading 'Builder of the Dodd Road.'&lt;/p&gt;

      &lt;p&gt;Dodd Road began in 1853 as a private subscription road cut by Captain Dodd and eleven men to link Mendota (Bdote) to Rock Bend. Under this model of road-building, subscribers pooled money up front and then recouped costs via tolls or land-value gains. The subscription model let Dodd move faster than federal road acts could. Backers in St. Paul signed up to finance clearing trees, grubbing stumps, and building primitive bridges so the road would be usable by wagons. Building it as an all-season path was crucial because steamboats on the Minnesota River ran only when water levels and lack of ice allowed.&lt;/p&gt;

      &lt;p&gt;In April 1853, Dodd's crew set out from Mendota with two wagons and basic tools. For 109 days they hacked nearly 70 miles of trail through dense forest, following ridges to skirt marshy ground, bridging small streams, and marking trees. By July they reached Lake Emily just outside Rock Bend, having transformed a hunters' path into a wagon-passable highway.&lt;/p&gt;

      &lt;p&gt;Meanwhile, the U.S. Army's Topographical Engineers had funds to survey a 260-mile military road from Mendota toward the Big Sioux River. In September 1853, Captain Jesse L. Reno's party mapped across Minnesota and "stumbled" onto Dodd's freshly cut trail near Traverse des Sioux. Reno praised its quality and recommended reimbursing Dodd $3,270 (about $136,000 in 2025). The territorial assembly petitioned Congress, and Dodd received payment before the year's end. Reno noted those private improvements saved his survey weeks of bushwhacking.&lt;/p&gt;

      &lt;p&gt;Almost immediately, Dodd Road spurred settlement. Lakeville and Eureka sprang up in 1853, Millersburg in 1855, Shieldsville and Cordova in 1856, Cleveland in 1857, and Rosemount and Kilkenny by 1859. But when the Minnesota Central Railroad built a St. Paul–Faribault line in 1864, it pulled traffic off the old wagon road. Only one new town, Eidswold, was founded along Dodd Road after the railroad's arrival, a sign that trains had superseded overland wagons, especially for commercial transport.&lt;/p&gt;

      &lt;p&gt;Today, Dodd's path survives in fragments in Dakota County, but the segments north of MN 55 are not on Dodd's 1853 alignment. In Saint Paul, West Saint Paul, Mendota Heights and the first few miles in Eagan, the road was built in 1921 as part of the Jefferson Highway rather than the original pioneer trace. South of MN 55, however, the road more closely follows Captain Dodd's route. Heading south, it jogs briefly west at Wescott Road, continues paved down past Lebanon Hills, then follows the gravel-surfaced Dodd Boulevard into Rosemount, through Apple Valley and Lakeville, where the old right-of-way has never been entirely straightened, before passing out of Dakota County. The remains of this twisting path are full of gaps and jumps that trace where modern highways cut across curves, subdivisions severed the right-of-way, and county priorities straightened farm-road segments. One of those gaps is the one I found along the eastern edge of Lebanon Hills: a deliberate break introduced to prevent through-traffic on this now-residential road.&lt;/p&gt;

      &lt;p&gt;In 2003, three gravel portions earned National Register status: a 6.8-mile stretch in Rice County (Circle Lake Trail through Falls Trail, Garfield Avenue, Groveland Trail, Halstad Avenue) and two in Le Sueur County (County 136 west of Kilkenny and County 148 near Cleveland). None of the heavily-altered Dakota County segments made the grade. Dodd Road today feels like a time-scarred artifact you can still drive in fits and starts. Next time you hit that gravel stretch or a sudden dead end, imagine Captain Dodd's crew hacking through the Big Woods in 1853 and know every curve once linked frontier farms to a growing territory.&lt;/p&gt;

      &lt;h2&gt;References&lt;/h2&gt;
      &lt;ul&gt;
        &lt;li&gt;"A History of Minnesota's Highways Part One." &lt;em&gt;Streets.Mn&lt;/em&gt;, 9 Feb. 2018. &lt;a href="https://streets.mn/2018/02/09/a-history-of-minnesotas-highways-part-one/"&gt;https://streets.mn/2018/02/09/a-history-of-minnesotas-highways-part-one/&lt;/a&gt;.&lt;/li&gt;
        &lt;li&gt;Brown, Curt. "Light the Candles and Pull Out the Party Hats: Dodd Road, the Mother of All Minnesota Highway Construction Projects, Turns 150 Years Old This Summer." &lt;em&gt;Star Tribune&lt;/em&gt;, 11 June 2003.&lt;/li&gt;
        &lt;li&gt;"Dodd Road Discontiguous District." &lt;em&gt;Wikipedia&lt;/em&gt;, 24 Feb. 2024. &lt;a href="https://en.wikipedia.org/w/index.php?title=Dodd_Road_Discontiguous_District&amp;oldid=1210053113"&gt;https://en.wikipedia.org/w/index.php?title=Dodd_Road_Discontiguous_District&lt;/a&gt;.&lt;/li&gt;
        &lt;li&gt;"Lost Highway: Dodd Road, Dakota County." &lt;em&gt;Dead Pioneer&lt;/em&gt;. &lt;a href="https://deadpioneer.com/articles/dakota/doddroad/doddroad.htm"&gt;https://deadpioneer.com/articles/dakota/doddroad/doddroad.htm&lt;/a&gt;.&lt;/li&gt;
        &lt;li&gt;National Park Service. &lt;em&gt;Dodd Road Discontiguous District: National Register of Historic Places Registration Form&lt;/em&gt;. Washington, DC: U.S. Department of the Interior, National Park Service, 13 June 2003. &lt;a href="https://npgallery.nps.gov/NRHP/GetAsset/NRHP/03000520_text"&gt;https://npgallery.nps.gov/NRHP/GetAsset/NRHP/03000520_text&lt;/a&gt;.&lt;/li&gt;
        &lt;li&gt;Wolston, Bill. "On the 150th Anniversary of the Elusive Dodd Road." &lt;em&gt;Over the Years: Journal of the Dakota County Historical Society&lt;/em&gt;, October 2003.&lt;/li&gt;
      &lt;/ul&gt;</content>
    <link href="https://jmthornton.net/blog/p/dodd-road"/>
    <summary>A brief history of an early highway in the Twin Cities, MN: Captain Dodd's 1853 Minnesota pioneer wagon trail, its discontiguous modern alignments, and surviving gravel segments.</summary>
    <published>2025-06-04T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/jira-commit-msg</id>
    <title>Automatically add Jira issue to commit message from branch name</title>
    <updated>2020-08-15T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;At &lt;a href="https://flightaware.com"&gt;FlightAware&lt;/a&gt;, we use Jira to track work and git for version control. To reference each other, git branch names start with the relevant issue number, taking the form&lt;/p&gt;
      &lt;pre is:raw&gt;&lt;code class="language-shell"&gt;${project}_${ticketNumber}_${shortDescription}&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;Additionally, commit messages end with the same issue number in the form&lt;/p&gt;
      &lt;pre is:raw&gt;&lt;code class="language-shell"&gt;${project}-${ticketNumber}&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;These conventions allow for easy referencing of issues to look for details and business decisions. Specifically, the issue number in the commit message is used by &lt;a href="https://github.com/nugget/zeitgit"&gt;Zeitgit&lt;/a&gt; to attribute commit statistics to issues. It's also used by &lt;a href="https://www.jenkins.io/"&gt;Jenkins&lt;/a&gt; to create useful links between tickets and pull requests.&lt;/p&gt;
      &lt;p&gt;It is, however, a little annoying to type the issue number into every commit, especially when I &lt;em&gt;know&lt;/em&gt; git already has this information in the branch name—git just doesn't know it. To automate this little task, I use a git hook to prepare the commit message with the issue number before it goes to &lt;code class="language-shell"&gt;$EDITOR&lt;/code&gt; (e.g. vim) for editing.&lt;/p&gt;

      &lt;h2&gt;Git hooks&lt;/h2&gt;
      &lt;p&gt;Git hooks are scripts—typically written in bash, zsh or another shell—which live in the &lt;code&gt;.git/hooks/&lt;/code&gt; directory of a project. These scripts provide a means of performing actions at certain stages of git's operation. By default, the hooks directory comes pre-populated with sample hooks, each ending with &lt;code&gt;.sample&lt;/code&gt;. That file ending keeps them from running until the suffix is removed. Take note of their names, though: git looks for these specific names (without the &lt;code&gt;.sample&lt;/code&gt; ending) when deciding which hooks to run.&lt;/p&gt;

      &lt;h2&gt;The prepare-commit-msg hook&lt;/h2&gt;
      &lt;p&gt;We need to edit the commit message before it goes to the &lt;code class="language-shell"&gt;$EDITOR&lt;/code&gt; for regular editing; the hook for that job is called &lt;code&gt;prepare-commit-msg&lt;/code&gt;. This hook is called by &lt;code class="language-shell"&gt;git commit&lt;/code&gt; and is given the name of the file containing the commit message, followed by a description of the commit message's source (discussed later), and the commit's SHA-1 hash.&lt;/p&gt;
      &lt;p&gt;For the script itself, we need to get the issue number from the branch name, format it according to FlightAware conventions, then prepend it onto the commit message file, preserving git's default message if it exists (the summary of changes in the commit). My script is in Zsh, but could be in any language available on the system; Bash, Python, plain old Bourne shell, etc. would be fine. I use Zsh as my login shell, so that's what I default to.&lt;/p&gt;
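      &lt;p&gt;For anyone defaulting to Bash instead, here's a minimal sketch of the same extraction using Bash's &lt;code&gt;BASH_REMATCH&lt;/code&gt; array in place of zsh's &lt;code&gt;$match&lt;/code&gt; (the branch name is a made-up example):&lt;/p&gt;
      &lt;pre lang="bash"&gt;&lt;code class="language-shell"&gt;#!/usr/bin/env bash
branch="web_14800_some_feature"
if [[ "$branch" =~ ^([a-zA-Z]+)[_-]([0-9]+) ]]
then
  # tr upcases the project code portably (Bash 4's ${var^^} also works)
  project=$(printf '%s' "${BASH_REMATCH[1]}" | tr '[:lower:]' '[:upper:]')
  ticket="$project-${BASH_REMATCH[2]}"
  echo "$ticket"
fi&lt;/code&gt;&lt;/pre&gt;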

      &lt;h3&gt;Hook script in full&lt;/h3&gt;
      &lt;pre is:raw class="line-numbers" lang="zsh"&gt;&lt;code class="language-shell"&gt;#!/usr/bin/env zsh
COMMIT_MSG_FILE=$1
COMMIT_SOURCE=$2
SHA1=$3

if [[ -z "$COMMIT_SOURCE" ]]
then
  branch=$(git symbolic-ref --short HEAD)
  if [[ "$branch" =~ ^([a-zA-Z]+)[_-]([0-9]+).* ]]
  then
    ticket="${match[1]:u}-${match[2]}"
    gitMsg=$(cat "$COMMIT_MSG_FILE")
    printf "\n\n%s\n" $ticket &gt; "$COMMIT_MSG_FILE"
    printf "$gitMsg" &gt;&gt; "$COMMIT_MSG_FILE"
  fi
fi&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;This will handily extract issue names from branch names ranging from "web_14800_some_feature" to "NXT-1701". The first few letters are project codes, of which FlightAware has dozens like PREDICT, ADSB, OPS, NXT, WEB, etc., so we want to support as many as possible.&lt;/p&gt;
      &lt;p&gt;Now I'm a strong believer in not executing code without understanding what's happening, so let's break it down.&lt;/p&gt;

      &lt;h2&gt;Capturing commit information&lt;/h2&gt;
      &lt;pre lang="zsh"&gt;&lt;code class="language-shell"&gt;COMMIT_MSG_FILE=$1
COMMIT_SOURCE=$2
SHA1=$3&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;To keep track of the information passed to the hook script, we store the arguments in variables. Here's what we get:&lt;/p&gt;
      &lt;ol&gt;
        &lt;li&gt;The first argument is the name of the temporary commit message file, where git physically keeps the message until the commit is complete.&lt;/li&gt;
        &lt;li&gt;The second argument is the commit source. We'll check this a couple lines below.&lt;/li&gt;
        &lt;li&gt;The third argument is the commit's SHA-1 hash. This script won't use it, but it could be useful to capture if you extend this hook in the future.&lt;/li&gt;
      &lt;/ol&gt;

      &lt;h2&gt;Checking the commit source&lt;/h2&gt;
      &lt;pre lang="zsh"&gt;&lt;code class="language-shell"&gt;if [[ -z "$COMMIT_SOURCE" ]]&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;Before doing any work, the script checks that there is no special commit source, using the &lt;code class="language-shell"&gt;-z&lt;/code&gt; test to verify the variable is empty. This source gives us some info on why this commit is happening. It could be "merge" or "squash" or a number of other sources. In a "normal" commit, the source is just empty. We don't want to mess with any special commits (by FlightAware convention), so we only proceed when the source is empty.&lt;/p&gt;

      &lt;h3&gt;Getting the branch name&lt;/h3&gt;
      &lt;pre lang="zsh"&gt;&lt;code class="language-shell"&gt;  branch=$(git symbolic-ref --short HEAD)&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;We also need the branch name, which (may) contain an issue number. Git's &lt;code&gt;symbolic-ref&lt;/code&gt; command lets us get info about a symbolic reference, in this case &lt;code&gt;HEAD&lt;/code&gt;. The &lt;code class="language-shell"&gt;--short&lt;/code&gt; option shortens the symbolic ref's path to just its name. For example, this would shorten the full path &lt;code&gt;refs/heads/main&lt;/code&gt; to just the name &lt;code&gt;main&lt;/code&gt;.&lt;/p&gt;

      &lt;h2&gt;Extracting the issue number&lt;/h2&gt;
      &lt;pre lang="zsh"&gt;&lt;code class="language-shell"&gt;  if [[ "$branch" =~ ^([a-zA-Z]+)[_-]([0-9]+).* ]]&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;The next &lt;code class="language-shell"&gt;if&lt;/code&gt; condition performs two actions. First, it only resolves as true if there &lt;em&gt;is&lt;/em&gt; an issue number in the branch name—at least of the form we use at FlightAware. Second, the &lt;code class="language-shell"&gt;()&lt;/code&gt; capture groups take the project code and ticket number and implicitly store them in the &lt;code class="language-shell"&gt;$match&lt;/code&gt; array. Any description text after the issue number is ignored.&lt;/p&gt;

      &lt;h2&gt;Formatting the issue number&lt;/h2&gt;
      &lt;pre is:raw lang="zsh"&gt;&lt;code class="language-shell"&gt;    ticket="${match[1]:u}-${match[2]}"&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;&lt;em&gt;&lt;code class="language-shell"&gt;if&lt;/code&gt;&lt;/em&gt; we find an issue number, we'll need to format it. This is necessary only because of FlightAware conventions; branch names are snake_case, while Jira issue numbers are properly TRAIN-CASE. We simply create a new variable, &lt;code class="language-shell"&gt;ticket&lt;/code&gt;, built from the capture groups stored in &lt;code class="language-shell"&gt;$match&lt;/code&gt;. We upcase the project code portion with the &lt;code class="language-shell"&gt;:u&lt;/code&gt; expansion modifier to complete the transformation. There are other ways to upcase text, but this is the cleanest in my opinion.&lt;/p&gt;

      &lt;h2&gt;Building the commit message&lt;/h2&gt;
      &lt;pre lang="zsh"&gt;&lt;code class="language-shell"&gt;    gitMsg=$(cat "$COMMIT_MSG_FILE")&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;At this point, git has already created a message file and populated it with information including a diff summary. We capture all this pre-existing text into a variable, &lt;code class="language-shell"&gt;gitMsg&lt;/code&gt;, and add it back to the end after overwriting the file, simulating prepending. There are other methods of prepending to a file, most interestingly by using a here-string, but this is the most straightforward method.&lt;/p&gt;
      &lt;pre lang="zsh"&gt;&lt;code class="language-shell"&gt;    printf "\n\n%s\n" $ticket &gt; "$COMMIT_MSG_FILE"
    printf "$gitMsg" &gt;&gt; "$COMMIT_MSG_FILE"&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;Now to build up our new message, we overwrite the existing message with a formatted string including the ticket number. This string starts with two newlines, providing space to put the substantive commit message, and ends with a newline to distance the ticket number from the git-supplied information. Finally, we append that git-supplied info to the message.&lt;/p&gt;
      &lt;p&gt;After this hook is complete, git will continue on its merry way and (probably) open the user's &lt;code class="language-shell"&gt;$EDITOR&lt;/code&gt; as usual.&lt;/p&gt;

      &lt;h2&gt;Usage&lt;/h2&gt;
      &lt;p&gt;To use this hook, save it as &lt;code class="language-shell"&gt;.git/hooks/prepare-commit-msg&lt;/code&gt; and make it executable (&lt;code class="language-shell"&gt;chmod +x&lt;/code&gt;); git will handle the rest!&lt;/p&gt;
      &lt;p&gt;For security reasons, git doesn't allow committing hooks to a repository (nor anything inside &lt;code&gt;.git/&lt;/code&gt;), so this script will have to be added to each project individually. If you'd like to add it to all projects, you can store it in a known directory and have git always look there for hooks:&lt;/p&gt;
      &lt;pre lang="zsh"&gt;&lt;code class="language-shell"&gt;git config --global core.hooksPath /path/to/your/hooks&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;&lt;em&gt;However&lt;/em&gt;, this will stop git from looking at the local hooks directory. Unfortunately you can't have it both ways, at least as of git version 2.18.&lt;/p&gt;</content>
    <link href="https://jmthornton.net/blog/p/jira-commit-msg"/>
    <summary>When your git branch name contains an issue number (e.g. from Jira), automatically format git commit messages with the issue number at the end</summary>
    <published>2020-08-15T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/combine-excel</id>
    <title>Combine multiple Excel workbooks into sheets in a single workbook</title>
    <updated>2023-06-27T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;
        It may be useful to have multiple source workbooks which can be individually updated and replaced, then combine them into a single combined workbook for actual use. In my case, the combined workbook is used as a data source in Tableau, with each sheet acting as a table.
      &lt;/p&gt;
      &lt;p&gt;
        Below is a VBA subroutine which can be run in Excel (in the workbook which will be the combined file). The subroutine assumes that all the new, updated files are in a &lt;code&gt;source/&lt;/code&gt; directory which is in the combined file's directory.
      &lt;/p&gt;
      &lt;p&gt;
        Importantly, this subroutine only copies visible sheets (ignoring hidden and very hidden sheets).
      &lt;/p&gt;
      &lt;p&gt;
        &lt;strong&gt;Caution:&lt;/strong&gt; For this script to work, the sheet name in the source files &lt;em&gt;must&lt;/em&gt; match the corresponding sheet name in the combined file.
      &lt;/p&gt;
      &lt;pre lang="vba"&gt;&lt;code class="language-vba"&gt;Sub UpdateSheetsFromSourceFiles()

'Disable alerts to delete silently
Application.DisplayAlerts=FALSE

'We assume source files are in the source\ directory
path = ActiveWorkbook.Path &amp; "\source\"
filename = Dir(path &amp; "*.xlsx")
  Do While filename &lt;&gt; ""
    Workbooks.Open Filename:=path &amp; filename, ReadOnly:=True
    For Each Sheet In ActiveWorkbook.Sheets
      If Sheet.Visible = -1 Then 'Only if sheet is visible
        'Remove old version of sheet to update, then pull in the updated version
        ThisWorkbook.Sheets(Sheet.Name).Delete
        Sheet.Copy After:=ThisWorkbook.Sheets(ThisWorkbook.Worksheets.Count)
      End If
    Next Sheet
    Workbooks(filename).Close
    filename = Dir()
  Loop

'Re-enable alerts
Application.DisplayAlerts=TRUE

End Sub&lt;/code&gt;&lt;/pre&gt;
      &lt;h4&gt;How it works&lt;/h4&gt;
      &lt;ul&gt;
        &lt;li&gt;
          First, we disable alerts. This allows us to delete sheets without asking for confirmation from the user, making it nice and silent.
        &lt;/li&gt;
        &lt;li&gt;
          The path is determined from the &lt;code&gt;ActiveWorkbook&lt;/code&gt;, which in this case is the combined file where the subroutine is running. Using the &lt;code&gt;Dir&lt;/code&gt; function, we get the first &lt;code&gt;xlsx&lt;/code&gt; file in the &lt;code&gt;source\&lt;/code&gt; directory.
        &lt;/li&gt;
        &lt;li&gt;
          Looping through each sheet in the workbook, we first check to see if the sheet is visible (&lt;code&gt;Visible = -1&lt;/code&gt;). If it is, we get to the real workhorse of the function:
          &lt;ul&gt;
            &lt;li&gt;
              In the combined file (&lt;code&gt;ThisWorkbook&lt;/code&gt;), the matching sheet is deleted, then:
            &lt;/li&gt;
            &lt;li&gt;
              The sheet from the source file is copied in, placed at the end of the list of sheets. Since &lt;code&gt;Dir&lt;/code&gt; generally returns files in alphabetical order on Windows, this has the side-effect of roughly alphabetizing the worksheets.
            &lt;/li&gt;
          &lt;/ul&gt;
        &lt;/li&gt;
        &lt;li&gt;
          Once all the sheets in a workbook have been looped through, we close it and move on to the next file which matches the string we gave at the start (calling &lt;code&gt;Dir&lt;/code&gt; with no arguments returns the next matching file).
        &lt;/li&gt;
      &lt;/ul&gt;</content>
    <link href="https://jmthornton.net/blog/p/combine-excel"/>
    <summary>A quick VBA subroutine for combining multiple Excel workbooks into a single workbook, making ingestion into Tableau easier</summary>
    <published>2016-11-19T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/clearnightretro</id>
    <title>ClearNight Retro</title>
    <updated>2018-05-08T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;&lt;img alt="ClearNight Retro" src="../assets/images/retro-title.png"/&gt;&lt;/p&gt;

      &lt;p&gt;After a bit of work, I&amp;#39;m proud to introduce &lt;a href="https://atom.io/packages/clearnight-retro-ui"&gt;ClearNight Retro&lt;/a&gt;, a new dark, relaxed retro theme for Atom! The set is a UI theme with matching syntax theme, but both can work separately with other dark themes like Atom&amp;#39;s own One Dark.&lt;/p&gt;

      &lt;p&gt;The core of the theme is forked from &lt;a href="https://github.com/ClearNight/clear-night-ui"&gt;Clear Night&lt;/a&gt;, and makes use of a relaxed, almost faded color scheme inspired by &lt;a href="https://github.com/morhetz/gruvbox"&gt;Gruvbox&lt;/a&gt;. The theme set will continue to be maintained by me and the &lt;a href="https://github.com/clearnight"&gt;ClearNight organization&lt;/a&gt;. I'm no longer using Atom much; I've been leaning more into my &lt;a href="https://github.com/thornjad/aero"&gt;configuration of Emacs&lt;/a&gt; recently, but the ClearNight themes will continue to receive full support for at least a few years.&lt;/p&gt;

      &lt;h2&gt;Take a look&lt;/h2&gt;

      &lt;div class="no-md-a"&gt;
        &lt;p&gt;&lt;a target="_blank" href="../assets/images/preview.png"&gt;&lt;img class="retro-preview-img" src="../assets/images/preview.png" alt="Javascript preview"&gt; &lt;/a&gt; &lt;/p&gt;

        &lt;p&gt;&lt;a target="_blank" href="../assets/images/clj-preview.png"&gt;&lt;img class="retro-preview-img" src="../assets/images/clj-preview.png" alt="Clojure preview"&gt; &lt;/a&gt; &lt;/p&gt;

        &lt;p&gt;&lt;a target="_blank" href="../assets/images/groovy-preview.png"&gt;&lt;img class="retro-preview-img" src="../assets/images/groovy-preview.png" alt="Groovy preview"&gt; &lt;/a&gt; &lt;/p&gt;

        &lt;p&gt;&lt;a target="_blank" href="../assets/images/preview-overlay.png"&gt;&lt;img class="retro-preview-img" src="../assets/images/preview-overlay.png" alt="Modal overly preview"&gt; &lt;/a&gt; &lt;/p&gt;

        &lt;p&gt;&lt;a target="_blank" href="../assets/images/preview-settings-view.png"&gt;&lt;img class="retro-preview-img" src="../assets/images/preview-settings-view.png" alt="Settings view preview"&gt; &lt;/a&gt; &lt;/p&gt;
      &lt;/div&gt;

      &lt;h2&gt;Try it out&lt;/h2&gt;

      &lt;div class="no-md-a"&gt;
        &lt;p&gt;ClearNight Retro is fully released and ready to make your code look good!&lt;/p&gt;

        &lt;p class="install-badges"&gt;&lt;a href="https://atom.io/packages/clearnight-retro-ui"&gt;&lt;img alt="apm install clearnight-retro-ui" src="https://apm-badges.herokuapp.com/apm/clearnight-retro-ui.svg?theme=one-dark"/&gt;&lt;/a&gt;&lt;a href="https://atom.io/packages/clearnight-retro-syntax"&gt;&lt;img alt="apm install clearnight-retro-syntax" src="https://apm-badges.herokuapp.com/apm/clearnight-retro-syntax.svg?theme=one-dark"/&gt;&lt;/a&gt;&lt;/p&gt;
      &lt;/div&gt;</content>
    <link href="https://jmthornton.net/blog/p/clearnightretro"/>
    <summary>Introducing a new dark, relaxed retro theme for the Atom text editor</summary>
    <published>2018-05-08T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/emacs-project-override</id>
    <title>Overriding project.el project root in Emacs</title>
    <updated>2023-03-07T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;
        I've recently been experimenting with replacing
        &lt;a href="https://projectile.mx/"&gt;Projectile&lt;/a&gt; with the built-in &lt;code&gt;project.el&lt;/code&gt;, and
        so far it has impressed me. Not only are all of Projectile's useful features available, but for
        me
        &lt;code&gt;project.el&lt;/code&gt;
        runs significantly faster in large repositories. If you're not familiar, both of these packages
        provide functions to search and operate on files and directories in the same project. If you're
        using Git, a "project" is probably synonymous with a repository.
      &lt;/p&gt;
      &lt;p&gt;
        Unfortunately, project detection is not always as easy as looking for a
        &lt;code&gt;.git/&lt;/code&gt; nearby, and sometimes Emacs gets it wrong. Projectile solves this by also
        looking for a &lt;code&gt;.projectile&lt;/code&gt; file, which overrides detection and says "this is the
        project root". This happens to be one of the features missing in &lt;code&gt;project.el&lt;/code&gt;.
      &lt;/p&gt;
      &lt;hr /&gt;
      &lt;h3&gt;Update&lt;/h3&gt;
      &lt;p&gt;Since the writing of this post, Emacs 29 has been released and introduces a variable which may solve this same issue in a cleaner way! Unfortunately it doesn't appear to work for me, but some folks on Reddit commented that it seems to function as expected. So, it may be worth a shot for you: set &lt;code&gt;project-vc-extra-root-markers&lt;/code&gt; to a list of file names or glob patterns which mark a project's root in addition to the default ".git", ".hg" and other common markers.&lt;/p&gt;
      &lt;p&gt;So, the cleaner equivalent to the rest of this post, if it works for you, is a simple:&lt;/p&gt;
      &lt;pre lang="lisp"&gt;&lt;code class="language-lisp"&gt;(setq project-vc-extra-root-markers '(&amp;quot;.project.el&amp;quot; &amp;quot;.projectile&amp;quot;))&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;If, however, you're like me and can't seem to get this to do anything, the original post still works:&lt;/p&gt;
      &lt;hr /&gt;
      &lt;h3&gt;Original post&lt;/h3&gt;
      &lt;p&gt;
        Luckily, we can provide our own function to &lt;code&gt;project.el&lt;/code&gt; which looks for a file like
        this in the current and parent directories. Even better, the excellent Emacs community has
        already jumped on this, and a splendid solution was
        &lt;a href="https://michael.stapelberg.ch/posts/2021-04-02-emacs-project-override/"
          &gt;provided by Michael Stapelberg&lt;/a
        &gt;.
      &lt;/p&gt;
      &lt;p&gt;
        Alas, Michael couldn't have foreseen that Emacs would change the project root data format in
        Emacs 29, so the provided function only works in earlier versions. However, adding in forward
        compatibility isn't much trouble. And while we're at it, we can also provide support for anyone
        else moving from Projectile like I am, by allowing &lt;code&gt;.projectile&lt;/code&gt; to serve as a
        project root marker alongside Michael's &lt;code&gt;.project.el&lt;/code&gt;.
      &lt;/p&gt;
      &lt;pre is:raw lang="lisp"&gt;&lt;code class="language-lisp"&gt;(defun project-root-override (dir)
  "Find DIR's project root by searching for a '.project.el' file.

If this file exists, it marks the project root. For convenient compatibility
with Projectile, '.projectile' is also considered a project root marker.

https://jmthornton.net/blog/p/emacs-project-override"
  (let ((root (or (locate-dominating-file dir ".project.el")
                  (locate-dominating-file dir ".projectile")))
        (backend (ignore-errors (vc-responsible-backend dir))))
    (when root (if (version&lt; emacs-version "29")
                    (cons 'vc root)
                  (list 'vc backend root)))))

;; Note that we cannot use :hook here because `project-find-functions' doesn't
;; end in "-hook", and we can't use this in :init because it won't be defined
;; yet.
(use-package project
  :config
  (add-hook 'project-find-functions #'project-root-override))&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;
        Now we can use &lt;code&gt;touch .project.el&lt;/code&gt; in any directory, and &lt;code&gt;project.el&lt;/code&gt; will
        recognize it as the project root!
      &lt;/p&gt;
      &lt;p&gt;
        By the way, the snippet above makes use of
        &lt;a href="https://github.com/jwiegley/use-package"&gt;use-package&lt;/a&gt; which provides fantastic
        package configuration and loading ability. John Wiegley is currently
        &lt;a href="https://github.com/jwiegley/use-package/issues/282"
          &gt;working on adding it into Emacs itself&lt;/a
        &gt;, so it shouldn't be long before this code snippet is fully native!
      &lt;/p&gt;
      &lt;p&gt;
        One note: in an ideal world, I'd prefer the root marker to be just &lt;code&gt;.project&lt;/code&gt; instead
        of &lt;code&gt;.project.el&lt;/code&gt;, but that name is already widely used by other tools like Eclipse and I'd
        rather not cause conflicts. If you'd like to use this in your own Emacs, you can of course
        change the function to check for anything you want.
      &lt;/p&gt;
      &lt;aside&gt;
        &lt;em&gt;Pro tip:&lt;/em&gt; If you'd like to use a project root marker like this, but you don't want other
        developers to have to worry about it (i.e. you don't want to commit it nor add it to the
        &lt;code&gt;.gitignore&lt;/code&gt;), you can always add locally-ignored files to
        &lt;code&gt;.git/info/exclude&lt;/code&gt;.
      &lt;/aside&gt;</content>
    <link href="https://jmthornton.net/blog/p/emacs-project-override"/>
    <summary>A short function to override what project.el thinks your project root is with a hidden file</summary>
    <published>2022-04-21T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/git-submodule-deinit</id>
    <title>Remove (deinit) a Git submodule</title>
    <updated>2020-11-01T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;Every once in a while, I need to deinit a Git submodule, but I just cannot get my brain to remember how, so here's my reminder:&lt;/p&gt;
      &lt;pre lang="zsh"&gt;&lt;code class="language-shell"&gt;git submodule deinit -f path/to/submodule
rm -rf .git/modules/path/to/submodule
git rm -f path/to/submodule&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;After those three commands, you're free to commit the deinit. Note that this will not actually remove the submodule's files locally; you'll need &lt;code&gt;rm -r&lt;/code&gt; to take care of that.&lt;/p&gt;
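      &lt;p&gt;Since I know I'll forget the exact steps anyway, here's a sketch of wrapping them in a Git alias. The name &lt;code&gt;unsubmodule&lt;/code&gt; is my own invention, and the shell-function form lets the alias take the submodule path as an argument.&lt;/p&gt;

```shell
# Define a Git alias that runs the three removal steps in order.
# "unsubmodule" is an invented name; use whatever you'll remember.
git config --global alias.unsubmodule '!f() {
  git submodule deinit -f -- "$1" &&
  rm -rf ".git/modules/$1" &&
  git rm -f -- "$1"
}; f'
```

      &lt;p&gt;After that, &lt;code&gt;git unsubmodule path/to/submodule&lt;/code&gt; handles the whole dance.&lt;/p&gt;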
      &lt;p&gt;To make things just a bit easier, sticking those three lines into a script behind a Git alias means there's only one command to remember.&lt;/p&gt;</content>
    <link href="https://jmthornton.net/blog/p/git-submodule-deinit"/>
    <summary>A quick reminder of how to remove a Git submodule, because I always forget</summary>
    <published>2020-11-01T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/protonmail</id>
    <title>Protonmail - Why I switched and you should too</title>
    <updated>2016-10-26T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;
        &lt;strong&gt;Email privacy&lt;/strong&gt; has historically been less friendly than one might hope, with the most widespread technology being &lt;a href="https://en.wikipedia.org/wiki/Pretty_Good_Privacy"&gt;PGP&lt;/a&gt; (Pretty Good Privacy). While PGP is actually relatively easy to use, it remains solidly outside the comfort zone of the average user. This is true even with increasing awareness and concern about government snooping and corporate tracking. Privacy is fundamental to the human experience, yet it continues to escape the purview of most people.
      &lt;/p&gt;
      &lt;p&gt;
        Enter &lt;a href="https://protonmail.com/"&gt;ProtonMail&lt;/a&gt;, a secure email service designed to be free, easy to use and mobile friendly. Developed by scientists at the &lt;a href="https://en.wikipedia.org/wiki/CERN"&gt;European Organization for Nuclear Research&lt;/a&gt; (CERN)&amp;mdash;the place where all the &lt;a href="http://www.nbcnews.com/science/science-news/god-particle-new-cern-experiments-shed-more-light-higgs-boson-n419926"&gt;Higgs Boson hype&lt;/a&gt; came from&amp;mdash;ProtonMail seeks to give encryption to everyone, &lt;a href="https://protonmail.com/security-details"&gt;protecting them&lt;/a&gt; from mass surveillance by governments and corporations. It uses very strong &lt;a href="https://protonmail.com/blog/protonmail-open-source/"&gt;open source cryptography&lt;/a&gt; to encrypt your data before it even leaves your computer (or phone or tablet). It stays encrypted &lt;em&gt;even if you're sending it to someone who doesn't use ProtonMail&lt;/em&gt;. The ProtonMail servers don't have the key to your encrypted emails, which means they can't see what your email holds even if they wanted to, so your email couldn't be shared even if a government ordered it. Plus, the servers themselves are protected by Switzerland's &lt;a href="https://protonmail.com/blog/switzerland/"&gt;very strong privacy laws&lt;/a&gt;.
      &lt;/p&gt;
      &lt;p&gt;
        Other features include the ability to set an expiration time on your emails, plus easy-to-use filters and tags for organization. While there are paid versions which add extra functionality (like extra storage, custom aliases, priority support, etc.), most users only need the standard version, which will be free forever with no ads. How do they keep running a free service without ads?
      &lt;/p&gt;
      &lt;p&gt;
        Besides their record-breaking crowdfunding &lt;a href="https://www.indiegogo.com/projects/protonmail#/"&gt;campaign on Indiegogo&lt;/a&gt;, they receive donations from their many users. This is combined with heavier users like myself who use the paid services to gain access to functionality like using custom domain names (ProtonMail secures my public-facing email). In addition to all their features, they don't track or log any identifiable information—&lt;a href="https://tech.slashdot.org/story/16/10/22/008216/google-has-quietly-dropped-ban-on-personally-identifiable-web-tracking?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A+Slashdot%2Fslashdot+%28Slashdot%29"&gt;unlike Google&lt;/a&gt; or &lt;a href="https://protonmail.com/blog/yahoo-us-intelligence/"&gt;Yahoo&lt;/a&gt;.
      &lt;/p&gt;
      &lt;p&gt;
        ProtonMail offers a beautiful, &lt;a href="https://protonmail.com/blog/lovie-award-protonmail-encrypted-email/"&gt;Lovie design award nominated&lt;/a&gt; web-based client, as well as highly rated, easy-to-use apps for &lt;a href="https://play.google.com/store/apps/details?id=ch.protonmail.android"&gt;Android&lt;/a&gt; and &lt;a href="https://itunes.apple.com/us/app/protonmail-encrypted-email/id979659905?mt=8"&gt;iOS&lt;/a&gt;. For the especially security-minded, a standalone APK is also in the works. Plus&amp;mdash;a necessity with personal data&amp;mdash;ProtonMail offers two-factor authentication.
      &lt;/p&gt;
      &lt;p&gt;
        Of course, nothing can be perfect. ProtonMail is fully released, but doesn't yet offer the full functionality of a service like Gmail. Don't get me wrong, email is their focus and that part is fantastic. However, the service does not yet offer a calendar or even calendar integration. Nor is there instant messaging or a desktop client. The spam filter isn't perfect (though none are), and I would prefer more storage space. Still, with many of these features on their way as development continues, I am very pleased with the state of ProtonMail.
      &lt;/p&gt;
      &lt;p&gt;
        Privacy is an important part of life and secure email is a huge step in the direction of this basic freedom. Going against the grain of less user-friendly&amp;mdash;though still very secure&amp;mdash;methods of encryption like PGP, ProtonMail brings email privacy back to the people while still maintaining a fantastic user experience characteristic of a first-class email service. This is why I've switched all my mail to ProtonMail, and why I encourage you, without hesitation, to do the same. And for the record, no, I have not been paid or asked to write about this; I really do love ProtonMail that much.
      &lt;/p&gt;
      &lt;p&gt;
        &lt;a href="https://protonmail.com/signup"&gt;Sign up for a free ProtonMail account today&lt;/a&gt;
      &lt;/p&gt;</content>
    <link href="https://jmthornton.net/blog/p/protonmail"/>
    <summary>ProtonMail brings email privacy to the people while maintaining a fantastic user experience characteristic of a first-class email service.</summary>
    <published>2016-10-26T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/snyk-ls-emacs</id>
    <title>Snyk Language Server in Emacs</title>
    <updated>2023-11-07T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;
        At &lt;a href="https://www.dronedeploy.com"&gt;DroneDeploy&lt;/a&gt;, we've been experimenting with more and more AI tools like GitHub Copilot, but we're concerned about the security implications of incorporating complex generated code snippets into our production projects. To help out, we're experimenting with the &lt;a href="https://snyk.io"&gt;Snyk&lt;/a&gt; language server, which provides security insights and vulnerability scanning for code and dependencies. However, there's no official package for using the Snyk language server in Emacs. So, I dove into making Snyk and Emacs get along nicely, a feat which returns exactly zero search results as of the writing of this post. I finally got it working, and I hope this helps anyone else looking for a solution to the same issue.
      &lt;/p&gt;
      &lt;hr /&gt;
      &lt;h4&gt;Update&lt;/h4&gt;
      &lt;p&gt;Since the writing of this post, we've concluded our experiment and found that Snyk was not worth the cost for us, and we find more value in investing in better peer review practices instead. However, the process of setting up a language server configuration was still interesting.&lt;/p&gt;
      &lt;hr /&gt;
      &lt;p&gt;
        Before we start, you should already:
        &lt;ul&gt;
          &lt;li&gt;
            Be somewhat familiar with Emacs Lisp since these steps don't explain a whole lot and you should always be wary of copying code off the Internet.
          &lt;/li&gt;
          &lt;li&gt;
            Be using &lt;a href="https://emacs-lsp.github.io/lsp-mode/"&gt;LSP Mode&lt;/a&gt;&amp;mdash;as of right now, the built-in Eglot (in v29+) can't run add-on servers in parallel, so LSP Mode is the only option.
          &lt;/li&gt;
        &lt;/ul&gt;
      &lt;/p&gt;
      &lt;h3&gt;Snyk Language Server Setup&lt;/h3&gt;
      &lt;p&gt;
        Since no one seems to have written about this before, this may not be a perfect approach, but this is the workflow I've found to be effective. The &lt;a href="https://docs.snyk.io/integrations/ide-tools/language-server"&gt;official IDE integration documentation&lt;/a&gt; is a bit outdated, so the following steps reference &lt;a href="https://github.com/snyk/snyk-ls"&gt;the current server documentation&lt;/a&gt;.
      &lt;/p&gt;
      &lt;ol&gt;
        &lt;li&gt;
          First, install the server using the bash installer script provided by Snyk: &lt;a href="https://github.com/snyk/snyk-ls/blob/main/getLanguageServer.sh"&gt;snyk/snyk-ls/main/getLanguageServer.sh&lt;/a&gt;. This is basically a fancy wrapper around a curl call, but &lt;strong&gt;don't blindly run scripts from the Internet&lt;/strong&gt;. Please do take a look at what this script is doing before you run it on your own system.
        &lt;/li&gt;
        &lt;li&gt;
          Install the Snyk CLI (&lt;code class="language-shell"&gt;brew tap snyk/tap &amp;&amp; brew install snyk&lt;/code&gt;). This step lets you get your authentication token. Since the Snyk language server supports automatic authentication, this shouldn't actually be necessary, but I had issues with excessively repeated re-authentication, so the token provides a workaround.
        &lt;/li&gt;
        &lt;li&gt;
          Run &lt;code class="language-shell"&gt;snyk config get api&lt;/code&gt; to actually get your token after authenticating in your browser. Especially if your config is version-controlled, &lt;strong&gt;do not&lt;/strong&gt; commit the token; load it from an external source (my &lt;code&gt;init.el&lt;/code&gt; loads a git-ignored &lt;code&gt;init.local.el&lt;/code&gt; if it exists, that's one place to put such a token).
        &lt;/li&gt;
        &lt;li&gt;
          Finally, register the LSP client with LSP Mode:
        &lt;/li&gt;
      &lt;/ol&gt;
      &lt;pre lang="lisp"&gt;&lt;code class="language-lisp"&gt;(lsp-register-client
 (make-lsp-client
  :server-id 'snyk-ls

  ;; The "-o" option specifies the issue format, I prefer markdown over HTML
  :new-connection (lsp-stdio-connection '("snyk-ls" "-o" "md"))

  ;; Change this to the modes you want this in; you may want to include the
  ;; treesitter versions if you're using them
  :major-modes '(python-mode typescript-mode)

  ;; Allow running in parallel with other servers. This is why Eglot isn't an
  ;; option right now
  :add-on? t

  :initialization-options
  `(:integrationName "Emacs"
    :integrationVersion ,emacs-version

    ;; GET THIS FROM SOMEWHERE ELSE, don't hardcode it
    :token ,snyk-ls-token

    ;; Enable these features only if available for your organization.
    ;; Note: these are strings, not booleans; that's what the server
    ;; expects for whatever reason
    :activateSnykCodeSecurity "true"
    :activateSnykCodeQuality "true"

    ;; List trusted folders here to avoid repeated permission requests
    :trustedFolders [])))&lt;/code&gt;&lt;/pre&gt;
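      &lt;p&gt;As an aside, step 3 above mentions loading the token from a git-ignored file rather than hardcoding it. Here's a sketch of that idea; the file name &lt;code&gt;init.local.el&lt;/code&gt; and the variable &lt;code&gt;snyk-ls-token&lt;/code&gt; are conventions from my own setup, not anything Snyk requires:&lt;/p&gt;

```lisp
;; Near the top of init.el: load machine-local settings when present.
;; init.local.el is git-ignored, so secrets never reach version control.
(let ((local-init (locate-user-emacs-file "init.local.el")))
  (when (file-exists-p local-init)
    (load local-init nil 'nomessage)))

;; init.local.el then defines the token used in the registration above:
;; (setq snyk-ls-token "paste-your-token-here")
```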
      &lt;h3&gt;Using Snyk in Emacs&lt;/h3&gt;
      &lt;p&gt;
        After setting everything up, simply open a file using one of the major modes you configured above. If it's not in one of the &lt;code class="language-lisp"&gt;:trustedFolders&lt;/code&gt;, Snyk will ask you if you'd like to trust the one you're in. Keep in mind that Snyk takes a few seconds to start once you open the first file in a project. So, if you don't see it right away, just be patient.
      &lt;/p&gt;
      &lt;p&gt;
        And there you have it! You can now use the Snyk language server in Emacs.
      &lt;/p&gt;</content>
    <link href="https://jmthornton.net/blog/p/snyk-ls-emacs"/>
    <summary>Setting up the Snyk Language Server for use in Emacs by defining a custom server connection in LSP Mode</summary>
    <published>2023-06-01T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/duplicated-ids</id>
    <title>Duplicate IDs in HTML: What would happen?</title>
    <updated>2017-09-27T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;
        The &lt;code class="language-html"&gt;id&lt;/code&gt; attribute is an often-used and useful HTML attribute. However, its value is always meant to be unique. The &lt;a href="https://www.w3.org/TR/html5/dom.html#the-id-attribute"&gt;HTML5 specification&lt;/a&gt; explicitly says:
      &lt;/p&gt;
      &lt;blockquote cite="W3C HTML5 Specification"&gt;
        The value must be unique amongst all the IDs in the element's home subtree
      &lt;/blockquote&gt;
      &lt;p&gt;
        The &lt;a href="https://w3c.github.io/html/dom.html#element-attrdef-global-id"&gt;HTML 4.01 specification&lt;/a&gt; says basically the same thing, that an &lt;code class="language-html"&gt;id&lt;/code&gt; "must be unique in a document".
      &lt;/p&gt;
      &lt;p&gt;
        That's pretty clear, and is usually followed. When you want to use the same identifier for multiple elements, you should (and probably already do) use a &lt;code class="language-html"&gt;class&lt;/code&gt; attribute instead. But what if you do reuse the same &lt;code class="language-html"&gt;id&lt;/code&gt;, intentionally or not? Take this example:
      &lt;/p&gt;
      &lt;pre&gt;&lt;code class="language-html"&gt;&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;html&amp;gt;
  &amp;lt;head&amp;gt;
    &amp;lt;meta charset=&amp;quot;utf-8&amp;quot;&amp;gt;
    &amp;lt;title&amp;gt;Duplicated IDs&amp;lt;/title&amp;gt;
    &amp;lt;style&amp;gt;
      #sec1, #sec2 &amp;#123;
        color: blue;
      &amp;#125;

      #sec3 &amp;#123;
        color: red;
      &amp;#125;
    &amp;lt;/style&amp;gt;
  &amp;lt;/head&amp;gt;
  &amp;lt;body&amp;gt;
    &amp;lt;nav&amp;gt;
      &amp;lt;a href=&amp;quot;#sec1&amp;quot;&amp;gt;Goto 1&amp;lt;/a&amp;gt;
      &amp;lt;a href=&amp;quot;#sec2&amp;quot;&amp;gt;Goto 2&amp;lt;/a&amp;gt;
      &amp;lt;a href=&amp;quot;#sec3&amp;quot;&amp;gt;Goto 3&amp;lt;/a&amp;gt;
      &amp;lt;a href=&amp;quot;#sec3&amp;quot;&amp;gt;Goto 4&amp;lt;/a&amp;gt;
    &amp;lt;/nav&amp;gt;

    &amp;lt;div id=&amp;quot;sec1&amp;quot;&amp;gt;
      &amp;lt;p&amp;gt;Section 1&amp;lt;/p&amp;gt;
    &amp;lt;/div&amp;gt;

    &amp;lt;div id=&amp;quot;sec2&amp;quot;&amp;gt;
      &amp;lt;p&amp;gt;Section 2&amp;lt;/p&amp;gt;
    &amp;lt;/div&amp;gt;

    &lt;span class="highlight-code"&gt;&amp;lt;div id=&amp;quot;sec3&amp;quot;&amp;gt;&lt;/span&gt;
      &amp;lt;p&amp;gt;Section 3&amp;lt;/p&amp;gt;
    &amp;lt;/div&amp;gt;

    &lt;span class="highlight-code"&gt;&amp;lt;div id=&amp;quot;sec3&amp;quot;&amp;gt;&lt;/span&gt;
      &amp;lt;p&amp;gt;Section 4&amp;lt;/p&amp;gt;
    &amp;lt;/div&amp;gt;

    &amp;lt;script&amp;gt;
      document.getElementById('sec3').innerHTML = 'Sup!';
    &amp;lt;/script&amp;gt;
  &amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;Notice that both 'Section 3' and 'Section 4' have the same &lt;code class="language-html"&gt;id="sec3"&lt;/code&gt;. Oh no! The HTML police are already knocking on my door! So what actually happens when a browser comes across duplicated &lt;code class="language-html"&gt;id&lt;/code&gt;'s? How does styling work? Document fragments? Javascript accessing the DOM?&lt;/p&gt;
      &lt;p&gt;Surprisingly, behavior is quite consistent across modern browsers as of this writing. Modern browsers&amp;mdash;including Chrome, Firefox, Opera, etc.&amp;mdash;are quite forgiving when it comes to some invalid HTML like duplicate &lt;code class="language-html"&gt;id&lt;/code&gt;'s. The HTML5 specification lays out three main uses for unique identifiers: fragment identifiers, DOM targeting for scripting and CSS styling. The behavior of duplication differs for each of these uses, but they are related.&lt;/p&gt;
      &lt;ul&gt;
        &lt;li&gt;&lt;strong&gt;Document fragments&lt;/strong&gt;: The browser will navigate to the first instance of the specified &lt;code class="language-html"&gt;id&lt;/code&gt;. In the example above, clicking on the nav links "Goto 3" and "Goto 4" will both go to "Section 3" since they both use &lt;code class="language-html"&gt;href="#sec3"&lt;/code&gt;.&lt;/li&gt;
        &lt;li&gt;&lt;strong&gt;Javascript targeting&lt;/strong&gt;: Javascript will select the first instance of the &lt;code class="language-html"&gt;id&lt;/code&gt;. In the example, the script will only change the HTML inside the "Section 3" div.&lt;/li&gt;
        &lt;li&gt;&lt;strong&gt;Styling&lt;/strong&gt;: All instances of the &lt;code class="language-html"&gt;id&lt;/code&gt; are styled. In the example above, both "Section 3" and "Section 4" are styled with &lt;code class="language-html"&gt;color: red;&lt;/code&gt;.&lt;/li&gt;
      &lt;/ul&gt;
      &lt;p&gt;In all, document fragments and Javascript choose the first element, while CSS styles them all.&lt;/p&gt;
      &lt;p&gt;Why do all the browsers act the same, even though the specification doesn't say what to do and forbids duplicated identifiers in the first place? The specific reasons are varied, but in general browsers trend towards being developer-friendly. This means rendering bad markup by doing their best to interpret what the author means, and failing gracefully&amp;mdash;often silently&amp;mdash;when necessary. But does this mean it's okay to duplicate &lt;code class="language-html"&gt;id&lt;/code&gt;'s? &lt;strong&gt;No&lt;/strong&gt;.&lt;/p&gt;
      &lt;p&gt;The HTML5 specification, as well as &lt;a href="https://www.w3.org/TR/html51/dom.html#the-id-attribute"&gt;HTML5.1&lt;/a&gt; and &lt;a href="https://w3c.github.io/html/dom.html#element-attrdef-global-id"&gt;HTML5.2&lt;/a&gt;, doesn't dictate what browsers should do with duplicate &lt;code class="language-html"&gt;id&lt;/code&gt;'s, nor does any other standard. Therefore, &lt;strong&gt;&lt;em&gt;behavior is not guaranteed&lt;/em&gt;&lt;/strong&gt; and could change at any time.&lt;/p&gt;
      &lt;p&gt;The point of this article is that &lt;strong&gt;&lt;em&gt;while browsers are forgiving of duplicated &lt;code class="language-html"&gt;id&lt;/code&gt;'s, this behavior is not guaranteed or predictable, and should always be avoided&lt;/em&gt;&lt;/strong&gt;. Duplicated identifiers are bad style, create confusing code, are invalid HTML and most importantly, they make me cry. Surely you don't want me to cry.&lt;/p&gt;
      &lt;p&gt;More often than the world should allow (which is not at all), I have come across invalid HTML like this in professional environments. Fixing it is time-consuming and typically frustrating, and it drives me to drink (coffee). The world of development is filled with invalid and unreadable code, and this is why I cry.&lt;/p&gt;</content>
    <link href="https://jmthornton.net/blog/p/duplicated-ids"/>
    <summary>Duplicated IDs occur both intentionally and not, so what happens when a browser tries to render them?</summary>
    <published>2017-09-27T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/http-server-v012</id>
    <title>What's new in http-server v0.12?</title>
    <updated>2019-11-23T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;
        &lt;a href="https://github.com/indexzero/http-server"
          &gt;&lt;code class="language-bash"&gt;http-server&lt;/code&gt;&lt;/a
        &gt;
        has gone a while without a substantial release, despite issues and
        PRs coming in at full force. Unfortunately, the core team is
        small (only three devs), and we all have busy lives and full-time
        careers. But we've been able to pull together a major release
        with some interesting improvements.
      &lt;/p&gt;

      &lt;h2&gt;Ecstatic v4&lt;/h2&gt;

      &lt;p&gt;
        One of the largest sets of changes is a round of improvements
        to the underlying project,
        &lt;a href="https://github.com/jfhbrook/node-ecstatic"
          &gt;&lt;code class="language-bash"&gt;ecstatic&lt;/code&gt;&lt;/a
        &gt;, which recently reached version 4.0.0, and has had a few patch
        releases since. For
        &lt;code class="language-bash"&gt;http-server&lt;/code&gt;, the most important
        improvements include:
      &lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;
          Ability to override MIME types with a
          &lt;code class="language-bash"&gt;.types&lt;/code&gt; file
        &lt;/li&gt;
        &lt;li&gt;Improved charset detection&lt;/li&gt;
        &lt;li&gt;
          Improved accuracy when checking if gzip responses are allowed
        &lt;/li&gt;
        &lt;li&gt;Elimination of a file descriptor leak&lt;/li&gt;
        &lt;li&gt;Elimination of a DOS vulnerability&lt;/li&gt;
      &lt;/ul&gt;

      &lt;h2&gt;Brotli encoding&lt;/h2&gt;

      &lt;p&gt;
        In addition to gzip compression,
        &lt;code class="language-bash"&gt;http-server&lt;/code&gt; can now serve
        &lt;a href="https://en.wikipedia.org/wiki/Brotli"&gt;Brotli encoded&lt;/a&gt;
        content. This provides better compression ratios than gzip for
        many types of content, especially text-based assets like HTML,
        CSS, and JavaScript.
      &lt;/p&gt;

      &lt;h2&gt;.httpserverrc settings file&lt;/h2&gt;

      &lt;p&gt;
        This improvement has been a long time coming, with the original PR
        opened at the start of 2015! We know passing switches can become
        cumbersome when automating server startup, so now it's a bit
        easier.
      &lt;/p&gt;

      &lt;p&gt;
        Now you can drop a &lt;code class="language-bash"&gt;.httpserverrc&lt;/code&gt; file
        into the directory to be served. The file is simple JSON and uses
        the same switch arguments as the CLI. This is one of my personal
        favorite improvements!
      &lt;/p&gt;

      &lt;p&gt;
        Here's an example
        &lt;code class="language-bash"&gt;.httpserverrc&lt;/code&gt; file:
      &lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-json"&gt;{
  "port": 8080,
  "cache": 3600,
  "cors": true,
  "log-ip": true
}&lt;/code&gt;&lt;/pre&gt;

      &lt;h2&gt;--find-port&lt;/h2&gt;

      &lt;p&gt;
        If the port given with the
        &lt;code class="language-bash"&gt;-p&lt;/code&gt; switch is not available,
        this new switch will allow the server to automatically find a free
        port to bind to. No more "port already in use" errors when you
        just want to get a server running quickly!
      &lt;/p&gt;

      &lt;h2&gt;Support for HTTP basic access authentication&lt;/h2&gt;

      &lt;p&gt;
        &lt;code class="language-bash"&gt;http-server&lt;/code&gt; now supports basic
        authentication by passing
        &lt;code class="language-bash"&gt;--username&lt;/code&gt; and
        &lt;code class="language-bash"&gt;--password&lt;/code&gt; switches, which the
        client must authenticate with. This is useful for protecting
        development servers or staging environments.
      &lt;/p&gt;

      &lt;h2&gt;Client IP logging&lt;/h2&gt;

      &lt;p&gt;
        By passing the &lt;code class="language-bash"&gt;--log-ip&lt;/code&gt; option,
        the client's IP is logged to stdout. This can be helpful for
        debugging or monitoring which clients are accessing your server.
      &lt;/p&gt;

      &lt;h2&gt;Improved handling of proxy errors&lt;/h2&gt;

      &lt;p&gt;
        Previously, proxy errors had the potential to crash the server
        with an unhelpful error message. Now, the server logs the error
        (and status code) and continues without a crash. Much more robust
        for production use.
      &lt;/p&gt;

      &lt;h2&gt;Other improvements&lt;/h2&gt;

      &lt;ul&gt;
        &lt;li&gt;
          Aggressive &lt;code class="language-bash"&gt;no-cache&lt;/code&gt; when the
          cache option is set to &lt;code class="language-bash"&gt;-1&lt;/code&gt;
        &lt;/li&gt;
        &lt;li&gt;
          A clever hack for a catch-all redirect (useful for single page
          apps) was added to the Readme page
        &lt;/li&gt;
        &lt;li&gt;
          Fixed some issues with the
          &lt;code class="language-bash"&gt;-o [path]&lt;/code&gt; switch
        &lt;/li&gt;
        &lt;li&gt;Better test messaging on some types of errors&lt;/li&gt;
        &lt;li&gt;Cleaner handling of setting options behind the scenes&lt;/li&gt;
      &lt;/ul&gt;

      &lt;p&gt;
        We're excited to get this release out to the community. The
        combination of these improvements should make
        &lt;code class="language-bash"&gt;http-server&lt;/code&gt; more robust,
        feature-rich, and easier to use. Keep an eye on the
        &lt;a href="https://github.com/indexzero/http-server"
          &gt;GitHub repository&lt;/a
        &gt;
        for the official release announcement!
      &lt;/p&gt;</content>
    <link href="https://jmthornton.net/blog/p/http-server-v012"/>
    <summary>A preview of upcoming features in http-server v0.12, including Brotli compression, configuration files, and improved error handling.</summary>
    <published>2019-11-23T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/array-removeelement</id>
    <title>Remove an element from an Array in Javascript</title>
    <updated>2017-10-25T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;From time to time, one comes to the issue of removing elements from an Array. While there are certainly ways to do this with built-in functions like &lt;code class="language-javascript"&gt;splice()&lt;/code&gt;, they just don't quite do everything I want. So, as usual, I made my own.&lt;/p&gt;

      &lt;aside class="content-skip"&gt;
        &lt;a href="#completeFunction"&gt;&gt; Skip to the complete function and examples&lt;/a&gt;
      &lt;/aside&gt;

      &lt;p&gt;The first iteration I created was designed to be able to polymorphically take either a single argument (the index of the element to remove), or two arguments defining a range. It then basically concatenates the part of the array before the removed element(s) with the part of the array after the removed element(s). Here's how that took shape:&lt;/p&gt;

      &lt;pre is:raw lang="Javascript"&gt;&lt;code class="language-javascript"&gt;const removeElement = (arr, from, to) =&gt; {
  const rest = arr.slice(to || from);
  arr.length = from;
  return [...rest];
}&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;This works out pretty well, and I even started using it in a project. However, I soon came across the case where I wanted to remove the last element in an array. Sure, that's just as easy as &lt;code class="language-javascript"&gt;removeElement(arr, arr.length - 1)&lt;/code&gt;, right? Well, sure, but I'm too lazy for all that typing. Ideally what I really want is negative indexing like Python allows, where &lt;code class="language-javascript"&gt;arr[-1] === arr[arr.length - 1]&lt;/code&gt;.&lt;/p&gt;

      &lt;p&gt;I did a little research to see how others might have solved the same issue, and came across &lt;a href="https://johnresig.com/blog/javascript-array-remove/"&gt;this blog post&lt;/a&gt; by none other than &lt;a href="https://johnresig.com/"&gt;John Resig&lt;/a&gt;, the creator of jQuery itself and the author of one of &lt;a href="https://www.amazon.com/Secrets-JavaScript-Ninja-John-Resig/dp/1617292850/ref=as_li_ss_tl?ie=UTF8&amp;linkCode=sl1&amp;tag=jspro-20&amp;linkId=8a7708bc409ba14301ac971e433828e4&amp;pldnSite=1"&gt;my favorite books on Javascript&lt;/a&gt;.&lt;/p&gt;

      &lt;p&gt;In the post, John outlined five goals for his &lt;a href="https://johnresig.com/blog/javascript-array-remove/"&gt;&lt;code class="language-javascript"&gt;Array.remove&lt;/code&gt;&lt;/a&gt;:&lt;/p&gt;

      &lt;blockquote cite="https://johnresig.com/blog/javascript-array-remove"&gt;
        &lt;ul&gt;
          &lt;li&gt;It had to add an extra method to an array object that would allow me to remove an item by index (e.g. array.remove(1) to remove the second item).&lt;/li&gt;
          &lt;li&gt;It had to be able to remove items by negative index (e.g. array.remove(-1) to remove the last item in the array).&lt;/li&gt;
          &lt;li&gt;It had to be able to remove a group of items by index, and negative index (e.g. array.remove(0,2) to remove the first three items and array.remove(-2,-1) to remove the last two items).&lt;/li&gt;
          &lt;li&gt;It had to be destructive (modifying the original array).&lt;/li&gt;
          &lt;li&gt;It had to behave like other destructive array methods (returning the new array length - like how push and unshift work).&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/blockquote&gt;

      &lt;p&gt;Those sound like pretty good goals to me. They cover the base cases my function already solves and add in the negative indexing I'd like, plus they raise the issue of making the function destructive. Even better, John implemented his function as an addition to the &lt;code class="language-javascript"&gt;Array&lt;/code&gt; object itself, which fits an object-oriented style quite well.&lt;/p&gt;

      &lt;p&gt;So I combined my function and ideas with John's function, and came up with something I'm quite happy with. This function is already being used in several projects and is ready to be utilized elsewhere:&lt;/p&gt;

      &lt;h3 id="completeFunction"&gt;The complete function&lt;/h3&gt;

      &lt;h4&gt;Array.prototype.removeElement&lt;/h4&gt;

      &lt;pre is:raw lang="Javascript"&gt;&lt;code class="language-javascript"&gt;// Array#removeElement - By Jade M Thornton (ISC Licensed)
Array.prototype.removeElement = function(from, to) {
  const rest = this.slice((to || from) + 1 || this.length);
  this.length = from &lt; 0 ? this.length + from : from;
  this.push(...rest);
  return this;
}&lt;/code&gt;&lt;/pre&gt;

      &lt;h4&gt;Example usage&lt;/h4&gt;

      &lt;pre lang="Javascript"&gt;&lt;code class="language-javascript"&gt;let a = [1, 2, 3, 4, 5, 6, 7, 8, 9];

// remove the element at index 1 (second element)
a.removeElement(1);
// a --&gt; [1, 3, 4, 5, 6, 7, 8, 9]

// remove the last element (a.length - 1)
a.removeElement(-1);
// a --&gt; [1, 3, 4, 5, 6, 7, 8]

// remove elements 2-4 (INCLUSIVE)
a.removeElement(2, 4);
// a --&gt; [1, 3, 7, 8];

// remove elements (-2)-(-1) (INCLUSIVE)
a.removeElement(-2, -1);
// a --&gt; [1, 3]&lt;/code&gt;&lt;/pre&gt;

      &lt;h4&gt;Explanation&lt;/h4&gt;

      &lt;p&gt;What's that you ask? What in David Hilbert's beard are those six lines of code doing? Let's take a look at it again, along with some helpful line numbers.&lt;/p&gt;

      &lt;pre is:raw class="line-numbers" lang="Javascript"&gt;&lt;code class="language-javascript"&gt;Array.prototype.removeElement = function(from, to) {
  const rest = this.slice((to || from) + 1 || this.length);
  this.length = from &lt; 0 ? this.length + from : from;
  this.push(...rest);
  return this;
}&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;Each line is doing something important, so let's break them up and talk through what's going down.&lt;/p&gt;

      &lt;p&gt;Line &lt;code&gt;1&lt;/code&gt; assigns our new &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions"&gt;anonymous function&lt;/a&gt; to &lt;code class="language-javascript"&gt;Array.prototype.removeElement&lt;/code&gt;. Unlike my first implementation above, this allows us to use the function as a method on an instance of an &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array"&gt;&lt;code class="language-javascript"&gt;Array&lt;/code&gt;&lt;/a&gt; itself. In a more heavily object-oriented language like Java, this would be similar to adding our function to the &lt;code class="language-javascript"&gt;Array&lt;/code&gt; object, which we would later instantiate.&lt;/p&gt;

      &lt;p&gt;Line &lt;code&gt;2&lt;/code&gt;, in a nutshell, grabs the part of the Array starting &lt;em&gt;immediately after&lt;/em&gt; the last element we want to remove. For example, if the function call is &lt;code class="language-javascript"&gt;arr.removeElement(2);&lt;/code&gt; then &lt;code class="language-javascript"&gt;rest&lt;/code&gt; will be the elements of the Array starting at index 3 and continuing to the end of the Array. However, there's some trickiness in there for handling different cases. Let's break this line down further.&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;
          What we're assigning to &lt;code class="language-javascript"&gt;rest&lt;/code&gt; is a &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/slice"&gt;&lt;code class="language-javascript"&gt;slice&lt;/code&gt;&lt;/a&gt; of &lt;code class="language-javascript"&gt;this&lt;/code&gt;, which refers to the Array we're working on.
        &lt;/li&gt;

        &lt;li&gt;
          The argument (yes, singular argument) takes advantage of &lt;a href="https://developer.mozilla.org/en-US/docs/Glossary/Falsy"&gt;falsy&lt;/a&gt; values. The first part, &lt;code class="language-javascript"&gt;(to || from)&lt;/code&gt;, evaluates to the value of &lt;code class="language-javascript"&gt;to&lt;/code&gt; if it was given a value. If &lt;code class="language-javascript"&gt;to&lt;/code&gt; is &lt;code class="language-javascript"&gt;undefined&lt;/code&gt;, which is falsy, the statement &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Logical_Operators"&gt;short-circuits&lt;/a&gt; to the value of &lt;code class="language-javascript"&gt;from&lt;/code&gt;.
        &lt;/li&gt;
        &lt;li&gt;
          We then add 1 to the value of &lt;code class="language-javascript"&gt;(to || from)&lt;/code&gt;, which gets us the element immediately after the removal section.
        &lt;/li&gt;
        &lt;li&gt;
          In the case that &lt;code class="language-javascript"&gt;(to || from)&lt;/code&gt; is -1, meaning we want to remove the last element, adding 1 brings it to 0. We don't want to roll over to the first element like that, so we take advantage of 0 being falsy: &lt;code class="language-javascript"&gt;(to || from) + 1 || this.length&lt;/code&gt; falls back to &lt;code class="language-javascript"&gt;this.length&lt;/code&gt;, translating -1 into the end of the Array.
        &lt;/li&gt;
      &lt;/ul&gt;
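
      &lt;p&gt;To see those falsy tricks concretely, here's a quick aside (not part of the function itself) evaluating the slice argument by hand for a few calls:&lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-javascript"&gt;const arr = [10, 20, 30, 40, 50];

// removeElement(1, 3): (3 || 1) + 1 === 4, so rest starts at index 4
arr.slice((3 || 1) + 1 || arr.length);          // [50]

// removeElement(2): to is undefined, so (undefined || 2) + 1 === 3
arr.slice((undefined || 2) + 1 || arr.length);  // [40, 50]

// removeElement(-1): (undefined || -1) + 1 === 0, which is falsy, so the
// argument falls back to arr.length and rest is empty
arr.slice((undefined || -1) + 1 || arr.length); // []&lt;/code&gt;&lt;/pre&gt;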

      &lt;p&gt;Line &lt;code&gt;3&lt;/code&gt; handles the front part of the Array, the part before the removal area. We use a &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Conditional_Operator"&gt;conditional operator&lt;/a&gt; to handle negative indexing. If &lt;code class="language-javascript"&gt;from&lt;/code&gt; is negative, &lt;code class="language-javascript"&gt;this.length&lt;/code&gt; gets &lt;code class="language-javascript"&gt;from&lt;/code&gt; added to itself (which is the same as subtracting the negative). Otherwise, we just use &lt;code class="language-javascript"&gt;from&lt;/code&gt; directly. Because Javascript is (rightfully) zero-indexed, this results in &lt;code class="language-javascript"&gt;this&lt;/code&gt; Array being shortened correctly.&lt;/p&gt;

      &lt;p&gt;In line &lt;code&gt;4&lt;/code&gt;, we reassemble the section before the removed element(s), stored in &lt;code class="language-javascript"&gt;this&lt;/code&gt;, with the section after the removed element(s), stored in &lt;code class="language-javascript"&gt;rest&lt;/code&gt;. We do this by pushing &lt;code class="language-javascript"&gt;rest&lt;/code&gt; onto &lt;code class="language-javascript"&gt;this&lt;/code&gt; Array&amp;mdash;utilizing the &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_operator"&gt;spread operator&lt;/a&gt;&amp;mdash;which achieves the goal of destroying the old Array in favor of the new, altered one.&lt;/p&gt;
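
      &lt;p&gt;As a standalone sketch of that reassembly, with hypothetical stand-in values:&lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-javascript"&gt;// Stand-in values: front plays the role of the truncated Array (this)
// and rest plays the role of the saved tail section
const front = [1, 3];
const rest = [7, 8];

// The spread operator expands rest into individual arguments to push
front.push(...rest);
// front --&gt; [1, 3, 7, 8]&lt;/code&gt;&lt;/pre&gt;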

      &lt;p&gt;In the last (meaningful) line, line &lt;code&gt;5&lt;/code&gt;, the new Array is returned so the function can be used in an assignment. Technically there's a sixth line, but it's just a closing brace. I'm going to let you guess &lt;a href="https://en.wikipedia.org/wiki/Scope_(computer_science)#Block_scope"&gt;what that does&lt;/a&gt;.&lt;/p&gt;

      &lt;br /&gt;&lt;br /&gt;&lt;br /&gt;

      &lt;p&gt;All code created by me, Jade Michael Thornton, is licensed under the terms of the &lt;a href="https://jmthornton.net/LICENSE"&gt;ISC License&lt;/a&gt;, just like the rest of this site.&lt;/p&gt;</content>
    <link href="https://jmthornton.net/blog/p/array-removeelement"/>
    <summary>A new Javascript function to remove any element from an Array in an intuitive way, with support for negative indexing.</summary>
    <published>2017-10-13T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/bug-posts</id>
    <title>A list of the best bug write-ups I've read</title>
    <updated>2023-11-11T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;A modest list of interesting blog posts I've come across containing tech mystery stories exploring how a bug or mistake or other problem came about, how it was discovered, how it was fixed, and maybe some lessons to learn.&lt;/p&gt;
      &lt;ul&gt;
        &lt;li&gt;&lt;a href="https://www.ibiblio.org/harris/500milemail.html"&gt;The case of the 500-mile email&lt;/a&gt;, by Trey Harris&lt;/li&gt;
        &lt;li&gt;
          A three part series by James Haydon on the August 2023 UK NATS (ATC) meltdown:
          &lt;ol&gt;
            &lt;li&gt;&lt;a href="https://jameshaydon.github.io/nats-fail/"&gt;UK air traffic control meltdown&lt;/a&gt;&lt;/li&gt;
            &lt;li&gt;&lt;a href="https://jameshaydon.github.io/what-went-wrong/"&gt;UK ATC meltdown and swiss cheese&lt;/a&gt;&lt;/li&gt;
            &lt;li&gt;&lt;a href="https://jameshaydon.github.io/programming-style-and-bugs/"&gt;Domain structured programming (UK ATC meltdown)&lt;/a&gt;&lt;/li&gt;
          &lt;/ol&gt;
        &lt;/li&gt;
        &lt;li&gt;&lt;a href="https://blog.nelhage.com/2010/02/a-very-subtle-bug/"&gt;A Very Subtle Bug&lt;/a&gt;, by Nelson Elhage. This bug was caused by unusual behavior in Python's handling of &lt;code&gt;SIGPIPE&lt;/code&gt; which was fixed in Python 2.7.&lt;/li&gt;
        &lt;li&gt;&lt;a href="https://www.gamedeveloper.com/programming/my-hardest-bug-ever"&gt;My Hardest Bug Ever&lt;/a&gt;, by Dave Baggett. Unlike almost every bug you'll ever come across, Dave was eventually forced to blame the hardware.&lt;/li&gt;
        &lt;li&gt;&lt;a href="https://ludic.mataroa.blog/blog/i-accidentally-saved-half-a-million-dollars/"&gt;I Accidentally Saved Half A Million Dollars&lt;/a&gt;, by "Ludicity", although this reads as a junior engineer's perspective&lt;/li&gt;
        &lt;li&gt;&lt;a href="https://www.clientserver.dev/p/war-story-the-hardest-bug-i-ever"&gt;The hardest bug I ever debugged&lt;/a&gt;, by Jacob Voytko&lt;/li&gt;
      &lt;/ul&gt;</content>
    <link href="https://jmthornton.net/blog/p/bug-posts"/>
    <summary>A modest list of interesting blog posts containing tech mystery stories exploring how a bug or mistake came about, how it was discovered, and how it was fixed.</summary>
    <published>2023-11-11T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/cmdline-fork</id>
    <title>Fixing cmdline for Non-Terminal Usage</title>
    <updated>2019-01-20T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;
        Working with TCL at FlightAware, I've run into more than a few
        frustrating bugs in the standard library packages. One that was
        particularly annoying was in the &lt;code&gt;cmdline&lt;/code&gt; package,
        which is used for parsing command-line arguments. So I made
        &lt;a href="https://github.com/thornjad/cmdline"&gt;a fork&lt;/a&gt;.
      &lt;/p&gt;

      &lt;p&gt;
        The problem was that the original relies on the global
        &lt;code&gt;argv0&lt;/code&gt; variable to determine the application name for
        error messages. This works fine when TCL is running as a
        standalone script from the command line, but fails completely when
        TCL is embedded in other applications or used in non-terminal
        contexts where &lt;code&gt;argv0&lt;/code&gt; doesn't exist. That's exactly
        what we do at FlightAware, where TCL runs in a server context.
      &lt;/p&gt;

      &lt;p&gt;
        Rather than go through the painful process of contributing back to
        the original project (which uses Fossil for version control, a
        tool I'd rather avoid), I created a fork that fixes this specific
        issue. The fix was simple but effective: remove the dependency on
        &lt;code&gt;argv0&lt;/code&gt; entirely and strip out the
        &lt;code&gt;getArgv0&lt;/code&gt; function that was causing the problem.
      &lt;/p&gt;

      &lt;p&gt;
        The result is a drop-in replacement for the standard
        &lt;code&gt;cmdline&lt;/code&gt; package that works in all contexts, not just
        terminal applications. You can find it at
        &lt;a href="https://github.com/thornjad/cmdline"
          &gt;github.com/thornjad/cmdline&lt;/a
        &gt;.
      &lt;/p&gt;

      &lt;p&gt;
        This is exactly the kind of bug that makes working with TCL
        packages frustrating. The ecosystem has been stagnant for years,
        and we're stuck with these kinds of limitations until the language
        gets off of Fossil and into the modern world.
      &lt;/p&gt;</content>
    <link href="https://jmthornton.net/blog/p/cmdline-fork"/>
    <summary>A fork of the TCL cmdline package that fixes a bug preventing usage in non-terminal contexts by removing the dependency on argv0.</summary>
    <published>2019-01-20T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/amp-it-up</id>
    <title>Business Is Not War</title>
    <updated>2026-02-25T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;
        I read Frank Slootman's &lt;em&gt;Amp It Up&lt;/em&gt; recently, and it
        clarified my thinking about management philosophy, even though I
        landed somewhere different from where the book wanted to take me.
        Slootman's thesis is something like: increase the urgency and
        demand more from everyone. Cal Newport's &lt;em&gt;Slow Productivity&lt;/em&gt;
        argues the opposite: do fewer things and obsess over quality. I
        think one of these approaches is more honest about how good work
        actually happens, and it is not the one with war metaphors.
      &lt;/p&gt;

      &lt;p&gt;
        Slootman says, without apparent irony, "It's no exaggeration to
        say that business is war." This is literally an exaggeration. And
        it's a useful focal point for everything I find problematic with
        the book. The war metaphor treats employees as soldiers, competitors
        as enemies, and market share as territory. It frames business as a
        zero-sum game when most of it is not. Economics has understood,
        since at least Adam Smith, that most market transactions grow the
        pie rather than divide it. Slootman's war framing assumes that
        every gain requires someone else's loss, and that is just not how
        most business works.
      &lt;/p&gt;

      &lt;p&gt;
        Slootman's core framework has five parts: raise standards, align
        people, sharpen focus, pick up the pace, transform strategy. All of
        these ideas are genuinely useful. The section on focus is
        particularly solid. "Priority" should be a singular word; when you
        have many priorities you have none. Ask "what are we not going to
        do?" and "if you can only do one thing for the rest of the year,
        what would it be?" These are good questions. Most leaders would
        benefit from asking them more often.
      &lt;/p&gt;

      &lt;p&gt;
        And the emphasis on high standards is something I appreciate and
        agree with fully. This is where the two philosophies actually
        converge. Newport's "obsess over quality" and Slootman's "raise
        your standards" are pointing at the same thing. Holding people to a
        high bar and being willing to have difficult conversations about
        performance are all part of good management. I have held those
        kinds of conversations and they mattered. Where the two approaches
        diverge is on how you get there. Slootman's answer is urgency.
        When someone says they'll get back to you in a week, ask them why
        not tomorrow. This sounds productive in a book, but in practice, it
        creates an environment where people are always behind, always
        feeling the heat, never able to think at the depth that good
        engineering requires. Higher urgency leads to burnout.
      &lt;/p&gt;

      &lt;p&gt;
        Newport makes the opposite case and I find it far more convincing.
        &lt;em&gt;Slow Productivity&lt;/em&gt; rests on three principles: do fewer
        things, work at a natural pace, and obsess over quality. The
        argument is that the most meaningful work in history was not
        produced under artificial urgency. It was produced by people who
        had the space to think deeply about fewer problems. The claim is
        not that we should all move slowly for the sake of it, but that
        sustained focus on fewer things at higher quality produces better
        results than spreading attention thin and racing to meet arbitrary
        deadlines.
      &lt;/p&gt;

      &lt;p&gt;
        As an engineering manager, this matches what I see in practice. The
        best work my team produces comes from periods of focus, not pushes
        of intensity. When people have the space to think carefully about a
        problem, they build things that hold up. When they are rushed, they
        ship something that passes review but needs to be revisited in
        three months. The urgency model optimizes for visible activity. The
        slow productivity model optimizes for outcomes.
      &lt;/p&gt;

      &lt;p&gt;
        I also find myself uncomfortable with the worldview underneath
        Slootman's framework. The book assumes that work is the central
        project of your life. I do not share that assumption. I care about
        leading my team well and I care about creating real value in what
        we build. But work is one part of a life, and a management
        philosophy that only functions for people who have made it their
        whole thing has a limited audience. The "quiet quitting" discourse
        during the pandemic revealed how many workers already felt this way
        but had no language for it. The phrase was misleading because most
        of them were not quitting anything. They were just drawing a
        boundary that Slootman's philosophy does not allow for.
      &lt;/p&gt;

      &lt;p&gt;
        Where these two philosophies genuinely diverge is on what they ask
        of people. Slootman's model asks people to run hotter. Newport's
        asks them to run deeper. One treats intensity as a virtue. The
        other treats it as a cost, one that compounds over time and
        eventually degrades the quality of the work it was supposed to
        improve. I know which approach I want to bring to the teams I lead.
      &lt;/p&gt;</content>
    <link href="https://jmthornton.net/blog/p/amp-it-up"/>
    <summary>Frank Slootman's Amp It Up gets some things right about standards and focus, but its war metaphors and urgency obsession miss how good work actually happens.</summary>
    <published>2026-02-25T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/ecmascript2017</id>
    <title>What's new in ECMAScript 2017</title>
    <updated>2017-09-29T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;
        Two years ago, ES6 (retconned to ECMAScript 2015) provided a massive update to the existing and already powerful ECMAScript standard. Incredibly useful features like &lt;code class="language-javascript"&gt;const&lt;/code&gt;, &lt;code class="language-javascript"&gt;let&lt;/code&gt;, arrow functions and destructuring syntax were unleashed upon the world. Another big change was the new yearly release schedule based on which &lt;a href="https://github.com/tc39/proposals"&gt;proposals&lt;/a&gt; are ready to ship as of the &lt;a href="https://github.com/tc39"&gt;TC39&lt;/a&gt; meeting. The next annual release, ECMAScript 2016, wasn't much to gawk at in comparison, adding only two new features&amp;mdash;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/includes"&gt;Array.prototype.includes&lt;/a&gt; and the &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Arithmetic_Operators#Exponentiation_(**)"&gt;exponentiation operator&lt;/a&gt;&amp;mdash;and a handful of changes to the existing standard.
      &lt;/p&gt;
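      &lt;p&gt;For reference, those two ES2016 additions look like this:&lt;/p&gt;
      &lt;pre is:raw&gt;&lt;code class="language-javascript"&gt;// Array.prototype.includes: a more readable alternative to indexOf checks
[1, 2, 3].includes(2);  // true
[1, 2, 3].includes(5);  // false

// The exponentiation operator, equivalent to Math.pow
2 ** 10;                // 1024&lt;/code&gt;&lt;/pre&gt;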
      &lt;p&gt;
        The newly released &lt;a href="http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-262.pdf"&gt;ECMAScript 2017&lt;/a&gt; is somewhere in between, not being a massive update, but adding a sizeable number of features and changes. Let's take a look at all the new additions in this year's release.
      &lt;/p&gt;

      &lt;h4&gt;Object.values and Object.entries&lt;/h4&gt;
      &lt;p&gt;The first addition to ECMAScript 2017 liberates us from jQuery dependence when it comes to enumerating pairs of entries or values from objects. It adds two new static methods to &lt;code class="language-javascript"&gt;Object&lt;/code&gt;, complementing pre-existing methods like &lt;code class="language-javascript"&gt;keys()&lt;/code&gt;. &lt;code class="language-javascript"&gt;values()&lt;/code&gt; returns an array of all values, without the keys, while &lt;code class="language-javascript"&gt;entries()&lt;/code&gt; returns an array of &lt;code class="language-javascript"&gt;[key, value]&lt;/code&gt; pairs. Let's see an example use:&lt;/p&gt;
      &lt;pre is:raw&gt;&lt;code class="language-javascript"&gt;const jmthornton = {
  name: 'Jade Michael',
  writes: 'code'
};

Object.entries(jmthornton);
// [['name', 'Jade Michael'], ['writes', 'code']]&lt;/code&gt;&lt;/pre&gt;
      &lt;pre is:raw&gt;&lt;code class="language-javascript"&gt;const jmthornton = {
  name: 'Jade Michael',
  writes: 'code'
};

Object.values(jmthornton);
// ['Jade Michael', 'code']&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;
        Documentation:
        &lt;ul&gt;
          &lt;li&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/entries"&gt;MDN: Object.entries()&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/values"&gt;MDN: Object.values()&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/p&gt;

      &lt;h4&gt;String padding&lt;/h4&gt;
      &lt;p&gt;This new feature is relatively small, but comes to the rescue of npm and Node itself. If you don't remember, March 2016 saw &lt;a href="http://www.theregister.co.uk/2016/03/23/npm_left_pad_chaos/"&gt;a bit of a crisis&lt;/a&gt; where a widely used package (even by Node and Babel) called &lt;code&gt;left-pad&lt;/code&gt; was unpublished from npm and crippled developers everywhere. This new ECMAScript feature makes the package unneeded. You can use it to easily format string output so the string reaches the given length:&lt;/p&gt;
      &lt;pre&gt;&lt;code class="language-javascript"&gt;'foobaring foo'.padStart(20);       // &amp;quot;       foobaring foo&amp;quot;
'foobaring foo'.padStart(20, '#');  // &amp;quot;#######foobaring foo&amp;quot;

'foobaring foo'.padEnd(20);         // &amp;quot;foobaring foo       &amp;quot;
'foobaring foo'.padEnd(20, '#');    // &amp;quot;foobaring foo#######&amp;quot;&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;
        Documentation:
        &lt;ul&gt;
          &lt;li&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/padStart"&gt;MDN: String.prototype.padStart()&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/padEnd"&gt;MDN: String.prototype.padEnd()&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/p&gt;

      &lt;h4&gt;Object.getOwnPropertyDescriptors&lt;/h4&gt;
      &lt;p&gt;Copying objects manually is never fun and comes with a lot of uncertainty, and with the rising awareness and use of functional programming, immutability is very important. This new static method on &lt;code class="language-javascript"&gt;Object&lt;/code&gt; helps solve the issue. &lt;code class="language-javascript"&gt;Object.getOwnPropertyDescriptors&lt;/code&gt; takes in an &lt;code class="language-javascript"&gt;Object&lt;/code&gt; and returns the descriptors for each of its own properties (value, whether it's writable, etc.). Here's an example of how it's used:&lt;/p&gt;
      &lt;pre is:raw&gt;&lt;code class="language-javascript"&gt;const source = {
  name: 'Jackie Smith',
  id: 555
};

const sourceClone = Object.create(
  Object.getPrototypeOf(source),
  Object.getOwnPropertyDescriptors(source)
);

const stateClone = Object.create(
  Object.getPrototypeOf(this.state),
  Object.getOwnPropertyDescriptors(this.state)
);

// make changes to stateClone

this.setState(stateClone);&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;Documentation:
        &lt;ul&gt;
          &lt;li&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/getOwnPropertyDescriptor"&gt;MDN: Object.getOwnPropertyDescriptor()&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;&lt;/p&gt;

      &lt;h4 class="hash-line-2"&gt;&lt;a class="hash-anchor" id="trailing-commas" href="#trailing-commas"&gt;##&lt;/a&gt;Trailing commas in function parameter lists and calls&lt;/h4&gt;
      &lt;p&gt;This new update is purely aesthetic, allowing trailing commas in function parameter lists and calls. For a long time, we've been able to put trailing commas in objects and arrays, so it's only fitting that parameter lists join the ranks. There are no performance or big underlying changes here, but I think it's a good addition. Example for clarity:&lt;/p&gt;
      &lt;pre is:raw&gt;&lt;code class="language-javascript"&gt;function foo(
    paramA,
    paramB,
    paramC,
  ) {
  console.log('No more complaints from the compiler!');
}&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;Documentation:
        &lt;ul&gt;
          &lt;li&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Trailing_commas"&gt;MDN: Trailing commas&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;&lt;/p&gt;

      &lt;h4&gt;Async functions&lt;/h4&gt;
      &lt;p&gt;A solution to chained callbacks, that all-too-common pattern especially prevalent with APIs, async functions are a bit of syntactic sugar, but they let you write promise-based code in a way that looks synchronous. As &lt;a href="https://developers.google.com/web/fundamentals/primers/async-functions"&gt;Jake Archibald&lt;/a&gt; puts it, it makes "your asynchronous code less 'clever' and more readable". Check out &lt;a href="https://ponyfoo.com/articles/understanding-javascript-async-await"&gt;this excellent in-depth walk-through&lt;/a&gt; by Nicol&amp;aacute;s Bevacqua. Here's the gist of the syntax:&lt;/p&gt;
      &lt;pre is:raw&gt;&lt;code class="language-javascript"&gt;async function doTheThing(data) {
  try {
    const valA = await anAsyncFunction(data);
    const valB = await aDifferentAsyncFunction(valA);
    console.log(`valB: ${valB}`);
  } catch (err) {
    console.error(`Oh noes! ${err}`);
  }
}&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;And to use it with an arrow function:&lt;/p&gt;
      &lt;pre is:raw&gt;&lt;code class="language-javascript"&gt;const doTheThing = async (data) =&gt; {
  try {
    const valA = await anAsyncFunction(data);
    const valB = await aDifferentAsyncFunction(valA);
    console.log(`valB: ${valB}`);
  } catch (err) {
    console.error(`Oh noes! ${err}`);
  }
}&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;Documentation:
        &lt;ul&gt;
          &lt;li&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function"&gt;MDN: async function&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;&lt;/p&gt;

      &lt;h4&gt;Shared memory and atomics&lt;/h4&gt;
      &lt;p&gt;This new addition is a little more technical than others, but definitely one of the coolest. It adds a new &lt;code class="language-javascript"&gt;SharedArrayBuffer&lt;/code&gt;, and allows the already-existing &lt;code class="language-javascript"&gt;TypedArray&lt;/code&gt; and &lt;code class="language-javascript"&gt;DataView&lt;/code&gt; types to be used to allocate shared memory. The associated &lt;code class="language-javascript"&gt;Atomics&lt;/code&gt; object allows operations to be carried out on that shared memory. &lt;a href="https://tc39.github.io/ecmascript_sharedmem/shmem.html"&gt;The proposal&lt;/a&gt; states these cases for justification:&lt;/p&gt;
      &lt;blockquote cite="ECMA TC39 - Lars T Hansen"&gt;
        &lt;ul&gt;
          &lt;li&gt;Support for threaded code in programs written in other languages that are translated to asm.js or plain JS or a combination of the two, notably C and C++ but also other, safe, languages.&lt;/li&gt;
          &lt;li&gt;Support for hand-written JS or JS+asm.js that makes use of multiprocessing facilities for select tasks, such as image processing, asset management, or game AI.&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/blockquote&gt;
      &lt;p&gt;The author of the proposal, Lars T Hansen, provides a &lt;a href="https://github.com/tc39/ecmascript_sharedmem/blob/master/TUTORIAL.md"&gt;tutorial&lt;/a&gt; for use of shared memory, and Dr Axel Rauschmayer &lt;a href="http://2ality.com/2017/01/shared-array-buffer.html"&gt;dives in&lt;/a&gt; to explain it all in depth in a long form article.&lt;/p&gt;
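      &lt;p&gt;The single-threaded mechanics are simple enough to sketch (sharing the buffer with a worker thread is where it gets interesting):&lt;/p&gt;
      &lt;pre is:raw&gt;&lt;code class="language-javascript"&gt;// Allocate 16 bytes of shared memory and view it as 32-bit integers
const buffer = new SharedArrayBuffer(16);
const view = new Int32Array(buffer);

// Atomics guarantees these operations are indivisible, even when the
// buffer is shared with a worker thread
Atomics.store(view, 0, 5);  // write 5 at index 0
Atomics.add(view, 0, 2);    // atomically add 2
Atomics.load(view, 0);      // 7&lt;/code&gt;&lt;/pre&gt;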
      &lt;p&gt;Documentation:
        &lt;ul&gt;
          &lt;li&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Atomics"&gt;MDN: Atomics&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/SharedArrayBuffer"&gt;MDN: SharedArrayBuffer&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;&lt;/p&gt;

      &lt;hr class="blog-conclusion-separator"&gt;

      &lt;p&gt;That's it for ECMAScript 2017! Lots of new changes, and all welcome additions to the specification. Many of these features are already supported in major browsers, and the rest are soon to follow. By now, ECMAScript 2018 is in the works and I'm excited to see what proposals make it in!&lt;/p&gt;</content>
    <link href="https://jmthornton.net/blog/p/ecmascript2017"/>
    <summary>Two years ago, ES6 gave a massive update to the already powerful ECMAScript standard. This year's release, ECMAScript 2017, provides several new features and changes. Let's take a look</summary>
    <published>2017-09-29T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/org-roam-created-modified-dates</id>
    <title>Org-roam: Automatically Set Node Created and Modified Dates</title>
    <updated>2024-03-18T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;
        &lt;a href="https://www.orgroam.com/"&gt;Org-roam&lt;/a&gt; is an Emacs package for non-hierarchical note-taking, and it does a brilliant job at organizing these thoughts but does not include automatic timestamping. By default, Org-roam does include the creation timestamp in the file name, but that's not easily read by a human.
      &lt;/p&gt;
      &lt;p&gt;
        To add this generally useful information, I automatically add a &lt;code&gt;:created:&lt;/code&gt; property when visiting a node if it doesn't already exist, and a &lt;code&gt;:modified:&lt;/code&gt; property when saving a node. This way, I can see when a note was created and when it was last modified.
      &lt;/p&gt;
      &lt;p&gt;
        Note that the &lt;code&gt;:created:&lt;/code&gt; property parses the timestamp from the filename and relies on Org-roam's default naming scheme. If you use a different naming scheme, you'll need to modify the &lt;code&gt;org-roam-extract-timestamp-from-filepath&lt;/code&gt; function to match your scheme.
      &lt;/p&gt;
      &lt;hr /&gt;
      &lt;h3&gt;Automating Creation Dates&lt;/h3&gt;
      &lt;pre lang="lisp"&gt;&lt;code class="language-lisp"&gt;(defun org-roam-insert-created-property ()
  "Insert :created: property for an Org-roam node.

Does not override the property if it already exists.

Calculation of the creation date is based on the filename of the note,
and assumes the default Org-roam naming scheme."
  (interactive)
  (when (org-roam-file-p)
    ;; Don't update if the created property already exists
    (unless (org-entry-get (point-min) "created" t)
      (let ((creation-time (org-roam-extract-timestamp-from-filepath
                            (buffer-file-name))))
        ;; Don't error if the filename doesn't contain a timestamp
        (when creation-time
          (save-excursion
            ;; Ensure point is at the beginning of the buffer
            (goto-char (point-min))
            (org-set-property "created" creation-time)))))))&lt;/code&gt;&lt;/pre&gt;

      &lt;h3&gt;Extracting Timestamps from Filenames&lt;/h3&gt;
      &lt;pre is:raw lang="lisp"&gt;&lt;code class="language-lisp"&gt;(defun org-roam-extract-timestamp-from-filepath (filepath)
  "Extract timestamp from the Org-roam FILEPATH assuming it follows the default naming scheme."
  (let ((filename (file-name-nondirectory filepath)))
    (when (string-match "\\([0-9]\\{8\\}\\)\\([0-9]\\{4\\}\\)" filename)
      (let ((year (substring filename (match-beginning 1) (+ (match-beginning 1) 4)))
            (month (substring filename (+ (match-beginning 1) 4) (+ (match-beginning 1) 6)))
            (day (substring filename (+ (match-beginning 1) 6) (+ (match-beginning 1) 8)))
            (hour (substring filename (match-beginning 2) (+ (match-beginning 2) 2)))
            (minute (substring filename (+ (match-beginning 2) 2) (+ (match-beginning 2) 4))))
        (format "[%s-%s-%s %s:%s]" year month day hour minute)))))&lt;/code&gt;&lt;/pre&gt;

      &lt;h3&gt;Keeping Modification Dates Current&lt;/h3&gt;
      &lt;pre lang="lisp"&gt;&lt;code class="language-lisp"&gt;(defun org-roam-insert-modified-property ()
  "Update the :modified: property for an Org-roam node upon saving."
  (when (org-roam-file-p)
    (save-excursion
      ;; Ensure property is applied to the whole file
      (goto-char (point-min))
      (org-set-property
       "modified" (format-time-string "[%Y-%m-%d %a %H:%M]")))))&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        The integration of these functions into your Emacs and Org-roam config makes every note's origin and latest edit easy to see at a glance. To make them actually run, I hook them into &lt;code&gt;before-save-hook&lt;/code&gt;. There may be better hooks for this, but Org-roam's own hooks make it awkward in my setup, so I take the more brute-force approach, and it works fine for me:
      &lt;/p&gt;
      &lt;pre lang="lisp"&gt;&lt;code class="language-lisp"&gt;(add-hook 'before-save-hook #'aero/org-roam-insert-created-property)
(add-hook 'before-save-hook #'org-roam-insert-modified-property)&lt;/code&gt;&lt;/pre&gt;</content>
    <link href="https://jmthornton.net/blog/p/org-roam-created-modified-dates"/>
    <summary>Enhancing Org-roam nodes with auto-updating created and modified properties.</summary>
    <published>2024-03-18T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/cloudflare-tempest-forwarder</id>
    <title>Weather Station Forwarder on Cloudflare Workers</title>
    <updated>2026-03-29T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;
        I have a &lt;a href="https://tempest.earth"&gt;Tempest&lt;/a&gt; weather station and I quite love it. The app is excellent, and I check the readings more often than a sane person might. When its public data-sharing setting is active, Tempest already shares readings with a few services and displays the station on the Tempest weather site. But I really believe in contributing weather data to the public meteorological record, so I want to send my data to more aggregation networks. Philosophically, I believe surface weather is theoretically deterministic given sufficient observational density; probabilistic forecasting wins today only because we lack the data. Weather models are getting remarkably good, but fundamentally we're working around a data problem. My one station is a tiny push in a better direction, and I want its readings going to as many networks as I can, starting with NOAA's &lt;a href="https://www.weather.gov/cle/CWOP"&gt;Citizen Weather Observer Program&lt;/a&gt; (CWOP).
      &lt;/p&gt;

      &lt;p&gt;
        One starting point is &lt;a href="https://weewx.com"&gt;WeeWX&lt;/a&gt;, which I planned to run on a Raspberry Pi Zero (because I happened to have one lying around). WeeWX is mature, well-documented, and well-loved by weather hobbyists. I got about halfway through the setup documentation, and even got a simulated weather station running, before remembering that I genuinely do not enjoy DevOps or running servers. Nothing against WeeWX, I just don't want to run it, even though it's pretty hands-off. The Pi Zero went back into its drawer.
      &lt;/p&gt;

      &lt;p&gt;
        Then I found &lt;a href="https://github.com/leoherzog/WundergroundStationForwarder"&gt;WundergroundStationForwarder&lt;/a&gt; by Leo Herzog, a Google Apps Script project that pulls from the Tempest API and forwards the readings to reporting services on a timer. It's simple, serverless, and needs no hardware. I would have run it immediately, except that I spent a significant chunk of my time at the University of Minnesota writing Google Apps Script, and I didn't enjoy it any more than configuring servers. The development environment is cramped and under-featured, debugging is painful, and quota limits have a way of surprising you.
      &lt;/p&gt;

      &lt;p&gt;
        Luckily, I deal with &lt;a href="https://workers.cloudflare.com"&gt;Cloudflare Workers&lt;/a&gt; in my work at DroneDeploy. The runtime is quite sane, wrangler is a real CLI, and cron triggers are a first-class feature. Plus, the free tier could probably handle ten thousand times this workload without blinking. So porting Leo's project to Workers was the obvious path for me.
      &lt;/p&gt;
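
      &lt;p&gt;
        For reference, a cron trigger is just a couple of lines of wrangler configuration. A minimal sketch, assuming the five-minute schedule I ended up using:
      &lt;/p&gt;
      &lt;pre lang="toml"&gt;&lt;code class="language-toml"&gt;# wrangler.toml
[triggers]
crons = ["*/5 * * * *"]&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;
        Cloudflare then invokes the Worker's scheduled handler on that cadence; no server, no timer daemon, no trigger UI.
      &lt;/p&gt;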

      &lt;p&gt;
        I also decided to treat this as an experiment in end-to-end agent-driven development. I use Claude Code regularly for small, well-scoped work, but as a manager, I don't often have the opportunity to define and implement complex projects anymore. So I read through the GAS source, mapped out the destination integrations, defined the overall architecture, and wrote up a plan that included comprehensive testing. After less than an hour of refinement, I handed it off to a Claude Code agent team and let it run. After about 10 minutes (on coffee shop WiFi, so a lot of waiting for tokens to stream), I had a full feature, with only one bug to fix.
      &lt;/p&gt;

      &lt;p&gt;
        The result is &lt;a href="https://github.com/thornjad/CloudflareTempestStationForwarder"&gt;CloudflareTempestStationForwarder&lt;/a&gt; (creative name, I know, thanks), and it came out better than the source in a few ways, if I may be so bold. The Workers module system allowed cleaner separation than the GAS original: unit conversion math lives in its own module, for example, and each destination gets a module of its own. The GAS original had no test suite (GAS testing is no joy), but the port runs real tests in the Workers runtime via Vitest.
      &lt;/p&gt;

      &lt;p&gt;
        Having planned for WeeWX, I had applied for my CWOP station ID a week prior, since CWOP approval is manual and takes a while (much love to the unsung heroes of NOAA and NWS who run this program as well as others like &lt;a href="https://www.weather.gov/skywarn/"&gt;Skywarn&lt;/a&gt;). So by the end of just one afternoon, my station was reporting to &lt;a href="https://www.weather.gov/cle/CWOP"&gt;CWOP&lt;/a&gt;, &lt;a href="https://www.wunderground.com"&gt;Weather Underground&lt;/a&gt;, &lt;a href="https://www.pwsweather.com"&gt;PWSWeather&lt;/a&gt;, &lt;a href="https://openweathermap.org/stations"&gt;OpenWeatherMap&lt;/a&gt;, &lt;a href="https://weathercloud.net/"&gt;WeatherCloud&lt;/a&gt; and &lt;a href="https://www.windy.com"&gt;Windy&lt;/a&gt; on a five-minute interval (with more destinations to be built later).
      &lt;/p&gt;

      &lt;p&gt;
        Between this project and my professional work, what I keep learning time and time again is nothing new. Garbage in, garbage out, and its corollary: quality in, quality out. Spending my brain power on a well-specified architecture document rather than on the code itself produced a coherent result that I can review with confidence, backed by tests I trust. Without AI, I'm confident I would have produced a fairly literal GAS-to-JavaScript transliteration filled with hacky code and misleading comments. And given how much I have always despised writing tests, I'm sure my version would have had zero tests.
      &lt;/p&gt;

      &lt;p&gt;
        The project is available &lt;a href="https://github.com/thornjad/CloudflareTempestStationForwarder"&gt;on GitHub&lt;/a&gt; under Creative Commons BY-SA 4.0, the same as &lt;a href="https://github.com/leoherzog/WundergroundStationForwarder"&gt;Leo's original project&lt;/a&gt; from which the vast majority of my code derives directly. If you have a Tempest station and want your readings in public networks without maintaining a Raspberry Pi on a shelf or a GAS project in your account, it should be straightforward to get running. But please open an issue if it's not.
      &lt;/p&gt;</content>
    <link href="https://jmthornton.net/blog/p/cloudflare-tempest-forwarder"/>
    <summary>How I ported a Google Apps Script weather station forwarder to Cloudflare Workers to push my Tempest readings to CWOP, Weather Underground, PWSWeather and Windy.</summary>
    <published>2026-03-29T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/fav-lesser-known-packages</id>
    <title>A Few Lesser-Known Emacs Packages</title>
    <updated>2023-06-25T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;
        When it comes to Emacs packages, there are a few "must-have" selections that always appear on popular lists, from &lt;a href="https://magit.vc/"&gt;Magit&lt;/a&gt; to &lt;a href="https://emacs-helm.github.io/helm/"&gt;Helm&lt;/a&gt; and/or &lt;a href="https://github.com/abo-abo/swiper"&gt;Ivy&lt;/a&gt;, and, for the sane person, the necessity that is &lt;a href="https://github.com/emacs-evil/evil"&gt;Evil&lt;/a&gt;. But for the more experienced Emacs user, lesser-known packages can add powerful new functionality and streamline day-to-day workflows. Here are several of my favorite Emacs packages that don't always make the popular lists, in no particular order.
      &lt;/p&gt;
      &lt;h3&gt;&lt;a href="https://gitlab.com/ideasman42/emacs-virtual-comment"&gt;Virtual Comment&lt;/a&gt;&lt;/h3&gt;
      &lt;p&gt;
        An unusual yet surprisingly helpful package. It lets you attach a "virtual" comment to a line of code without altering the file on disk at all. The comment is displayed above the line rather than cluttering the code itself. This is incredibly useful for exploring complex new code, reviewing changes and even quick prototyping.
      &lt;/p&gt;
      &lt;h3&gt;&lt;a href="https://github.com/bnbeckwith/writegood-mode"&gt;Writegood Mode&lt;/a&gt;&lt;/h3&gt;
      &lt;p&gt;
        A tool that combats common writing pitfalls in everything from documentation to code comments. With helpful (if sometimes annoying) features such as highlighting weasel words, passive voice or repeated phrases, it helps users tighten their writing style and produce more concise, high-quality documents. Truly a must-have for anyone who deals with documentation or coding comments on a regular basis.
      &lt;/p&gt;
      &lt;h3&gt;&lt;a href="https://github.com/Lautaro-Garcia/counsel-spotify"&gt;Counsel-Spotify&lt;/a&gt;&lt;/h3&gt;
      &lt;p&gt;
        If you're a fan of Counsel and Ivy, and you use Spotify, then maybe a package putting them together deserves some attention. This utility allows you to control Spotify playback without leaving Emacs, effectively turning Emacs into a somewhat-awkward-but-functional Spotify remote. While it doesn't provide access to all of Spotify's catalog functions, it's a great tool for quickly pausing, skipping or adjusting volume settings during a work session. It can even browse songs, albums and artists, though I haven't found this particularly useful myself.
      &lt;/p&gt;
      &lt;h3&gt;&lt;a href="https://github.com/arthurcgusmao/unmodified-buffer"&gt;Unmodified Buffer&lt;/a&gt;&lt;/h3&gt;
      &lt;p&gt;
        This one is a small yet compelling package. It resets the "modified" flag on a buffer automatically if the new buffer content matches the content of the file on disk. This means that you won't accidentally overwrite a previously saved copy of a file with identical buffer content, saving you time and headaches in the long run. Honestly this should be the default behavior in core Emacs.
      &lt;/p&gt;
      &lt;h3&gt;&lt;a href=""&gt;ToDo Light&lt;/a&gt;&lt;/h3&gt;
      &lt;p&gt;
        Okay, this one is a bit of a personal plug, but it's made a noticeable impact in streamlining my Emacs workflow. It highlights keywords such as TODO, FIXME and TEMP, helping users quickly locate areas in a project that need attention. It's easy to customize, so you can add as many keywords or phrases as you would like.
      &lt;/p&gt;</content>
    <link href="https://jmthornton.net/blog/p/fav-lesser-known-packages"/>
    <summary>A continually updating list of my favorite lesser-known Emacs packages</summary>
    <published>2023-05-07T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/stormscope</id>
    <title>StormScope: Giving Real-Time Weather Data to Your AI</title>
    <updated>2026-04-07T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;
        I built an MCP server that gives AI assistants access to real-time US weather data. It pulls from seven different sources, aggregates them into structured JSON, and exposes nine tools that any MCP-pluggable client can call. The project is called &lt;a href="https://github.com/thornjad/stormscope"&gt;StormScope&lt;/a&gt;, and it's open source under an ISC license.
      &lt;/p&gt;

      &lt;p&gt;
        What I actually wanted was for the AI I already use every day to understand weather the way I want it to. I've been a weather enthusiast for practically all my life, the kind of person who reads &lt;a href="https://www.spc.noaa.gov/products/outlook/"&gt;SPC outlooks&lt;/a&gt; more often than a sane person might, especially once the upper Midwest starts thawing out and things get interesting. I wanted to be able to ask my AI about severe weather risk, or current conditions, or what the 500mb pattern looks like, and get answers grounded in real observations rather than over-eager hallucinations. LLMs know a surprising amount about meteorology in the abstract, but they have no idea what the weather is doing right now. StormScope fills that gap.
      &lt;/p&gt;

      &lt;h2&gt;The data problem&lt;/h2&gt;

      &lt;p&gt;
        Weather data in the US is remarkably good and remarkably free. The &lt;a href="https://www.weather.gov/documentation/services-web-api"&gt;National Weather Service API&lt;/a&gt; returns current observations, forecasts, gridpoint data, and active alerts. NOAA's &lt;a href="https://www.spc.noaa.gov/"&gt;Storm Prediction Center&lt;/a&gt; publishes severe weather outlooks as GeoJSON. The &lt;a href="https://mesonet.agron.iastate.edu/"&gt;Iowa Environmental Mesonet&lt;/a&gt; (our neighbors at Iowa State) archives NEXRAD radar imagery and WPC surface bulletins. &lt;a href="https://open-meteo.com/"&gt;Open-Meteo&lt;/a&gt; provides global model data including pressure-level fields. All of these are public, well maintained, and free to use (much love to the unsung heroes at NOAA and NWS, and everywhere else, who keep these systems running).
      &lt;/p&gt;

      &lt;p&gt;
        The problem is that no single source gives you the full picture. NWS gives you surface observations and forecasts but nothing about upper-air patterns. SPC gives you severe weather risk but only as polygons on a map. Radar data exists as imagery, which an AI cannot interpret without help. And if you want to know whether you're in the warm sector ahead of a cold front, you need surface analysis data that lives in a completely different format: an encoded bulletin called CODSUS that uses decades-old 7-digit coordinate notation. All of it is freely available and machine-readable, though, so StormScope aggregates it all rather than leaving it scattered across half a dozen APIs, letting an AI use everything at once.
      &lt;/p&gt;

      &lt;h2&gt;How it works&lt;/h2&gt;

      &lt;p&gt;
        The server is built with &lt;a href="https://gofastmcp.com"&gt;FastMCP&lt;/a&gt;, a Python framework for building MCP servers. Each tool is an async function that accepts optional latitude and longitude (falling back to a configured primary location) and returns structured JSON. The AI calls whichever tool matches the user's question. Conditions, forecasts, alerts, briefings, the straightforward stuff works the way you'd expect.
      &lt;/p&gt;
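
      &lt;p&gt;
        As a sketch of that shape (hypothetical tool name and coordinates, not StormScope's actual code), each tool boils down to an async function with a location fallback:
      &lt;/p&gt;
      &lt;pre lang="python"&gt;&lt;code class="language-python"&gt;import asyncio

# Hypothetical configured primary location, normally read from env vars
PRIMARY_LAT, PRIMARY_LON = 44.95, -93.09

async def get_conditions(latitude=None, longitude=None):
    """Return structured conditions for a point, falling back to the
    configured primary location when coordinates are omitted."""
    if latitude is None or longitude is None:
        latitude, longitude = PRIMARY_LAT, PRIMARY_LON
    # ...fetch and aggregate the upstream sources here...
    return {"latitude": latitude, "longitude": longitude}

print(asyncio.run(get_conditions()))&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;
        FastMCP registers functions like this as tools via a decorator and handles the JSON plumbing between the AI and the function.
      &lt;/p&gt;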

      &lt;p&gt;
        The interesting problems start when a question requires data that doesn't come back as a simple JSON response. "Am I at risk for severe weather this afternoon?" sounds like one question, but answering it well means pulling the SPC's categorical and probabilistic outlooks (&lt;code&gt;get_spc_outlook&lt;/code&gt;), checking surface analysis for whether you're in the warm sector ahead of a cold front (&lt;code&gt;get_surface_analysis&lt;/code&gt;), and looking at the 500mb pattern for shortwave energy that might trigger development (&lt;code&gt;get_upper_air&lt;/code&gt;). Each of those is a separate tool call against a different data source, and some of those sources require some computation before the data is useful to an AI.
      &lt;/p&gt;

      &lt;p&gt;
        The SPC publishes outlooks as GeoJSON polygons, which is great for mapping but useless for a text-based AI conversation. So &lt;code&gt;get_national_outlook&lt;/code&gt; converts those polygons into human-readable region descriptions, "central Oklahoma" or "northern Texas" instead of a coordinate array. The surface analysis lives in a CODSUS bulletin (more on that later), and &lt;code&gt;get_surface_analysis&lt;/code&gt; parses it into front positions and pressure centers with distances and bearings from your location. Radar is imagery, which an AI can't look at, so &lt;code&gt;get_radar&lt;/code&gt; provides NEXRAD station metadata alongside a textual precipitation summary.
      &lt;/p&gt;

      &lt;p&gt;
        The vorticity computation was a fun side problem. Vorticity is essentially how much the atmosphere is spinning at a given point, and meteorologists use it to identify where storm development is favored. To compute it you need wind observations from five grid points arranged in a cross pattern around your location, one center point and four cardinal neighbors. Weather reports give you wind as a speed and a direction ("southwest at 30 knots"), but the math needs those broken into east-west and north-south components (the u and v you might see in meteorological data). The center point gives you the observation at your location, but the actual computation uses the four cardinal points to measure how the wind field changes across the grid. That rate of change is the relative vorticity. Add Earth's own rotational contribution (the Coriolis parameter) and you get absolute vorticity, which is what forecasters actually look at on a 500mb chart.
      &lt;/p&gt;

      &lt;p&gt;
        I was pleasantly surprised by how little code the core computation needs.
      &lt;/p&gt;

      &lt;pre lang="python"&gt;&lt;code class="language-python"&gt;dx, dy = grid_spacing(lat)

u_n, v_n = wind_components(*north_wind)
u_s, v_s = wind_components(*south_wind)
u_e, v_e = wind_components(*east_wind)
u_w, v_w = wind_components(*west_wind)

# centered finite differences: dv/dx - du/dy
dvdx = (v_e - v_w) / (2.0 * dx)
dudy = (u_n - u_s) / (2.0 * dy)

relative = dvdx - dudy
absolute = relative + coriolis_parameter(lat)&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        The &lt;code&gt;wind_components()&lt;/code&gt; helper decomposes speed and direction into u/v, and &lt;code&gt;grid_spacing()&lt;/code&gt; adjusts for latitude (a degree of longitude is shorter near the poles). The whole module is under 75 lines with no external dependencies, but getting it to produce meteorologically sensible values took some fun comparison against professional analyses, and one frustrated rewrite. It's still not perfect, always trust your friendly neighborhood meteorologist more, but it hasn't been flat wrong in a while.
      &lt;/p&gt;
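
      &lt;p&gt;
        For the curious, the helpers look roughly like this. This is a sketch consistent with the snippet above rather than StormScope's actual code, and it assumes a quarter-degree grid step and a (direction, speed) argument order:
      &lt;/p&gt;
      &lt;pre lang="python"&gt;&lt;code class="language-python"&gt;import math

OMEGA = 7.2921e-5  # Earth's angular velocity, rad/s
GRID_STEP_DEG = 0.25  # assumed model grid spacing in degrees
METERS_PER_DEG = 111320.0  # approximate meters per degree of latitude

def wind_components(direction_deg, speed):
    """Decompose a meteorological wind (direction it blows FROM,
    degrees clockwise from north) into eastward u and northward v."""
    theta = math.radians(direction_deg)
    return -speed * math.sin(theta), -speed * math.cos(theta)

def grid_spacing(lat):
    """Physical distance in meters between neighboring grid points.
    dy is fixed; dx shrinks with the cosine of latitude."""
    dy = GRID_STEP_DEG * METERS_PER_DEG
    return dy * math.cos(math.radians(lat)), dy

def coriolis_parameter(lat):
    """Earth's rotational contribution: f = 2 * Omega * sin(latitude)."""
    return 2.0 * OMEGA * math.sin(math.radians(lat))&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;
        The negated sin/cos is the standard meteorological convention: direction is where the wind comes &lt;em&gt;from&lt;/em&gt;, so a south wind (180&amp;deg;) has a positive northward v.
      &lt;/p&gt;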

      &lt;h2&gt;Personal weather station integration&lt;/h2&gt;

      &lt;p&gt;
        NWS observations come from airports and official stations, which can be miles from where you actually are. This part is optional, but it's the feature I use most. If you have a &lt;a href="https://tempest.earth"&gt;WeatherFlow Tempest&lt;/a&gt; weather station (because of course you happened to have one on your roof), you can configure StormScope to enrich NWS data with hyper-local sensor readings: solar radiation, UV index, lightning strike counts, air density, wet bulb temperature, data galore. The Tempest station becomes the primary source for temperature, wind and pressure, with the NWS values retained as sidecars for comparison. There's a 5-mile distance gate on the station, though, so if you ask about weather in a city 200 miles away, StormScope uses NWS data alone.
      &lt;/p&gt;
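
      &lt;p&gt;
        The distance gate itself is plain great-circle math. A haversine sketch, with a hypothetical helper name rather than the project's actual code:
      &lt;/p&gt;
      &lt;pre lang="python"&gt;&lt;code class="language-python"&gt;import math

def distance_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points via the haversine formula."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;
        Tempest readings are only blended in when this comes back under the 5-mile gate.
      &lt;/p&gt;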

      &lt;p&gt;
        When the Tempest and NWS temperatures diverge by more than 5 degrees Fahrenheit, StormScope flags the discrepancy so the AI knows something might be off with the sensor, or that the NWS observation station is farther from you than you'd like.
      &lt;/p&gt;

      &lt;h2&gt;What this enables&lt;/h2&gt;

      &lt;p&gt;
        Here's what using it actually feels like. I asked "what's the weather?" a few minutes ago and got back a summary that pulled from my Tempest station (40&amp;deg;F with gusts to 30, not too bad for April up here), blended in the NWS forecast (mostly cloudy tonight, chance of rain tomorrow), noted no active alerts, and mentioned that the SPC had a general thunderstorm risk over eastern Colorado. The whole thing took a couple of seconds and the answer was grounded in data from four different sources, none of which the AI made up. That's a simple case, and it's what I use most often. I run StormScope as an MCP server connected to Claude Code, so asking about the weather is as natural as asking about code.
      &lt;/p&gt;

      &lt;p&gt;
        Ask about severe weather risk and the AI will pull the probabilistic tornado, wind, and hail outlooks, cross-reference with the surface analysis to see if you're in the warm sector, check the 500mb pattern for shortwave energy, and synthesize all of that into a plain-English assessment of what the afternoon looks like. Four tool calls and a synthesis step, done in a few seconds, tech is magic.
      &lt;/p&gt;

      &lt;p&gt;
        Ask for a weather briefing before a road trip and the AI can check conditions and forecasts for both endpoints, look at alerts along the route, and flag anything worth knowing about the sky. Ask whether it's warm enough to finally open the windows and it can check temperature, humidity, and wind, then give you a straight answer instead of a disclaimer about not having real-time data.
      &lt;/p&gt;

      &lt;p&gt;
        One design decision that hasn't worked as well as I hoped was suggesting behavioral patterns to the AI. MCP servers can ship with an instruction block that the AI reads at connection time, and mine asks it to check for alerts at the start of a conversation and proactively fetch probabilistic outlooks when the SPC risk level is elevated. In practice the AI never follows through unprompted (MCP instructions are suggestions, not commands), but when it does check, it has some basis for prioritization, which is better than nothing.
      &lt;/p&gt;

      &lt;p&gt;
        The underlying bet with this project is that LLMs are already good at meteorological reasoning, they just lack current observations to reason over. StormScope is the plumbing that makes that possible.
      &lt;/p&gt;

      &lt;h2&gt;Things that bit me&lt;/h2&gt;

      &lt;p&gt;
        The NWS API has a two-step coordinate lookup that tripped me up early on. You can't just ask for the forecast at a lat/lon pair. First you call &lt;code&gt;/points/{lat},{lon}&lt;/code&gt; to get the grid office, grid coordinates, and a URL for the nearest observation stations. Then you use those to fetch the actual forecast and observations. Not hard once you know, but I was surprised that I couldn't just pass coordinates and get an answer. Luckily the point metadata is stable enough to cache for 24 hours, so the extra round trip only hurts on the first request for a given location. After that, every call skips it, whether it's the server fanning out to multiple sources or the AI making follow-up tool calls, even within a single weather question.
      &lt;/p&gt;
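
      &lt;p&gt;
        The cached first step is only a few lines. This is illustrative rather than StormScope's actual code, and the real cache is time-bounded at 24 hours rather than unbounded:
      &lt;/p&gt;
      &lt;pre lang="python"&gt;&lt;code class="language-python"&gt;import functools

@functools.lru_cache(maxsize=None)
def points_url(lat, lon):
    # The NWS API wants at most four decimal places of precision,
    # and rounding also keeps the cache key stable
    return f"https://api.weather.gov/points/{round(lat, 4)},{round(lon, 4)}"

# The JSON at this URL carries properties.forecast (the gridpoint forecast
# URL) and properties.observationStations, which the second request uses.&lt;/code&gt;&lt;/pre&gt;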

      &lt;p&gt;
        Before you can even call the NWS API, you need to know where the user is. The simplest approach is to require the AI to pass coordinates, but AI assistants don't usually know where you are unless you tell them. So StormScope has a fallback chain. First it checks for a configured primary location (environment variables). Then it can optionally pull coordinates from a connected Tempest station. After that, IP geolocation, which sounds reasonable until you try it. My IP address resolves to a location roughly 25 miles from where I actually am, and for mesoscale weather that's a huge difference. My solution here, though it only works on macOS, was to compile a tiny Swift app that calls CoreLocation directly. The server builds it automatically on first run, stashes it in &lt;code&gt;~/Library/Application Support/&lt;/code&gt;, and calls it as a subprocess. It requires location authorization (a macOS popup, but only the first time) and returns coordinates accurate to about a hundred meters. Because it needs compilation and touches system permissions, it's opt-in via an environment variable. But when it works, it's the most satisfying kind of hack, solving a problem by compiling a tool on the fly that has no business being inside a weather server.
      &lt;/p&gt;

      &lt;p&gt;
        I already mentioned vorticity, and I spent a while looking for a public API that returns it directly, but came up empty. Open-Meteo gives you 500mb wind speed and direction at individual grid points, which is the raw material you need, but the vorticity itself requires computing finite differences across a grid. I'd never written that kind of computation before, so I pulled out some faithful old meteorology tomes (jk, I used Wikipedia and a NOAA training module) and worked through it. But that's the kind of thing that makes a side project really fun.
      &lt;/p&gt;

      &lt;p&gt;
        The CODSUS surface bulletin parser was a different kind of fun. The bulletin is plaintext with encoded 7-digit coordinates, front type keywords, and pressure values, all strung together with minimal delimiters. The coordinate encoding is, well... different:
      &lt;/p&gt;

      &lt;pre lang="python"&gt;&lt;code class="language-python"&gt;def _decode_coord(token: str) -&gt; tuple[float, float]:
    lat = int(token[:3]) / 10.0
    lon = -(int(token[3:]) / 10.0)
    return lat, lon&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        The first three digits are latitude times ten, the last four are longitude times ten, always negated because the bulletin only covers North America, so &lt;code&gt;4500931&lt;/code&gt; decodes to (45.0, -93.1). The Iowa Environmental Mesonet archives these bulletins, but their product ID metadata is unreliable: ASUS01 and ASUS02 labels get swapped frequently, so StormScope checks the actual WMO header text instead of trusting the label. Long front segments and pressure center lists wrap across continuation lines that need to be rejoined before parsing. The first version worked on most bulletins but produced phantom front segments between disconnected line segments in certain edge cases. Getting the parser robust enough to handle the full range of real-world bulletins took a few iterations, and I'm not convinced all the bugs are gone.
      &lt;/p&gt;

      &lt;h2&gt;What it can't do&lt;/h2&gt;

      &lt;p&gt;
        StormScope is focused on general-purpose weather for the contiguous US, and, frankly, focused on the upper Midwest, since that's where I call home. It doesn't cover marine forecasts, fire weather, aviation TAFs and METARs (beyond the raw METAR that shows up in full-detail conditions), or tropical cyclone advisories. It doesn't do historical data or climate norms. The SPC outlooks cover days 1 through 3 but nothing beyond that.
      &lt;/p&gt;

      &lt;p&gt;
        I'd like to add some historical data access, since the Tempest API provides it (and I log my data to a local DB too), plus maybe tropical cyclone support eventually, and aviation METARs/TAFs wouldn't be a huge lift. If any of those gaps bother you enough to contribute, the client architecture is modular, and welcoming PRs is part of what I love about open-source software.
      &lt;/p&gt;

      &lt;h2&gt;Feed your AI fresh weather data&lt;/h2&gt;

      &lt;p&gt;
        If you want to give your AI a dose of the sky, the &lt;a href="https://github.com/thornjad/stormscope"&gt;GitHub repo&lt;/a&gt; has everything you need to get started; leave a star while you're there. StormScope is open source under an ISC license. Fair warning: most of the tools are US-only because they depend on the NWS API (though the Tempest integration &lt;em&gt;should&lt;/em&gt; work anywhere). Thanks for letting me take up a little of your brain power today!
      &lt;/p&gt;</content>
    <link href="https://jmthornton.net/blog/p/stormscope"/>
    <summary>An open-source MCP server that gives AI assistants access to real-time US weather data from seven sources, enabling grounded meteorological reasoning.</summary>
    <published>2026-04-07T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/consult-line-isearch-history</id>
    <title>Connecting consult-line with isearch history</title>
    <updated>2024-01-17T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;I've recently jumped on the not-so-new hotness of &lt;a href=""&gt;Consult&lt;/a&gt;, with its friends &lt;a href=""&gt;Vertico&lt;/a&gt; and &lt;a href=""&gt;Orderless&lt;/a&gt;, replacing the still strong but slightly slower &lt;a href=""&gt;Counsel&lt;/a&gt;, &lt;a href=""&gt;Ivy&lt;/a&gt;, &lt;a href=""&gt;Swiper&lt;/a&gt; and &lt;a href=""&gt;Flx&lt;/a&gt;. There's been a few bumps along the way to adapt these new packages to my own whims, but its all working out so far.&lt;/p&gt;
        &lt;p&gt;One such issue is that after using &lt;code&gt;consult-line&lt;/code&gt; to search in a buffer, I'd often like to continue my search, preferring the &lt;code&gt;evil-mode&lt;/code&gt; keys &lt;code&gt;n&lt;/code&gt; and &lt;code&gt;N&lt;/code&gt; (&lt;code&gt;evil-search-next&lt;/code&gt; and &lt;code&gt;evil-search-previous&lt;/code&gt;, respectively). Unfortunately, once &lt;code&gt;consult-line&lt;/code&gt; is closed, it can't be resumed without starting anew, unlike the &lt;code&gt;swiper&lt;/code&gt; behavior I've come to rely on.&lt;/p&gt;
      &lt;p&gt;Now I did come across a &lt;a href="https://github.com/minad/consult/issues/318"&gt;2021 GitHub thread about this issue&lt;/a&gt;, but the chosen solution only works for &lt;code&gt;evil-search&lt;/code&gt; as the search module. This is not the default and, though I'm sure many love it, I prefer the isearch way of highlighting. So, taking inspiration from that thread, I've made my own advice for &lt;code&gt;consult-line&lt;/code&gt; which instead works by connecting &lt;code&gt;consult&lt;/code&gt;'s &lt;code&gt;consult--line-history&lt;/code&gt; with the &lt;code&gt;regexp-search-ring&lt;/code&gt; that &lt;code&gt;isearch&lt;/code&gt; uses.&lt;/p&gt;
      &lt;pre lang="lisp"&gt;&lt;code class="language-lisp"&gt;(defun consult-line-isearch-history (&amp;rest _)
    "Add latest `consult-line' search pattern to the isearch history.

This allows n and N to continue the search after `consult-line' exits."
    (when (and (bound-and-true-p evil-mode)
               (eq evil-search-module 'isearch)
               consult--line-history)
      (let* ((pattern (car consult--line-history))
             (regexp (if (string-prefix-p "\\_" pattern)
                         (substring pattern 2)
                       pattern)))
        (add-to-history 'regexp-search-ring regexp)
        (setq evil-ex-search-pattern (evil-ex-pattern regexp t nil nil))
        (setq evil-ex-search-direction 'forward))))

;; Now tell consult-line to run the function after a search
(advice-add #'consult-line :after #'consult-line-isearch-history)&lt;/code&gt;&lt;/pre&gt;</content>
    <link href="https://jmthornton.net/blog/p/consult-line-isearch-history"/>
    <summary>Adding the latest consult-line search term to isearch history for easy search continuation</summary>
    <published>2024-01-17T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/steam-corrupt-update-files</id>
    <title>Fix Steam 'Corrupt Update Files'</title>
    <updated>2020-10-12T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;Occasionally, though I don't know or understand the cause, Steam fails to install and/or update some games and reports the error "Corrupt update files". No matter how many times you try to reinstall, the same error occurs. There are scant resources I've found searching for this problem, but it happens to me just often enough I'd like this post as a reminder for myself.&lt;/p&gt;
      &lt;h3&gt;Steps to fix&lt;/h3&gt;
      &lt;ol&gt;
        &lt;li&gt;&lt;strong&gt;Uninstall the affected game&lt;/strong&gt;&lt;/li&gt;
        &lt;li&gt;&lt;strong&gt;Clear the Download Cache&lt;/strong&gt; (In Steam settings &gt; Downloads). Skipping this step may let the game claim to have downloaded successfully, but when the files are verified, it will likely find corrupt files again.&lt;/li&gt;
        &lt;li&gt;&lt;strong&gt;Remove lingering game files:&lt;/strong&gt; Clear any previous files from the game; look for the game title under steamapps/common (in Linux this can be found in ~/.steam/steam/steamapps/common)&lt;/li&gt;
        &lt;li&gt;&lt;strong&gt;Change download region&lt;/strong&gt;: Change the download region to something else, probably still something nearby to keep speeds as high as possible. For example, my default is "US - New York", so I usually change it to something like "US - Philadelphia" or anywhere else nearby&lt;/li&gt;
        &lt;li&gt;&lt;strong&gt;Restart Steam&lt;/strong&gt; when prompted&lt;/li&gt;
        &lt;li&gt;&lt;strong&gt;Reinstall the game&lt;/strong&gt;
          &lt;ul&gt;
            &lt;li&gt;If the same error appears again, go back to step 1 and use a different download region. If it still doesn't work a second time, these steps may not apply to your problem, or this guide is now outdated.&lt;/li&gt;
          &lt;/ul&gt;
        &lt;/li&gt;
        &lt;li&gt;&lt;strong&gt;Verify game files integrity&lt;/strong&gt; (In the game's 'Properties' &gt; 'Local Files')&lt;/li&gt;
      &lt;/ol&gt;</content>
    <link href="https://jmthornton.net/blog/p/steam-corrupt-update-files"/>
    <summary>How to fix the annoying error when downloading or updating some Steam games, which reports the error 'Corrupt Update Files'</summary>
    <published>2020-10-12T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/ai-commit-authorship</id>
    <title>Your AI Is Not Your Co-Author</title>
    <updated>2026-02-19T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;
        Some AI coding agents have started adding themselves as
        &lt;code&gt;Co-Authored-By&lt;/code&gt; on git commits by default. They sign
        commits as though they were a contributing developer. I think this
        gets something fundamentally wrong about what commit authorship
        means. As someone who has started leaning on AI tools heavily in
        my own work, I find it matters to me that the line of
        accountability stays clear.
      &lt;/p&gt;

      &lt;p&gt;
        The question of who is responsible for AI's output is not actually
        new or complicated. AI is a tool. It is a technically impressive
        tool, different in important ways from a calculator or a linter,
        but it is still a tool. It does not comprehend what it produces. It
        has no free will and no moral capacity to answer for its actions.
        When a financial analysis tool causes a bad trade, we might want to
        blame the software, but we hold the person who used it accountable,
        and to some extent the people who built it. The same structure
        applies to AI. Ethical responsibility for what a tool produces
        belongs to the human who chose to use it and the humans who made
        it.
      &lt;/p&gt;

      &lt;p&gt;
        Commit authorship in git carries that same weight. It's your
        signature as an engineer. When your name is on a commit, you're the
        person accountable when it breaks production, the person who
        decided this change was correct and necessary. So when an AI agent
        adds itself as co-author, it is claiming a moral responsibility it
        cannot hold. The human who prompted the AI, reviewed its output,
        and chose to commit it is the one who exercised that judgment. The
        AI was a tool in that process, the same way a compiler or a linter
        or Stack Overflow is a tool. We don't add
        &lt;code&gt;Co-Authored-By: GCC&lt;/code&gt; to commits that required tricky
        compiler flags (and we don't credit the IDE's autocomplete either). The
        relevant question is not "who typed these characters" but "who is
        responsible for them."
      &lt;/p&gt;

      &lt;p&gt;
        This is an argument about the AI systems we have today, which are
        software tools without moral agency. If that changes someday, the
        ethics change with it. But the current generation of language
        models does not understand, intend, or accept consequences. Until
        one can, authorship belongs to the humans.
      &lt;/p&gt;

      &lt;p&gt;
        There is a reasonable counterargument about transparency. Maybe
        tracking AI involvement helps teams understand how code was
        produced. But &lt;code&gt;Co-Authored-By&lt;/code&gt; is the wrong mechanism
        for it. If your team wants to track AI usage, build that into your
        process explicitly. A commit message note, a PR label, whatever
        fits (we use PR labels at DroneDeploy). An authorship field with
        professional meaning isn't the right place for it. The more
        interesting question for code review is not "did AI help write
        this" but "did a human verify this is correct." The answer to that
        should always be yes, regardless of how the code was literally
        produced.
      &lt;/p&gt;

      &lt;p&gt;
        If you use Claude Code, you can disable the default co-author
        behavior by adding &lt;code&gt;"includeCoAuthoredBy": false&lt;/code&gt; to
        your Claude settings.
      &lt;/p&gt;</content>
    <link href="https://jmthornton.net/blog/p/ai-commit-authorship"/>
    <summary>AI coding agents have started adding themselves as commit co-authors by default, claiming a moral responsibility they cannot hold.</summary>
    <published>2026-02-19T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/clean-arch-wumpus</id>
    <title>Halfway on Main: Thoughts on Clean Architecture</title>
    <updated>2021-04-18T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;
        Chapter 26 is a short section of Uncle Bob Martin&amp;#39;s classic, Clean Architecture. It
        discusses the necessary evil of creating a &lt;code class="language-java"&gt;Main&lt;/code&gt; component
        which handles the dirty work and initializes the rest of the program. This component necessarily
        breaks rules to get the show running and provide an interface with the non-clean world which is
        our reality. The &lt;code class="language-java"&gt;Main&lt;/code&gt; component takes care of setting up
        globals and may enter the program into an infinite loop to keep it running forever.
      &lt;/p&gt;

      &lt;p&gt;
        Uncle Bob gives a lengthy yet incomplete example of a
        &lt;code class="language-java"&gt;Main&lt;/code&gt; class component for a hypothetical game of &amp;quot;Hunt
        the Wumpus&amp;quot;. The game is a text-based somewhat-roguelite dungeon-crawler in which you seek
        out the Wumpus and avoid traps. A simple game concept, well within the wheelhouse of a
        first-year computer science student, and Uncle Bob&amp;#39;s code looks the part. For an otherwise
        insightful book about how to separate concerns, Martin seems to give up when it comes to this
        component, relegating it to be &amp;quot;the dirtiest of all the dirty components&amp;quot; without any
        effort to find a better way. The example class he presents is needlessly brittle and repetitive.
      &lt;/p&gt;

      &lt;p&gt;
        Here is that entire &lt;code class="language-java"&gt;Main&lt;/code&gt; class, as presented in the book.
        Note that the closing comment (&lt;code class="language-java"&gt;much code removed...&lt;/code&gt;)
        appears in the book itself; it wasn&amp;#39;t added here.
      &lt;/p&gt;

      &lt;pre
        is:raw
        class="line-numbers"
        lang="java"
      &gt;&lt;code class="language-java"&gt;public class Main implements HtwMessageReceiver {
  private static HuntTheWumpus game;
  private static int hitPoints = 10;
  private static final List&amp;lt;String&amp;gt; caverns = new ArrayList&amp;lt;&amp;gt;();
  private static final String[] environments = new String[]{
    &amp;quot;bright&amp;quot;,
    &amp;quot;humid&amp;quot;,
    &amp;quot;dry&amp;quot;,
    &amp;quot;creepy&amp;quot;,
    &amp;quot;ugly&amp;quot;,
    &amp;quot;foggy&amp;quot;,
    &amp;quot;hot&amp;quot;,
    &amp;quot;cold&amp;quot;,
    &amp;quot;drafty&amp;quot;,
    &amp;quot;dreadful&amp;quot;
  };

  private static final String[] shapes = new String[] {
    &amp;quot;round&amp;quot;,
    &amp;quot;square&amp;quot;,
    &amp;quot;oval&amp;quot;,
    &amp;quot;irregular&amp;quot;,
    &amp;quot;long&amp;quot;,
    &amp;quot;craggy&amp;quot;,
    &amp;quot;rough&amp;quot;,
    &amp;quot;tall&amp;quot;,
    &amp;quot;narrow&amp;quot;
  };

  private static final String[] cavernTypes = new String[] {
    &amp;quot;cavern&amp;quot;,
    &amp;quot;room&amp;quot;,
    &amp;quot;chamber&amp;quot;,
    &amp;quot;catacomb&amp;quot;,
    &amp;quot;crevasse&amp;quot;,
    &amp;quot;cell&amp;quot;,
    &amp;quot;tunnel&amp;quot;,
    &amp;quot;passageway&amp;quot;,
    &amp;quot;hall&amp;quot;,
    &amp;quot;expanse&amp;quot;
  };

  private static final String[] adornments = new String[] {
    &amp;quot;smelling of sulfur&amp;quot;,
    &amp;quot;with engravings on the walls&amp;quot;,
    &amp;quot;with a bumpy floor&amp;quot;,
    &amp;quot;&amp;quot;,
    &amp;quot;littered with garbage&amp;quot;,
    &amp;quot;spattered with guano&amp;quot;,
    &amp;quot;with piles of Wumpus droppings&amp;quot;,
    &amp;quot;with bones scattered around&amp;quot;,
    &amp;quot;with a corpse on the floor&amp;quot;,
    &amp;quot;that seems to vibrate&amp;quot;,
    &amp;quot;that feels stuffy&amp;quot;,
    &amp;quot;that fills you with dread&amp;quot;
  };

  public static void main(String[] args) throws IOException {
    game = HtwFactory.makeGame(&amp;quot;htw.game.HuntTheWumpusFacade&amp;quot;, new Main());
    createMap();
    BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
    game.makeRestCommand().execute();
    while (true) {
      System.out.println(game.getPlayerCavern());
      System.out.println(&amp;quot;Health: &amp;quot; + hitPoints + &amp;quot; arrows: &amp;quot; + game.getQuiver());
      HuntTheWumpus.Command c = game.makeRestCommand();
      System.out.println(&amp;quot;&amp;gt;&amp;quot;);
      String command = br.readLine();
      if (command.equalsIgnoreCase(&amp;quot;e&amp;quot;))
        c = game.makeMoveCommand(EAST);
      else if (command.equalsIgnoreCase(&amp;quot;w&amp;quot;))
        c = game.makeMoveCommand(WEST);
      else if (command.equalsIgnoreCase(&amp;quot;n&amp;quot;))
        c = game.makeMoveCommand(NORTH);
      else if (command.equalsIgnoreCase(&amp;quot;s&amp;quot;))
        c = game.makeMoveCommand(SOUTH);
      else if (command.equalsIgnoreCase(&amp;quot;r&amp;quot;))
        c = game.makeRestCommand();
      else if (command.equalsIgnoreCase(&amp;quot;sw&amp;quot;))
        c = game.makeShootCommand(WEST);
      else if (command.equalsIgnoreCase(&amp;quot;se&amp;quot;))
        c = game.makeShootCommand(EAST);
      else if (command.equalsIgnoreCase(&amp;quot;sn&amp;quot;))
        c = game.makeShootCommand(NORTH);
      else if (command.equalsIgnoreCase(&amp;quot;ss&amp;quot;))
        c = game.makeShootCommand(SOUTH);
      else if (command.equalsIgnoreCase(&amp;quot;q&amp;quot;))
        return;
      c.execute();
    }
  }

  private static void createMap() {
    int nCaverns = (int) (Math.random() * 30.0 + 10.0);
    while (nCaverns-- &amp;gt; 0)
      caverns.add(makeName());

    for (String cavern : caverns) {
      maybeConnectCavern(cavern, NORTH);
      maybeConnectCavern(cavern, SOUTH);
      maybeConnectCavern(cavern, EAST);
      maybeConnectCavern(cavern, WEST);
    }

    String playerCavern = anyCavern();
    game.setPlayerCavern(playerCavern);
    game.setWumpusCavern(anyOther(playerCavern));
    game.addBatCavern(anyOther(playerCavern));
    game.addBatCavern(anyOther(playerCavern));
    game.addBatCavern(anyOther(playerCavern));
    game.addPitCavern(anyOther(playerCavern));
    game.addPitCavern(anyOther(playerCavern));
    game.addPitCavern(anyOther(playerCavern));
    game.setQuiver(5);
  }

  // much code removed...
}&lt;/code&gt;&lt;/pre&gt;

      &lt;h2&gt;User command parsing&lt;/h2&gt;

      &lt;p&gt;
        Let&amp;#39;s start with the low-hanging fruit: repetitive statements. The
        &lt;code class="language-java"&gt;main&lt;/code&gt; method contains the primary game loop, which runs
        until the user enters &lt;code class="language-java"&gt;&amp;quot;q&amp;quot;&lt;/code&gt;. Most of this is
        &lt;em&gt;fine&lt;/em&gt;, but the long block of &lt;code class="language-java"&gt;else if&lt;/code&gt;s is not
        only difficult to read, it&amp;#39;s needlessly inefficient. We must test the input against every
        condition in sequence until one matches or none remain. Further, if the user enters a command
        which matches none of the conditions (necessitating a complete run-through of them all), the
        game executes a &amp;quot;rest&amp;quot; command, declared outside the set of conditions, which could
        easily come back to bite an unsuspecting developer in the future.
      &lt;/p&gt;

      &lt;p&gt;
        Whenever there is a set of three or more distinct conditions to test, it&amp;#39;s almost always a
        better bet to use a &lt;code class="language-java"&gt;switch&lt;/code&gt;/&lt;code class="language-java"&gt;case&lt;/code&gt;
        statement or, best of all (if the language supports it), pattern matching. The game loop cleans
        up a bit if we use this advice. We can also take advantage of the fact that when comparing
        strings, the &lt;code class="language-java"&gt;switch&lt;/code&gt; statement acts as if we&amp;#39;re
        calling the &lt;code class="language-java"&gt;String.equals&lt;/code&gt; method, so as long as we
        convert the command to lower case, it&amp;#39;ll act identically to calling
        &lt;code class="language-java"&gt;String.equalsIgnoreCase&lt;/code&gt; repeatedly.
      &lt;/p&gt;

      &lt;pre is:raw lang="java"&gt;&lt;code class="language-java"&gt;while (true) {
  System.out.println(game.getPlayerCavern());
  System.out.println(&amp;quot;Health: &amp;quot; + hitPoints + &amp;quot; arrows: &amp;quot; + game.getQuiver());
  System.out.println(&amp;quot;&amp;gt;&amp;quot;);

  String command = br.readLine();
  HuntTheWumpus.Command c;
  switch (command.toLowerCase()) {
    case &amp;quot;e&amp;quot;: c = game.makeMoveCommand(EAST);
      break;
    case &amp;quot;w&amp;quot;: c = game.makeMoveCommand(WEST);
      break;
    case &amp;quot;n&amp;quot;: c = game.makeMoveCommand(NORTH);
      break;
    case &amp;quot;s&amp;quot;: c = game.makeMoveCommand(SOUTH);
      break;
    case &amp;quot;se&amp;quot;: c = game.makeShootCommand(EAST);
      break;
    case &amp;quot;sw&amp;quot;: c = game.makeShootCommand(WEST);
      break;
    case &amp;quot;sn&amp;quot;: c = game.makeShootCommand(NORTH);
      break;
    case &amp;quot;ss&amp;quot;: c = game.makeShootCommand(SOUTH);
      break;
    case &amp;quot;q&amp;quot;: return;
    default: c = game.makeRestCommand();
  }
  c.execute();
}&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        These conditions are now, in my opinion, more readable, and a string
        &lt;code class="language-java"&gt;switch&lt;/code&gt; dispatches on the string&amp;#39;s hash code, so
        lookup is effectively constant-time instead of a linear scan. Additionally, the command parsing
        is visually separated from the user output (which is a sort of UI).
      &lt;/p&gt;

      &lt;p&gt;
        Rather than making the command matches here, the architecture would benefit even more from
        taking Martin&amp;#39;s advice from his own book, and moving the UI elements to their own component.
        If we wish to implement a more complex UI in the future, even a GUI, only the dedicated
        component will need to change considerably. In the shorter-term, perhaps we&amp;#39;ll want to add
        an explicit &lt;code class="language-java"&gt;&amp;quot;i&amp;quot;&lt;/code&gt; command which prints this out. It
        would be nice to separate this concern from the &amp;quot;dirty&amp;quot;
        &lt;code class="language-java"&gt;Main&lt;/code&gt; component.
      &lt;/p&gt;

      &lt;p&gt;
        The main game loop could still live in a clean &lt;code class="language-java"&gt;main&lt;/code&gt; method,
        but we should restrict it to only getting the command and executing it. This leaves the looping
        action at this, the lowest &amp;quot;dirtiest&amp;quot; level, while abstracting away the complications
        of interpreting user input. Here&amp;#39;s how well we clean it up by separating the concerns via
        abstraction:
      &lt;/p&gt;

      &lt;pre is:raw lang="java"&gt;&lt;code class="language-java"&gt;while (true) {
  GameUI.displayUserStatus();
  HuntTheWumpus.Command c = GameUI.getUserCommand();
  c.execute();
}&lt;/code&gt;&lt;/pre&gt;
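&lt;p&gt;To make the abstraction concrete, here is a hypothetical sketch of the UI component. The names &lt;code class="language-java"&gt;GameUI&lt;/code&gt;, &lt;code class="language-java"&gt;displayUserStatus&lt;/code&gt; and &lt;code class="language-java"&gt;getUserCommand&lt;/code&gt; are illustrative only; a real version would return &lt;code class="language-java"&gt;HuntTheWumpus.Command&lt;/code&gt; objects built from the same &lt;code class="language-java"&gt;switch&lt;/code&gt; shown earlier, rather than raw strings, and the loop above uses static calls for brevity.&lt;/p&gt;

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;

// Hypothetical sketch of the dedicated UI component; not code from the book.
public class GameUI {
  private final BufferedReader in;

  // Injecting the Reader keeps the component testable without real stdin.
  public GameUI(Reader source) {
    this.in = new BufferedReader(source);
  }

  public void displayUserStatus(String cavern, int hitPoints, int arrows) {
    System.out.println(cavern);
    System.out.println("Health: " + hitPoints + " arrows: " + arrows);
    System.out.println(">");
  }

  public String getUserCommand() throws IOException {
    // Read and normalize one command; treat end-of-input as "quit".
    String line = in.readLine();
    return line == null ? "q" : line.trim().toLowerCase();
  }
}
```

&lt;p&gt;Because the &lt;code class="language-java"&gt;Reader&lt;/code&gt; is injected, the component can be driven by a &lt;code class="language-java"&gt;StringReader&lt;/code&gt; in tests while production code passes a reader over standard input.&lt;/p&gt;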

      &lt;h2&gt;Map generation&lt;/h2&gt;

      &lt;p&gt;
        The &lt;code class="language-java"&gt;createMap&lt;/code&gt; method may indeed be at home in the
        &lt;code class="language-java"&gt;Main&lt;/code&gt; class, but surely we can clean it up. Uncle Bob leaves
        needless repetition in the same method where he used a loop to avoid it.
      &lt;/p&gt;

      &lt;p&gt;
        First, let&amp;#39;s look at the cavern connection block, which uses a method (not shown in the
        book) to dynamically generate connections between caverns. This isn&amp;#39;t so bad, but a nested
        loop could abstract it a little better, especially if we wanted to change the cavern geometry
        in the future (I&amp;#39;m thinking hexagons, which are the
        &lt;a href="https://youtu.be/thOifuHs6eY"&gt;bestagons&lt;/a&gt;). We&amp;#39;ll make the further
        improvement of moving our directions into an enum, which we&amp;#39;ll simply call
        &lt;code class="language-java"&gt;Direction&lt;/code&gt;.
      &lt;/p&gt;

      &lt;pre is:raw lang="java"&gt;&lt;code class="language-java"&gt;for (String cavern : caverns) {
  for (Direction direction : Direction.values()) {
    maybeConnectCavern(cavern, direction);
  }
}&lt;/code&gt;&lt;/pre&gt;
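&lt;p&gt;The book never defines &lt;code class="language-java"&gt;Direction&lt;/code&gt;, so here is a minimal sketch of the enum, wrapped in a small demo class so it runs on its own. The demo class is an assumption, not code from the book.&lt;/p&gt;

```java
public class DirectionDemo {
  // Illustrative assumption of the enum's minimal shape.
  enum Direction { NORTH, SOUTH, EAST, WEST }

  public static void main(String[] args) {
    // Iterating values() replaces the four repeated maybeConnectCavern calls.
    for (Direction d : Direction.values()) {
      System.out.println(d); // prints NORTH, SOUTH, EAST, WEST, one per line
    }
  }
}
```

&lt;p&gt;Adopting the enum also means the bare &lt;code class="language-java"&gt;EAST&lt;/code&gt;, &lt;code class="language-java"&gt;WEST&lt;/code&gt;, &lt;code class="language-java"&gt;NORTH&lt;/code&gt; and &lt;code class="language-java"&gt;SOUTH&lt;/code&gt; references in the original listing (presumably static imports) would need updating to match.&lt;/p&gt;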

      &lt;p&gt;
        Next, we have a block which spawns the characters, some bats and some pits into presumably
        random unique caverns. Placing the player and the Wumpus are single statements and probably
        always will be, but we don&amp;#39;t need to repeat ourselves thrice for bats and pits each.
        Let&amp;#39;s also rename the &lt;code class="language-java"&gt;anyOther&lt;/code&gt; method to
        &lt;code class="language-java"&gt;anyOtherCavern&lt;/code&gt; to reduce ambiguity.
      &lt;/p&gt;

      &lt;p&gt;
        Along with populating caverns, our map generation block gives the player a quiver of arrows?
        This has nothing to do with creating the map! Let&amp;#39;s move that to a method called
        &lt;code class="language-java"&gt;createPlayer&lt;/code&gt;, which we&amp;#39;ll relegate to the &amp;quot;much code
        removed...&amp;quot; section.
      &lt;/p&gt;

      &lt;pre is:raw lang="java"&gt;&lt;code class="language-java"&gt;String playerCavern = anyCavern();
game.setPlayerCavern(playerCavern);
game.setWumpusCavern(anyOther(playerCavern));
IntStream.range(0, 3).forEach(() -&amp;gt; game.addBatCavern(anyOtherCavern(playerCavern)));
IntStream.range(0, 3).forEach(() -&amp;gt; game.addPitCavern(anyOtherCavern(playerCavern)));&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        Using &lt;code class="language-java"&gt;IntStream.range&lt;/code&gt; is about the closest we can get to a
        proper range loop like Python&amp;#39;s &lt;code class="language-java"&gt;for x in range(i, j)&lt;/code&gt;, and
        I much prefer it to &lt;code class="language-java"&gt;for&lt;/code&gt; loops.
      &lt;/p&gt;

      &lt;p&gt;
        Why did I leave the bat and pit cavern population loops separated? Because they&amp;#39;re not
        inherently linked, and we may reasonably wish to change the frequency of one without changing
        the other.
      &lt;/p&gt;

      &lt;h2&gt;Hard-coded values&lt;/h2&gt;

      &lt;p&gt;
        When an application hard-codes values as heavily as this example, it&amp;#39;s hard to avoid
        cringing at the looming technical debt. What we see here would be perfectly acceptable in an
        early computer science course, but a real-world system would struggle to keep up with changing
        requirements. A simple typo in a string, an additional witty cavern description or swapping in
        localized strings should not require code changes.
      &lt;/p&gt;

      &lt;p&gt;
        Similarly, values such as the initial player HP, the number of arrows in the player&amp;#39;s
        quiver, the seed for the randomly generated number of caverns, and the number of bat and pit
        caverns, all should be configurable with ease. Perhaps we wish to introduce difficulty levels
        which change the balance of these values. Perhaps we find we&amp;#39;ve given the player too much HP
        for a fair fight. We&amp;#39;ll undoubtedly need to balance these values, and so we&amp;#39;ll be better
        off storing them in a configurable but immutable data structure.
      &lt;/p&gt;
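&lt;p&gt;As a sketch of that idea, the balance values could live in something as plain as a properties file. The file and key names here are illustrative; the values mirror the hard-coded originals.&lt;/p&gt;

```properties
# hypothetical game.properties; key names are illustrative
hitPoints=10
quiver=5
# number of caverns = random() * cavernSeed + cavernMinimum
cavernSeed=30
cavernMinimum=10
batCaverns=3
pitCaverns=3
```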

      &lt;p&gt;
        As detailed in previous chapters of Clean Architecture, the data structure to house these values
        shouldn&amp;#39;t matter to our &lt;code class="language-java"&gt;Main&lt;/code&gt; component. They could reside
        in a key-value store, a database of any kind, a CSV or TSV, or even a well formatted plain text
        file. As far as this component knows, they&amp;#39;re all just an interface. For our purposes
        we&amp;#39;ll call the interface &lt;code class="language-java"&gt;GameConfiguration&lt;/code&gt;, which is
        responsible for loading and providing the configured values.
      &lt;/p&gt;

      &lt;p&gt;
        Putting all our changes together with the interface-provided configuration, we arrive at a much
        cleaner architecture than Uncle Bob presents.
      &lt;/p&gt;

      &lt;pre
        is:raw
        class="line-numbers"
        lang="java"
      &gt;&lt;code class="language-java"&gt;public class Main implements HtwMessageReceiver {
  private static HuntTheWumpus game;
  private static final List&amp;lt;String&amp;gt; caverns = new ArrayList&amp;lt;&amp;gt;();

  private static int hitPoints;
  private static int quiver;
  private static String[] environments, shapes, cavernTypes, adornments;

  public static void main(String[] args) throws IOException {
    environments = GameConfiguration.get(&amp;quot;environments&amp;quot;);
    shapes = GameConfiguration.get(&amp;quot;shapes&amp;quot;);
    cavernTypes = GameConfiguration.get(&amp;quot;cavernTypes&amp;quot;);
    adornments = GameConfiguration.get(&amp;quot;adornments&amp;quot;);
    hitPoints = GameConfiguration.get(&amp;quot;hitPoints&amp;quot;);
    quiver = GameConfiguration.get(&amp;quot;quiver&amp;quot;);

    game = HtwFactory.makeGame(&amp;quot;htw.game.HuntTheWumpusFacade&amp;quot;, new Main());
    createMap();
    createPlayer();
    BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
    game.makeRestCommand().execute();

    while (true) {
      GameUI.displayUserStatus();
      HuntTheWumpus.Command c = GameUI.getUserCommand();
      c.execute();
    }
  }

  private static void createMap() {
    int nCaverns = (int) (Math.random()
                          * GameConfiguration.get(&amp;quot;cavernSeed&amp;quot;)
                          + GameConfiguration.get(&amp;quot;cavernMinimum&amp;quot;));
    while (nCaverns-- &amp;gt; 0)
      caverns.add(makeName());

    for (String cavern : caverns) {
      for (Direction direction : Direction.values()) {
        maybeConnectCavern(cavern, direction);
      }
    }

    String playerCavern = anyCavern();
    game.setPlayerCavern(playerCavern);
    game.setWumpusCavern(anyOtherCavern(playerCavern));
    IntStream.range(0, GameConfiguration.get(&amp;quot;batCaverns&amp;quot;))
      .forEach(i -&gt; game.addBatCavern(anyOtherCavern(playerCavern)));
    IntStream.range(0, GameConfiguration.get(&amp;quot;pitCaverns&amp;quot;))
      .forEach(i -&gt; game.addPitCavern(anyOtherCavern(playerCavern)));
  }

  // much code removed...
}&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        The resulting code is terser, more robust and generally cleaner. Is some of this overkill for a
        small pet or student project? It probably is, but Uncle Bob presents this as a contrived but
        real-world example in a printed book about code design, and should have taken the time to apply
        his own principles to his examples.
      &lt;/p&gt;</content>
    <link href="https://jmthornton.net/blog/p/clean-arch-wumpus"/>
    <summary>Uncle Bob ignores his own advice when considering the 'Main' component, but we can improve on his thoughts and learn from them.</summary>
    <published>2021-04-18T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/websitetimelapse</id>
    <title>Coding a website from scratch - a timelapse</title>
    <updated>2016-12-14T09:00:00-06:00</updated>
    <content type="html">Jade creates a simple website from scratch using the Bootstrap framework</content>
    <link href="https://jmthornton.net/blog/p/websitetimelapse"/>
    <summary>Jade creates a simple website from scratch using the Bootstrap framework</summary>
    <published>2016-12-14T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/switch-caps-esc</id>
    <title>Switch Caps Lock and Escape (Linux)</title>
    <updated>2016-03-28T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;
        The standard QWERTY keyboard is great in a lot of ways and for a lot of uses. My current keyboard has been with me for years and I know it well, but if I could change the locations of a couple keys, I would. And naturally, that's what I did.
      &lt;/p&gt;
      &lt;p&gt;
        The most useful change I've made is switching the Caps Lock and Escape keys. It makes life much easier (after the initial muscle-memory retraining) because of how often I use Escape while programming. Instead of a reach up to the edge of the keyboard, it's only one key to the left of my pinky on the home row. So how do you do it? It's pleasantly simple:
      &lt;/p&gt;
      &lt;p&gt;
        First, add the following code to your &lt;code&gt;~/.Xmodmap&lt;/code&gt; (note the capital X):
      &lt;/p&gt;
      &lt;pre&gt;
clear Lock
keysym Caps_Lock = Escape
keysym Escape = Caps_Lock
add Lock = Caps_Lock&lt;/pre&gt;
      &lt;p&gt;
        Next, add the following line to your &lt;code&gt;~/.zshrc&lt;/code&gt; (or &lt;code&gt;~/.profile&lt;/code&gt; if you're using bash):
      &lt;/p&gt;
      &lt;pre&gt;
xmodmap ~/.Xmodmap&lt;/pre&gt;
      &lt;p&gt;
        There you go! &lt;code&gt;source ~/.zshrc&lt;/code&gt; (or logout and login again for bash) and they should switch!
      &lt;/p&gt;</content>
    <link href="https://jmthornton.net/blog/p/switch-caps-esc"/>
    <summary>For those with lazy hands like me, switching Caps Lock and Escape can avoid unnecessary effort. Making the switch is just a couple short steps.</summary>
    <published>2016-03-28T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/tmux-known-socket</id>
    <title>A Known SSH Socket for Tmux</title>
    <updated>2020-05-03T09:00:00-06:00</updated>
    <content type="html">&lt;blockquote class="post-quote" cite="Jonathan Haenchen"&gt;
        Artisanal SSH socket remapping
      &lt;/blockquote&gt;

      &lt;p&gt;At &lt;a href="https://flightaware.com"&gt;FlightAware&lt;/a&gt;, my work is spread over multiple
      packages on multiple remote servers, all accessed by SSH. I'll often chain SSH
      connections together, sometimes more than four connections deep. Plus I often push and
      pull git-controlled code to yet more remote servers.&lt;/p&gt;

      &lt;p&gt;To keep track of everything, I do 100% of my work within
        &lt;a href="https://github.com/tmux/tmux"&gt;tmux&lt;/a&gt;. To let me chain my
        SSH connections, nearly every connection uses &lt;code&gt;ForwardAgent&lt;/code&gt;.
        Unfortunately, this doesn't work for long. When I reconnect to a
        server and reattach my tmux session, I am suddenly unable to chain my
        connections!&lt;/p&gt;

      &lt;p&gt;The problem here is that my SSH Agent has created a new socket for
        my new connection. This works fine by itself, but when I reattach the
        &lt;em&gt;already existing&lt;/em&gt; tmux session, I no longer have any reference
        to the new socket. Inside of tmux, SSH will try to use the socket in
        use at the time the session was created, which probably no longer
        exists.&lt;/p&gt;

      &lt;p&gt;So what to do? The obvious solution is to simply close my tmux
        session when I disconnect and create a new one with every new
        connection. But this has problems.
        &lt;ul&gt;
          &lt;li&gt;First, what if I &lt;em&gt;accidentally&lt;/em&gt; disconnect? Maybe I've
            lost my network connection, or somehow accidentally hit
            &lt;code class="language-shell"&gt;~.&lt;/code&gt;. I want to get back into my session as quickly and
            easily as possible.&lt;/li&gt;
          &lt;li&gt;Second, what if I want to save my panes when I disconnect?
            Maybe there's some long-running process I want to keep. Or maybe I
            simply don't want to have to recreate my session every time I
            connect (though some of this can be solved by a project like
            &lt;a href="https://github.com/tmuxinator/tmuxinator"&gt;tmuxinator&lt;/a&gt;).&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/p&gt;

      &lt;p&gt;Obviously, a better solution would be to just fix the problem and
        get tmux to always use the current socket. Additionally, I want to be
        sure to support using tmux within SSH within tmux, chained
        arbitrarily. The answer is to always put the socket in a known
        location and hook everything up to use it.&lt;/p&gt;

      &lt;p&gt;Rather than try to devise some solution to signal to tmux what the
        current socket file is, it will be much easier to use a symbolic link.
        Whenever we create a new socket, we'll simply override the existing
        link with a link to the new socket.&lt;/p&gt;

      &lt;p&gt;We need a name for this symbolic link, so how about
        &lt;code class="language-shell"&gt;/tmp/ssh-agent-$USER-screen&lt;/code&gt;. We're putting it in
        &lt;code&gt;/tmp/&lt;/code&gt; since it doesn't matter too much if this is
        overwritten or cleaned up. We're also using the &lt;code class="language-shell"&gt;USER&lt;/code&gt;
        environment variable to keep sockets separate for different users. At
        the end, I'm putting &lt;code class="language-shell"&gt;-screen&lt;/code&gt; since this is sort-of more
        general than &lt;code class="language-shell"&gt;-tmux&lt;/code&gt;, but it can really be whatever, or
        even removed.&lt;/p&gt;

      &lt;p&gt;Now, creating a symbolic link is all well and good, but what do we
        actually link to? Unfortunately there's no great built-in way to grab
        the current socket. But there's no need to reinvent the wheel: we can
        use the proven
        &lt;a href="https://github.com/wwalker/ssh-find-agent"&gt;ssh-find-agent&lt;/a&gt;
        tool. So let's put that in a useful location:&lt;/p&gt;

      &lt;pre lang="bash"&gt;&lt;code class="language-shell"&gt;git clone git@github.com:wwalker/ssh-find-agent.git ~/lib/ssh-find-agent&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;We'll use the "automatic" &lt;code class="language-shell"&gt;-a&lt;/code&gt; option, which finds
        the active SSH agent and stores its socket in &lt;code class="language-shell"&gt;SSH_AUTH_SOCK&lt;/code&gt; for
        us. But if there is no active agent, &lt;code class="language-shell"&gt;SSH_AUTH_SOCK&lt;/code&gt; will stay
        empty, so we'll need to start the agent ourselves.&lt;/p&gt;

      &lt;pre lang="bash"&gt;&lt;code class="language-shell"&gt;# Source the script first
. ~/lib/ssh-find-agent/ssh-find-agent.sh
ssh_find_agent -a

# If nothing happened, we need to start up the ssh-agent
if [ -z "$SSH_AUTH_SOCK" ]
then
  eval "$(ssh-agent)" &gt; /dev/null
  # Lazily add keys: the first ssh invocation runs ssh-add, then removes this alias
  ssh-add -l &gt;/dev/null || alias ssh='ssh-add -l &gt;/dev/null || ssh-add &amp;&amp; unalias ssh; ssh'
fi&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;Now that we have the socket, we just need to make (or override)
        that symbolic link so it can be found later.&lt;/p&gt;

      &lt;pre lang="bash"&gt;&lt;code class="language-shell"&gt;SOCK="/tmp/ssh-agent-$USER-screen"
if [ -n "$SSH_AUTH_SOCK" ] &amp;&amp; [ "$SSH_AUTH_SOCK" != "$SOCK" ]
then
  rm -f "$SOCK"
  ln -sf "$SSH_AUTH_SOCK" "$SOCK"
  export SSH_AUTH_SOCK="$SOCK"
fi&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;Putting it all together, we'll find the active socket or create it,
        then make a known symbolic link. Now we just have to do this everywhere
        the socket is needed. This is the most annoying part, though it can be
        relieved with a tool like
        &lt;a href="https://github.com/danrabinowitz/sshrc"&gt;sshrc&lt;/a&gt;. The full
        process will need to be added to the &lt;code class="language-shell"&gt;~/.bashrc&lt;/code&gt; or
        &lt;code class="language-shell"&gt;~/.zshrc&lt;/code&gt; on your host system, as well as on every system you
        want to chain tmux and SSH sessions from.&lt;/p&gt;

      &lt;p&gt;To be clear about where this needs to happen, if your chain looks like this:&lt;/p&gt;

      &lt;pre&gt;host &amp;rarr; tmux &amp;rarr; (remote 1) &amp;rarr; tmux &amp;rarr; (remote 2) &amp;rarr; tmux &amp;rarr; (remote 3)
            &amp;DownArrowBar;
              (remote 4) &amp;rarr; tmux &amp;rarr; (remote 5)&lt;/pre&gt;

      &lt;p&gt;Then you would need this setup on &lt;code&gt;host&lt;/code&gt;,
        &lt;code&gt;(remote 1)&lt;/code&gt;, &lt;code&gt;(remote 2)&lt;/code&gt; and &lt;code&gt;(remote
        4)&lt;/code&gt;, but not on the last two remotes. If you think of these chained
        connections as a tree, the socket mapping is not needed on the leaves.
        Technically it's also not needed on any node where you're not
        using tmux, provided you use &lt;code class="language-shell"&gt;ForwardAgent&lt;/code&gt;.&lt;/p&gt;
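      &lt;p&gt;For the hops where you skip the socket setup and rely on agent forwarding instead, a minimal &lt;code class="language-shell"&gt;~/.ssh/config&lt;/code&gt; sketch looks something like this (the host aliases here are hypothetical):&lt;/p&gt;

      &lt;pre lang="~/.ssh/config"&gt;&lt;code class="language-shell"&gt;# Forward the agent through hops that don't re-attach tmux
Host remote3 remote5
  ForwardAgent yes&lt;/code&gt;&lt;/pre&gt;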

      &lt;p&gt;So there we have it: the SSH socket symbolically linked to a known location. After cloning ssh-find-agent, here's the complete script to add to your shell login script as required:&lt;/p&gt;

      &lt;pre lang="~/.bashrc"&gt;&lt;code class="language-shell"&gt;# Known SSH Socket for tmux
# https://jmthornton.net/blog/p/tmux-known-socket

. ~/lib/ssh-find-agent/ssh-find-agent.sh
ssh_find_agent -a
if [ -z "$SSH_AUTH_SOCK" ]
then
  eval "$(ssh-agent)" &gt; /dev/null
  # Lazily add keys: the first ssh invocation runs ssh-add, then removes this alias
  ssh-add -l &gt;/dev/null || alias ssh='ssh-add -l &gt;/dev/null || ssh-add &amp;&amp; unalias ssh; ssh'
fi

# Predictable SSH authentication socket location so tmux can find it
SOCK="/tmp/ssh-agent-$USER-screen"
if [ -n "$SSH_AUTH_SOCK" ] &amp;&amp; [ "$SSH_AUTH_SOCK" != "$SOCK" ]
then
  rm -f "$SOCK"
  ln -sf "$SSH_AUTH_SOCK" "$SOCK"
  export SSH_AUTH_SOCK="$SOCK"
fi&lt;/code&gt;&lt;/pre&gt;</content>
    <link href="https://jmthornton.net/blog/p/tmux-known-socket"/>
    <summary>Using a known, shared SSH socket to enable agent forwarding through an existing tmux session</summary>
    <published>2020-05-03T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/vero-zsh-theme</id>
    <title>Vero: A Simple Zsh Theme</title>
    <updated>2017-02-09T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;
        I'm excited to announce the release of &lt;strong&gt;Vero&lt;/strong&gt;, a simple and informative theme for Zsh. After using various themes over the years, I decided to create one that focuses on providing essential information without clutter.
      &lt;/p&gt;

      &lt;h2&gt;Features&lt;/h2&gt;

      &lt;p&gt;Vero includes all the information I need in my daily terminal work:&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;Current versions for &lt;code&gt;nvm&lt;/code&gt; and &lt;code&gt;pyenv&lt;/code&gt;&lt;/li&gt;
        &lt;li&gt;Git branch and status&lt;/li&gt;
        &lt;li&gt;Timestamp&lt;/li&gt;
        &lt;li&gt;SSH indication&lt;/li&gt;
        &lt;li&gt;Current user&lt;/li&gt;
        &lt;li&gt;Current working directory&lt;/li&gt;
      &lt;/ul&gt;

      &lt;h2&gt;Preview&lt;/h2&gt;

      &lt;p style="text-align: center;"&gt;
        &lt;img src="/assets/images/vero-preview.png" alt="Preview of Vero" style="border-radius: 3px; max-width: 100%; height: auto;" /&gt;
      &lt;/p&gt;

      &lt;p&gt;
        As you can see, Vero provides a clean, informative prompt that shows everything you need to know about your current environment. The theme is designed to be fast and lightweight while remaining highly functional.
      &lt;/p&gt;

      &lt;h2&gt;Installation&lt;/h2&gt;

      &lt;p&gt;
        Vero is available on &lt;a href="https://github.com/thornjad/vero"&gt;GitHub&lt;/a&gt; and can be installed using &lt;a href="https://gitlab.com/thornjad/zpico"&gt;ZPico&lt;/a&gt;:
      &lt;/p&gt;

      &lt;pre&gt;&lt;code class="language-bash"&gt;zpico add thornjad/vero source:gitlab&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        The theme is released under a permissive license, so feel free to use, modify, and distribute it as needed. I hope you find it as useful as I do!
      &lt;/p&gt;</content>
    <link href="https://jmthornton.net/blog/p/vero-zsh-theme"/>
    <summary>Announcing the release of Vero, a simple and informative theme for Zsh with git status, version managers, and more.</summary>
    <published>2017-02-09T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/tcl-paradigms</id>
    <title>Data Paradigms in TCL: Associative Arrays vs Dictionaries</title>
    <updated>2020-08-10T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;
        After working with TCL for a while, you have to come to terms with
        the fact that arrays and dicts aren't just different APIs: they're
        fundamentally different beasts under the hood.
      &lt;/p&gt;

      &lt;h2&gt;The Implementation Reality&lt;/h2&gt;

      &lt;p&gt;
        Associative arrays are TCL's original key-value store, implemented
        as hash tables directly in the interpreter. When you write
        &lt;code class="language-tcl"&gt;set arr(key) value&lt;/code&gt;, you're
        actually creating a variable with a compound name. The interpreter
        maintains a separate hash table for each array variable, and
        accessing &lt;code class="language-tcl"&gt;$arr(key)&lt;/code&gt; triggers a
        hash lookup on that specific table.
      &lt;/p&gt;
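      &lt;p&gt;As a tiny sketch (the &lt;code class="language-tcl"&gt;color&lt;/code&gt; array here is made up for the example), every element lives in the hash table owned by that one variable:&lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-tcl"&gt;# One hash table, owned by the variable "color"
array set color {red #ff0000 green #00ff00}
set color(blue) #0000ff

puts [array size color]  ;# 3
puts $color(green)       ;# #00ff00&lt;/code&gt;&lt;/pre&gt;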

      &lt;p&gt;
        Dictionaries came later (TCL 8.5 in 2007) as first-class values.
        Unlike arrays, a dict is just a string with a specific internal
        representation: a list of alternating keys and values that gets
        cached as a hash table when you perform dict operations on it. The
        key insight is that dicts are values that can be passed around,
        while arrays are variables that live in specific scopes.
      &lt;/p&gt;

      &lt;h2&gt;Why This Matters in Practice&lt;/h2&gt;

      &lt;p&gt;
        Arrays tie you to variable scopes. You can't return an array from
        a procedure without using
        &lt;code class="language-tcl"&gt;upvar&lt;/code&gt; or
        &lt;code class="language-tcl"&gt;global&lt;/code&gt; to work around the
        limitation. Arrays also can't be nested without ugly naming tricks
        like &lt;code class="language-tcl"&gt;set arr(outer,inner) value&lt;/code&gt;.
      &lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-tcl"&gt;# Arrays: scope-bound and flat
proc make_config {} {
    # Can't return this directly
    set config(host) "localhost"
    set config(port) 8080
}

# Dicts: values you can actually use
proc make_config {} {
    return [dict create host localhost port 8080]
}&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        Dicts shine for structured data and functional-style programming.
        Need nested data?
        &lt;code class="language-tcl"&gt;dict set config database host localhost&lt;/code&gt;
        just works. Want to pass complex data between procedures? Dicts
        are your friend.
      &lt;/p&gt;
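      &lt;p&gt;A quick illustration (this &lt;code class="language-tcl"&gt;config&lt;/code&gt; shape is invented for the example): nested keys spring into existence as you set them, and read back with the same key path:&lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-tcl"&gt;# Nested keys are created on demand
set config [dict create]
dict set config database host localhost
dict set config database port 5432

puts [dict get $config database host]  ;# localhost
puts [dict get $config database port]  ;# 5432&lt;/code&gt;&lt;/pre&gt;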

      &lt;h2&gt;Performance Quirks&lt;/h2&gt;

      &lt;p&gt;
        Here's the counterintuitive part: arrays can be faster for simple
        key-value operations because there's no string parsing overhead
        (everything is a string, except when it's not). But dicts win for
        complex operations because they can optimize their internal
        representation and handle nesting efficiently.
      &lt;/p&gt;

      &lt;p&gt;
        Arrays also support glob matching on keys with
        &lt;code class="language-tcl"&gt;array names arr pattern&lt;/code&gt;, which dicts
        only approximate with &lt;code class="language-tcl"&gt;dict keys&lt;/code&gt;
        (given a pattern) or &lt;code class="language-tcl"&gt;dict filter&lt;/code&gt;,
        both of which scan every entry.
      &lt;/p&gt;
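      &lt;p&gt;For comparison (the option names below are invented), the array form filters keys directly, while a dict leans on &lt;code class="language-tcl"&gt;dict keys&lt;/code&gt; with a pattern:&lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-tcl"&gt;array set opts {timeout 30 tls_cert a.pem tls_key a.key}
puts [lsort [array names opts tls_*]]   ;# tls_cert tls_key

set dopts [dict create timeout 30 tls_cert a.pem tls_key a.key]
puts [lsort [dict keys $dopts tls_*]]   ;# tls_cert tls_key&lt;/code&gt;&lt;/pre&gt;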

      &lt;h2&gt;When to Use What&lt;/h2&gt;

      &lt;p&gt;Use arrays for:&lt;/p&gt;
      &lt;ul&gt;
        &lt;li&gt;Simple key-value stores that stay in one scope&lt;/li&gt;
        &lt;li&gt;When you need pattern matching on keys&lt;/li&gt;
        &lt;li&gt;Legacy code that expects array semantics&lt;/li&gt;
      &lt;/ul&gt;

      &lt;p&gt;Use dicts for:&lt;/p&gt;
      &lt;ul&gt;
        &lt;li&gt;Structured, nested data&lt;/li&gt;
        &lt;li&gt;Passing data between procedures&lt;/li&gt;
        &lt;li&gt;Modern TCL code that values composability&lt;/li&gt;
      &lt;/ul&gt;

      &lt;p&gt;
        The real lesson? TCL's "everything is a string" philosophy is
        surface-level, and it means these data structures evolved
        different internal optimizations while maintaining the same
        string-based interface. At FlightAware, we've learned this the
        hard way. Legacy code is full of arrays that should have been
        dicts, and newer code sometimes uses dicts where arrays would be
        simpler. Understanding the implementation helps you pick the right
        tool and avoid performance traps.
      &lt;/p&gt;</content>
    <link href="https://jmthornton.net/blog/p/tcl-paradigms"/>
    <summary>Understanding the fundamental differences between TCL's associative arrays and dictionaries, and when to use each.</summary>
    <published>2020-08-10T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/walden-and-bdote</id>
    <title>Walden and Bdote: Land, Protest, and the Forgotten Cost of American Freedom</title>
    <updated>2025-05-11T09:00:00-06:00</updated>
<content type="html">&lt;p&gt;&amp;emsp;In June of 1861, a steamboat&lt;sup&gt;3&lt;/sup&gt; carrying Minnesota's governor, political dignitaries and a group of white observers drew up at the Redwood Agency for what would become the last peaceful treaty payment to the Dakota people&lt;sup&gt;4&lt;/sup&gt; before betrayal, starvation, and violence made such ceremonies impossible.&lt;sup&gt;5 7 8&lt;/sup&gt; Among those present was Henry David Thoreau, the New England naturalist, whose brief visit made him an accidental witness to rising Dakota desperation. Listening amid a crowd more interested in official rituals than in Dakota grievances, Thoreau noted quietly in a letter, "The most prominent chief was named Little Crow. They were quite dissatisfied with the white man's treatment of them &amp; probably have reason to be so."&lt;sup&gt;5 7 9&lt;/sup&gt; That mundane remark anticipated the eruption of war and exile that would engulf Little Crow's world only a year later. Their paths crossed by chance at a fulcrum of Minnesotan and American history, each emblematic, in vastly different ways, of resistance to injustice and the meaning of relationship to land.&lt;sup&gt;10&lt;/sup&gt;&lt;/p&gt;

      &lt;p&gt;&amp;emsp;The coincidence of Thoreau's brief encounter with the Dakota underscores the depth of the divide between their worlds. For Thoreau, Walden Pond represented a sanctuary for moral renewal and principled dissent; he became both model and myth, celebrated as America's first great interpreter of nature and a paragon of individual conscience.&lt;sup&gt;11 13&lt;/sup&gt; However, the legend of solitary resistance often masks how Thoreau's insight was limited by mid-19th-century New England's assumptions about property and the hierarchy between settlers and Native peoples. Beneath his critique of progress, Thoreau operated within a society deeply invested in land as commodity and shaped by settler colonial beliefs, which constrained the possibilities of his vision even as he resisted its mainstream currents. For the Dakota, by contrast, land was neither backdrop nor resource, but a living relative, the foundation of kinship and language.&lt;sup&gt;10 14&lt;/sup&gt; Their center was Bdote,&lt;sup&gt;15&lt;/sup&gt; the confluence of rivers where, as creation stories recount, the Dakota people emerged from earth and water and began a relationship of mutual care and obligation.&lt;sup&gt;14&lt;/sup&gt; The United States did not see land this way. As expansion accelerated, the nation demanded cessions, negotiated and broke treaties, and enforced a language of ownership foreign to Dakota understanding, until patience gave way to the desperate calculus of survival.&lt;/p&gt;

      &lt;p&gt;&amp;emsp;America's story has depended on the systematic elimination of Native presence, physically and symbolically, a process by which the dominant settler society secures itself through the construction of the "other" who can be dispossessed.&lt;sup&gt;17 18&lt;/sup&gt; Dakota presence and removal functions as the setting of American origins—a setting that must not speak for itself but serves as the shadow on which national self-creation rests. Thoreau's dissent is canonized as a noble conscience, while the Dakota resistance is written out of national memory. A close study of Thoreau's tradition and that of the Dakota reveals sharply different relationships to land, contrasting forms of protest, and profoundly unequal consequences for resistance. Dakota resistance to unjust government in 1862 should be recognized as legitimate resistance, standing alongside Thoreau's celebrated acts of civil disobedience. The ongoing exclusion of the Dakota from the national canon reveals a contradiction at the heart of the United States: a nation born in rebellion, it upholds ideals of liberty for some by displacing and silencing others. Reckoning with Dakota resistance and its consequences is essential to confront how the American promise of liberty has depended on the denial of justice to those it casts outside its boundaries.&lt;/p&gt;

      &lt;h3&gt;Land and Meaning&lt;/h3&gt;
      &lt;p&gt;&amp;emsp;The Dakota, who gave Mni Sota Makoċe its name—"land where the waters reflect the skies"—live in a world where land, origin, kinship, and spirit are inseparable.&lt;sup&gt;14 19&lt;/sup&gt; Their understanding of land begins at Bdote, the confluence of the Minnesota and Mississippi rivers,&lt;sup&gt;20&lt;/sup&gt; from which traditions say the Dakota people first arrived, formed from the clay at Maka Ċokaya Kiŋ, the Center of the Earth.&lt;sup&gt;14 16 19&lt;/sup&gt; Here, stories and obligations to the land are passed through language, ceremony, and place-names that mark ancient belonging and ongoing return, even in exile. Identity is rooted in caring for land as a living relation; the Dakota words for "mother" and "earth" are the same, "Ina."&lt;sup&gt;19&lt;/sup&gt; This equivalence is no metaphor: identity is bound to caring for the land and to returning—even after exile.&lt;sup&gt;14 19 21&lt;/sup&gt;&lt;/p&gt;

      &lt;figure&gt;
        &lt;img src="/assets/images/Snelling.jpg" alt="1844 Watercolor of Fort Snelling above Bdote, by John Caspar Wild (Wikimedia Commons)" loading="lazy" /&gt;
        &lt;figcaption&gt;1844 Watercolor of Fort Snelling above Bdote, by John Caspar Wild (Wikimedia Commons)&lt;/figcaption&gt;
      &lt;/figure&gt;

      &lt;p&gt;&amp;emsp;For the Dakota, land is not a commodity but a relative—an ongoing being, woven into story, subsistence, and belonging. The attempt by the United States to extract "cessions," to convert place into property sold to the exclusion of all but the owner, was both a legal strategy and a form of cultural violence. Treaties deliberately reframed the relationship to land using the language of ownership and exclusive title in order to justify dispossession. This was facilitated by translation choices: missionary Stephen Riggs, responsible for the Dakota-language versions of two pivotal 1851 treaties, selected words that purposely obscured the American legal meanings of "to cede," "to sell," or "to relinquish."&lt;sup&gt;22&lt;/sup&gt; His Dakota translations substituted words meaning "to give up" or "to throw away," concepts which do not make sense when applied to Ina Maka, Mother Earth, making it impossible for Dakota signers to grasp the full legal consequences.&lt;sup&gt;19&lt;/sup&gt; Cheyfitz observes, "In traditional Native American cultures there are persons, but no 'individuals.'… there, traditionally, is no notion of property. For the idea of property depends on the possibility of an individual relation to the land."&lt;sup&gt;18&lt;/sup&gt; This fundamental difference gave the U.S. legal system the advantage, using mutually unintelligible language and intention to unravel Dakota kinship, history, and hope, and making dispossession not just a matter of force but of deliberate misunderstanding written into law.&lt;/p&gt;

      &lt;p&gt;&amp;emsp;Thoreau's relationship with land emerges out of a different heritage, though it is marked by its own deep yearning and critique of his culture's trajectory. Troubled by a society in which "men have become the tools of their tools," he sought refuge at Walden to discover a truer, more vital existence.&lt;sup&gt;1&lt;/sup&gt; His famous retreat was not an escape into wilderness but an effort to "live deliberately, to front only the essential facts of life," and to let the land teach him what civilization obscures.&lt;sup&gt;1 13&lt;/sup&gt; Walden Pond, for Thoreau, was "earth's eye, looking into which the beholder measures the depth of his own nature."&lt;sup&gt;1&lt;/sup&gt; In passages that border on reverence, Thoreau imagines the pond as a living interlocutor, a mirror for self-examination, and a source of moral renewal and insight.&lt;/p&gt;

      &lt;figure&gt;
        &lt;img src="/assets/images/Walden.jpg" loading="lazy" alt="View of Thoreau's Walden Pond, 2018, photo by Ashok Boghani (CC BY-NC 2.0)" /&gt;
        &lt;figcaption&gt;View of Thoreau's Walden Pond, 2018, photo by Ashok Boghani (CC BY-NC 2.0)&lt;/figcaption&gt;
      &lt;/figure&gt;

      &lt;p&gt;&amp;emsp;Thoreau's experiment rested on daily acts of tending and listening to the land. The chapters "The Ponds" and "Solitude" unfold as meditations on how land shapes mood, vision, even ethical possibilities. Seasons, dirt, and water do not merely support life—they instruct and humble their visitor. Buell observes that Thoreau's project becomes "one of fitful, irregular, experimental, although increasingly purposeful, self-education in reading landscape and pondering the significance of what he found there."&lt;sup&gt;11&lt;/sup&gt; The land's power is further acknowledged in Thoreau's restless awareness of loss: he mourns clear-cut forests and industrial progress as both an abstract evil and a specific diminishment of nature's capacity to nurture the spirit.&lt;/p&gt;

      &lt;p&gt;&amp;emsp;If Thoreau's retreat is a protest against his nation's increasing alienation from land, it is also a meditation on the contradictions of his role as a white settler. He pursues kinship with place through experiment and labor, but his belonging remains partial and chosen, a project one may freely attempt or put aside. Unlike the Dakota, for whom loss of land is a wound at the root of being, Thoreau's losses are encountered and mourned, but never cut to the core of the sense of self or community. His vision of harmony with land is shaped by his 19th-century New England assumptions about property and autonomy, even as he seeks to outgrow them. Sayre observes that while Thoreau "sometimes claims kinship with the land, his writing is an ongoing quest to approach land on its own terms," marked by aspiration and limitation.&lt;sup&gt;23&lt;/sup&gt;&lt;/p&gt;

      &lt;p&gt;&amp;emsp;The Dakota and Thoreauvian ways of being with land thus reveal two radically different ontologies—the first grounded in continuity, kinship, and mutual obligation; the second experimental, reflective, and forever reaching for a belonging it can envision but cannot fully enact. Their fleeting meeting at the Redwood Agency and the divergence of their fates a year later expose the cost when those with the power to define property limit who is allowed to belong and whose losses are remembered.&lt;/p&gt;

      &lt;h3&gt;Desperate State of Mind&lt;/h3&gt;
      &lt;p&gt;&amp;emsp;Dakota resistance followed years of deprivation and forced endurance. Patience was deep-rooted; for generations, Dakota moved with the seasons, caring for land and kin. Treaties and reservations ended this, confining families to reserved land, and dependence grew as a result. The 1851 treaties of Mendota&lt;sup&gt;24&lt;/sup&gt; and Traverse des Sioux tied survival to annuity payments, which were often late or siphoned by traders. Big Eagle later recounted, "The Indians bought goods of [the traders] on credit, and when the government payments came the traders were on hand with their books, which showed that the Indians owed so much and so much, and as the Indians kept no books, they could not deny their accounts, but had to pay them, and sometimes the traders got all their money."&lt;sup&gt;2&lt;/sup&gt;&lt;/p&gt;

      &lt;p&gt;&amp;emsp;By the summer of 1862, Dakota patience was stretched thin. A failed harvest, an unusually harsh winter, and the absence of meaningful government support left many weakened by hunger. Still, following the rhythms imposed by outsiders, the bands gathered in hope around the agency each year. When rumors spread that the payment, already weeks late,&lt;sup&gt;25&lt;/sup&gt; might not arrive at all, traders closed their doors while families had nothing to eat. Sarah Wakefield, a white witness at the Upper Sioux Agency, saw that "these poor creatures subsisted on a tall grass which they find in the marshes, chewing the roots, and eating the wild turnip... I know that many died from starvation or disease... It made my heart ache."&lt;sup&gt;26&lt;/sup&gt; Good Fifth Son would later recall, "a starving condition and desperate state of mind."&lt;sup&gt;2&lt;/sup&gt;&lt;/p&gt;

      &lt;p&gt;&amp;emsp;Debate turned bitter in councils and the soldiers' lodge, historically a group of hunters, increasingly called for violent action. Years of delegations, attempted adaptation, and patience had only resulted in humiliation and hunger. The 1862 crisis was the result of broken promises. "Every one of the treaty negotiations... between Dakota people and the United States government were immoral and fraught with corruption," Waziyatawin writes, "but in the end, even the ridiculous terms of the treaties were moot because the government violated every one."&lt;sup&gt;14&lt;/sup&gt;&lt;/p&gt;

      &lt;h3&gt;Forms of Resistance&lt;/h3&gt;
      &lt;p&gt;&amp;emsp;Recognition shapes protest. Thoreau's testimony and refusal were legible to the nation; the Dakota protest, however lawful, was ignored. Effective resistance required a form that the dominant society would recognize. Thoreau's 'Civil Disobedience' offered a refusal easily identified as moral. Thoreau, questioning state complicity in slavery and war, writes, "We should be men first, and subjects afterward... The only obligation which I have a right to assume is to do at any time what I think right."&lt;sup&gt;1&lt;/sup&gt; Later dissenters, including Gandhi and King, read from Thoreau's script to transform the public conscience.&lt;sup&gt;27 28 29&lt;/sup&gt; Their protests worked precisely because the nation (however reluctantly) could reflect and, sometimes, revise itself.&lt;/p&gt;

      &lt;p&gt;&amp;emsp;The Dakota protested first through diplomacy and speeches,&lt;sup&gt;2 14 19&lt;/sup&gt; but these recognizable forms could not be "heard" as legally meaningful. As Cheyfitz notes, this was by design: "There was also a great oral tradition among many tribes concerning the provisions of the treaties and their meaning. … Courts had declared that this oral tradition could not be used by Indians in cases that involved treaties, and that only the writings and minutes taken by the government secretaries and officers would qualify, since they were considered 'disinterested parties.'"&lt;sup&gt;18&lt;/sup&gt; The boundary between celebrated and suppressed protest in American memory is marked by the nation's willingness to label certain acts as "civil" and thus worthy of recognition. As Erickson and others have shown, this distinction is not neutral: "civil disobedience" is often defined to exclude actions by marginalized groups whose grievances exceed what the dominant society is prepared to see or address.&lt;sup&gt;28&lt;/sup&gt; The concept of "civility," then, functions as a gatekeeping tool, policing the forms of protest granted legitimacy and relegating all others to silence or infamy. This selective embrace of dissent reveals not universal principles, but a defensive logic protecting the status quo of property and order; it ensures that only those challenges compatible with national self-image are ultimately remembered as American.&lt;/p&gt;

      &lt;p&gt;&amp;emsp;Some among the Dakota pressed forward in an attempt to survive, adopting farming, building houses, and cultivating land. But this "sensible course," as Big Eagle called it, only earned them the resentment of other bands and the derision of government officers.&lt;sup&gt;2 19&lt;/sup&gt; "The whites were always trying to make the Indians give up their life and live like white men—go to farming, work hard and do as they did—and the Indians did not know how to do that, and did not want to anyway," Big Eagle explained.&lt;sup&gt;2&lt;/sup&gt;&lt;/p&gt;

      &lt;p&gt;&amp;emsp;Denial of recognition shaped the war's tragic ignition. In August 1862, following years of deprivation, an argument among four young Dakota men about stealing eggs escalated to the murder of five settlers in Acton, Minnesota.&lt;sup&gt;2 8 30&lt;/sup&gt; Though this act was indefensible, those responsible knew it would doom their people: state violence would make no distinction between guilty individuals and the fate of an entire people. Big Eagle recalled, "It began to be whispered about that now would be a good time to go to war with the whites and get back the lands. It was believed that the men who had enlisted [for the Civil War] had all left the state, and that before help could be sent the Indians could clean out the country, and that the Winnebagoes, and even the Chippewas, would assist the Sioux."&lt;sup&gt;2&lt;/sup&gt; Anticipating collective punishment, Dakota leaders called a council to debate whether any future remained in patience or restraint.&lt;/p&gt;

      &lt;figure&gt;
        &lt;img src="/assets/images/LittleCrow.png" alt="Portrait photo of Little Crow (Taoyateduta) in Washington, D.C. 1858 (Wikimedia Commons)" loading="lazy" /&gt;
        &lt;figcaption&gt;Portrait photo of Little Crow (Taoyateduta) in Washington, D.C. 1858 (Wikimedia Commons)&lt;/figcaption&gt;
      &lt;/figure&gt;

      &lt;p&gt;&amp;emsp;At Little Crow's house—just hours after the Acton murders—the council did not celebrate revolt. Instead, it was a reckoning with years of repression and the glimmer of hope for recovering their taken land. Elders like Traveling Hail urged caution; others pressed for violence, seeing it as inevitable. The people turned to Little Crow (Taoyateduta), not as a willing commander but as a last, reluctant leader. His response carried grief, not glory:&lt;/p&gt;

      &lt;blockquote&gt;
        Braves, you are like little children; you know not what you are doing... We are only little herds of buffalo left scattered; the great herds that once covered the prairies are no more. See!–the white men are like the locusts when they fly so thick that the whole sky is a snowstorm... Kill one–two–ten, and ten times ten will come to kill you. Count your fingers all day long and white men with guns in their hands will come faster than you can count… Yes; they fight among themselves, but if you strike at them they will all turn on you and devour you and your women and little children…&lt;sup&gt;2 32&lt;/sup&gt;
      &lt;/blockquote&gt;

      &lt;p&gt;&amp;emsp;He warned of ruin, but in the end agreed to share his people's fate:&lt;/p&gt;

      &lt;blockquote&gt;
        Taoyateduta is not a coward; he will die with you.&lt;sup&gt;2 31 32&lt;/sup&gt;
      &lt;/blockquote&gt;

      &lt;p&gt;&amp;emsp;By dawn, war became a reality. Dakota warriors attacked agency posts and settlements along the Minnesota River. The uprising that followed was swift, brutal, and impossible to contain; it was the collective response of a people who, for all their prior appeals to justice, had been driven to the end of endurance.&lt;sup&gt;33&lt;/sup&gt; The eruption of war on August 18 was not a heroic rebellion but the consequence of a willfully ignored protest.&lt;sup&gt;8 10 19&lt;/sup&gt; Where Thoreau's disobedience could eventually be read as testing the republic's conscience, the Dakota's became proof of American innocence. The dominant order legitimates only forms of resistance it is willing to see; the rest are lost to law and memory.&lt;sup&gt;27 28 34 35&lt;/sup&gt; Dakota protest at the limits of endurance revealed the costs of American justice: costs written in the lives and land of the unseen. What followed made clear that the winners drew the boundary between legitimate protest and criminality through violence, law, and forgetting.&lt;/p&gt;

      &lt;h3&gt;Contradiction at the Heart of America&lt;/h3&gt;
      &lt;p&gt;&amp;emsp;The American project of self-creation, defining and redefining national identity, has always depended on proclaiming principles of liberty while retroactively drawing boundaries around which voices and memories are permitted to matter. Thoreau's solitary act of resistance, refusing to pay taxes to a state complicit in slavery and war, became enshrined as an emblem of the nation's highest values: individual conscience, principled dissent, and the celebration of questioning authority. His place in memory was secured because his protest could be absorbed into the mythology of American freedom. It was held up as proof that the nation welcomes and ultimately honors those who resist injustice, so long as their protest fits within familiar forms.&lt;/p&gt;

      &lt;p&gt;&amp;emsp;As Kaplan observes, "American exceptionalism [is] defined as inherently anti-imperialist, in opposition to… empire-building," even as conquest of Native land—and the disavowal of that history—remains a recurring requirement of national identity.&lt;sup&gt;36&lt;/sup&gt; Those who cannot be so recuperated, like the Dakota, remain consigned to absence, their protest rendered unintelligible by the shape of American memory.&lt;/p&gt;

      &lt;p&gt;&amp;emsp;For the Dakota, resistance was not a matter of experiment or choice. It was compelled by a foreign nation's encroachment and broken promises, as years of negotiation and legal petitions were met with indifference or betrayal.&lt;sup&gt;2&lt;/sup&gt; When hope was finally exhausted, the Dakota's acts were punished as crimes by military tribunals, in stark contrast to how American dissenters like Thoreau were ultimately canonized.&lt;sup&gt;8&lt;/sup&gt;&lt;/p&gt;

      &lt;p&gt;&amp;emsp;The doctrine of "domestic dependent nations" relegated tribal sovereignty to a status always subject to Congressional authority, enabling the United States to unilaterally break treaties and redefine or dissolve Native rights whenever expedient.&lt;sup&gt;34&lt;/sup&gt; The Dakota case is emblematic of how this legal structure, repeatedly upheld by American courts, applied to all Native societies whose continued presence challenged the national project of settler self-creation.&lt;/p&gt;

      &lt;p&gt;&amp;emsp;Cheyfitz characterizes this dynamic as a "self-serving logic" built into American law and the story it tells: "This 'history of America' is, of course, generated by the same 'principles' that it 'proves': the principles of Western law, which are, precisely, those of property with its foundation in the notion of title. This history, then, is based on a totally self-reflexive, or self-serving, logic, the limits of which are the term property."&lt;sup&gt;18&lt;/sup&gt; The American ideal of liberty has always relied on the exile or silencing of those whose relationships to land defied its boundaries. Manifest Destiny demanded the erasure of people whose kinship with place troubled the nation's chosen script.&lt;/p&gt;

      &lt;h3&gt;Tragedy and Exile&lt;/h3&gt;
      &lt;p&gt;&amp;emsp;The aftermath of the Dakota resistance brought neither justice nor peace. Minnesota Governor Alexander Ramsey proclaimed to the legislature, "The Sioux Indians of Minnesota must be exterminated or driven forever beyond the borders of the State."&lt;sup&gt;19&lt;/sup&gt; Words became policy: the state organized bounties for Dakota scalps and encouraged vigilantes to hunt any survivors. Federal authorities, facing pressure from settlers clamoring for revenge, organized mass military trials that sentenced 307 Dakota men to death, offering little pretense of due process (some trials were heard in as little as five minutes) and recognizing neither the context nor the legitimacy of their resistance.&lt;sup&gt;8 32 37&lt;/sup&gt;&lt;/p&gt;

      &lt;p&gt;&amp;emsp;In Washington, President Lincoln confronted a settler populace demanding vengeance. He reviewed the court records to commute as many sentences as possible.&lt;/p&gt;

      &lt;blockquote&gt;The president ordered a stay of all executions until he personally reviewed the trial transcripts. Sick at heart by the ongoing slaughter in the South, Lincoln had no appetite for mass hangings. He agreed with Commissioner of Indian Affairs William Dole that such actions would be "a stain on the national character." The president was also troubled by a recent meeting with [missionary] Bishop Whipple, who had eloquently laid out the history of abuses that finally culminated in violence. Lincoln was so moved that he pledged, "If we get through this war, and I live, this Indian system shall be reformed!"&lt;sup&gt;10&lt;/sup&gt;&lt;/blockquote&gt;

      &lt;figure&gt;
        &lt;img src="/assets/images/Mankato.jpg" loading="lazy" alt="Print of a drawing depicting the 1862 execution of 38 Dakota men at Mankato, Minnesota, by John C. Wise (public domain)"&gt;
        &lt;figcaption&gt;Print of a drawing depicting the 1862 execution of 38 Dakota men at Mankato, Minnesota, by John C. Wise (public domain)&lt;/figcaption&gt;
      &lt;/figure&gt;

      &lt;p&gt;&amp;emsp;After the gallows, the punishment did not end. More than 1,700 Dakota men, women, and children were force-marched to a concentration camp below Fort Snelling, where hundreds died of disease and exposure through the winter. The next year, the state ordered the complete removal of the survivors, even those who had opposed the war; as they were carried out of the state, townspeople lined the route in fury, subjecting them to humiliation and assault. Congress unilaterally annulled all Dakota treaties.&lt;sup&gt;38 39&lt;/sup&gt; Courts affirmed that Congress, within America's own legal inventions, had the right to break its word if it saw fit, anchoring the Dakota expulsion in law.&lt;/p&gt;

      &lt;p&gt;&amp;emsp;American triumph was built on the disappearance of peoples and stories. The aftermath of the Dakota resistance made clear that the victors drew the boundary between legitimate protest and criminality through violence, law, and selective forgetting. Memories of liberty were secured for some at the price of another's removal, and the nation's own ideals were entwined with exclusion and exile. Though the narrative of the Dakota has often been suppressed or disregarded, their voices persist, enduring in acts of protest, remembrance, and the ongoing struggle to be heard in their own homeland. The contradiction remains: a nation that celebrates dissent and justice, but only within the limits it chooses to recognize. To remember Dakota resistance as it truly was—not as a crime, but as a final, desperate reckoning with betrayal—is to expose the unresolved cost of American self-creation, a cost that neither Thoreau's searching conscience nor Little Crow's doomed warning could ultimately redeem or erase. Only in facing the stories that have been written out, yet still refuse to fade away, can the depths of the long-favored promise and the wounds beneath it come fully into view.&lt;/p&gt;

      &lt;h3&gt;Glossary and People&lt;/h3&gt;
      &lt;dl&gt;
        &lt;dt&gt;Alexander Ramsey&lt;/dt&gt;
        &lt;dd&gt;(1815-1903) Governor of Minnesota during the Dakota War, who called for the removal or extermination of the Dakota following the conflict.&lt;/dd&gt;

        &lt;dt&gt;bde&lt;/dt&gt;
        &lt;dd&gt;lake (noun), going (verb).&lt;/dd&gt;

        &lt;dt&gt;Bdewákhaŋthuŋwaŋ or Mdewakanton&lt;/dt&gt;
        &lt;dd&gt;one of the tribes of the Isáŋyathi (Santee) Dakota (Sioux), of which Little Crow was a leader. Literally "people of the mystic lake" (Lake Mille Lacs).&lt;/dd&gt;

        &lt;dt&gt;Bdote&lt;/dt&gt;
        &lt;dd&gt;literally "confluence of rivers," also known as Maka Ċokaya Kiŋ, the center of the earth.&lt;/dd&gt;

        &lt;dt&gt;Big Eagle (Wamditanka)&lt;/dt&gt;
        &lt;dd&gt;(1827-1906) A respected Dakota leader and orator, whose account of the causes of the 1862 war is frequently cited. He participated in the conflict and later dictated his experience. He was among those pardoned by President Lincoln.&lt;/dd&gt;

        &lt;dt&gt;Ȟaȟa Wakpa&lt;/dt&gt;
        &lt;dd&gt;literally "River of the Falls", Mississippi River.&lt;/dd&gt;

        &lt;dt&gt;Henry David Thoreau&lt;/dt&gt;
        &lt;dd&gt;(1817-1862) American naturalist, essayist and philosopher. He championed individual conscience, simplicity and resistance to unjust government. Thoreau visited Minnesota in 1861 in an attempt to treat his terminal tuberculosis.&lt;/dd&gt;

        &lt;dt&gt;Isáŋyathi, Isanti, Santee&lt;/dt&gt;
        &lt;dd&gt;The Eastern Dakota, "dwells at the place of knife flint."&lt;/dd&gt;

        &lt;dt&gt;Little Crow (Taoyateduta, His Red Nation)&lt;/dt&gt;
        &lt;dd&gt;(c. 1810-1863) A prominent Bdewákhaŋthuŋwaŋ leader during the Dakota War. He became the reluctant leader of Dakota resistance following pressure from his people.&lt;/dd&gt;

        &lt;dt&gt;Lower Sioux Agency, or Redwood Agency&lt;/dt&gt;
        &lt;dd&gt;the federal administrative center for Dakota living on the lower (downriver) part of the Minnesota River.&lt;/dd&gt;

        &lt;dt&gt;Maka Ina&lt;/dt&gt;
        &lt;dd&gt;Mother Earth.&lt;/dd&gt;

        &lt;dt&gt;mni&lt;/dt&gt;
        &lt;dd&gt;water.&lt;/dd&gt;

        &lt;dt&gt;Mni Sota Makoċe&lt;/dt&gt;
        &lt;dd&gt;Minnesota, literally "land where water reflects the sky."&lt;/dd&gt;

        &lt;dt&gt;Očéti Šakówiŋ&lt;/dt&gt;
        &lt;dd&gt;Seven Council Fires, also called the Sioux.&lt;/dd&gt;

        &lt;dt&gt;Sarah Wakefield&lt;/dt&gt;
        &lt;dd&gt;(1829-1899) A white prisoner and survivor of the Dakota War, she provided a first-hand account of conditions in Dakota camps and the events surrounding the war.&lt;/dd&gt;

        &lt;dt&gt;Sioux, or Nadouessioux&lt;/dt&gt;
        &lt;dd&gt;The Sioux people, from the Ojibwe term Nadowessi meaning "little snakes."&lt;/dd&gt;

        &lt;dt&gt;Sisíthuŋwaŋ&lt;/dt&gt;
        &lt;dd&gt;one of the tribes of the Isáŋyathi (Santee) Dakota (Sioux). Literally "lake village people."&lt;/dd&gt;

        &lt;dt&gt;Stephen Riggs&lt;/dt&gt;
        &lt;dd&gt;(1812-1883) Presbyterian missionary and linguist who translated treaties for the U.S. government, often in ways that obscured critical legal meanings for Dakota signers.&lt;/dd&gt;

        &lt;dt&gt;tipi&lt;/dt&gt;
        &lt;dd&gt;lodge (noun), they live (verb).&lt;/dd&gt;

        &lt;dt&gt;Traveling Hail (Wasuihiyayedan)&lt;/dt&gt;
        &lt;dd&gt;An elder within the Dakota community, elected speaker in 1862, noted for counseling caution and restraint after the Acton murders.&lt;/dd&gt;

        &lt;dt&gt;Upper Sioux Agency, or Yellow Medicine Agency&lt;/dt&gt;
        &lt;dd&gt;the federal administrative center for Dakota living on the upper (upriver) part of the Minnesota River.&lt;/dd&gt;

        &lt;dt&gt;Wabasha&lt;/dt&gt;
        &lt;dd&gt;(c. 1816-1876) Principal chief of his Bdewákhaŋthuŋwaŋ band in 1862. Advocated for negotiation and adaptation with the U.S. government, striving for land security. After the war, he helped rebuild lives at the Santee Reservation in Nebraska.&lt;/dd&gt;

        &lt;dt&gt;Waȟpékhute&lt;/dt&gt;
        &lt;dd&gt;one of the tribes of the Isáŋyathi (Santee) Dakota (Sioux). Literally "leaf archers."&lt;/dd&gt;

        &lt;dt&gt;wakaŋ&lt;/dt&gt;
        &lt;dd&gt;holy, mysterious, sacred.&lt;/dd&gt;

        &lt;dt&gt;Wakaŋ Tipi&lt;/dt&gt;
        &lt;dd&gt;Dakota sacred site near present-day St. Paul, Minnesota.&lt;/dd&gt;

        &lt;dt&gt;wakpa&lt;/dt&gt;
        &lt;dd&gt;river, stream.&lt;/dd&gt;

        &lt;dt&gt;Wakpa Mni Sota&lt;/dt&gt;
        &lt;dd&gt;Minnesota River.&lt;/dd&gt;

        &lt;dt&gt;Wanaġi Taċaŋku&lt;/dt&gt;
        &lt;dd&gt;road of the spirits, the Milky Way.&lt;/dd&gt;

        &lt;dt&gt;William Whipple (Bishop Whipple)&lt;/dt&gt;
        &lt;dd&gt;(1822-1901) Episcopal Bishop of Minnesota and advocate for reform of the federal Indian system, he pleaded for clemency and justice for the Dakota with President Lincoln.&lt;/dd&gt;
      &lt;/dl&gt;

      &lt;h3&gt;Brief Timeline&lt;/h3&gt;
      &lt;dl&gt;
        &lt;dt&gt;1805&lt;/dt&gt;
        &lt;dd&gt;Treaty of St. Peters, also called Pike's Purchase, the Dakota cede small tracts at Bdote for the construction of Fort Snelling, and eagerly await this new trading opportunity.&lt;/dd&gt;

        &lt;dt&gt;1825&lt;/dt&gt;
        &lt;dd&gt;Treaty of Prairie du Chien, establishing tribal boundaries and "spheres of influence."&lt;/dd&gt;

        &lt;dt&gt;1837&lt;/dt&gt;
        &lt;dd&gt;[Second] Treaty of St. Peters, also called the White Pine Treaty, the Dakota cede land east of the Mississippi.&lt;/dd&gt;

        &lt;dt&gt;July 1845–September 1847&lt;/dt&gt;
        &lt;dd&gt;Thoreau lives at Walden Pond.&lt;/dd&gt;

        &lt;dt&gt;May 1849&lt;/dt&gt;
        &lt;dd&gt;Thoreau's "Resistance to Civil Government" is first published in &lt;em&gt;Aesthetic Papers&lt;/em&gt;.&lt;/dd&gt;

        &lt;dt&gt;1851&lt;/dt&gt;
        &lt;dd&gt;Treaties of Mendota and Traverse des Sioux, the Dakota cede nearly all of their land and move to a reservation system in exchange for annuity payments.&lt;/dd&gt;

        &lt;dt&gt;August 9, 1854&lt;/dt&gt;
        &lt;dd&gt;&lt;em&gt;Walden&lt;/em&gt; is published.&lt;/dd&gt;

        &lt;dt&gt;1858&lt;/dt&gt;
        &lt;dd&gt;"Land Allotment" Treaties, reducing Dakota land to a small reservation along the Minnesota River (10 miles wide, 140 miles long), opening the rest of the land to white settlement.&lt;/dd&gt;

        &lt;dt&gt;May 11–July 9, 1861&lt;/dt&gt;
        &lt;dd&gt;Thoreau visits Minnesota in an attempt to relieve his tuberculosis.&lt;/dd&gt;

        &lt;dt&gt;May 6, 1862&lt;/dt&gt;
        &lt;dd&gt;Thoreau dies from tuberculosis.&lt;/dd&gt;

        &lt;dt&gt;August 17, 1862&lt;/dt&gt;
        &lt;dd&gt;Murder of five settlers at Acton, Minnesota.&lt;/dd&gt;

        &lt;dt&gt;August 18, 1862&lt;/dt&gt;
        &lt;dd&gt;Attacks on the Upper and Lower Sioux Agencies, and Redwood Ferry.&lt;/dd&gt;

        &lt;dt&gt;August 22, 1862&lt;/dt&gt;
        &lt;dd&gt;Main attack on Fort Ridgely.&lt;/dd&gt;

        &lt;dt&gt;September 26, 1862&lt;/dt&gt;
        &lt;dd&gt;Surrender of captives at Camp Release.&lt;/dd&gt;

        &lt;dt&gt;December 26, 1862&lt;/dt&gt;
        &lt;dd&gt;38 Dakota executed at Mankato, Minnesota.&lt;/dd&gt;

        &lt;dt&gt;February 16, 1863&lt;/dt&gt;
        &lt;dd&gt;Congress passes an act that "all treaties heretofore made and entered into by the Sisseton, Wahpaton, Medawakanton, and Wahpakoota bands of Sioux or Dakota Indians, or any of them, with the United States, are hereby declared to be abrogated and annulled."&lt;/dd&gt;

        &lt;dt&gt;July 3, 1863&lt;/dt&gt;
        &lt;dd&gt;Little Crow is killed by a settler near Hutchinson while gathering raspberries.&lt;/dd&gt;
      &lt;/dl&gt;

      &lt;h3&gt;Notes and References&lt;/h3&gt;
      &lt;ol&gt;
        &lt;li&gt;Thoreau, Henry David. &lt;em&gt;Walden and Civil Disobedience&lt;/em&gt;. New York: Union Square &amp;amp; Co, 2023.&lt;/li&gt;
        &lt;li&gt;Anderson, Gary Clayton, and Alan R. Woolworth. &lt;em&gt;Through Dakota Eyes: Narrative Accounts of the Minnesota Indian War of 1862&lt;/em&gt;. St. Paul: Minnesota Historical Society Press, 1988.&lt;/li&gt;
        &lt;li&gt;The steamboat that carried Thoreau to the Redwood Agency in 1861 was named the Frank Steele after a man whose "flashing axe in the wilderness" symbolized the spirit of settler progress. Thoreau observed with wry detachment how the boat repeatedly rammed riverbanks and destroyed trees to navigate the winding waterway, offering a literal and comic description of American expansion crashing up against the land's contours.&lt;sup&gt;23&lt;/sup&gt;&lt;/li&gt;
        &lt;li&gt;The terminology used for Native peoples in North America varies considerably and is shaped by context, community preference, and scholarly convention. "Native American," "American Indian," "Indigenous," and "Native" are all in current use, and it is widely accepted best practice to name peoples in terms of their specific tribal or national identity whenever possible—for example, Dakota, Ojibwe, or Lakota.&lt;sup&gt;5 14 19&lt;/sup&gt; While the most linguistically precise representation may be "Dakȟóta" or "Dakhóta" (with diacriticals reflecting the Dakota alphabet), the plain form "Dakota" appears most consistently in both academic scholarship and within English-language writings by Dakota scholars themselves (see Waziyatawin, Westerman &amp; White). The anglicized spelling is thus retained here for coherence with established scholarly usage and accessibility to a general audience, while always privileging Dakota perspectives and language where appropriate.&lt;/li&gt;
        &lt;li&gt;National Museum of the American Indian. "Teaching &amp;amp; Learning about Native Americans," n.d. &lt;a href="https://americanindian.si.edu/nk360/faq/did-you-know"&gt;https://americanindian.si.edu/nk360/faq/did-you-know&lt;/a&gt;.&lt;/li&gt;
        &lt;li&gt;Harding, Walter. "Thoreau and Mann on the Minnesota River, June, 1861." &lt;em&gt;Minnesota History&lt;/em&gt; 37, no. 6 (1961): 225–28. &lt;a href="http://www.jstor.org/stable/20176368"&gt;http://www.jstor.org/stable/20176368&lt;/a&gt;.&lt;/li&gt;
        &lt;li&gt;Flanagan, John T. "Thoreau in Minnesota." &lt;em&gt;Minnesota History&lt;/em&gt; 16, no. 1 (1935): 35–46. &lt;a href="http://www.jstor.org/stable/20161165"&gt;http://www.jstor.org/stable/20161165&lt;/a&gt;.&lt;/li&gt;
        &lt;li&gt;Carley, Kenneth. &lt;em&gt;The Dakota War of 1862&lt;/em&gt;. 2nd ed. St. Paul, Minnesota: Minnesota Historical Society Press, 2001.&lt;/li&gt;
        &lt;li&gt;In Thoreau's Minnesota notes, he records a "Dream Dance" put on at the request of Governor Ramsey at the 1861 agency gathering, but there is no documented evidence of this as a specific Dakota ceremony at that time. The "Dream" or "Drum Dance" later became part of new religious movements responding to dispossession and trauma, but Thoreau (like many observers) likely misunderstood the ritual, applying a misnomer from missionary or ethnographer sources to a Dakota ceremony whose meaning he did not grasp.&lt;sup&gt;19 23&lt;/sup&gt;&lt;/li&gt;
        &lt;li&gt;Wingerd, Mary Lethert. &lt;em&gt;North Country: The Making of Minnesota&lt;/em&gt;. Minneapolis: University of Minnesota Press, 2010.&lt;/li&gt;
        &lt;li&gt;Buell, Lawrence. "Thoreau and the Natural Environment." In &lt;em&gt;The Cambridge Companion to Henry David Thoreau&lt;/em&gt;, 171–93. Cambridge: Cambridge University Press, 1995. doi:10.1017/CCOL0521440378.013.&lt;/li&gt;
        &lt;li&gt;Buell, Lawrence. "American Literary Emergence as a Postcolonial Phenomenon." &lt;em&gt;American Literary History&lt;/em&gt; 4, no. 3 (1992): 411–42. &lt;a href="http://www.jstor.org/stable/489858"&gt;http://www.jstor.org/stable/489858&lt;/a&gt;.&lt;/li&gt;
        &lt;li&gt;Schneider, Richard J. "Walden." In &lt;em&gt;The Cambridge Companion to Henry David Thoreau&lt;/em&gt;, 92–106. Cambridge: Cambridge University Press, 1995. doi:10.1017/CCOL0521440378.008.&lt;/li&gt;
        &lt;li&gt;Waziyatawin. "Maka Cokaya Kin (The Center of the Earth): From the Clay We Rise." Paper presented at &lt;em&gt;University of Hawaii Manoa International Symposium 'Folktales and Fairy Tales: Translation, Colonialism, and Cinema'&lt;/em&gt;, Honolulu, Sept 23-26, 2008. &lt;a href="http://scholarspace.manoa.hawaii.edu/handle/10125/16456"&gt;http://scholarspace.manoa.hawaii.edu/handle/10125/16456&lt;/a&gt;.&lt;/li&gt;
        &lt;li&gt;The Dakota word Bdote (or Mdote) means "confluence" and refers specifically to the area around the meeting of the Minnesota and Mississippi Rivers. The difference in spelling (B- vs. M-) arises from dialect variation and changing conventions for transcribing Dakota—contemporary usage increasingly prefers "Bdote."&lt;sup&gt;14 16&lt;/sup&gt;&lt;/li&gt;
        &lt;li&gt;White, Bruce. "Bdote/ Mdote Minisota: A Public EIS Continues." &lt;em&gt;MinnesotaHistory.Net&lt;/em&gt; (blog), February 26, 2009. &lt;a href="https://www.minnesotahistory.net/staging/?p=169"&gt;https://www.minnesotahistory.net/staging/?p=169&lt;/a&gt;.&lt;/li&gt;
        &lt;li&gt;Wolfe, Patrick. 2006. "Settler Colonialism and the Elimination of the Native." &lt;em&gt;Journal of Genocide Research&lt;/em&gt; 8 (4): 387–409. doi:10.1080/14623520601056240.&lt;/li&gt;
        &lt;li&gt;Cheyfitz, Eric. "Savage Law: The Plot Against American Indians in Johnson and Graham's Lessee v. M'Intosh and The Pioneers." &lt;em&gt;Cultures of United States Imperialism&lt;/em&gt; (1993): 109-128.&lt;/li&gt;
        &lt;li&gt;Westerman, Gwen, and Bruce M. White. &lt;em&gt;Mni Sota Makoċe: The Land of the Dakota&lt;/em&gt;. St. Paul: Minnesota Historical Society Press, 2012.&lt;/li&gt;
        &lt;li&gt;The Dakota name for the Mississippi is Wakpá Taŋka (Great River) or Ȟaȟa Wakpá (River of the Falls). The name Mississippi comes from the Anishinaabe (Ojibwe) name Misi-ziibi, also meaning Great River.&lt;/li&gt;
        &lt;li&gt;Gould, Roxanne, and Jim Rock. "Wakaŋ Tipi and Indian Mounds Park: Reclaiming an Indigenous Feminine Sacred Site." &lt;em&gt;AlterNative: An International Journal of Indigenous Peoples&lt;/em&gt; 12, no. 3 (September 2016): 224–35. &lt;a href="https://doi.org/10.20507/AlterNative.2016.12.3.2"&gt;https://doi.org/10.20507/AlterNative.2016.12.3.2&lt;/a&gt;.&lt;/li&gt;
        &lt;li&gt;Missionary Stephen R. Riggs, the main translator for the Dakota-language text of the treaty of Traverse des Sioux (1851), intentionally substituted neutral or ambiguous Dakota verbs in place of critical American legal concepts. For example, the English treaty states the Dakota "agree to cede, sell, and relinquish all their lands." Riggs translated "cede" as "erpeyapi" (to give up, throw away, lose), a term without the legal finality of English property law, a word he also used for describing money that would be set aside for farming equipment. Meanwhile, "sell" and "relinquish" were treated similarly, with Riggs using everyday Dakota phrasing that did not carry the sense of permanent transfer. The Dakota had no concept of exclusive title to land, and this linguistic mismatch ensured they could not grasp what they were supposedly agreeing to.&lt;sup&gt;19&lt;/sup&gt;&lt;/li&gt;
        &lt;li&gt;Sayre, Robert F. &lt;em&gt;Thoreau and the American Indians&lt;/em&gt;. Princeton, N.J: Princeton University Press, 1987.&lt;/li&gt;
        &lt;li&gt;The historical agreement commonly called the "Treaty of Mendota" is referred to by this official name because that is its title in United States government records and legal references. While "Mendota" is an anglicized rendering of the Dakota word Bdote, using the legal designation also distinguishes this specific 1851 treaty from other agreements and references to the place itself.&lt;sup&gt;16 19&lt;/sup&gt;&lt;/li&gt;
        &lt;li&gt;The 1862 annuity for the Dakota was delayed in part by a debate in Washington over whether it should be sent in gold or the new "greenbacks" (paper money), a monetary drama that further illustrates how distant policies could devastate local life. In a tragic irony, the $71,000 in gold finally arrived in Saint Paul the day before the murders at Acton, and would not reach Fort Ridgely (the military post protecting the agencies) until several days later—too late to prevent the ongoing war.&lt;sup&gt;8&lt;/sup&gt;&lt;/li&gt;
        &lt;li&gt;Wakefield, Sarah F., and June Namias. &lt;em&gt;Six Weeks in the Sioux Tepees: A Narrative of Indian Captivity&lt;/em&gt;. Norman: University of Oklahoma Press, 1997.&lt;/li&gt;
        &lt;li&gt;Reddy, Saahith. "Thoreau's Civil Disobedience from Concord, Massachusetts: Global Impact." &lt;em&gt;Frontiers in Political Science&lt;/em&gt; 6 (October 2, 2024). &lt;a href="https://doi.org/10.3389/fpos.2024.1458098"&gt;https://doi.org/10.3389/fpos.2024.1458098&lt;/a&gt;.&lt;/li&gt;
        &lt;li&gt;Erickson, Evie. "Investigating the Meaning and Application of Civil Disobedience Through Thoreau, Gandhi and Martin Luther King Jr." &lt;em&gt;The Nonviolence Project&lt;/em&gt;. University of Wisconsin-Madison, March 16, 2024. &lt;a href="https://thenonviolenceproject.wisc.edu/2024/03/16/investigating-the-meaning-and-application-of-civil-disobedience-through-thoreau-gandhi-and-martin-luther-king-jr/"&gt;https://thenonviolenceproject.wisc.edu/2024/03/16/investigating-the-meaning-and-application-of-civil-disobedience-through-thoreau-gandhi-and-martin-luther-king-jr/&lt;/a&gt;.&lt;/li&gt;
        &lt;li&gt;Hendrick, George. "The Influence of Thoreau's 'Civil Disobedience' on Gandhi's Satyagraha." &lt;em&gt;The New England Quarterly&lt;/em&gt; 29, no. 4 (1956): 462–71. &lt;a href="https://doi.org/10.2307/362139"&gt;https://doi.org/10.2307/362139&lt;/a&gt;.&lt;/li&gt;
        &lt;li&gt;While the murder of five settlers at Acton on August 17, 1862, is often cited as the direct cause of war, underlying conditions made violence an ever-present risk. Dakota survivors and many settlers alike understood this event not as the origin, but as the moment when unbearable injustices boiled over—though the leaders who chose violence did so reluctantly and with awareness of likely catastrophe.&lt;sup&gt;2 8&lt;/sup&gt;&lt;/li&gt;
        &lt;li&gt;Despite being widely depicted in frontier myth as a bloodthirsty war chief, Little Crow (Taoyateduta) was, in fact, a complex and often conflicted leader. He succeeded his father as chief only after a violent dispute left him with both wrists shattered by gunfire; for years, he advocated for adaptation and peaceful coexistence, attending church services, negotiating in Washington, and even taking up farming. His oratory was legendary, and his warning to the war council in August 1862 was later remembered word-for-word by his son, Wowinape, and corroborated by other Dakota witnesses. In an unusual twist of fate, after the war forced him into exile, Little Crow was killed while picking raspberries with his son near Hutchinson, Minnesota. His scalp and remains were displayed for decades as war trophies in Minnesota museums, before finally being returned to his descendants for proper burial in the 1970s—a potent symbol of the long afterlife of memory and erasure in the story of Dakota resistance.&lt;sup&gt;2 10&lt;/sup&gt;&lt;/li&gt;
        &lt;li&gt;Michno, Gregory, ed. &lt;em&gt;Dakota Dawn: The Decisive First Week of the Sioux Uprising, August 17-24, 1862&lt;/em&gt;. New York: Savas Beatie LLC, 2011.&lt;/li&gt;
        &lt;li&gt;Fort Ridgely was among the obvious targets, and would be attacked twice in the coming weeks. Coincidentally, Fort Ridgely was named after two U.S. Army officers who died in the Mexican-American War: Major Jefferson F. Ridgely and Lieutenant Thomas L. Ridgely. The Mexican-American War itself was among the government injustices that prompted Thoreau's "Civil Disobedience," as he refused to pay taxes to a government waging what he considered an immoral conflict. Built in 1853 on the Minnesota River, Fort Ridgely became a crucial defensive post during the U.S.-Dakota War of 1862, serving as the main refuge for white settlers and a central target during the conflict's early battles; its survival prevented a wider collapse of settler control in the region. Notably, contemporary accounts suggest that if the Dakota had attacked the fort immediately following their early victories—before reinforcements arrived—they could have overrun it and altered the course of the war.&lt;sup&gt;1 8&lt;/sup&gt;&lt;/li&gt;
        &lt;li&gt;Duthu, N. Bruce. &lt;em&gt;American Indians and the Law&lt;/em&gt;. The Penguin Library of American Indian History. London: Penguin Books, 2009.&lt;/li&gt;
        &lt;li&gt;Byrd, Jodi A. &lt;em&gt;The Transit of Empire: Indigenous Critiques of Colonialism&lt;/em&gt;. First Peoples: New Directions in Indigenous Studies. Minneapolis: University of Minnesota Press, 2011.&lt;/li&gt;
        &lt;li&gt;Kaplan, Amy. "Left Alone With America: The Absence of Empire in the Study of American Culture." In &lt;em&gt;Cultures of United States Imperialism&lt;/em&gt;. New Americanists. Durham: Duke University Press, 1993.&lt;/li&gt;
        &lt;li&gt;Many of the military trials that followed the Dakota War of 1862 were shockingly brief, sometimes lasting just minutes. One reason for this haste was that Dakota defendants, unfamiliar with American criminal proceedings, often freely admitted participation in battles, believing they were answering honestly about what they considered legitimate acts of war against soldiers. Unaware that they were being tried as if these actions were cold-blooded murder rather than recognized acts of war, they did not see the need to conceal their involvement.&lt;sup&gt;8 10&lt;/sup&gt;&lt;/li&gt;
        &lt;li&gt;Vogel, Howard. "Rethinking the Effect of the Abrogation of the Dakota Treaties and the Authority for the Removal of the Dakota People from Their Homeland." &lt;em&gt;William Mitchell Law Review&lt;/em&gt; 39, no. 2 (January 1, 2013). &lt;a href="https://open.mitchellhamline.edu/wmlr/vol39/iss2/5"&gt;https://open.mitchellhamline.edu/wmlr/vol39/iss2/5&lt;/a&gt;.&lt;/li&gt;
        &lt;li&gt;In the wake of the Dakota War, Congress passed legislation in February of 1863 unilaterally abrogating all treaties with the Dakota. "Abrogation" refers to the formal repeal or annulment of a law or agreement; in this case, Congress declared that all obligations to the Dakota under previous treaties were void and redirected remaining payments to compensate white settlers for damage caused by the war. According to U.S. legal doctrine, Congress reserves the power to unilaterally end treaties with Native nations—essentially disregarding the original nation-to-nation status and legal promises—by simple legislative act, even without the consent of the affected Native party. The Supreme Court has repeatedly upheld this authority, holding that "plenary power" over Indian affairs resides with Congress, regardless of prior agreements. The Act of February 16, 1863 not only declared the treaties abrogated but also seized Dakota lands within the State of Minnesota. The Dakota, for their part, had no legal recourse to prevent this breach; the federal government's power to break its own word was built into the legal system governing U.S.-Native relations.&lt;sup&gt;38 40&lt;/sup&gt;&lt;/li&gt;
        &lt;li&gt;U.S. Congress, &lt;em&gt;An Act for the Relief of Persons for Damages sustained by Reason of Depredations and Injuries by certain Bands of Sioux Indians&lt;/em&gt;, 37th Cong., Act, February 16, 1863.&lt;/li&gt;
        &lt;li&gt;&lt;em&gt;Dakota Online Dictionary&lt;/em&gt;, &lt;a href="https://dictionary.swodli.com/index.html"&gt;https://dictionary.swodli.com/index.html&lt;/a&gt;.&lt;/li&gt;
        &lt;li&gt;Scott County Historical Society. "Thoreau's Journey along the Minnesota River," May 22, 2019. &lt;a href="https://www.scottcountyhistory.org/blog/thoreaus-journey-along-the-minnesota-river"&gt;https://www.scottcountyhistory.org/blog/thoreaus-journey-along-the-minnesota-river&lt;/a&gt;.&lt;/li&gt;
        &lt;li&gt;Dunbar-Ortiz, Roxanne. &lt;em&gt;An Indigenous Peoples' History of the United States&lt;/em&gt;. ReVisioning American History. Boston: Beacon Press, 2014.&lt;/li&gt;
      &lt;/ol&gt;</content>
    <link href="https://jmthornton.net/blog/p/walden-and-bdote"/>
    <summary/>
    <published>2025-05-11T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/thunderbird-ddg</id>
    <title>Set DuckDuckGo as default search in Thunderbird</title>
    <updated>2017-09-23T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;
        For a dwindling set of reasons, DuckDuckGo can still be tricky to add to some services, despite being the default search engine in several browsers. A couple of years ago, Mozilla &lt;a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1120777"&gt;made the decision&lt;/a&gt; to open searches from Thunderbird in your default browser instead of a tab within Thunderbird itself, which broke the previous method of adding DuckDuckGo in the settings. Mozilla never updated its tutorials to cover adding search engines beyond the defaults, and as of this writing neither has DuckDuckGo (though I will be contributing this tutorial to them).
      &lt;/p&gt;
      &lt;aside&gt;
        &lt;em&gt;This tutorial has been verified with Thunderbird 45.2-45.7&lt;/em&gt;
      &lt;/aside&gt;
      &lt;ol&gt;
        &lt;li&gt;
          Download and install the &lt;a href="http://addons.mozilla.org/en-US/thunderbird/addon/google-search-for-thunderbi/"&gt;Google Search for Thunderbird&lt;/a&gt; add-on. You can easily do this within Thunderbird by going to Tools &amp;gt; Add-ons &amp;gt; Get Add-ons
        &lt;/li&gt;
        &lt;li&gt;
          Navigate to your profile directory. One way to get there is Help &amp;gt; Troubleshooting Information &amp;gt; Application Basics &amp;gt; Profile Directory &amp;gt; &lt;em&gt;Open Directory&lt;/em&gt;
        &lt;/li&gt;
        &lt;li&gt;
          Navigate to the directory &lt;em&gt;extensions/gsearch@standard8.plus.com/searchplugins&lt;/em&gt;
        &lt;/li&gt;
        &lt;li&gt;
          Save the following file into that directory: &lt;a href="https://duckduckgo.com/opensearch.xml"&gt;https://duckduckgo.com/opensearch.xml&lt;/a&gt;
        &lt;/li&gt;
        &lt;li&gt;
          Rename the file to anything you like, as long as you keep the .xml suffix
        &lt;/li&gt;
        &lt;li&gt;
          Leave the existing Google.xml file as it is
        &lt;/li&gt;
        &lt;li&gt;
          Restart Thunderbird
        &lt;/li&gt;
        &lt;li&gt;
          Finally, go to Edit &amp;gt; Preferences &amp;gt; General &amp;gt; Default Search Engine and choose DuckDuckGo. You're set!
        &lt;/li&gt;
      &lt;/ol&gt;
      &lt;p&gt;
        You can also put other OpenSearch XML files into that same directory to add more search engines.
      &lt;/p&gt;</content>
    <link href="https://jmthornton.net/blog/p/thunderbird-ddg"/>
    <summary>DuckDuckGo can still be tricky to add to some services. However, adding it to Thunderbird is just a few short steps.</summary>
    <published>2017-02-27T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/worm-of-1988</id>
    <title>The Internet worm of 1988: A Tour of the Worm</title>
    <updated>2017-10-31T09:00:00-06:00</updated>
    <content type="html">&lt;center&gt;&lt;h3&gt;A Tour of the Worm&lt;/h3&gt;&lt;br&gt;&lt;br&gt;

        &lt;em&gt;Donn Seeley&lt;/em&gt;&lt;br&gt;
        Department of Computer Science&lt;br&gt;
        University of Utah&lt;br&gt;&lt;br&gt;&lt;br&gt;&lt;br&gt;

        &lt;em&gt;ABSTRACT&lt;/em&gt;&lt;br&gt;&lt;br&gt;
      &lt;/center&gt;
      &lt;em&gt;
        On the evening of November 2, 1988, a self-replicating program was released
        upon the Internet (&lt;a name="rf1" href="#f1"&gt;1&lt;/a&gt;). This program (a worm) invaded VAX and Sun-3 computers
        running versions of Berkeley UNIX, and used their resources to attack still
        more computers (&lt;a name="rf2" href="#f2"&gt;2&lt;/a&gt;). Within the space of hours this program had spread across
        the U.S., infecting hundreds or thousands of computers and making many of
        them unusable due to the burden of its activity. This paper provides a
        chronology for the outbreak and presents a detailed description of the
        internals of the worm, based on a C version produced by decompiling.
      &lt;/em&gt;&lt;br&gt;
      &lt;br&gt;&lt;hr&gt;&lt;br&gt;
      &lt;strong&gt;Table of contents:&lt;/strong&gt;
      &lt;dl&gt;
        &lt;dt&gt;1. &lt;a href="#p1"&gt;Introduction&lt;/a&gt;&lt;/dt&gt;
        &lt;dt&gt;2. &lt;a href="#p2"&gt;Chronology&lt;/a&gt;&lt;/dt&gt;
        &lt;dt&gt;3. &lt;a href="#p3"&gt;Overview&lt;/a&gt;&lt;/dt&gt;
        &lt;dt&gt;4. &lt;a href="#p4"&gt;Internals&lt;/a&gt;&lt;/dt&gt;
        &lt;dl&gt;
          &lt;dt&gt;4.1. &lt;a href="#p4.1"&gt;The thread of control&lt;/a&gt;&lt;/dt&gt;
          &lt;dt&gt;4.2. &lt;a href="#p4.2"&gt;Data structures&lt;/a&gt;&lt;/dt&gt;
          &lt;dt&gt;4.3. &lt;a href="#p4.3"&gt;Population growth&lt;/a&gt;&lt;/dt&gt;
          &lt;dt&gt;4.4. &lt;a href="#p4.4"&gt;Locating new hosts to infect&lt;/a&gt;&lt;/dt&gt;
          &lt;dt&gt;4.5. &lt;a href="#p4.5"&gt;Security holes&lt;/a&gt;&lt;/dt&gt;
          &lt;dl&gt;
            &lt;dt&gt;4.5.1. &lt;a href="#p4.5.1"&gt;&lt;i&gt;Rsh&lt;/i&gt; and &lt;i&gt;rexec&lt;/i&gt;&lt;/a&gt;&lt;/dt&gt;
            &lt;dt&gt;4.5.2. &lt;a href="#p4.5.2"&gt;&lt;i&gt;Finger&lt;/i&gt;&lt;/a&gt;&lt;/dt&gt;
            &lt;dt&gt;4.5.3. &lt;a href="#p4.5.3"&gt;&lt;i&gt;Sendmail&lt;/i&gt;&lt;/a&gt;&lt;/dt&gt;
          &lt;/dl&gt;
          &lt;dt&gt;4.6. &lt;a href="#p4.6"&gt;Infection&lt;/a&gt;&lt;/dt&gt;
          &lt;dt&gt;4.7. &lt;a href="#p4.7"&gt;Password cracking&lt;/a&gt;&lt;/dt&gt;
          &lt;dl&gt;
            &lt;dt&gt;4.7.1. &lt;a href="#p4.7.1"&gt;Guessing passwords&lt;/a&gt;&lt;/dt&gt;
            &lt;dt&gt;4.7.2. &lt;a href="#p4.7.2"&gt;Faster password encryption&lt;/a&gt;&lt;/dt&gt;
          &lt;/dl&gt;
        &lt;/dl&gt;
        &lt;dt&gt;5. &lt;a href="#p5"&gt;Opinions&lt;/a&gt;&lt;/dt&gt;
        &lt;dt&gt;6. &lt;a href="#p6"&gt;Conclusion&lt;/a&gt;&lt;/dt&gt;
        &lt;dt&gt;&lt;a href="#ack"&gt;Acknowledgments&lt;/a&gt;&lt;/dt&gt;
      &lt;/dl&gt;
      &lt;br&gt;&lt;hr&gt;&lt;br&gt;

      &lt;a name="p1"&gt;&lt;/a&gt;
      &lt;h2&gt;1. Introduction&lt;/h2&gt;

      &lt;blockquote cite="Grampp and Morris, 'UNIX Operating System Security'"&gt;
        There is a fine line between helping administrators protect their systems
        and providing a cookbook for bad guys.
      &lt;/blockquote&gt;

      &lt;p&gt;November 3, 1988 is already coming to be known as Black Thursday. System
        administrators around the country came to work on that day and discovered
        that their networks of computers were laboring under a huge load. If they
        were able to log in and generate a system status listing, they saw what
        appeared to be dozens or hundreds of "shell" (command interpreter) processes.
        If they tried to kill the processes, they found that new processes appeared
        faster than they could kill them. Rebooting the computer seemed to have no
        effect: within minutes after starting up again, the machine was overloaded
        by these mysterious processes.&lt;/p&gt;

      &lt;p&gt;These systems had been invaded by a &lt;i&gt;worm&lt;/i&gt;. A worm is a program that propagates
        itself across a network, using resources on one machine to attack other
        machines. (A worm is not quite the same as a &lt;i&gt;virus&lt;/i&gt;, which is a program
        fragment that inserts itself into other programs.) The worm had taken
        advantage of lapses in security on systems that were running 4.2 or 4.3 BSD
        UNIX or derivatives like SunOS. These lapses allowed it to connect to
        machines across a network, bypass their login authentication, copy itself
        and then proceed to attack still more machines. The massive system load was
        generated by multitudes of worms trying to propagate the epidemic.&lt;/p&gt;

      &lt;p&gt;The Internet had never been attacked in this way before, although there had
        been plenty of speculation that an attack was in store. Most system
        administrators were unfamiliar with the concept of worms (as opposed to
        viruses, which are a major affliction of the PC world) and it took some time
        before they were able to establish what was going on and how to deal with it.
        This paper is intended to let people know exactly what happened and how it
        came about, so that they will be better prepared when it happens the next
        time. The behavior of the worm will be examined in detail, both to show
        exactly what it did and didn't do, and to show the dangers of future worms.
        The epigraph above is now ironic, for the author of the worm used information
        in that paper to attack systems. Since the information is now well known, by
        virtue of the fact that thousands of computers now have copies of the worm,
        it seems unlikely that this paper can do similar damage, but it is definitely
        a troubling thought. Opinions on this and other matters will be offered
        below.&lt;/p&gt;

      &lt;a name="p2"&gt;&lt;/a&gt;
      &lt;h2&gt;2. Chronology&lt;/h2&gt;

      &lt;blockquote cite="Dennis Miller, on NBC's Saturday Night Live"&gt;
        Remember, when you connect with another computer, you're connecting to every
        computer that computer has connected to.
      &lt;/blockquote&gt;

      &lt;blockquote cite="Andy Sudduth on behalf of the worm's author"&gt;
        Here is the gist of a message I got: I'm sorry.
      &lt;/blockquote&gt;

      &lt;p&gt;Many details of the chronology of the attack are not yet available. The
        following list represents dates and times that we are currently aware of.
        Times have all been rendered in Pacific Standard Time for convenience.&lt;/p&gt;

      &lt;dl&gt;
        &lt;dt&gt;11/2:1800 (approx.)&lt;/dt&gt;
        &lt;dd&gt;This date and time were seen on worm files found on &lt;i&gt;prep.ai.mit.edu&lt;/i&gt;,
          a VAX 11/750 at the MIT Artificial Intelligence Laboratory. The files
          were removed later, and the precise time was lost. System logging on
          &lt;i&gt;prep&lt;/i&gt; had been broken for two weeks. The system doesn't run accounting
          and the disks aren't backed up to tape: a perfect target. A number of
          "tourist" users (individuals using public accounts) were reported to be
          active that evening. These users would have appeared in the session logging,
          but see below.&lt;/dd&gt;&lt;br&gt;

          &lt;dt&gt;11/2:1824&lt;/dt&gt;
          &lt;dd&gt;First known West Coast infection: &lt;i&gt;rand.org&lt;/i&gt; at Rand Corp. in Santa
            Monica.&lt;/dd&gt;&lt;br&gt;

          &lt;dt&gt;11/2:1904&lt;/dt&gt;
          &lt;dd&gt;&lt;i&gt;csgw.berkeley.edu&lt;/i&gt; is infected. This machine is a major network
            gateway at UC Berkeley. Mike Karels and Phil Lapsley discover the infection
            shortly afterward.&lt;/dd&gt;&lt;br&gt;

          &lt;dt&gt;11/2:1954&lt;/dt&gt;
          &lt;dd&gt;&lt;i&gt;mimsy.umd.edu&lt;/i&gt; is attacked through its &lt;i&gt;finger&lt;/i&gt; server. This machine is at the
            University of Maryland College Park Computer Science Department.&lt;/dd&gt;&lt;br&gt;

          &lt;dt&gt;11/2: 2000 (approx.)&lt;/dt&gt;
          &lt;dd&gt;Suns at the MIT AI Lab are attacked.&lt;/dd&gt;&lt;br&gt;

          &lt;dt&gt;11/2: 2028&lt;/dt&gt;
          &lt;dd&gt;First &lt;i&gt;sendmail&lt;/i&gt; attack on &lt;i&gt;mimsy&lt;/i&gt;.&lt;/dd&gt;&lt;br&gt;

          &lt;dt&gt;11/2: 2040&lt;/dt&gt;
          &lt;dd&gt;Berkeley staff figure out the &lt;i&gt;sendmail&lt;/i&gt; and &lt;i&gt;rsh&lt;/i&gt; attacks,
            notice &lt;i&gt;telnet&lt;/i&gt; and &lt;i&gt;finger&lt;/i&gt; peculiarities, and start shutting
            these services off.&lt;/dd&gt;&lt;br&gt;

          &lt;dt&gt;11/2: 2049&lt;/dt&gt;
          &lt;dd&gt;&lt;i&gt;cs.utah.edu&lt;/i&gt; is infected. This VAX 8600 is the central Computer
            Science Department machine at the University of Utah. The next several entries
            follow documented events at Utah and are representative of other infections
            around the country.&lt;/dd&gt;&lt;br&gt;

          &lt;dt&gt;11/2: 2109&lt;/dt&gt;
          &lt;dd&gt;First &lt;i&gt;sendmail&lt;/i&gt; attack at &lt;i&gt;cs.utah.edu&lt;/i&gt;.&lt;/dd&gt;&lt;br&gt;

          &lt;dt&gt;11/2: 2121&lt;/dt&gt;
          &lt;dd&gt;The load average on &lt;i&gt;cs.utah.edu&lt;/i&gt; reaches 5. The "load average" is a
            system-generated value that represents the average number of jobs in the run
            queue over the last minute; a load of 5 on a VAX 8600 noticeably degrades
            response times, while a load over 20 is a drastic degradation. At 9 PM, the
            load is typically between 0.5 and 2.&lt;/dd&gt;&lt;br&gt;

          &lt;dt&gt;11/2: 2141&lt;/dt&gt;
          &lt;dd&gt;The load average on &lt;i&gt;cs.utah.edu&lt;/i&gt; reaches 7.&lt;/dd&gt;&lt;br&gt;

          &lt;dt&gt;11/2: 2201&lt;/dt&gt;
          &lt;dd&gt;The load average on &lt;i&gt;cs.utah.edu&lt;/i&gt; reaches 16.&lt;/dd&gt;&lt;br&gt;

          &lt;dt&gt;11/2: 2206&lt;/dt&gt;
          &lt;dd&gt;The maximum number of distinct runnable processes (100) is reached on
            &lt;i&gt;cs.utah.edu&lt;/i&gt;; the system is unusable.&lt;/dd&gt;&lt;br&gt;

            &lt;dt&gt;11/2: 2220&lt;/dt&gt;
            &lt;dd&gt;Jeff Forys at Utah kills off worms on &lt;i&gt;cs.utah.edu&lt;/i&gt;. Utah Sun clusters are
              infected.&lt;/dd&gt;&lt;br&gt;

            &lt;dt&gt;11/2: 2241&lt;/dt&gt;
            &lt;dd&gt;Re-infestation causes the load average to reach 27 on
              &lt;i&gt;cs.utah.edu&lt;/i&gt;.&lt;/dd&gt;&lt;br&gt;

              &lt;dt&gt;11/2: 2249&lt;/dt&gt;
              &lt;dd&gt;Forys shuts down &lt;i&gt;cs.utah.edu&lt;/i&gt;.&lt;/dd&gt;&lt;br&gt;

              &lt;dt&gt;11/2: 2321&lt;/dt&gt;
              &lt;dd&gt;Re-infestation causes the load average to reach 37 on &lt;i&gt;cs.utah.edu&lt;/i&gt;, despite
                continuous efforts by Forys to kill worms.&lt;/dd&gt;&lt;br&gt;

              &lt;dt&gt;11/2: 2328&lt;/dt&gt;
              &lt;dd&gt;Peter Yee at NASA Ames Research Center posts a warning to the
                TCP-IP mailing list: "We are currently under attack from an Internet VIRUS.
                It has hit UC Berkeley, UC San Diego, Lawrence Livermore, Stanford, and NASA
                Ames." He suggests turning off &lt;i&gt;telnet&lt;/i&gt;, &lt;i&gt;ftp&lt;/i&gt;, &lt;i&gt;finger&lt;/i&gt;, &lt;i&gt;rsh&lt;/i&gt; and SMTP services.
                He does not mention &lt;i&gt;rexec&lt;/i&gt;. Yee is actually at Berkeley working with Keith
                Bostic, Mike Karels and Phil Lapsley.&lt;/dd&gt;&lt;br&gt;

              &lt;dt&gt;11/3: 0034&lt;/dt&gt;
              &lt;dd&gt;At another's prompting, Andy Sudduth of Harvard anonymously posts a warning
                to the TCP-IP list: "There may be a virus loose on the internet." This is
                the first message that (briefly) describes how the &lt;i&gt;finger&lt;/i&gt; attack works,
                describes how to defeat the SMTP attack by rebuilding &lt;i&gt;sendmail&lt;/i&gt;, and
                explicitly mentions the &lt;i&gt;rexec&lt;/i&gt; attack. Unfortunately Sudduth's message is
                blocked at relay.cs.net while that gateway is shut down to combat the worm,
                and it does not get delivered for almost two days. Sudduth acknowledges
                authorship of the message in a subsequent message to TCP-IP on Nov. 5.&lt;/dd&gt;&lt;br&gt;

              &lt;dt&gt;11/3: 0254&lt;/dt&gt;
              &lt;dd&gt;Keith Bostic sends a fix for &lt;i&gt;sendmail&lt;/i&gt; to the newsgroup
                comp.bugs.4bsd.ucb-fixes and to the TCP-IP mailing list. These fixes (and
                later ones) are also mailed directly to important system administrators
                around the country.&lt;/dd&gt;&lt;br&gt;

              &lt;dt&gt;11/3: early morning&lt;/dt&gt;
              &lt;dd&gt;The wtmp session log is mysteriously removed on prep.ai.mit.edu.&lt;/dd&gt;&lt;br&gt;

              &lt;dt&gt;11/3: 0507&lt;/dt&gt;
              &lt;dd&gt;Edward Wang at Berkeley figures out and reports the &lt;i&gt;finger&lt;/i&gt; attack, but his
                message doesn't come to Mike Karels' attention for 12 hours.&lt;/dd&gt;&lt;br&gt;

              &lt;dt&gt;11/3: 0900&lt;/dt&gt;
              &lt;dd&gt;The annual Berkeley Unix Workshop commences at UC Berkeley. 40 or so
                important system administrators and backers are in town to attend, while
                disaster erupts at home. Several people who had planned to fly in on
                Thursday morning are trapped by the crisis. Keith Bostic spends much of
                the day on the phone at the Computer Systems Research Group offices answering
                calls from panicked system administrators from around the country.&lt;/dd&gt;&lt;br&gt;

              &lt;dt&gt;11/3: 1500 (approx.)&lt;/dt&gt;
              &lt;dd&gt;The team at MIT Athena calls Berkeley with an example of how the &lt;i&gt;finger&lt;/i&gt;
                server bug works.&lt;/dd&gt;&lt;br&gt;

              &lt;dt&gt;11/3:1626&lt;/dt&gt;
              &lt;dd&gt;Dave Pare arrives at Berkeley CSRG offices;
                disassembly and decompiling start shortly afterward using Pare's special
                tools.&lt;/dd&gt;&lt;br&gt;

              &lt;dt&gt;11/3:1800 (approx.)&lt;/dt&gt;
              &lt;dd&gt;The Berkeley group sends out for calzones. People arrive and leave;
                the offices are crowded, there's plenty of excitement. Parallel work is in
                progress at MIT Athena; the two groups swap code.&lt;/dd&gt;&lt;br&gt;

              &lt;dt&gt;11/3:1918&lt;/dt&gt;
              &lt;dd&gt;Keith Bostic posts a fix for the &lt;i&gt;finger&lt;/i&gt; server.&lt;/dd&gt;&lt;br&gt;

              &lt;dt&gt;11/4: 0600&lt;/dt&gt;
              &lt;dd&gt;Members of the Berkeley team, with the worm almost completely disassembled
                and largely decompiled, finally take off for a couple hours' sleep before
                returning to the workshop.&lt;/dd&gt;&lt;br&gt;

              &lt;dt&gt;11/4: 1236&lt;/dt&gt;
              &lt;dd&gt;Theodore Ts'o of Project Athena at MIT publicly announces that MIT and
                Berkeley have completely disassembled the worm.&lt;/dd&gt;&lt;br&gt;

              &lt;dt&gt;11/4:1700 (approx.)&lt;/dt&gt;
              &lt;dd&gt;A short presentation on the worm is made at the end of the Berkeley UNIX
                Workshop.&lt;/dd&gt;&lt;br&gt;

              &lt;dt&gt;11/8:&lt;/dt&gt;
              &lt;dd&gt;National Computer Security Center meeting to discuss the worm. There are
                about 50 attendees.&lt;/dd&gt;&lt;br&gt;

              &lt;dt&gt;11/11: 0038&lt;/dt&gt;
              &lt;dd&gt;Fully decompiled and commented worm source is installed at Berkeley.&lt;/dd&gt;

      &lt;/dl&gt;

      &lt;a name="p3"&gt;&lt;/a&gt;
      &lt;h2&gt;3. Overview&lt;/h2&gt;

      &lt;p&gt;What exactly did the worm do that led it to cause an epidemic? The worm
        consists of a 99-line bootstrap program written in the C language, plus a
        large relocatable object file that comes in VAX and Sun-3 flavors. Internal
        evidence showed that the object file was generated from C sources, so it was
        natural to decompile the binary machine language into C; we now have over
        3200 lines of commented C code which recompiles and is mostly complete. We
        shall start the tour of the worm with a quick overview of the basic goals of
        the worm, followed by discussion in depth of the worm's various behaviors as
        revealed by decompilation.&lt;/p&gt;

      &lt;p&gt;The activities of the worm break down into the categories of attack and
        defense. Attack consists of locating hosts (and accounts) to penetrate, then
        exploiting security holes on remote systems to pass across a copy of the worm
        and run it. The worm obtains host addresses by examining the system tables
        &lt;i&gt;/etc/hosts.equiv&lt;/i&gt; and &lt;i&gt;/.rhosts&lt;/i&gt;, user files like &lt;i&gt;.forward&lt;/i&gt; and &lt;i&gt;.rhosts&lt;/i&gt;, dynamic
        routing information produced by the netstat program, and finally randomly
        generated host addresses on local networks. It ranks these by order of
        preference, trying a file like &lt;i&gt;/etc/hosts.equiv&lt;/i&gt; first because it contains
        names of local machines that are likely to permit unauthenticated connections.
        Penetration of a remote system can be accomplished in any of three ways. The
        worm can take advantage of a bug in the &lt;i&gt;finger&lt;/i&gt; server that allows it to
        download code in place of a &lt;i&gt;finger&lt;/i&gt; request and trick the server into
        executing it. The worm can use a "trap door" in the &lt;i&gt;sendmail&lt;/i&gt; SMTP mail
        service, exercising a bug in the debugging code that allows it to execute a
        command interpreter and download code across a mail connection. If the worm
        can penetrate a local account by guessing its password, it can use the &lt;i&gt;rexec&lt;/i&gt;
        and &lt;i&gt;rsh&lt;/i&gt; remote command interpreter services to attack hosts that share that
        account. In each case the worm arranges to get a remote command interpreter
        which it can use to copy over, compile and execute the 99-line bootstrap.
        The bootstrap sets up its own network connection with the local worm and
        copies over the other files it needs, and using these pieces a remote worm
        is built and the infection procedure starts over again. Defense tactics fall
        into three categories: preventing the detection of intrusion, inhibiting the
        analysis of the program, and authenticating other worms. The worm's simplest
        means of hiding itself is to change its name. When it starts up, it clears
        its argument list and sets its zeroth argument to &lt;i&gt;sh&lt;/i&gt;, allowing it to
        masquerade as an innocuous command interpreter. It uses &lt;i&gt;fork()&lt;/i&gt; to change its
        process I.D., never staying too long at one I.D. These two tactics are
        intended to disguise the worm's presence on system status listings. The worm
        tries to leave as little trash lying around as it can, so at start-up it
        reads all its support files into memory and deletes the tell-tale filesystem
        copies. It turns off the generation of &lt;i&gt;core&lt;/i&gt; files, so if the worm makes a
        mistake, it doesn't leave evidence behind in the form of &lt;i&gt;core&lt;/i&gt; dumps.
        The latter tactic is also designed to block analysis of the program: it prevents
        an administrator from sending a software signal to the worm to force it to
        dump a &lt;i&gt;core&lt;/i&gt; file. There are other ways to get a &lt;i&gt;core&lt;/i&gt; file, however, so the
        worm carefully alters character data in memory to prevent it from being
        extracted easily. Copies of disk files are encoded by repeatedly
        exclusive-or'ing a ten-byte code sequence; static strings are encoded
        byte-by-byte by exclusive-or'ing with the hexadecimal value 81, except for
        a private word list which is encoded with hexadecimal 80 instead. If the
        worm's files are somehow captured before the worm can delete them, the
        object files have been loaded in such a way as to remove most non-essential
        symbol table entries, making it harder to guess at the purposes of worm
        routines from their names. The worm also makes a trivial effort to stop
        other programs from taking advantage of its communications; in theory a
        well-prepared site could prevent infection by sending messages to ports
        that the worm was listening on, so the worm is careful to test connections
        using a short exchange of random "magic numbers".&lt;/p&gt;
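      &lt;p&gt;The self-inverse property that makes repeated-XOR masking convenient is easy to demonstrate. The following is a minimal sketch in the spirit of the scheme described above, not the worm's own code; the routine name and the key bytes are invented:&lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-c"&gt;/* Sketch of repeated-XOR masking (hypothetical; not the worm's code).
 * A ten-byte key is XORed over the buffer; applying the same
 * operation twice restores the original bytes, so one routine both
 * encodes and decodes. */
#include &amp;lt;assert.h&amp;gt;
#include &amp;lt;string.h&amp;gt;

static void xormask(char *buf, int len, const char *key, int keylen)
{
    for (int i = 0; i &amp;lt; len; i++)
        buf[i] ^= key[i % keylen];
}

int main(void)
{
    /* an invented ten-byte key, matching the code-sequence length above */
    static const char key[10] =
        { 0x13, 0x5a, 0x21, 0x7e, 0x0c, 0x44, 0x39, 0x62, 0x55, 0x08 };
    char text[] = "/etc/hosts.equiv";     /* 16 bytes of plaintext */

    xormask(text, 16, key, 10);                     /* encode */
    assert(strcmp(text, "/etc/hosts.equiv") != 0);  /* unreadable in a dump */
    xormask(text, 16, key, 10);                     /* decode: XOR is self-inverse */
    assert(strcmp(text, "/etc/hosts.equiv") == 0);  /* original restored */
    return 0;
}&lt;/code&gt;&lt;/pre&gt;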

      &lt;p&gt;When studying a tricky program like this, it's just as important to establish
        what the program does not do as what it does do. The worm does not delete a
        system's files: it only removes files that it created in the process of
        bootstrapping. The program does not attempt to incapacitate a system by
        deleting important files, or indeed any files. It does not remove log files
        or otherwise interfere with normal operation other than by consuming system
        resources. The worm does not modify existing files: it is not a virus. The
        worm propagates by copying itself and compiling itself on each system; it
        does not modify other programs to do its work for it. Due to its method of
        infection, it can't count on sufficient privileges to be able to modify
        programs. The worm does not install trojan horses: its method of attack is
        strictly active, it never waits for a user to trip over a trap. Part of the
        reason for this is that the worm can't afford to waste time waiting for
        trojan horses; it must reproduce before it is discovered. Finally, the worm
        does not record or transmit decrypted passwords: except for its own static
        list of favorite passwords, the worm does not propagate cracked passwords
        on to new worms nor does it transmit them back to some home base. This is
        not to say that the accounts that the worm penetrated are secure merely
        because the worm did not tell anyone what their passwords were, of course: if
        the worm can guess an account's password, certainly others can too. The worm
        does not try to capture superuser privileges: while it does try to break
        into accounts, it doesn't depend on having particular privileges to
        propagate, and never makes special use of such privileges if it somehow
        gets them. The worm does not propagate over uucp or X.25 or DECNET or BITNET:
        it specifically requires TCP/IP. The worm does not infect System V systems
        unless they have been modified to use Berkeley network programs like
        &lt;i&gt;sendmail&lt;/i&gt;, &lt;i&gt;fingerd&lt;/i&gt; and &lt;i&gt;rexec&lt;/i&gt;.&lt;/p&gt;

      &lt;a name="p4"&gt;&lt;/a&gt;
      &lt;h2&gt;4. Internals&lt;/h2&gt;

      &lt;p&gt;Now for some details: we shall follow the main thread of control in the worm,
        then examine some of the worm's data structures before working through each
        phase of activity.&lt;/p&gt;

      &lt;a name="p4.1"&gt;&lt;/a&gt;
      &lt;h3&gt;4.1. The thread of control&lt;/h3&gt;

      &lt;p&gt;When the worm starts executing in &lt;i&gt;main()&lt;/i&gt;, it takes care of some
        initializations, some defense and some cleanup. The very first thing it does
        is to change its name to &lt;i&gt;sh&lt;/i&gt;. This shrinks the window during which the worm
        is visible in a system status listing as a process with an odd name like
        &lt;i&gt;x9834753&lt;/i&gt;. It then initializes the random number generator, seeding it
        with the current time, turns off &lt;i&gt;core&lt;/i&gt; dumps, and arranges to die when remote
        connections fail. With this out of the way, the worm processes its argument
        list. It first looks for an option &lt;b&gt;-p&lt;/b&gt; &lt;i&gt;$$&lt;/i&gt;, where &lt;i&gt;$$&lt;/i&gt; represents the process
        I.D. of its parent process; this option indicates to the worm that it must
        take care to clean up after itself. It proceeds to read in each of the files
        it was given as arguments; if cleaning up, it removes each file after it
        reads it. If the worm wasn't given the bootstrap source file &lt;i&gt;l1.c&lt;/i&gt; as an
        argument, it exits silently; this is perhaps intended to slow down people
        who are experimenting with the worm. If cleaning up, the worm then closes
        its file descriptors, temporarily cutting itself off from its remote parent
        worm, and removes some files. (One of these files, &lt;i&gt;/tmp/.dumb&lt;/i&gt;, is never
        created by the worm and the unlinking seems to be left over from an earlier
        stage of development.) The worm then zeroes out its argument list, again to
        foil the system status program &lt;i&gt;ps&lt;/i&gt;. The next step is to initialize the worm's
        list of network interfaces; these interfaces are used to find local networks
        and to check for alternate addresses of the current host. Finally, if
        cleaning up the worm resets its process group and kills the process that
        helped to bootstrap it. The worm's last act in &lt;i&gt;main()&lt;/i&gt; is to call a function
        we named &lt;i&gt;doit()&lt;/i&gt;, which contains the main loop of the worm.&lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-c"&gt;doit()
{
    /* seed the random number generator with the time */
    /* attack hosts: gateways, local nets, remote nets */
    checkother();
    send_message();
    for (;;)
    {
        cracksome();
        other_sleep(30);
        cracksome();
        /* change our process ID */
        /* attack hosts: gateways, known hosts, remote nets, local nets */
        other_sleep(120);
        if (/* 12 hours have passed */)
            /* reset hosts table */
        if (pleasequit &amp;amp;&amp;amp; nextw &amp;gt; 10)
            exit(0);
    }
}&lt;/code&gt;&lt;/pre&gt;
      &lt;center&gt;&lt;br&gt;
        "C" pseudo-code for the &lt;i&gt;doit()&lt;/i&gt; function
      &lt;/center&gt;&lt;br&gt;

      &lt;p&gt;&lt;i&gt;doit()&lt;/i&gt; runs a short prologue before actually starting the main loop. It (redundantly)
        seeds the random number generator with the current time, saving the time so that
        it can tell how long it has been running. The worm then attempts its first
        infection. It initially attacks gateways that it found with the &lt;i&gt;netstat&lt;/i&gt;
        network status program; if it can't infect one of these hosts, then it
        checks random host numbers on local networks, then it tries random host
        numbers on networks that are on the far side of gateways, in each case
        stopping if it succeeds. (Note that this sequence of attacks differs from
        the sequence the worm uses after it has entered the main loop.)&lt;/p&gt;

      &lt;p&gt;After this initial attempt at infection, the worm calls the routine
        &lt;i&gt;checkother()&lt;/i&gt; to check for another worm already on the local machine. In
        this check the worm acts as a client to an existing worm which acts as a
        server; they may exchange "population control" messages, after which one
        of the two worms will eventually shut down.&lt;/p&gt;

      &lt;p&gt;One odd routine is called just before entering the main loop. We named
        this routine &lt;i&gt;send_message()&lt;/i&gt;, but it really doesn't send anything at all.
        It looks like it was intended to cause 1 in 15 copies of the worm to send
        a 1-byte datagram to a port on the host &lt;i&gt;ernie.berkeley.edu&lt;/i&gt;, which is located
        in the Computer Science Department at UC Berkeley. It has been suggested
        that this was a feint, designed to draw attention to &lt;i&gt;ernie&lt;/i&gt; and away from the
        author's real host. Since the routine has a bug (it sets up a TCP socket but tries
        to send a UDP packet), nothing gets sent at all. It's possible that this
        was a deeper feint, designed to be uncovered only by decompilers; if so,
        this wouldn't be the only deliberate impediment that the author put in our
        way. In any case, administrators at Berkeley never detected any process
        listening at port 11357 on &lt;i&gt;ernie&lt;/i&gt;, and we found no code in the worm that
        listens at that port, regardless of the host.&lt;/p&gt;
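      &lt;p&gt;The mismatch is easy to reproduce. The sketch below is a hypothetical reconstruction of the bug, not the worm's code (the loopback address stands in for &lt;i&gt;ernie&lt;/i&gt;): a stream (TCP) socket is created, but a datagram-style &lt;i&gt;sendto()&lt;/i&gt; is then attempted without ever connecting, so the send fails and the 1-byte message never leaves the machine.&lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-c"&gt;/* Hypothetical reconstruction of the send_message() bug described
 * above: TCP socket, UDP-style send, nothing transmitted. */
#include &amp;lt;assert.h&amp;gt;
#include &amp;lt;arpa/inet.h&amp;gt;
#include &amp;lt;netinet/in.h&amp;gt;
#include &amp;lt;signal.h&amp;gt;
#include &amp;lt;sys/socket.h&amp;gt;
#include &amp;lt;sys/types.h&amp;gt;
#include &amp;lt;unistd.h&amp;gt;

int main(void)
{
    struct sockaddr_in to = { 0 };
    char byte = 0;

    signal(SIGPIPE, SIG_IGN);                    /* report the failure, don't die */
    to.sin_family = AF_INET;
    to.sin_port = htons(11357);                  /* the port named above */
    to.sin_addr.s_addr = htonl(INADDR_LOOPBACK); /* stand-in for ernie */

    int s = socket(AF_INET, SOCK_STREAM, 0);     /* a TCP socket... */
    assert(s &amp;gt;= 0);
    /* ...but a datagram-style send on it: the stream socket was never
     * connected, so the call fails and no byte is transmitted. */
    ssize_t n = sendto(s, &amp;amp;byte, 1, 0, (struct sockaddr *)&amp;amp;to, sizeof to);
    assert(n == -1);
    close(s);
    return 0;
}&lt;/code&gt;&lt;/pre&gt;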

      &lt;p&gt;The main loop begins with a call to a function named &lt;i&gt;cracksome()&lt;/i&gt; for some
        password cracking. Password cracking is an activity that the worm is
        constantly working at in an incremental fashion. It takes a break for 30
        seconds to look for intruding copies of the worm on the local host, and
        then goes back to cracking. After this session, it forks (creates a new
        process running with a copy of the same image) and the old process exits;
        this serves to turn over process I.D. numbers and makes it harder to track
        the worm with the system status program ps. At this point the worm goes back
        to its infectious stage, trying (in order of preference) gateways, hosts
        listed in system tables like &lt;i&gt;/etc/hosts.equiv&lt;/i&gt;, random host numbers on the
        far side of gateways and random hosts on local networks. As before, if it
        succeeds in infecting a new host, it marks that host in a list and leaves
        the infection phase for the time being. After infection, the worm spends
        two minutes looking for new local copies of the worm again; this is done
        here because a newly infected remote host may try to reinfect the local host.
        If 12 hours have passed and the worm is still alive, it assumes that it has
        had bad luck due to networks or hosts being down, and it reinitializes its
        table of hosts so that it can start over from scratch. At the end of the
        main loop the worm checks to see if it is scheduled to die as a result of
        its population control features, and if it is, and if it has done a
        sufficient amount of work cracking passwords, it exits.&lt;/p&gt;
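
      &lt;p&gt;The exit test at the bottom of the main loop can be stated precisely. A minimal sketch in C, reconstructed from this description and from the password-cracking details in section 4.7.1: only &lt;i&gt;pleasequit&lt;/i&gt; and &lt;i&gt;nextw&lt;/i&gt; are the worm's own names; the function is ours.&lt;/p&gt;

```c
/* Reconstruction of the worm's main-loop exit test: it exits only if
 * it lost the population control game (pleasequit is set) AND it has
 * done enough cracking work (more than 10 dictionary passwords tried,
 * counted in nextw).  Function name is ours. */
static int should_exit(int pleasequit, int nextw)
{
    return pleasequit && nextw > 10;
}
```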

      &lt;a name="p4.2"&gt;&lt;/a&gt;
      &lt;h3&gt;4.2. Data structures&lt;/h3&gt;

      &lt;p&gt;The worm maintains at least four interesting data structures, and each is
        associated with a set of support routines.&lt;/p&gt;

      &lt;p&gt;The &lt;i&gt;object&lt;/i&gt; structure is used to hold copies of files. Files are encrypted
        using the function &lt;i&gt;xorbuf()&lt;/i&gt; while in memory, so that dumps of the worm won't
        reveal anything interesting. The files are copied to disk on a remote system before
        starting a new worm, and new worms read the files into memory and delete the disk
        copies as part of their start-up duties. Each structure contains a name, a length
        and a pointer to a buffer. The function &lt;i&gt;getobjectbyname()&lt;/i&gt; retrieves a
        pointer to a named object structure; for some reason, it is only used to call
        up the bootstrap source file.&lt;/p&gt;
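
      &lt;p&gt;The encryption idea can be sketched directly: a plain XOR is its own inverse, so one routine both encrypts and decrypts. The key argument below is our invention for illustration; the worm's actual key handling may differ.&lt;/p&gt;

```c
#include <stddef.h>

/* Sketch of xorbuf()-style in-memory encryption: XOR every byte of
 * the buffer with a repeating key.  XOR is self-inverse, so calling
 * this a second time restores the original bytes.  The key parameter
 * is ours; the worm's real key schedule is not shown. */
static void xorbuf(char *buf, size_t len, const char *key, size_t keylen)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key[i % keylen];
}
```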

      &lt;p&gt;The &lt;i&gt;interface&lt;/i&gt; structure contains information about the current host's
        network interfaces. This is mainly used to check for local attached networks.
        It contains a name, a network address, a subnet mask and some flags. The interface
        table is initialized once at start-up time.&lt;/p&gt;

      &lt;p&gt;The host structure is used to keep track of the status and addresses of hosts.
        Hosts are added to this list dynamically, as the worm encounters new sources of
        host names and addresses. The list can be searched for a particular address or
        name, with an option to insert a new entry if no matching entry is found. Flag
        bits are used to indicate whether the host is a gateway, whether it was found in
        a system table like &lt;i&gt;/etc/hosts.equiv&lt;/i&gt;, whether the worm has found it
        impossible to attack the host for some reason, and whether the host has already
        been successfully infected. The bits for "can't infect" and "infected" are cleared
        every 12 hours, and low priority hosts are deleted to be accumulated again later.
        The structure contains up to 12 names (aliases) and up to 6 distinct network addresses
        for each host.&lt;/p&gt;
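
      &lt;p&gt;A rough C sketch of this structure, under the limits just described (12 aliases, 6 addresses, four flag bits); all identifiers are ours, not the worm's.&lt;/p&gt;

```c
/* Sketch of the host structure described in the text.  All names
 * here are our own reconstruction. */
#define HF_GATEWAY    0x01  /* host is a gateway */
#define HF_FROMTABLE  0x02  /* found in a table like /etc/hosts.equiv */
#define HF_CANTINFECT 0x04  /* worm found it impossible to attack */
#define HF_INFECTED   0x08  /* already successfully infected */

struct host {
    char          *names[12];   /* aliases */
    unsigned long  addrs[6];    /* distinct network addresses */
    int            flags;
};

/* Every 12 hours the "can't infect" and "infected" bits are cleared
 * so the worm can start over from scratch. */
static int clear_12hr(int flags)
{
    return flags & ~(HF_CANTINFECT | HF_INFECTED);
}
```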

      &lt;p&gt;In our sources, what we've called the &lt;i&gt;muck&lt;/i&gt; structure is used to keep track of
        accounts for the purpose of password cracking. (It was awarded the name &lt;i&gt;muck&lt;/i&gt;
        for sentimental reasons, although &lt;i&gt;pw&lt;/i&gt; or &lt;i&gt;acct&lt;/i&gt; might be more mnemonic.)
        Each structure contains an account name, an encrypted password, a decrypted
        password (if available), plus the home directory and personal information fields
        from the password file.&lt;/p&gt;

      &lt;a name="p4.3"&gt;&lt;/a&gt;
      &lt;h3&gt;4.3. Population growth&lt;/h3&gt;

      &lt;p&gt;The worm contains a mechanism that seems to be designed to limit the number of
        copies of the worm running on a given system, but beyond that our current
        understanding of the design goals is itself limited. It clearly does not prevent
        a system from being overloaded, although it does appear to pace the infection so
        that early copies can go undetected. It has been suggested that a simulation of
        the worm's population control features might reveal more about its design, and we
        are interested in writing such a simulation.&lt;/p&gt;

      &lt;p&gt;The worm uses a client-and-server technique to control the number of copies
        executing on the current machine. A routine &lt;i&gt;checkother()&lt;/i&gt; is run at
        start-up time. This function tries to connect to a server listening at TCP
        port 23357. The connection attempt returns immediately if no server is present,
        but blocks if one is available and busy; a server worm periodically runs its
        server code during time-consuming operations so that the queue of connections
        does not grow large. After the client exchanges magic numbers with the server as
        a trivial form of authentication, the client and the server roll dice to see who
        gets to survive. If the exclusive-or of the respective low bits of the client's
        and the server's random numbers is 1, the server wins, otherwise the client wins.
        The loser sets a flag &lt;i&gt;pleasequit&lt;/i&gt; that eventually allows it to exit at the
        bottom of the main loop. If at any time a problem occurs - a read from the server
        fails, or the wrong magic number is returned - the client worm returns from the
        function, becoming a worm that never acts as a server and hence does not engage
        in population control. Perhaps as a precaution against a cataleptic server, a
        test at the top of the function causes 1 in 7 worms to skip population control.
        Thus the worm finishes the population game in &lt;i&gt;checkother()&lt;/i&gt; in one of
        three states: scheduled to die after some time, with &lt;i&gt;pleasequit&lt;/i&gt; set;
        running as a server, with the possibility of losing the game later; and immortal,
        safe from the gamble of population control.&lt;/p&gt;
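
      &lt;p&gt;The dice roll itself is tiny. A sketch of the decision as described above (the function name is ours):&lt;/p&gt;

```c
/* Sketch of the population-control coin flip: client and server each
 * contribute a random number, and the exclusive-or of their low bits
 * decides the winner.  A result of 1 means the server survives; 0
 * means the client does.  The loser sets pleasequit. */
static int server_wins(unsigned client_rand, unsigned server_rand)
{
    return ((client_rand ^ server_rand) & 1) == 1;
}
```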

      &lt;p&gt;A complementary routine &lt;i&gt;other_sleep()&lt;/i&gt; executes the server function. It is
        passed a time in seconds, and it uses the Berkeley &lt;i&gt;select()&lt;/i&gt; system call to
        wait for that amount of time accepting connections from clients. On entry to the
        function, it tests to see whether it has a communications port with which to
        accept connections; if not, it simply sleeps for the specified amount of time
        and returns. Otherwise it loops on &lt;i&gt;select()&lt;/i&gt;, decrementing its time
        remaining after serving a client until no more time is left and the function
        returns. When the server acquires a client, it performs the inverse of the
        client's protocol, eventually deciding whether to proceed or to quit.
        &lt;i&gt;other_sleep()&lt;/i&gt; is called from many different places in the code, so that
        clients are not kept waiting too long.&lt;/p&gt;

      &lt;p&gt;Given the worm's elaborate scheme for controlling re-infection, what led it to
        reproduce so quickly on an individual machine that it could swamp it? One
        culprit is the 1 in 7 test in &lt;i&gt;checkother()&lt;/i&gt;: worms that skip the client
        phase become immortal, and thus don't risk being eliminated by a roll of the
        dice. Another source of system loading is the problem that when a worm decides
        it has lost, it can still do a lot of work before it actually exits. The client
        routine isn't even run until the newly born worm has attempted to infect at
        least one remote host, and even if a worm loses the roll, it continues executing
        to the bottom of the main loop, and even then it won't exit unless it has gone
        through the main loop several times, limited by its progress in cracking
        passwords. Finally, new worms lose all of the history of infection that
        their parents had, so the children of a worm are constantly trying to re-infect
        the parent's host, as well as the other children's hosts. Put all of these
        factors together and it comes as no surprise that within an hour or two after
        infection, a machine may be entirely devoted to executing worms.&lt;/p&gt;

      &lt;a name="p4.4"&gt;&lt;/a&gt;
      &lt;h3&gt;4.4. Locating new hosts to infect&lt;/h3&gt;

      &lt;p&gt;One of the characteristics of the worm is that all of its attacks are active,
        never passive. A consequence of this is that the worm can't wait for a user to
        take it over to another machine like gum on a shoe - it must search out hosts
        on its own.&lt;/p&gt;

      &lt;p&gt;The worm has a very distinct list of priorities when hunting for hosts. Its
        favorite hosts are gateways; the &lt;i&gt;hg()&lt;/i&gt; routine tries to infect each of the hosts
        it believes to be gateways. Only when all of the gateways are known to be
        infected or infection-proof does the worm go on to other hosts. &lt;i&gt;hg()&lt;/i&gt;
        calls the &lt;i&gt;rt_init()&lt;/i&gt; function to get a list of gateways; this list is
        derived by running the &lt;i&gt;netstat&lt;/i&gt; network status program and parsing its
        output. The worm is careful to skip the loopback device and any local
        interfaces (in the event that the current host is a gateway); when it finishes,
        it randomizes the order of the list and adds the first 20 gateways to the host
        table to speed up the initial searches. It then tries each gateway in sequence
        until it finds a host that can be infected, or it runs out of hosts.&lt;/p&gt;
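
      &lt;p&gt;The randomize-and-take-20 step can be sketched as follows, assuming an ordinary Fisher-Yates shuffle; the worm's actual shuffling code may differ, and all identifiers here are ours.&lt;/p&gt;

```c
#include <stdlib.h>

/* Sketch of hg()'s gateway list handling: randomize the order of the
 * gateways obtained from rt_init(), then add only the first 20 to
 * the host table.  We use a standard Fisher-Yates shuffle here; the
 * worm's own randomization may differ. */
#define NGATE_KEEP 20

static void shuffle(unsigned long *gw, int n)
{
    for (int i = n - 1; i > 0; i--) {
        int j = rand() % (i + 1);
        unsigned long t = gw[i];
        gw[i] = gw[j];
        gw[j] = t;
    }
}

static int keep_count(int ngateways)
{
    return ngateways < NGATE_KEEP ? ngateways : NGATE_KEEP;
}
```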

      &lt;p&gt;After taking care of gateways, the worm's next priority is hosts whose names
        were found in a scan of system files. At the start of password cracking,
        the files &lt;i&gt;/etc/hosts.equiv&lt;/i&gt; (which contains names of hosts to which the
        local host grants user permissions without authentication) and &lt;i&gt;/.rhosts&lt;/i&gt;
        (which contains names of hosts from which the local host permits remote
        privileged logins) are examined, as are all users' &lt;i&gt;.forward&lt;/i&gt; files
        (which list hosts to which mail is forwarded from the current host). These hosts
        are flagged so that they can be scanned earlier than the rest. The &lt;i&gt;hi()&lt;/i&gt; function
        is then responsible for attacking these hosts.&lt;/p&gt;

      &lt;p&gt;When the most profitable hosts have been used up, the worm starts looking for
        hosts that aren't recorded in files. The routine &lt;i&gt;hl()&lt;/i&gt; checks local
        networks: it runs through the local host's addresses, masking off the host
        part and substituting a random value. &lt;i&gt;ha()&lt;/i&gt; does the same job for remote hosts,
        checking alternate addresses of gateways. Special code handles the ARPAnet
        practice of putting the IMP number in the low host bits and the actual IMP
        port (representing the host) in the high host bits. The function that runs
        these random probes, which we named &lt;i&gt;hack_netof()&lt;/i&gt;, seems to have a bug that
        prevents it from attacking hosts on local networks; this may be due to our
        own misunderstanding, of course, but in any case the check of hosts from
        system files should be sufficient to cover all or nearly all of the local
        hosts anyway.&lt;/p&gt;
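
      &lt;p&gt;The address construction these routines share is simple to sketch: keep the network part of a known address and substitute a random value in the host part. The function name is ours, and the ARPAnet IMP special case is omitted.&lt;/p&gt;

```c
/* Sketch of the random-probe addresses used by hl() and ha(): mask
 * off the host part of a known address and substitute a random value.
 * The ARPAnet IMP-number handling described in the text is not shown. */
static unsigned long random_host(unsigned long addr,
                                 unsigned long netmask,
                                 unsigned long randval)
{
    return (addr & netmask) | (randval & ~netmask);
}
```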

      &lt;p&gt;Password cracking is another generator of host names, but since this is
        handled separately from the usual host attack scheme presented here, it
        will be discussed below with the other material on passwords.&lt;/p&gt;

      &lt;a name="p4.5"&gt;&lt;/a&gt;
      &lt;h3&gt;4.5. Security holes&lt;/h3&gt;

      &lt;blockquote cite="Dennis Ritchie, 'On the Security of Unix'"&gt;
        The first fact to face is that Unix was not developed with security, in any
        realistic sense, in mind...
      &lt;/blockquote&gt;

      &lt;p&gt;This section discusses the TCP services used by the worm to penetrate systems.
        It's a touch unfair to use the quote above when the implementation of the
        services we're about to discuss was distributed by Berkeley rather than Bell
        Labs, but the sentiment is appropriate. For a long time the balance between
        security and convenience on Unix systems has been tilted in favor of convenience.
        As Brian Reid has said about the break-in at Stanford two years ago:
        "Programmer convenience is the antithesis of security, because it is going to
        become intruder convenience if the programmer's account is ever compromised."
        The lesson from that experience seems to have been forgotten by most people, but
        not by the author of the worm.&lt;/p&gt;

      &lt;a name="p4.5.1"&gt;&lt;/a&gt;
      &lt;h4&gt;4.5.1. &lt;i&gt;Rsh&lt;/i&gt; and &lt;i&gt;rexec&lt;/i&gt;&lt;/h4&gt;

      &lt;blockquote cite="Robert T. Morris, 'A Weakness in the 4.2BSD Unix TCP/IP Software'"&gt;
        These notes describe how the design of TCP/IP and the 4.2BSD implementation
        allow users on untrusted and possibly very distant hosts to masquerade as
        users on trusted hosts.
      &lt;/blockquote&gt;

      &lt;p&gt;&lt;i&gt;Rsh&lt;/i&gt; and &lt;i&gt;rexec&lt;/i&gt; are network services which offer remote command
        interpreters. &lt;i&gt;Rexec&lt;/i&gt; uses password authentication; &lt;i&gt;rsh&lt;/i&gt; relies on
        a "privileged" originating port and permissions files. Two vulnerabilities are
        exploited by the worm - the likelihood that a remote machine that has an account
        for a local user will have the same password as the local account, allowing
        penetration through &lt;i&gt;rexec&lt;/i&gt;, and the likelihood that such a remote account
        will include the local host in its &lt;i&gt;rsh&lt;/i&gt; permissions files. Both of these
        vulnerabilities are really problems with laxness or convenience for users and
        system administrators rather than actual bugs, but they represent avenues for
        infection just like inadvertent security bugs.&lt;/p&gt;

      &lt;p&gt;The first use of &lt;i&gt;rsh&lt;/i&gt; by the worm is fairly simple: it looks for a remote
        account with the same name as the one that is (unsuspectingly) running the worm
        on the local machine. This test is part of the standard menu of hacks conducted
        for each host; if it fails, the worm falls back upon &lt;i&gt;finger&lt;/i&gt;, then
        &lt;i&gt;sendmail&lt;/i&gt;. Many sites, including Utah, were already protected from this
        trivial attack by not providing remote shells for pseudo-users like &lt;i&gt;daemon&lt;/i&gt; or
        &lt;i&gt;nobody&lt;/i&gt;.&lt;/p&gt;

      &lt;p&gt;A more sophisticated use of these services is found in the password cracking
        routines. After a password is successfully guessed, the worm immediately tries
        to penetrate remote hosts associated with the broken account. It reads the
        user's &lt;i&gt;.forward&lt;/i&gt; file (which contains an address to which mail is
        forwarded) and &lt;i&gt;.rhosts&lt;/i&gt; file (which contains a list of hosts and
        optionally user names on those hosts which are granted permission to access
        the local machine with &lt;i&gt;rsh&lt;/i&gt; bypassing the usual password authentication),
        trying these hostnames until it succeeds. Each target host is attacked in two
        ways. The worm first contacts the remote host's &lt;i&gt;rexec&lt;/i&gt; server and sends
        it the account name found in the &lt;i&gt;.forward&lt;/i&gt; or &lt;i&gt;.rhosts&lt;/i&gt; files followed
        by the guessed password. If this fails, the worm connects to the local
        &lt;i&gt;rexec&lt;/i&gt; server with the local account name and uses that to contact the
        target's &lt;i&gt;rsh&lt;/i&gt; server. The remote &lt;i&gt;rsh&lt;/i&gt; server will permit the
        connection provided the name of the local host appears in either the
        &lt;i&gt;/etc/hosts.equiv&lt;/i&gt; file or the user's private &lt;i&gt;.rhosts&lt;/i&gt; file.&lt;/p&gt;

      &lt;p&gt;Strengthening these network services is far more problematic than fixing
        &lt;i&gt;finger&lt;/i&gt; and &lt;i&gt;sendmail&lt;/i&gt;, unfortunately. Users don't like the
        inconvenience of typing their password when logging in on a trusted local host,
        and they don't want to remember different passwords for each of the many hosts
        they may have to deal with. Some of the solutions may be worse than the
        disease - for example, a user who is forced to deal with many passwords is more
        likely to write them down somewhere.&lt;/p&gt;

      &lt;a name="p4.5.2"&gt;&lt;/a&gt;
      &lt;h4&gt;4.5.2. &lt;i&gt;Finger&lt;/i&gt;&lt;/h4&gt;

      &lt;blockquote cite="Bill Cheswick
                        at AT&amp;amp;T Bell Labs Research, private communication, 11/9/88"&gt;
        &lt;i&gt;gets&lt;/i&gt; was removed from our [C library] a couple days ago.
      &lt;/blockquote&gt;

      &lt;p&gt;Probably the neatest hack in the worm is its co-opting of the TCP &lt;i&gt;finger&lt;/i&gt;
        service to gain entry to a system. &lt;i&gt;Finger&lt;/i&gt; reports information about a user
        on a host, usually including things like the user's full name, where their
        office is, the number of their phone extension and so on. The Berkeley
        (&lt;a name="rf3" href="#f3"&gt;3&lt;/a&gt;) version of the &lt;i&gt;finger&lt;/i&gt; server is a really
        trivial program: it reads a request from the originating host, then runs the
        local &lt;i&gt;finger&lt;/i&gt; program with the request as an argument and ships the output
        back. Unfortunately the &lt;i&gt;finger&lt;/i&gt; server reads the remote request with
        &lt;i&gt;gets()&lt;/i&gt;, a standard C library routine that dates from the dawn of time
        and which does not check for overflow of the server's 512 byte request buffer
        on the stack. The worm supplies the &lt;i&gt;finger&lt;/i&gt; server with a request that is
        536 bytes long; the bulk of the request is some VAX machine code that asks the
        system to execute the command interpreter &lt;i&gt;sh&lt;/i&gt; and the extra 24 bytes
        represent just enough data to write over the server's stack frame for the main
        routine. When the main routine of the server exits, the calling function's
        program counter is supposed to be restored from the stack, but the worm wrote
        over this program counter with one that points to the VAX code in the request
        buffer. The program jumps to the worm's code and runs the command interpreter,
        which the worm uses to enter its bootstrap.&lt;/p&gt;
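
      &lt;p&gt;The root cause is that &lt;i&gt;gets()&lt;/i&gt; takes no length argument. A minimal sketch of a bounded replacement for the server's read, assuming the 512-byte request buffer described above (the function name is ours):&lt;/p&gt;

```c
#include <stdio.h>
#include <string.h>

/* Sketch of a bounded replacement for the finger server's gets():
 * fgets() stops after REQBUF-1 bytes, so a 536-byte request is
 * simply truncated instead of overwriting the saved program counter
 * on the stack.  Function name and details are ours. */
#define REQBUF 512

static size_t read_request(char line[REQBUF], FILE *in)
{
    if (fgets(line, REQBUF, in) == NULL)  /* at most REQBUF-1 bytes */
        return 0;
    line[strcspn(line, "\r\n")] = '\0';   /* strip the line terminator */
    return strlen(line);
}
```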

      &lt;p&gt;Not surprisingly, shortly after the worm was reported to use this feature of
        &lt;i&gt;gets()&lt;/i&gt;, a number of people replaced all instances of &lt;i&gt;gets()&lt;/i&gt; in
        system code with sensible code that checks the length of the buffer. Some even
        went so far as to remove &lt;i&gt;gets()&lt;/i&gt; from the library, although the function
        is apparently mandated by the forthcoming ANSI C standard
        (&lt;a name="rf4" href="#f4"&gt;4&lt;/a&gt;). So far no one has claimed to have exercised
        the &lt;i&gt;finger&lt;/i&gt; server bug before the worm incident, but in May 1988, students
        at UC Santa Cruz apparently penetrated security using a different &lt;i&gt;finger&lt;/i&gt;
        server with a similar bug. The system administrator at UCSC noticed that the
        Berkeley &lt;i&gt;finger&lt;/i&gt; server had a similar bug and sent mail to Berkeley, but
        the seriousness of the problem was not appreciated at the time (Jim Haynes,
        private communication).&lt;/p&gt;

      &lt;p&gt;One final note: the worm is meticulous in some areas but not in others. From
        what we can tell, there was no Sun-3 version of the &lt;i&gt;finger&lt;/i&gt; intrusion
        even though the Sun-3 server was just as vulnerable as the VAX one. Perhaps the
        author had VAX sources available but not Sun sources?&lt;/p&gt;

      &lt;a name="p4.5.3"&gt;&lt;/a&gt;
      &lt;h4&gt;4.5.3. &lt;i&gt;Sendmail&lt;/i&gt;&lt;/h4&gt;

      &lt;blockquote cite="Eric Allman, personal communication, 11/22/88"&gt;
        [T]he trap door resulted from two distinct 'features' that, although innocent
        by themselves, were deadly when combined (kind of like binary nerve gas).
      &lt;/blockquote&gt;

      &lt;p&gt;The &lt;i&gt;sendmail&lt;/i&gt; attack is perhaps the least preferred in the worm's arsenal,
        but in spite of that one site at Utah was subjected to nearly 150 &lt;i&gt;sendmail&lt;/i&gt;
        attacks on Black Thursday. Sendmail is the program that provides the SMTP mail
        service on TCP networks for Berkeley UNIX systems. It uses a simple
        character-oriented protocol to accept mail from remote sites. One feature of
        &lt;i&gt;sendmail&lt;/i&gt; is that it permits mail to be delivered to processes instead of
        mailbox files; this can be used with (say) the vacation program to notify senders
        that you are out of town and are temporarily unable to respond to their mail.
        Normally this feature is only available to recipients. Unfortunately a little
        loophole was accidentally created when a couple of earlier security bugs were
        being fixed - if &lt;i&gt;sendmail&lt;/i&gt; is compiled with the &lt;i&gt;DEBUG&lt;/i&gt; flag, and the
        sender at runtime asks that &lt;i&gt;sendmail&lt;/i&gt; enter &lt;i&gt;debug&lt;/i&gt; mode by sending the
        &lt;i&gt;debug&lt;/i&gt; command, it permits senders to pass in a command sequence instead of
        a user name for a recipient. Alas, most versions of &lt;i&gt;sendmail&lt;/i&gt; are compiled
        with &lt;i&gt;DEBUG&lt;/i&gt;, including the one that Sun sends out in its binary
        distribution. The worm mimics a remote SMTP connection, feeding in &lt;i&gt;/dev/null&lt;/i&gt;
        as the name of the sender and a carefully crafted string as the recipient. The
        string sets up a command that deletes the header of the message and passes the
        body to a command interpreter. The body contains a copy of the worm bootstrap
        source plus commands to compile and run it. After the worm finishes the protocol
        and closes the connection to &lt;i&gt;sendmail&lt;/i&gt;, the bootstrap will be built on the
        remote host and the local worm waits for its connection so that it can complete
        the process of building a new worm.&lt;/p&gt;

      &lt;p&gt;Of course this is not the first time that an inadvertent loophole or "trap door"
        like this has been found in &lt;i&gt;sendmail&lt;/i&gt;, and it may not be the last. In his
        Turing Award lecture, Ken Thompson said: "You can't trust code that you did not
        totally create yourself. (Especially code from companies that employ people
        like me.)" In fact, as Eric Allman says, "[Y]ou can't even trust code that you
        did totally create yourself." The basic problem of trusting system programs is
        not one that is easy to solve.&lt;/p&gt;

      &lt;a name="p4.6"&gt;&lt;/a&gt;
      &lt;h3&gt;4.6. Infection&lt;/h3&gt;

      &lt;p&gt;The worm uses two favorite routines when it decides that it wants to infect a
        host. One routine that we named &lt;i&gt;infect()&lt;/i&gt; is used from host scanning
        routines like &lt;i&gt;hg()&lt;/i&gt;. &lt;i&gt;infect()&lt;/i&gt; first checks that it isn't infecting
        the local machine, an already infected machine or a machine previously attacked
        but not successfully infected; the "infected" and
        "immune" states are marked by flags on a host structure when attacks succeed or
        fail, respectively. The worm then makes sure that it can get an address for the
        target host, marking the host immune if it can't. Then comes a series of attacks:
        first by &lt;i&gt;rsh&lt;/i&gt; from the account that the worm is running under, then
        through &lt;i&gt;finger&lt;/i&gt;, then through &lt;i&gt;sendmail&lt;/i&gt;. If &lt;i&gt;infect()&lt;/i&gt; fails,
        it marks the host as immune.&lt;/p&gt;

      &lt;p&gt;The other infection routine is named &lt;i&gt;hul()&lt;/i&gt; and it is run from the
        password cracking code after a password has been guessed. &lt;i&gt;hul()&lt;/i&gt;, like
        &lt;i&gt;infect()&lt;/i&gt;, makes sure that it's not re-infecting a host, then it checks
        for an address. If a potential remote user name is available from a
        &lt;i&gt;.forward&lt;/i&gt; or &lt;i&gt;.rhosts&lt;/i&gt; file, the worm checks it to make sure it is
        reasonable - it must contain no punctuation or control characters. If a remote
        user name is unavailable the worm uses the local user name. Once the worm has a
        user name and a password, it contacts the &lt;i&gt;rexec&lt;/i&gt; server on the target host
        and tries to authenticate itself. If it can, it proceeds to the bootstrap phase;
        otherwise, it tries a slightly different approach - it connects to the local
        &lt;i&gt;rexec&lt;/i&gt; server with the local user name and password, then uses this
        command interpreter to fire off a command interpreter on the target machine
        with &lt;i&gt;rsh&lt;/i&gt;. This will succeed if the remote host says it trusts the local
        host in its &lt;i&gt;/etc/hosts.equiv&lt;/i&gt; file, or the remote account says it trusts
        the local account in its &lt;i&gt;.rhosts&lt;/i&gt; file. &lt;i&gt;hul()&lt;/i&gt; ignores
        &lt;i&gt;infect()&lt;/i&gt;'s "immune" flag and does not set this flag itself, since
        &lt;i&gt;hul()&lt;/i&gt; may find success on a per-account basis that &lt;i&gt;infect()&lt;/i&gt;
        can't achieve on a per-host basis.&lt;/p&gt;

      &lt;p&gt;Both &lt;i&gt;infect()&lt;/i&gt; and &lt;i&gt;hul()&lt;/i&gt; use a routine we call &lt;i&gt;sendworm()&lt;/i&gt; to
        do their dirty work
        (&lt;a name="rf5" href="#f5"&gt;5&lt;/a&gt;). &lt;i&gt;sendworm()&lt;/i&gt; looks for the &lt;i&gt;ll.c&lt;/i&gt;
        bootstrap source file in its objects list, then it uses the &lt;i&gt;makemagic()&lt;/i&gt;
        routine to get a communication stream endpoint (a socket), a random network port
        number to rendezvous at, and a magic number for authentication. (There is an
        interesting side effect to &lt;i&gt;makemagic()&lt;/i&gt; - it looks for a usable address
        for the target host by trying to connect to its TCP &lt;i&gt;telnet&lt;/i&gt; port; this
        produces a characteristic log message from the &lt;i&gt;telnet&lt;/i&gt; server.) If
        &lt;i&gt;makemagic()&lt;/i&gt; was successful, the worm begins to send commands to the
        remote command interpreter that was started up by the immediately preceding
        attack. It changes its directory to an unprotected place (&lt;i&gt;/usr/tmp&lt;/i&gt;), then
        it sends across the bootstrap source, using the UNIX stream editor &lt;i&gt;sed&lt;/i&gt; to
        parse the input stream. The bootstrap source is compiled and run on the remote
        system, and the worm runs a routine named &lt;i&gt;waithit()&lt;/i&gt; to wait for the remote
        bootstrap to call back on the selected port.&lt;/p&gt;

      &lt;p&gt;The bootstrap is quite simple. It is supplied the address of the originating
        host, a TCP port number and a magic number as arguments. When it starts, it
        unlinks itself so that it can't be detected in the filesystem, then it calls
        &lt;i&gt;fork()&lt;/i&gt; to create a new process with the same image. The old process
        exits, permitting the originating worm to continue with its business. The
        bootstrap reads its arguments then zeroes them out to hide them from the
        system status program; then it is ready to connect over the network to the
        parent worm. When the connection is made, the bootstrap sends over the magic
        number, which the parent will check against its own copy. If the parent accepts
        the number (which is carefully rendered to be independent of host byte order),
        it will send over a series of filenames and files which the bootstrap writes to
        disk. If trouble occurs, the bootstrap removes all these files and exits.
        Eventually the transaction completes, and the bootstrap calls up a command
        interpreter to finish the job.&lt;/p&gt;
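
      &lt;p&gt;Two of these hiding details are easy to sketch: zeroing the argument strings so that the system status program shows nothing useful, and putting the magic number into a byte-order-independent form for the parent's check. Both function names and the test data are ours.&lt;/p&gt;

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Sketch of two bootstrap details described in the text: overwrite
 * the argument strings (host address, port, magic number) in place,
 * and send the magic number in network byte order so the comparison
 * works regardless of host byte order.  Names are ours. */
static void zero_args(int argc, char **argv)
{
    for (int i = 1; i < argc; i++)     /* leave the program name */
        memset(argv[i], 0, strlen(argv[i]));
}

static uint32_t wire_magic(uint32_t magic)
{
    return htonl(magic);               /* big-endian on the wire */
}
```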

      &lt;p&gt;In the meantime, the parent in &lt;i&gt;waithit()&lt;/i&gt; spends up to two minutes waiting
        for the bootstrap to call back; if the bootstrap fails to call back, or the
        authentication fails, the worm decides to give up and reports a failure. When
        a connection is successful, the worm ships all of its files across followed by
        an end-of-file indicator. It pauses four seconds to let a command interpreter
        start on the remote side, then it issues commands to create a new worm. For each
        relocatable object file in the list of files, the worm tries to build an
        executable object; typically each file contains code for a particular make of
        computer, and the builds will fail until the worm tries the proper computer type.
        If the parent worm finally gets an executable child worm built, it sets it loose
        with the &lt;b&gt;-p&lt;/b&gt; option to kill the command interpreter, then shuts down the
        connection. The target host is marked "infected". If none of the objects produces
        a usable child worm, the parent removes the detritus and &lt;i&gt;waithit()&lt;/i&gt;
        returns an error indication.&lt;/p&gt;

      &lt;p&gt;When a system is being swamped by worms, the &lt;i&gt;/usr/tmp&lt;/i&gt; directory can fill
        with leftover files as a consequence of a bug in &lt;i&gt;waithit()&lt;/i&gt;. If a worm
        compile takes more than 30 seconds, resynchronization code will report an error
        but &lt;i&gt;waithit()&lt;/i&gt; will fail to remove the files it has created. On one of
        our machines, 13 MB of material representing 86 sets of files accumulated over
        5.5 hours.&lt;/p&gt;

      &lt;a name="p4.7"&gt;&lt;/a&gt;
      &lt;h3&gt;4.7. Password cracking&lt;/h3&gt;

      &lt;p&gt;A password cracking algorithm seems like a slow and bulky item to put in a
        worm, but the worm makes this work by being persistent and efficient. The worm
        is aided by some unfortunate statistics about typical password choices. Here we
        discuss how the worm goes about choosing passwords to test and how the UNIX
        password encryption routine was modified.&lt;/p&gt;

      &lt;a name="p4.7.1"&gt;&lt;/a&gt;
      &lt;h4&gt;4.7.1. Guessing passwords&lt;/h4&gt;

      &lt;blockquote cite="Grampp and Morris, 'UNIX Operating System Security'"&gt;
        For example, if the login name is "abc", then "abc", "cba", and "abcabc"
        are excellent candidates for passwords.
      &lt;/blockquote&gt;

      &lt;p&gt;The worm's password guessing is driven by a little 4-state machine. The first
        state gathers password data, while the remaining states represent increasingly
        less likely sources of potential passwords. The central cracking routine is
        called &lt;i&gt;cracksome()&lt;/i&gt;, and it contains a switch on each of the four states.&lt;/p&gt;

      &lt;p&gt;The routine that implements the first state we named &lt;i&gt;crack_0()&lt;/i&gt;. This
        routine's job is to collect information about hosts and accounts. It is only
        run once; the information it gathers persists for the lifetime of the worm.
        Its implementation is straightforward: it reads the files &lt;i&gt;/etc/hosts.equiv&lt;/i&gt;
        and &lt;i&gt;/.rhosts&lt;/i&gt; for hosts to attack, then reads the password file looking
        for accounts. For each account, the worm saves the name, the encrypted password,
        the home directory and the user information fields. As a quick preliminary
        check, it looks for a &lt;i&gt;.forward&lt;/i&gt; file in each user's home directory and
        saves any host name it finds in that file, marking it like the previous ones.&lt;/p&gt;

      &lt;p&gt;We unimaginatively called the function for the next state &lt;i&gt;crack_1()&lt;/i&gt;.
        &lt;i&gt;crack_1()&lt;/i&gt; looks for trivially broken passwords. These are passwords which
        can be guessed merely on the basis of information already contained in the
        password file. Grampp and Morris report a survey of over 100 password files
        where between 8 and 30 percent of all passwords were guessed using just the
        literal account name and a couple of variations. The worm tries a little harder
        than this: it checks the null password, the account name, the account name
        concatenated with itself, the first name (extracted from the user information
        field, with the first letter mapped to lower case), the last name and the account
        name reversed. It runs through up to 50 accounts per call to &lt;i&gt;cracksome()&lt;/i&gt;,
        saving its place in the list of accounts and advancing to the next state when it
        runs out of accounts to try.&lt;/p&gt;
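As a concrete sketch of those six trivial guesses, consider the following C fragment. The function name, buffer layout and arguments here are invented for illustration; the worm's actual code is structured differently, but it derives the same candidates per account.

```c
#include <assert.h>
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Illustrative only: build the six "trivially broken" guesses that
 * crack_1() tries for one account -- the null password, the account
 * name, the name concatenated with itself, the lower-cased first name,
 * the last name, and the account name reversed. */
static int build_guesses(const char *acct, const char *first,
                         const char *last, char guesses[6][64])
{
    size_t n = strlen(acct);
    size_t i;

    strcpy(guesses[0], "");                  /* the null password */
    strcpy(guesses[1], acct);                /* the account name itself */
    sprintf(guesses[2], "%s%s", acct, acct); /* name concatenated with itself */
    strcpy(guesses[3], first);               /* first name, first letter lowered */
    guesses[3][0] = (char)tolower((unsigned char)guesses[3][0]);
    strcpy(guesses[4], last);                /* last name */
    for (i = 0; i < n; i++)                  /* account name reversed */
        guesses[5][i] = acct[n - 1 - i];
    guesses[5][n] = '\0';
    return 6;
}
```

For the account "abc" this yields exactly the Grampp and Morris candidates quoted above: "abc", "abcabc" and "cba", plus the null password and the name fields.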

      &lt;p&gt;The next state is handled by &lt;i&gt;crack_2()&lt;/i&gt;. In this state the worm compares a
        list of favorite passwords, one password per call, with all of the encrypted
        passwords in the password file. The list contains 432 words, most of which are
        real English words or proper names; it seems likely that this list was generated
        by stealing password files and cracking them at leisure on the worm author's
        home machine. A global variable &lt;i&gt;nextw&lt;/i&gt; is used to count the number of
        passwords tried, and it is this count (plus a loss in the population control game)
        that controls whether the worm exits at the end of the main loop - &lt;i&gt;nextw&lt;/i&gt;
        must be greater than 10 before the worm can exit. Since the worm normally spends
        2.5 minutes checking for clients over the course of the main loop and calls
        &lt;i&gt;cracksome()&lt;/i&gt; twice in that period, it appears that the worm must make a
        minimum of 7 passes through the main loop, taking more than 15 minutes
        (&lt;a name="rf6" href="#f6"&gt;6&lt;/a&gt;). It will take at least 9 hours for the worm to
        scan its built-in password list and proceed to the next state.&lt;/p&gt;

      &lt;p&gt;The last state is handled by &lt;i&gt;crack_3()&lt;/i&gt;. It opens the UNIX online
        dictionary &lt;i&gt;/usr/dict/words&lt;/i&gt; and goes through it one word at a time. If a
        word is capitalized, the worm tries a lower-case version as well. This search
        can essentially go on forever: it would take something like four weeks for the
        worm to finish a typical dictionary like ours.&lt;/p&gt;
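The per-word logic of this dictionary pass is simple enough to sketch. The function below is a hypothetical stand-in (the worm's own routine reads /usr/dict/words and feeds each candidate straight into its cracking code), but it captures the rule: try the word as-is, and if it is capitalized, try a lower-cased copy too.

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Illustrative only: given one dictionary word, write out the guesses the
 * worm would derive from it -- the word itself, plus a lower-cased copy
 * when the word is capitalized. Returns the number of variants written. */
static int word_variants(const char *word, char out[2][64])
{
    strncpy(out[0], word, 63);
    out[0][63] = '\0';
    if (isupper((unsigned char)out[0][0])) {
        strcpy(out[1], out[0]);
        out[1][0] = (char)tolower((unsigned char)out[1][0]);
        return 2;
    }
    return 1;
}
```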

      &lt;p&gt;When the worm selects a potential password, it passes it to a routine we
        called &lt;i&gt;try_password()&lt;/i&gt;. This function calls the worm's special version
        of the UNIX password encryption function &lt;i&gt;crypt()&lt;/i&gt; and compares the
        result with the target account's actual encrypted password. If they are equal,
        or if both the password and the guess are the null string (no password), the
        worm saves the cleartext password and proceeds to attack hosts that are
        connected to this account. A routine we called &lt;i&gt;try_forward_and_rhosts()&lt;/i&gt;
        reads the user's &lt;i&gt;.forward&lt;/i&gt; and &lt;i&gt;.rhosts&lt;/i&gt; files, calling the
        previously described &lt;i&gt;hul()&lt;/i&gt; function for each remote account it finds.&lt;/p&gt;

      &lt;a name="p4.7.2"&gt;&lt;/a&gt;
      &lt;h4&gt;4.7.2. Faster password encryption&lt;/h4&gt;

      &lt;blockquote cite="Morris and Thompson, 'Password Security: A Case History'"&gt;
        The use of encrypted passwords appears reasonably secure in the absence
        of serious attention of experts in the field.
      &lt;/blockquote&gt;

      &lt;p&gt;Unfortunately some experts in the field have been giving serious attention to
        fast implementations of the UNIX password encryption algorithm. UNIX password
        authentication works without putting any readable version of the password onto
        the system, and indeed works without protecting the encrypted password against
        reading by users on the system. When a user types a password in the clear, the
        system encrypts it using the standard &lt;i&gt;crypt()&lt;/i&gt; library routine, then
        compares it against a saved copy of the encrypted password. The encryption
        algorithm is meant to be basically impossible to invert, preventing the retrieval
        of passwords by examining only the encrypted text, and it is meant to be
        expensive to run, so that testing guesses will take a long time. The UNIX
        password encryption algorithm is based on the Federal Data Encryption Standard
        (DES). Currently no one knows how to invert this algorithm in a reasonable
        amount of time, and while fast DES encoding chips are available, the UNIX
        version of the algorithm is slightly perturbed so that it is impossible to use
        a standard DES chip to implement it.&lt;/p&gt;

      &lt;p&gt;Two problems have been militating against the UNIX implementation of DES.
        Computers are continually increasing in speed---current machines are typically
        several times faster than the machines that were available when the current
        password scheme was invented. At the same time, ways have been discovered to
        make software DES run faster. UNIX passwords are now far more susceptible to
        persistent guessing, particularly if the encrypted passwords are already known.
        The worm's version of the UNIX &lt;i&gt;crypt()&lt;/i&gt; routine ran more than 9 times
        faster than the standard version when we tested it on our VAX 8600. While the
        standard &lt;i&gt;crypt()&lt;/i&gt; takes 54 seconds to encrypt 271 passwords on our 8600
        (the number of passwords actually contained in our password file), the worm's
        &lt;i&gt;crypt()&lt;/i&gt; takes less than 6 seconds.&lt;/p&gt;

      &lt;p&gt;The worm's &lt;i&gt;crypt()&lt;/i&gt; algorithm appears to be a compromise between time and
        space: the time needed to encrypt one password guess versus the substantial extra
        table space needed to squeeze performance out of the algorithm. Curiously, one
        performance improvement actually saves a little space. The traditional UNIX
        algorithm stores each bit of the password in a byte, while the worm's algorithm
        packs the bits into two 32-bit words. This permits the worm's algorithm to use
        bit-field and shift operations on the password data, which is immensely faster.
        Other speedups include unrolling loops, combining tables, precomputing shifts
        and masks, and eliminating redundant initial and final permutations when
        performing the 25 applications of modified DES that the password encryption
        algorithm uses. The biggest performance improvement comes as a result of
        combining permutations: the worm uses expanded arrays which are indexed by
        groups of bits rather than the single bits used by the standard algorithm.
        Matt Bishop's fast version of &lt;i&gt;crypt()&lt;/i&gt; does all of these things and also
        precomputes even more functions, yielding twice the performance of the worm's
        algorithm but requiring nearly 200 KB of initialized data as opposed to the
        6 KB used by the worm and the less than 2 KB used by the normal &lt;i&gt;crypt()&lt;/i&gt;.&lt;/p&gt;
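The byte-per-bit versus packed-word difference can be shown with a small sketch. The layout below is invented for clarity (28 bits per word, where the worm's real tables are more elaborate), but it illustrates why packing enables the shift-and-mask operations described above.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: the standard crypt() keeps one key bit per byte; the
 * worm packs the same 56 bits into two 32-bit words (28 bits each), so
 * permutations can be done with shifts and masks instead of byte loops. */
static void pack_bits(const unsigned char bits[56], uint32_t out[2])
{
    int i;
    out[0] = out[1] = 0;
    for (i = 0; i < 56; i++)
        if (bits[i])
            out[i / 28] |= (uint32_t)1 << (i % 28);
}

/* Read bit i back out of the packed representation. */
static int get_bit(const uint32_t w[2], int i)
{
    return (int)((w[i / 28] >> (i % 28)) & 1);
}
```

Besides saving 48 bytes per key, the packed form lets a whole group of bits be moved or tested in one machine instruction, which is where the bulk of the speedup comes from.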

      &lt;p&gt;How can system administrators defend against fast implementations of
        &lt;i&gt;crypt()&lt;/i&gt;? One suggestion that has been introduced for foiling the bad guys
        is the idea of shadow password files. In this scheme, the encrypted passwords are
        hidden rather than public, forcing a cracker to either break a privileged account
        or use the host's CPU and (slow) encryption algorithm to attack, with the added
        danger that password test requests could be logged and password cracking
        discovered. The disadvantage of shadow password files is that if the bad guys
        somehow get around the protections for the file that contains the actual
        passwords, all of the passwords must be considered cracked and will need to be
        replaced. Another suggestion has been to replace the UNIX DES implementation with
        the fastest available implementation, but run it 1000 times or more instead of
        the 25 times used in the UNIX &lt;i&gt;crypt()&lt;/i&gt; code. Unless the repeat count is
        somehow pegged to the fastest available CPU speed, this approach merely postpones
        the day of reckoning until the cracker finds a faster machine. It's interesting
        to note that Morris and Thompson measured the time to compute the old M-209
        (non-DES) password encryption algorithm used in early versions of UNIX on the
        PDP-11/70 and found that a good implementation took only 1.25 milliseconds per
        encryption, which they deemed insufficient; currently the VAX 8600 using Matt
        Bishop's DES-based algorithm needs 11.5 milliseconds per encryption, and machines
        10 times faster than the VAX 8600 at a cheaper price will be available soon
        (if they aren't already!).&lt;/p&gt;
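The raise-the-iteration-count idea is easy to sketch. In the fragment below, a stand-in 64-bit mixing function (not DES, and not any real crypt() internal) plays the role of one encryption round, purely to show the structure: the cost of testing a guess scales linearly with the repeat count.

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in one-way step (a 64-bit integer mixer, not DES), used here only
 * to illustrate iterated encryption. */
static uint64_t mix_once(uint64_t x)
{
    x ^= x >> 33; x *= 0xff51afd7ed558ccdULL;
    x ^= x >> 33; x *= 0xc4ceb9fe1a85ec53ULL;
    x ^= x >> 33;
    return x;
}

/* crypt()-style stretching: apply the step `rounds` times. Raising the
 * count from 25 to 1000 makes every password guess proportionally more
 * expensive for an attacker -- until faster hardware catches up. */
static uint64_t stretch(uint64_t seed, int rounds)
{
    int i;
    for (i = 0; i < rounds; i++)
        seed = mix_once(seed);
    return seed;
}
```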

      &lt;a name="p5"&gt;&lt;/a&gt;
      &lt;h2&gt;5. Opinions&lt;/h2&gt;

      &lt;blockquote cite="Ken Thompson, 1983 Turing Award Lecture"&gt;
        The act of breaking into a computer system has to have the same social
        stigma as breaking into a neighbor's house. It should not matter that the
        neighbor's door is unlocked.
      &lt;/blockquote&gt;

      &lt;blockquote cite="R H Morris, in 1983 Capitol Hill testimony, cited in the New York Times 11/11/88"&gt;
        [Creators of viruses are] stealing a car for the purpose of joyriding.
      &lt;/blockquote&gt;

      &lt;p&gt;I don't propose to offer definitive statements on the morality of the worm's
        author, the ethics of publishing security information or the security needs of
        the UNIX computing community, since people better (and less) qualified than I are
        still copiously flaming on these topics in the various network newsgroups and
        mailing lists. For the sake of the mythical ordinary system administrator who
        might have been confused by all the information and misinformation, I will try
        to answer a few of the most relevant questions in a narrow but useful way.&lt;/p&gt;

      &lt;p&gt;&lt;i&gt;Did the worm cause damage?&lt;/i&gt; The worm did not destroy files, intercept
        private mail, reveal passwords, corrupt databases or plant trojan horses. It did
        compete for CPU time with, and eventually overwhelm, ordinary user processes.
        It used up limited system resources such as the open file table and the process
        text table, causing user processes to fail for lack of same. It caused some
        machines to crash by operating them close to the limits of their capacity,
        exercising bugs that do not appear under normal loads. It forced administrators
        to perform one or more reboots to clear worms from the system, terminating user
        sessions and long-running jobs. It forced administrators to shut down network
        gateways, including gateways between important nation-wide research networks,
        in an effort to isolate the worm; this led to delays of up to several days in the
        exchange of electronic mail, causing some projects to miss deadlines and others
        to lose valuable research time. It made systems staff across the country drop
        their ongoing hacks and work 24-hour days trying to corner and kill worms. It
        caused members of management in at least one institution to become so frightened
        that they scrubbed all the disks at their facility that were online at the
        time of the infection, and limited reloading of files to data that was verifiably
        unmodified by a foreign agent. It caused bandwidth through gateways that were
        still running after the infection started to become substantially degraded: the
        gateways were using much of their capacity just shipping the worm from one
        network to another. It penetrated user accounts and caused it to appear that a
        given user was disturbing a system when in fact they were not responsible.
        It's true that the worm could have been far more harmful than it actually turned
        out to be: in the last few weeks, several security bugs have come to light
        which the worm could have used to thoroughly destroy a system. Perhaps we
        should be grateful that we escaped incredibly awful consequences, and perhaps
        we should also be grateful that we have learned so much about the weaknesses
        in our systems' defenses, but I think we should share our gratefulness with
        someone other than the worm's author.&lt;/p&gt;

      &lt;p&gt;&lt;i&gt;Was the worm malicious?&lt;/i&gt; Some people have suggested that the worm was an
        innocent experiment that got out of hand, and that it was never intended to spread
        so fast or so widely. We can find evidence in the worm to support and to
        contradict this hypothesis. There are a number of bugs in the worm that appear
        to be the result of hasty or careless programming. For example, in the worm's
        &lt;i&gt;init()&lt;/i&gt; routine, there is a call to the block zero function &lt;i&gt;bzero()&lt;/i&gt;
        that incorrectly uses the block itself rather than the block's address as an
        argument. It's also possible that a bug was responsible for the ineffectiveness
        of the population control measures used by the worm. This could be seen as
        evidence that a development version of the worm "got loose" accidentally, and
        perhaps the author originally intended to test the final version under controlled
        conditions, in an environment from which it would not escape. On the other hand,
        there is considerable evidence that the worm was designed to reproduce quickly
        and spread itself over great distances. It can be argued that the population
        control hacks in the worm are anemic by design: they are a compromise between
        spreading the worm as quickly as possible and raising the load enough to be
        detected and defeated. A worm will exist for a substantial amount of time and
        will perform a substantial amount of work even if it loses the roll of the
        (imaginary) dice; moreover, 1 in 7 worms become immortal and can't be killed
        by dice rolls. There is ample evidence that the worm was designed to hamper
        efforts to stop it even after it was identified and captured. It certainly
        succeeded in this, since it took almost a day before the last mode of infection
        (the &lt;i&gt;finger&lt;/i&gt; server) was identified, analyzed and reported widely; the
        worm was very successful in propagating itself during this time even on systems
        which had fixed the &lt;i&gt;sendmail&lt;/i&gt; &lt;i&gt;debug&lt;/i&gt; problem and had turned off
        &lt;i&gt;rexec&lt;/i&gt;. Finally, there is evidence that the worm's author deliberately
        introduced the worm to a foreign site that was left open and welcome to casual
        outside users, rather ungraciously abusing this hospitality. He apparently
        further abused this trust by deleting a log file that might have revealed
        information that could link his home site with the infection. I think the
        innocence lies in the research community rather than with the worm's author.&lt;/p&gt;

      &lt;p&gt;&lt;i&gt;Will publication of worm details further harm security?&lt;/i&gt; In a sense, the worm
        itself has solved that problem: it has published itself by sending copies to
        hundreds or thousands of machines around the world. Of course a bad guy who
        wants to use the worm's tricks would have to go through the same effort that we
        went through in order to understand the program, but then it only took us a
        week to completely decompile the program, so while it takes fortitude to hack
        the worm, it clearly is not greatly difficult for a decent programmer. One of
        the worm's most effective tricks was advertised when it entered - the bulk of
        the &lt;i&gt;sendmail&lt;/i&gt; hack is visible in the log file, and a few minutes' work
        with the sources will reveal the rest of the trick. The worm's fast password
        algorithm could be useful to the bad guys, but at least two other faster
        implementations have been available for a year or more, so it isn't very secret,
        or even very original. Finally, the details of the worm have been well enough
        sketched out on various newsgroups and mailing lists that the principal hacks
        are common knowledge. I think it's more important that we understand what
        happened, so that we can make it less likely to happen again, than that we
        spend time in a futile effort to cover up the issue from everyone but the bad
        guys. Fixes for both source and binary distributions are widely available, and
        anyone who runs a system with these vulnerabilities needs to look into these
        fixes immediately, if they haven't done so already.&lt;/p&gt;

      &lt;a name="p6"&gt;&lt;/a&gt;
      &lt;h2&gt;6. Conclusion&lt;/h2&gt;

      &lt;blockquote cite="R H Morris, quoted in the New York Times 11/5/88"&gt;
        It has raised the public awareness to a considerable degree.
      &lt;/blockquote&gt;

      &lt;p&gt;This quote is one of the understatements of the year. The worm story was on the
        front page of the New York Times and other newspapers for days. It was the
        subject of television and radio features. Even the Bloom County comic strip
        poked fun at it.&lt;/p&gt;

      &lt;p&gt;Our community has never before been in the limelight in this way, and judging
        by the response, it has scared us. I won't offer any fancy platitudes about
        how the experience is going to change us, but I will say that I think these
        issues have been ignored for much longer than was safe, and I feel that a better
        understanding of the crisis just past will help us cope better with the next one.
        Let's hope we're as lucky next time as we were this time.&lt;/p&gt;

      &lt;a name="ack"&gt;&lt;/a&gt;
      &lt;h2&gt;Acknowledgments&lt;/h2&gt;

      &lt;p&gt;No one is to blame for the inaccuracies herein except me, but there are plenty
        of people to thank for helping to decompile the worm and for helping to document
        the epidemic. Dave Pare and Chris Torek were at the center of the action during
        the late night session at Berkeley, and they had help and kibitzing from Keith
        Bostic, Phil Lapsley, Peter Yee, Jay Lepreau and a cast of thousands. Glenn Adams
        and Dave Siegel provided good information on the MIT AI Lab attack, while Steve
        Miller gave me details on Maryland, Jeff Forys on Utah, and Phil Lapsley, Peter
        Yee and Keith Bostic on Berkeley. Bill Cheswick sent me a couple of fun
        anecdotes from AT&amp;amp;T Bell Labs. Jim Haynes gave me the run-down on the security
        problems turned up by his busy little undergrads at UC Santa Cruz. Eric Allman,
        Keith Bostic, Bill Cheswick, Mike Hibler, Jay Lepreau, Chris Torek and Mike
        Zeleznik provided many useful review comments. Thank you all, and everyone else
        I forgot to mention.&lt;/p&gt;

      &lt;p&gt;Matt Bishop's paper "A Fast Version of the DES and a Password Encryption
        Algorithm", (c)1987 by Matt Bishop and the Universities Space Research
        Association, was helpful in (slightly) parting the mysteries of DES for me.
        Anyone wishing to understand the worm's DES hacking had better look here first.
        The paper is available with Bishop's &lt;i&gt;deszip&lt;/i&gt; distribution of software for
        fast DES encryption. The latter was produced while Bishop was with the Research
        Institute for Advanced Computer Science at NASA Ames Research Center; Bishop
        is now at Dartmouth College (bishop@bear.dartmouth.edu). He sent me a very
        helpful note on the worm's implementation of &lt;i&gt;crypt()&lt;/i&gt; which I leaned on
        heavily when discussing the algorithm above.&lt;/p&gt;

      &lt;p&gt;The following documents were also referenced above for quotes or for other
        material:&lt;/p&gt;

      &lt;p&gt;&lt;i&gt;Data Encryption Standard&lt;/i&gt;, FIPS PUB 46, National Bureau of Standards,
        Washington D.C., January 15, 1977.&lt;/p&gt;

      &lt;p&gt;F. T. Grampp and R. H. Morris, "UNIX Operating System Security," in the
        &lt;i&gt;AT&amp;amp;T Bell Laboratories Technical Journal&lt;/i&gt;, October 1984, Vol. 63, No.
        8, Part 2, p. 1649.&lt;/p&gt;

      &lt;p&gt;Brian W. Kernighan and Dennis Ritchie, &lt;i&gt;The C Programming Language&lt;/i&gt;,
        Second Edition, Prentice Hall: Englewood Cliffs, NJ, (C)1988.&lt;/p&gt;

      &lt;p&gt;John Markoff, "Author of computer 'virus' is son of U.S. Electronic Security
        Expert," p. 1 of the &lt;i&gt;New York Times&lt;/i&gt;, November 5, 1988.&lt;/p&gt;

      &lt;p&gt;John Markoff, "A family's passion for computers, gone sour," p. 1 of the
        &lt;i&gt;New York Times&lt;/i&gt;, November 11, 1988.&lt;/p&gt;

      &lt;p&gt;Robert Morris and Ken Thompson, "Password Security: A Case History," dated
        April 3, 1978, in the &lt;i&gt;UNIX Programmer's Manual&lt;/i&gt;, in the &lt;i&gt;Supplementary
        Documents&lt;/i&gt; or the &lt;i&gt;System Manager's Manual&lt;/i&gt;, depending on where and when
        you got your manuals.&lt;/p&gt;

      &lt;p&gt;Robert T. Morris, "A Weakness in the 4.2BSD Unix TCP/IP Software," AT&amp;amp;T Bell
        Laboratories Computing Science Technical Report #117, February 25, 1985. This
        paper actually describes a way of spoofing TCP/IP so that an untrusted host
        can make use of the &lt;i&gt;rsh&lt;/i&gt; server on any 4.2 BSD UNIX system, rather than
        an attack based on breaking into accounts on trusted hosts, which is what the
        worm uses.&lt;/p&gt;

      &lt;p&gt;Brian Reid, "Massive UNIX breakins at Stanford," RISKS-FORUM Digest, Vol. 3,
        Issue 56, September 16, 1986.&lt;/p&gt;

      &lt;p&gt;Dennis Ritchie "On the Security of UNIX," dated June 10,1977, in the same manual
        you found the Morris and Thompson paper in.&lt;/p&gt;

      &lt;p&gt;Ken Thompson, "Reflections on Trusting Trust," 1983 ACM Turing Award Lecture,
        in the &lt;i&gt;Communications of the ACM&lt;/i&gt;, Vol. 27, No. 8, p. 761, August 1984.&lt;/p&gt;

      &lt;br&gt;&lt;hr&gt;&lt;br&gt;&lt;br&gt;

      &lt;b&gt;Footnotes&lt;/b&gt;

      &lt;dl&gt;
        &lt;dt&gt;(&lt;a name="f1" href="#rf1"&gt;1&lt;/a&gt;)&lt;/dt&gt;
        &lt;dd&gt;The Internet is a logical network made up of many physical networks,
          all running the IP class of network protocols.&lt;/dd&gt;&lt;br&gt;

        &lt;dt&gt;(&lt;a name="f2" href="#rf2"&gt;2&lt;/a&gt;)&lt;/dt&gt;
        &lt;dd&gt;VAX and Sun-3 are models of computers built by Digital Equipment Corp.
          and Sun Microsystems Inc., respectively. UNIX is a Registered Bell of
          AT&amp;amp;T Trademark Laboratories.&lt;/dd&gt;&lt;br&gt;

        &lt;dt&gt;(&lt;a name="f3" href="#rf3"&gt;3&lt;/a&gt;)&lt;/dt&gt;
        &lt;dd&gt;Actually, like much of the code in the Berkeley distribution, the
          &lt;i&gt;finger&lt;/i&gt; server was contributed from elsewhere; in this case, it
          appears that MIT was the source.&lt;/dd&gt;&lt;br&gt;

          &lt;dt&gt;(&lt;a name="f4" href="#rf4"&gt;4&lt;/a&gt;)&lt;/dt&gt;
          &lt;dd&gt;See for example Appendix B, section 1.4 of the second edition of
            The C Programming Language by Kernighan and Ritchie.&lt;/dd&gt;&lt;br&gt;

          &lt;dt&gt;(&lt;a name="f5" href="#rf5"&gt;5&lt;/a&gt;)&lt;/dt&gt;
          &lt;dd&gt;One minor exception: the &lt;i&gt;sendmail&lt;/i&gt; attack doesn't use &lt;i&gt;sendworm()&lt;/i&gt;
            since it needs to handle the SMTP protocol in addition to the command
            interpreter interface, but the principle is the same.&lt;/dd&gt;&lt;br&gt;

          &lt;dt&gt;(&lt;a name="f6" href="#rf6"&gt;6&lt;/a&gt;)&lt;/dt&gt;
          &lt;dd&gt;For those mindful of details: The first call to &lt;i&gt;cracksome()&lt;/i&gt; is
            consumed reading system files. The worm must spend at least one call
            to &lt;i&gt;cracksome()&lt;/i&gt; in the second state attacking trivial passwords. This
            accounts for at least one pass through the main loop. In the third
            state, &lt;i&gt;cracksome()&lt;/i&gt; tests one password from its list of favorites on
            each call; the worm will exit if it lost a roll of the dice and more
            than ten words have been checked, so this accounts for at least six
            loops, two words on each loop for five loops to reach 10 words, then
            another loop to pass that number. Altogether this amounts to a
            minimum of 7 loops. If all 7 loops took the maximum amount of time
            waiting for clients, this would require a minimum of 17.5 minutes, but
            the 2-minute check can exit early if a client connects and the server
            loses the challenge, hence 15.5 minutes of waiting time plus runtime
            overhead is the minimum lifetime. In this period a worm will attack
            at least 8 hosts through the host infection routines, and will try
            about 18 passwords for each account, attacking more hosts if accounts
            are cracked.&lt;/dd&gt;&lt;br&gt;
      &lt;/dl&gt;</content>
    <link href="https://jmthornton.net/blog/p/worm-of-1988"/>
    <summary>On the evening of November 2, 1988, a self-replicating program was released upon the Internet. Within the space of hours this program had spread across the US.</summary>
    <published>2017-10-31T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/tcl-weird</id>
    <title>TCL Wat</title>
    <updated>2026-02-25T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;
        &lt;em&gt;I was talking about my TCL-writing days with a friend and they very helpfully pointed
        out that one of my claims here was backward and some of the examples could be clearer. I've
        corrected the inaccuracies and improved examples throughout. All these gotchas and examples
        are based on TCL 8.6, and I have no idea if later versions make changes.&lt;/em&gt;
      &lt;/p&gt;

      &lt;hr /&gt;

      &lt;p&gt;
        At FlightAware, I read and write TCL every day. This means I've
        run into more than a few edge cases in the language, and have even
        tracked down a few bugs in
        &lt;code class="language-tcl"&gt;TCLlib&lt;/code&gt;. Here's a collection of
        the weirdest things I've encountered.
      &lt;/p&gt;

      &lt;h2&gt;Empty string is a boolean superposition&lt;/h2&gt;

      &lt;p&gt;
        TCL's &lt;code class="language-tcl"&gt;string is&lt;/code&gt; command can
        test whether a value is "true" or "false". Without the
        &lt;code class="language-tcl"&gt;-strict&lt;/code&gt; flag, an empty string
        is simultaneously both:
      &lt;/p&gt;

      &lt;pre&gt;&lt;code class="language-tcl"&gt;string is true  ""  ;# =&gt; 1
string is false ""  ;# =&gt; 1&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        The empty string passes both checks. It is true. It is also
        false. It exists in boolean superposition.
      &lt;/p&gt;

      &lt;p&gt;
        With &lt;code class="language-tcl"&gt;-strict&lt;/code&gt;, the empty string
        is instead neither true nor false:
      &lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-tcl"&gt;set val ""
if {[string is true -strict $val]} {
    puts "This is true"
} elseif {[string is false -strict $val]} {
    puts "This is false"
} else {
    puts "This is neither true nor false"
}
# =&gt; This is neither true nor false&lt;/code&gt;&lt;/pre&gt;

      &lt;h2&gt;Dict set creates from nothing, chokes on something&lt;/h2&gt;

      &lt;p&gt;
        The &lt;code class="language-tcl"&gt;dict set&lt;/code&gt; command happily
        creates a dictionary if the variable doesn't exist yet:
      &lt;/p&gt;

      &lt;pre&gt;&lt;code class="language-tcl"&gt;dict set my_dict key value
# my_dict is now: key value&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        But if the variable already exists as a plain string, it fails
        with a confusing error:
      &lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-tcl"&gt;set my_var "hello"
dict set my_var key value
# =&gt; missing value to go with key&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        The error message "missing value to go with key" makes it sound
        like you forgot an argument, but the real problem is that
        &lt;code class="language-tcl"&gt;dict set&lt;/code&gt; tries to interpret
        &lt;code class="language-tcl"&gt;"hello"&lt;/code&gt; as an existing
        dictionary, and since the string has an odd number of space-separated
        elements (in this case, just one), it doesn't parse as
        key-value pairs.
      &lt;/p&gt;

      &lt;h2&gt;Upvar and pass-by-name&lt;/h2&gt;

      &lt;p&gt;
        TCL is pass-by-value by default. If you pass a variable to a
        function, the function gets a copy, and the caller's variable is
        unchanged. But &lt;code class="language-tcl"&gt;upvar&lt;/code&gt; lets you
        opt into pass-by-name by passing a variable's &lt;em&gt;name&lt;/em&gt; as a
        string and creating a local alias to the caller's variable:
      &lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-tcl"&gt;proc increment {varName} {
    upvar $varName var
    incr var
}

proc swap {a b} {
    upvar $a aVar $b bVar
    set temp $aVar
    set aVar $bVar
    set bVar $temp
}&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        This is actually pretty useful once you get used to it, but it's
        definitely not what you'd expect coming from languages where you
        can't reach into your caller's scope by convention.
      &lt;/p&gt;

      &lt;h2&gt;Insane string matching&lt;/h2&gt;

      &lt;p&gt;
        Some TCL commands, like
        &lt;code class="language-tcl"&gt;string is&lt;/code&gt;, allow matching on
        shortened or abbreviated strings. While this can lead to "concise"
        code, it may also introduce unintended and interesting bugs:
      &lt;/p&gt;

      &lt;pre&gt;&lt;code class="language-tcl"&gt;string is true true
# =&gt; 1
string is tru true
# =&gt; 1
string is tr true
# =&gt; 1
string is t true
# =&gt; 1
string is fa 0
# =&gt; 1&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        If a prefix is ambiguous, TCL at least throws an error:
        &lt;code class="language-tcl"&gt;string is a "hello"&lt;/code&gt; gives
        &lt;code&gt;ambiguous class "a": must be alnum, alpha, ascii, ...&lt;/code&gt;.
        But unambiguous prefixes silently match, which has no practical
        benefit and serves only to create bugs. Why would a reasonable
        person ever want to check if a string "is tr" in production code?
      &lt;/p&gt;

      &lt;h2&gt;Split nested list, get flat list&lt;/h2&gt;

      &lt;p&gt;
        When using &lt;code class="language-tcl"&gt;split&lt;/code&gt; on a nested
        list, the result is a list with members representing the escaped
        syntax of the nested list(s):
      &lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-tcl"&gt;set a {aa bb {cc dd}}
split $a " "
# =&gt; {aa bb \{cc dd\}}&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        Notice that the braces of the nested list
        &lt;code class="language-tcl"&gt;&amp;#123;cc dd&amp;#125;&lt;/code&gt; have been
        escaped: instead of a three-element list containing a nested
        sublist, you get four flat elements, two of them mangled
        (&lt;code class="language-tcl"&gt;\{cc&lt;/code&gt; and
        &lt;code class="language-tcl"&gt;dd\}&lt;/code&gt;). This is because
        &lt;code class="language-tcl"&gt;split&lt;/code&gt; operates on the value's
        string representation, not its list structure. This has been the
        source of a number of real bugs at FlightAware.
      &lt;/p&gt;

      &lt;h2&gt;Dict loops don't garbage-collect&lt;/h2&gt;

      &lt;p&gt;
        When using &lt;code class="language-tcl"&gt;dict with&lt;/code&gt; inside a
        loop, it sets local variables for each key in the current
        dictionary entry. But when the next iteration has fewer keys,
        the variables from the previous iteration aren't unset, they
        linger with their old values:
      &lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-tcl"&gt;set data {
    item1 {var1 value1 var2 value2}
    item2 {var2 newvalue2}
}

foreach item {item1 item2} {
    dict with data $item {
        puts "Item: $item, var1: $var1, var2: $var2"
    }
}&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;Output:&lt;/p&gt;
      &lt;pre&gt;&lt;code&gt;Item: item1, var1: value1, var2: value2
Item: item2, var1: value1, var2: newvalue2&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        In the second iteration,
        &lt;code class="language-tcl"&gt;item2&lt;/code&gt; only defines
        &lt;code class="language-tcl"&gt;var2&lt;/code&gt;. But
        &lt;code class="language-tcl"&gt;var1&lt;/code&gt; still holds
        &lt;code class="language-tcl"&gt;value1&lt;/code&gt; from the first
        iteration. If you're familiar with C, imagine a
        &lt;code class="language-c"&gt;struct&lt;/code&gt; being partially
        overwritten without zeroing the rest. The
        variables created by
        &lt;code class="language-tcl"&gt;dict with&lt;/code&gt; are not scoped to
        the loop body, and TCL doesn't clean them up between iterations.
        This has been a real source of bugs at FlightAware.
      &lt;/p&gt;

      &lt;h2&gt;Comments are not comments&lt;/h2&gt;

      &lt;p&gt;
        In TCL, &lt;code class="language-tcl"&gt;#&lt;/code&gt; is not special
        syntax for comments, it's a command! A no-op command, but a
        command nonetheless. This means it only works in command-name
        position: the start of a line or after a semicolon.
      &lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-tcl"&gt;# this works
set x 5 ;# this also works
set x 5 # this is NOT a comment
# =&gt; wrong # args: should be "set varName ?newValue?"&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        That last line fails because TCL sees
        &lt;code class="language-tcl"&gt;#&lt;/code&gt;,
        &lt;code class="language-tcl"&gt;this&lt;/code&gt;,
        &lt;code class="language-tcl"&gt;is&lt;/code&gt;, etc. as extra arguments to
        &lt;code class="language-tcl"&gt;set&lt;/code&gt;, not as a comment. The
        &lt;code class="language-tcl"&gt;;#&lt;/code&gt; idiom ends up very common in TCL programs
        precisely because you need the semicolon to start a new command
        before the &lt;code class="language-tcl"&gt;#&lt;/code&gt; can act as one.
      &lt;/p&gt;

      &lt;p&gt;
        It gets worse. Since TCL counts braces before it interprets
        commands (which is reasonable, since scope matters), an
        unbalanced brace inside a comment breaks the parser:
      &lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-tcl"&gt;proc broken {} {
    # closing brace } here ends the proc body early
    puts "hello"
}
# =&gt; wrong # args: should be "proc name args body"&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        This is simply infuriating.
      &lt;/p&gt;

      &lt;h2&gt;Uplevel is OP&lt;/h2&gt;

      &lt;p&gt;
        The &lt;code class="language-tcl"&gt;uplevel&lt;/code&gt; command executes a
        script in a caller's scope. This lets you build new control
        structures (essentially macros) but it also means any function can
        reach into its caller's environment and modify variables:
      &lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-tcl"&gt;proc with_logging {body} {
    puts "--- begin ---"
    uplevel 1 $body
    puts "--- end ---"
}

set x 10
with_logging {
    set x [expr {$x * 2}]
    puts "x is now $x"
}
puts "x after: $x"
# =&gt; --- begin ---
# =&gt; x is now 20
# =&gt; --- end ---
# =&gt; x after: 20&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        The body passed to
        &lt;code class="language-tcl"&gt;with_logging&lt;/code&gt; runs in the
        &lt;em&gt;caller's&lt;/em&gt; scope, not inside the function. It reads and
        modifies the caller's &lt;code class="language-tcl"&gt;x&lt;/code&gt;
        directly. This is how people build DSLs and custom control
        structures in TCL, and it's genuinely powerful, but it also means
        any function you call might be silently mutating your local
        variables.
      &lt;/p&gt;

      &lt;h2&gt;Lack of static typing&lt;/h2&gt;

      &lt;p&gt;
        TCL is a dynamically typed language, which means variables don't
        have a fixed type. A variable can change its type during runtime:
      &lt;/p&gt;

      &lt;pre&gt;&lt;code class="language-tcl"&gt;set my_var 42
puts "Variable type: [string is integer $my_var]"
# =&gt; Variable type: 1
set my_var "hello"
puts "Variable type: [string is integer $my_var]"
# =&gt; Variable type: 0&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        This flexibility is handy in development, but it can also cause
        unexpected behavior, especially when dealing with JSON or other
        structured data formats. On the surface the behavior seems fine,
        because "everything is a string". The bugs come once you
        understand TCL well enough to know that, under the hood, nothing
        is actually a string: values cache typed internal
        representations.
      &lt;/p&gt;

      &lt;h2&gt;Subst is not a separate compilation step&lt;/h2&gt;

      &lt;p&gt;
        In TCL, the &lt;code class="language-tcl"&gt;subst&lt;/code&gt; command is
        used to perform variable and command substitutions within strings.
        This can be handy in some situations, but it's important to note
        that &lt;code class="language-tcl"&gt;subst&lt;/code&gt; is executed at
        runtime:
      &lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-tcl"&gt;set x 10
set y "\$x + 5"
puts "Before substitution: $y"
# =&gt; Before substitution: $x + 5
puts "After substitution: [subst $y]"
# =&gt; After substitution: 10 + 5&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        This may be surprising if you're expecting TCL to
        &lt;a href="https://philip.greenspun.com/tcl/introduction.adp"
          &gt;act like a Lisp&lt;/a
        &gt;, and means that you might miss syntax errors until the code is
        actually executed. Worse,
        &lt;code class="language-tcl"&gt;subst&lt;/code&gt; on user-provided input
        will execute any
        &lt;code class="language-tcl"&gt;[command]&lt;/code&gt; substitutions
        embedded in that input.
      &lt;/p&gt;
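      &lt;p&gt;
        If you must run &lt;code class="language-tcl"&gt;subst&lt;/code&gt; on data
        you don't fully control, the command does accept flags that
        disable each class of substitution. A minimal sketch (the
        &lt;code class="language-tcl"&gt;userInput&lt;/code&gt; value here is
        hypothetical):
      &lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-tcl"&gt;set userInput {[exec cat /etc/passwd]}

# Dangerous: would execute the embedded command
# subst $userInput

# Safer: pass brackets through without evaluating them
subst -nocommands $userInput
# =&gt; [exec cat /etc/passwd]&lt;/code&gt;&lt;/pre&gt;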

      &lt;h2&gt;Expr injection&lt;/h2&gt;

      &lt;p&gt;
        Without braces, &lt;code class="language-tcl"&gt;expr&lt;/code&gt;
        substitutes variables and commands &lt;em&gt;before&lt;/em&gt; evaluating the
        expression. This means embedded
        &lt;code class="language-tcl"&gt;[commands]&lt;/code&gt; in variable values
        get executed:
      &lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-tcl"&gt;set x {[exec rm important_file]}
expr $x + 1   ;# executes rm!
expr {$x + 1} ;# safe, $x treated as data&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        The braced version treats &lt;code class="language-tcl"&gt;$x&lt;/code&gt;
        as data within the expression parser, where it fails safely as a
        non-numeric value. The unbraced version does TCL substitution
        first, so the &lt;code class="language-tcl"&gt;[exec ...]&lt;/code&gt; runs
        before &lt;code class="language-tcl"&gt;expr&lt;/code&gt; ever sees it. TCL
        style guides consider unbraced
        &lt;code class="language-tcl"&gt;expr&lt;/code&gt; a defect, but it's common
        enough in real code that this is one of the language's most
        dangerous foot-guns.
      &lt;/p&gt;

      &lt;h2&gt;Two ways of catching errors&lt;/h2&gt;

      &lt;p&gt;
        TCL provides error handling mechanisms via the
        &lt;code class="language-tcl"&gt;catch&lt;/code&gt;,
        &lt;code class="language-tcl"&gt;try&lt;/code&gt;, and
        &lt;code class="language-tcl"&gt;error&lt;/code&gt; commands:
      &lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-tcl"&gt;if {[catch {some_command} result]} {
    puts "Error encountered: $result"
}

try {
    some_command
} on error {errorMsg} {
    puts "Error encountered: $errorMsg"
}&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        The &lt;code class="language-tcl"&gt;catch&lt;/code&gt; command can be used to
        capture any errors or exceptions that occur within a script.
        &lt;code class="language-tcl"&gt;try&lt;/code&gt; and
        &lt;code class="language-tcl"&gt;error&lt;/code&gt; offer more sophisticated
        error handling, but they're still not as clean as exceptions in
        other languages.
      &lt;/p&gt;

      &lt;p&gt;
        One particularly tricky aspect is that these error handling
        mechanisms will also set the
        &lt;code class="language-tcl"&gt;errorCode&lt;/code&gt; global variable, which
        can be both useful and a curse when debugging. The global
        &lt;code class="language-tcl"&gt;::errorInfo&lt;/code&gt; variable (the stack
        trace) compounds this further because it persists from previous errors
        and can mislead you into debugging the wrong call site. When a TCL
        library function is the source of the error, these globals can
        leak error state between different parts of your code, making it
        hard to track down where errors are actually coming from. It's
        especially problematic when functions use error handling as a way
        of checking for dictionary key existence (*cough* &lt;code class="language-tcl"&gt;json&lt;/code&gt; library).
      &lt;/p&gt;
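      &lt;p&gt;
        The persistence is easy to demonstrate: after a caught error,
        later commands that succeed leave the stale trace in place. A
        minimal sketch:
      &lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-tcl"&gt;catch {error "first failure"}
# ::errorInfo now holds the trace for "first failure"

set ok 1   ;# succeeds, but does not clear the globals
puts $::errorInfo
# =&gt; still the trace for "first failure"&lt;/code&gt;&lt;/pre&gt;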

      &lt;h2&gt;Lower-case numbers&lt;/h2&gt;

      &lt;p&gt;
        The &lt;code class="language-tcl"&gt;string tolower&lt;/code&gt; command
        attempts to convert a string's characters to lowercase. When
        encountering mixed or non-alphabetic input, the results may be
        unexpected:
      &lt;/p&gt;

      &lt;pre&gt;&lt;code class="language-tcl"&gt;set number_string "42A_B8C"
string tolower $number_string
# =&gt; 42a_b8c&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        The digits and underscore pass through untouched, while the
        alphabetic characters are lowercased. This is technically
        correct behavior, but it's weird that you can "lowercase" a
        string full of numbers. Then again, "everything is a string".
      &lt;/p&gt;

      &lt;h2&gt;Incr creates variables&lt;/h2&gt;

      &lt;p&gt;
        The &lt;code class="language-tcl"&gt;incr&lt;/code&gt; command increments a
        variable. Even if the variable doesn't exist.
      &lt;/p&gt;

      &lt;pre&gt;&lt;code class="language-tcl"&gt;incr nonexistent
# nonexistent is now 1
incr another 5
# another is now 5&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;
        The same pattern applies to
        &lt;code class="language-tcl"&gt;lappend&lt;/code&gt;, which creates an
        empty list if the variable doesn't exist. This is occasionally
        convenient, but it means a typo in a variable name silently
        creates a new counter instead of erroring. That makes for a frustrating prod bug.
      &lt;/p&gt;
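      &lt;p&gt;
        The &lt;code class="language-tcl"&gt;lappend&lt;/code&gt; behavior looks
        like this (the misspelled name below is a hypothetical typo):
      &lt;/p&gt;

      &lt;pre is:raw&gt;&lt;code class="language-tcl"&gt;set counts {}
lappend counts 1
# counts is now {1}

lappend cuonts 2   ;# typo silently creates a new variable "cuonts"
# counts is still {1}&lt;/code&gt;&lt;/pre&gt;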

      &lt;hr /&gt;

      &lt;p&gt;
        So there you have it, a collection of TCL weirdness from my time
        working at FlightAware. Some of these are actually reasonable
        design decisions when you understand the philosophy behind TCL,
        but others are just... wat.
      &lt;/p&gt;</content>
    <link href="https://jmthornton.net/blog/p/tcl-weird"/>
    <summary>A collection of weird TCL edge cases and gotchas I've encountered while working at FlightAware.</summary>
    <published>2020-04-10T09:00:00-06:00</published>
  </entry>
  <entry>
    <id>https://jmthornton.net/blog/p/xmrstakcompile</id>
    <title>I successfully compiled the xmr-stak miner with CUDA</title>
    <updated>2018-02-04T09:00:00-06:00</updated>
    <content type="html">&lt;p&gt;I&amp;#39;ve been mining Monero for a while now, and I use &lt;a
        href="https://github.com/fireice-uk/xmr-stak"&gt;xmr-stak&lt;/a&gt; on most of my machines (except the ones using
      ARM). Of course, my most powerful machine also happens to be my primary personal computer, so I&amp;#39;ve been
      pretty careful with it. I installed xmr-stak a handful of moons ago, and I remember struggling with it a bit. However, brilliant old me didn&amp;#39;t bother to record how I actually got it to work, so it was a whole new adventure when I decided to update the software. So learning from my mistake, I&amp;#39;m recording what I did here so I can repeat it in the future. If a couple of other people find this and find it useful, all the better.&lt;/p&gt;

      &lt;p&gt;To start off, so you know you&amp;#39;re not totally wasting your time, here&amp;#39;s the specs I&amp;#39;m working with:&lt;/p&gt;

      &lt;p&gt;&lt;pre&gt;
OS: Linux Mint 18.3 Sylvia
Kernel: x86_64 4.13.0-26-generic
CPU: Intel Core i7-4700MQ @ 3.4GHz x 4
GPU: NVidia GT 755M x 2&lt;/pre&gt;&lt;/p&gt;

      &lt;hr/&gt;

      &lt;p&gt;Now, the problems came down to CUDA. Obviously, with two GPUs, I don&amp;#39;t want to only mine on the CPU (which was working
      fine). That&amp;#39;s like getting onto a two-engine commercial jet and trying to take off with the exhaust from the auxiliary
      power unit. Okay, that&amp;#39;s a bit dramatic, I can still get around 200 H/s from my CPU. Anyway, part of the issue was
      compatibility between CUDA and my driver. When I started this ordeal, I was using CUDA 9.0 and it was working fine.
      However, I thought as long as I&amp;#39;m updating xmr-stak, why not update CUDA to 9.1? Well I also happen to be using driver
      384.111, but 9.1 requires 385 or something. Of course, 9.1 offers to install the driver for you, but you have to be in
      runlevel 3, and I just didn&amp;#39;t want to get into risky stuff like that on my main computer (not
      again, anyway). So I tried to go back to CUDA 9.0
      and xmr-stak just refused to compile again and again. Here&amp;#39;s a sampling of errors I continually ran into:&lt;/p&gt;

      &lt;p&gt;&lt;pre&gt;
Could NOT find CUDA (missing:  CUDA_INCLUDE_DIRS) (found suitable version &amp;quot;9.0&amp;quot;, minimum required is &amp;quot;7.5&amp;quot;)&lt;/pre&gt;&lt;/p&gt;

      &lt;p&gt;&lt;pre&gt;
error: cuda_runtime.h: No such file or directory&lt;/pre&gt;&lt;/p&gt;

      &lt;p&gt;&lt;pre&gt;
Error generating
/xmr-stak/xmr-stak/build/CMakeFiles/xmrstak_cuda_backend.dir/xmrstak/backend/nvidia/nvcc_code/./xmrstak_cuda_backend_generated_cuda_core.cu.o&lt;/pre&gt;&lt;/p&gt;

      &lt;p&gt;&lt;pre&gt;
CMake Error at CMakeLists.txt:209 (message):
CUDA NOT found&lt;/pre&gt;&lt;/p&gt;

      &lt;h3&gt;How I got it to work&lt;/h3&gt;

      &lt;p&gt;Long story short, here&amp;#39;s everything I did to make it finally work:&lt;/p&gt;

      &lt;p&gt;&lt;pre is:raw&gt;&lt;code class="language-shell"&gt;sudo apt install cuda cuda-9-0 cuda-core-9-0 cuda-cublas-* cuda-cudart-* cuda-cufft-* cuda-documentation-9-0 cuda-runtime-9-0 cuda-nvgraph-* cuda-nvrtc-* cuda-gdb-src-9-0 --reinstall

git clone https://github.com/fireice-uk/xmr-stak.git

mkdir xmr-stak/build &amp;&amp; cd xmr-stak/build

export CC=/usr/bin/gcc

export CXX=/usr/bin/g++

export CUDA_ROOT=/usr/local/cuda

cmake -DCMAKE_LINK_STATIC=ON -DXMR-STAK_COMPILE=generic -DCUDA_ENABLE=ON -DOpenCL_ENABLE=OFF -DMICROHTTPD_ENABLE=ON -DOpenSSL_ENABLE=ON ..

make install -j 4&lt;/code&gt;&lt;/pre&gt;&lt;/p&gt;

      &lt;p&gt;For me, at least, this finally got it to compile and I can run it now! I often leave it mining while I&amp;#39;m sleeping or at work. The internal fans provide a nice white noise.&lt;/p&gt;

      &lt;p&gt;Note, if the GPUs fail to start mining through the software, try reducing the thread count on both before you start looking for other problems. I have mine set to 124 threads with 6 blocks on each GPU, which is lower than the defaults.&lt;/p&gt;

      &lt;hr/&gt;

      &lt;h3&gt;Profiles&lt;/h3&gt;

      &lt;p&gt;To maximise the amount of mining I can do, I actually have three &amp;quot;profiles&amp;quot; ready to run on my computer. In case you&amp;#39;re interested, here are some options.&lt;/p&gt;

      &lt;h4&gt;All-out (CPU + GPU)&lt;/h4&gt;

      &lt;p&gt;This is probably what you&amp;#39;re going for and will get the most bang for your hardware. I compiled using the commands above (all those flags make a difference), and I&amp;#39;m using these two config files:&lt;/p&gt;

      &lt;p&gt;&lt;strong&gt;nvidia.txt&lt;/strong&gt;&lt;/p&gt;

      &lt;p&gt;&lt;pre is:raw&gt;&lt;code class="language-json"&gt;"gpu_threads_conf" :
  [
    // gpu: GeForce GT 755M architecture: 30
    //      memory: 1810/1991 MiB
    //      smx: 2
    { "index" : 0,
    "threads" : 124, "blocks" : 6,
    "bfactor" : 4, "bsleep" :  0,
    "affine_to_cpu" : false,
    },
    // gpu: GeForce GT 755M architecture: 30
    //      memory: 1972/1999 MiB
    //      smx: 2
    { "index" : 1,
    "threads" : 124, "blocks" : 6,
    "bfactor" : 4, "bsleep" :  0,
    "affine_to_cpu" : false,
    },
  ],&lt;/code&gt;&lt;/pre&gt;&lt;/p&gt;

      &lt;p&gt;&lt;strong&gt;cpu.txt&lt;/strong&gt;&lt;/p&gt;

      &lt;p&gt;&lt;pre is:raw&gt;&lt;code class="language-json"&gt;"cpu_threads_conf" :
  [
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 0 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 1 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 2 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 3 },
  ],&lt;/code&gt;&lt;/pre&gt;&lt;/p&gt;

      &lt;p&gt;On my system, this gets me around 600 H/s. Not bad, but low enough for me to start considering getting some old GPUs for my weakling Dell Vostro tower.&lt;/p&gt;

      &lt;h4&gt;CPU-full&lt;/h4&gt;

      &lt;p&gt;This profile is sans-GPU, if you ever want that. For this, I compiled without CUDA, using the normal install method but with this set of &lt;code&gt;cmake&lt;/code&gt; flags:&lt;/p&gt;

      &lt;p&gt;&lt;pre&gt;&lt;code class="language-shell"&gt;cmake -DCMAKE_LINK_STATIC=ON -DXMR-STAK_COMPILE=generic -DCUDA_ENABLE=OFF -DOpenCL_ENABLE=OFF -DMICROHTTPD_ENABLE=ON -DOpenSSL_ENABLE=ON ..&lt;/code&gt;&lt;/pre&gt;&lt;/p&gt;

      &lt;p&gt;Notice the &lt;code class="language-shell"&gt;-DCUDA_ENABLE=OFF&lt;/code&gt; which makes it CPU-only (on NVidia systems). Then this is my &lt;strong&gt;cpu.txt&lt;/strong&gt;, same as for the all-out profile above:&lt;/p&gt;

      &lt;p&gt;&lt;pre is:raw&gt;&lt;code class="language-json"&gt;"cpu_threads_conf" :
  [
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 0 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 1 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 2 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 3 },
  ],&lt;/code&gt;&lt;/pre&gt;&lt;/p&gt;

      &lt;p&gt;Running this profile gets me around 200 H/s.&lt;/p&gt;

      &lt;h4&gt;CPU-lite&lt;/h4&gt;

      &lt;p&gt;Here&amp;#39;s the one I really made the profiles for. I run this one in the background while I&amp;#39;m doing light or moderate regular computing. I&amp;#39;ll often run this alongside a handy &lt;code class="language-shell"&gt;monerod --max-concurrency 1&lt;/code&gt; to keep my local blockchain up to date.&lt;/p&gt;

      &lt;p&gt;Compile without CUDA as with CPU-full:&lt;/p&gt;

      &lt;pre&gt;&lt;code class="language-shell"&gt;cmake -DCMAKE_LINK_STATIC=ON -DXMR-STAK_COMPILE=generic -DCUDA_ENABLE=OFF -DOpenCL_ENABLE=OFF -DMICROHTTPD_ENABLE=ON -DOpenSSL_ENABLE=ON ..&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;Notice the &lt;code class="language-shell"&gt;-DCUDA_ENABLE=OFF&lt;/code&gt; which makes it CPU-only (on NVidia systems). Here&amp;#39;s the &lt;strong&gt;cpu.txt&lt;/strong&gt; for the lite version:&lt;/p&gt;

      &lt;p&gt;&lt;pre is:raw&gt;&lt;code class="language-json"&gt;"cpu_threads_conf" :
  [
    { "low_power_mode" : true, "no_prefetch" : false, "affine_to_cpu" : false },
    { "low_power_mode" : true, "no_prefetch" : false, "affine_to_cpu" : false },
  ],&lt;/code&gt;&lt;/pre&gt;&lt;/p&gt;

      &lt;p&gt;Running this profile keeps me around 60-75 H/s and doesn&amp;#39;t drain enough CPU power for me to notice most of the time. If you&amp;#39;re using a pool that offers a separate port for low-end CPUs, I&amp;#39;d use that for this profile.&lt;/p&gt;</content>
    <link href="https://jmthornton.net/blog/p/xmrstakcompile"/>
    <summary>After many errors and failures, I found a method to successfully compile the xmr-stak unified XMR miner with CUDA support</summary>
    <published>2018-02-04T09:00:00-06:00</published>
  </entry>
</feed>
