How the sausage is (currently) made

I have already written about my thoughts on how ethical I think coding with AI is and how I currently feel about bot sitting as a coding tool, as well as the current state of this page as a product of bot sitting. To round off this collection of initial meta posts, I wanted to give a comprehensive recipe of my current workflow for creating the page and its content.

Ingredients

  • IDE: Cursor, a VS Code flavor that bundles various AI integrations into an all-in-one package. I mostly use it as a convenient entry point to agentic coding. It offers a paid plan that includes tokens for all major models and providers. Alternatively, you can use your own API keys if you want to avoid a subscription and pay as you go instead of paying a flat rate. I ran out the 14-day free trial without hitting the limit on the free tokens, so I think for hobby coding, the $20 subscription will be fully sufficient. I have not yet made up my mind on whether I will subscribe or use API keys if I continue with Cursor. I will see how fast I run through my $5 of API credit at Anthropic. Cursor shows this error when trying to use API keys for agents:
Agent and Edit rely on custom models that cannot be billed to an API key. Please use a Pro or Business subscription and/or disable API keys. Ask should still work.
  • Site generator: Jekyll. This lets me set up templates for blog and microblog posts and for page elements such as the header and footer, and build the site from these templates whenever I add content. Jekyll also generates the RSS feed.
  • Version control, website building and deployment to Neocities: GitHub with GitHub Actions.
  • Markdown editor: Obsidian

Workflow

In short, I write content for the page in Markdown on whatever local device is at hand, push the changes to GitHub, and the page gets built and deployed. If I want to change the actual page, I open Cursor, make a coffee and get ready to argue with robots or to try and fail myself.

Add content

Since the page’s blueprint is on GitHub, I am very flexible in the environment I use to work on the page. I write blog posts in bed using Ubuntu on an ancient (2011ish) MacBook Air, I write microblogs on my phone using the Obsidian and GitHub apps, or I change some stuff that I suddenly remember on my Windows machine at work. As long as I only add content and don’t want to change too much code, all I need to do is write a Markdown file with the correct YAML frontmatter in my ‘drafts’ folder and move it to the ‘blogposts’ folder when it’s done.
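For illustration, a new draft might look something like this. The exact frontmatter fields depend on the templates, and the values here are made up:

```markdown
---
layout: post
title: "How the sausage is (currently) made"
date: 2025-01-15
---

The post body, written in plain Markdown. Once it’s done, the file
moves from the ‘drafts’ folder to the ‘blogposts’ folder and gets
picked up by the next build.
```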

Update the page

I have a GitHub Action set up that runs on every push to the origin repository. The action spins up a Ruby environment, uses Jekyll to build pages from the blueprint into a ‘site’ directory and deploys the pages in that directory to Neocities using the CLI. My Neocities API key is stored in the GitHub repository as a secret, so that Actions has access to it and can push to Neocities on my behalf. This is an elegant setup: it means that Neocities only stores the current pages, while the actual source for the page lives in an independent place that I control (well, GitHub does, but I have local copies). This makes it very easy to host the page somewhere else, including on my own hardware, if I ever want to do that.
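A workflow file for this kind of setup could look roughly like the sketch below. This is not my actual file: the action versions, the Ruby version and the assumption that the Neocities CLI reads the key from a NEOCITIES_API_KEY environment variable are all illustrative.

```yaml
# .github/workflows/deploy.yml — hypothetical sketch
name: build-and-deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          ruby-version: "3.2"
          bundler-cache: true           # cache installed gems between runs
      - run: bundle exec jekyll build   # writes the built pages into _site/
      - run: |
          gem install neocities
          neocities push _site          # deploy the built pages
        env:
          NEOCITIES_API_KEY: ${{ secrets.NEOCITIES_API_KEY }}
```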

I could build with Jekyll locally and then deploy to Neocities, skipping the GitHub Action. However, using the current setup, I don’t have to worry about installing Ruby, Jekyll and Neocities CLI everywhere I work and I can update the page from my phone, since everything happens automagically on the GitHub servers, whenever I change something. This combination of Jekyll and GitHub Actions means that I have to interface with code very little, unless I want to.

One push-build-deploy cycle takes about 20-30 seconds on GitHub Actions. With the free plan, a private repository currently gets 2,000 minutes of Actions per month. At roughly half a minute per run, that’s about 4,000 pushes, so plenty for content updates and waves of 10 pushes while troubleshooting on main.

Developing the page

The actual coding part of changing the page is pretty straightforward with Cursor. I open my local copy of the git repository and outline what I want to achieve on a piece of paper or in my head. I try to break the tasks down into very concrete chunks so that I can start new chats with as little context as possible, as often as possible. The longer the chats become, the worse the bots perform in my experience. Every now and then, I ask Cursor-Claude to write a devlog or similar file that contains information about the page and setup, and when I need the bots to have wider context, I ask them to read that file first.

When something doesn’t work, a given bot doesn’t manage to fix it, and I have no idea what’s wrong either, I start a new chat, change the model and try again. Eventually, if nothing helps, I do the hard work of actually trying to fix it myself, which involves a lot of trial and error, googling, talking to web Claude and sometimes even thinking. These are the most frustrating parts, but also the most rewarding and valuable ones, since I actually tend to learn something from them. Ultimately, the troubleshooting is not that different from unassisted coding; it just takes longer to get in the zone when it’s not you who made the mess.

Since this page is not frequently read by anyone and I’m lazy, I just push whenever I think everything works as intended and check if it looks okay on the live page. The proper way would of course be to make a dev branch in my git repository, change stuff, build the site locally, check all the pages and then merge into main to deploy. But as written above, it doesn’t cost me anything to test on the live page. If everything breaks, I roll back the last commits and develop locally.
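Rolling back is a one-liner with git revert. A minimal demonstration in a throwaway repository (file contents and commit messages are made up):

```shell
# Demonstrate undoing a bad commit with git revert, in a throwaway repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "demo@example.com"
git config user.name "demo"
echo "working page" > index.html
git add . && git commit -qm "good state"
echo "broken page" > index.html
git add . && git commit -qm "oops, broke the page"
git revert --no-edit HEAD    # adds a new commit that undoes the last one
cat index.html               # back to "working page"
```

Because revert adds a new commit instead of rewriting history, the follow-up push triggers the Action again and the live page is rebuilt from the good state.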

Reflections on the setup

Using Jekyll for page generation has several advantages and drawbacks in my view. The advantages are that I avoid code repetition and don’t have to look at HTML whenever I want to add content. The blueprint for the page is very compact and Jekyll expands it into full pages for me. This means that I have an easier time reading the elements and pages, but also that I have to look at more files: to know what’s on the index page, I have to look at index.html, the default layout template, the header template, the footer template and so on. On the other hand, index.html just contains its content plus a total of maybe 10-20 lines of HTML and Liquid code, which is much less than the built index page with its hundreds of lines of HTML. I also only need to change one header file instead of updating the header on the index page, the blog page and so on (not that I would do it myself).
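As a sketch of what this looks like in practice (the file names and variables here are illustrative, not my actual ones), the index source only declares its layout and its own content, and Jekyll stitches in the shared pieces:

```html
---
layout: default
title: Home
---
<p>Welcome! The three most recent posts:</p>
<ul>
{% for post in site.posts limit: 3 %}
  <li><a href="{{ post.url }}">{{ post.title }}</a></li>
{% endfor %}
</ul>
```

A layout file like _layouts/default.html would then wrap this in the full HTML skeleton and pull in something like {% include header.html %} and {% include footer.html %}, which is why a header change touches exactly one file.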

In addition to being more readable yet fragmented for me, the Jekyll setup is also far more efficient for the bots to read. One advantage of bot sitting is that I don’t really have to care about the length of the code, because I don’t have to write it. However, maintaining it is still more efficient the cleaner the code is. Every repetition costs tokens, wastes energy on the model server and has the potential to lead the model on a goose chase. However, the fragmentation of the code can also lead the bots on goose chases, since they don’t necessarily understand the code structure, although they have access to the whole repository. From a bot perspective, I think that saving tokens is ultimately the most important argument for Jekyll.

The drawbacks are that Jekyll is a modern layer of impurity. It removes me further from the page by layering technology between me and the HTML. Again, I feel that my modern sensibilities and lazy reliance on convenient solutions are rubbing up against the spirit of the personal web. Ultimately, that is a pretty stupid concern, considering that bot sitting is far lazier and more impure than Jekyll in that regard. I just like to find fault in things.

Anyway, back to learning how to solve the Rubik’s cube faster!