My note-making system for this year

The problem: I’m a tab hoarder. Anything I’m working on is generally kept open in a Firefox tab as work in progress, and typically I have 100-200 tabs open. Once I’ve finished a personal project, I close the tabs, sometimes bookmarking them first, but without really filing away the knowledge I’ve acquired in a useful way. Even when I’m documenting work projects, I’m not sure I’ve gathered the supporting information and ideas in a structured way, and I don’t think I’ve opened any bookmarks I’ve made in the last few years. When I want information back, I might fruitlessly search my browser history and end up searching from scratch to find it again. It’s a terrible system, and I rely too much on my memory and open tabs to recall information. Another side-effect is that my thoughts stay in my head in an unprocessed state. I’m not getting any younger, and my reliance on storing everything in my brain is not future-proof.

More …

Be more productive with your Bash history

If you spend time working in the terminal, whether for work or leisure, many of the commands you type depend on other recent commands, repeat earlier actions, or differ only slightly from commands you have run before. Gaining mastery over your shell history is a great way to become more productive in the terminal.
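
As a taste of what the full post covers, here is a minimal sketch of the kind of history settings that help; the exact values and key bindings shown are illustrative assumptions, not necessarily the configuration the post recommends:

```bash
# ~/.bashrc — illustrative history settings (values are arbitrary examples)
export HISTSIZE=10000            # keep more commands in the in-memory history
export HISTFILESIZE=20000        # and more in the on-disk history file
export HISTCONTROL=ignoreboth    # skip duplicates and commands starting with a space
export HISTTIMEFORMAT='%F %T '   # record a timestamp with each command
shopt -s histappend              # append to the history file rather than overwrite it

# Then, at the prompt:
#   Ctrl-R   searches history incrementally
#   !!       re-runs the previous command
#   !$       expands to the last argument of the previous command
```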

More …

A basic spell-checker for static sites

If your blog is generated with a static site generator such as Hugo or Jekyll, you might not catch spelling errors in your content. An ideal devops-style workflow would check your markdown quality and spelling automatically whenever you push a commit. As a starting point, I’m going to provide a simple script to spell-check your markdown manually before committing changes.
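
To give a flavour of the approach, a minimal sketch might look like the following; the content/ directory and the choice of aspell are assumptions for illustration, not necessarily what the full post uses:

```bash
#!/usr/bin/env bash
# Illustrative sketch: report unknown words in every markdown file under content/.
set -euo pipefail

find content -name '*.md' -print0 | while IFS= read -r -d '' file; do
    # "aspell list" reads text on stdin and prints misspelled words, one per line
    words=$(aspell --lang=en list < "$file" | sort -u)
    if [ -n "$words" ]; then
        printf '== %s ==\n%s\n' "$file" "$words"
    fi
done
```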

More …

Command-line downloads from sites which require login

…also known as “Solving the Working from Home bandwidth problem”. A common task I perform as part of my job is to download large files from websites onto a server. Often my aim is to download such files directly onto the Linux servers residing in the datacentre, and most of the time this is straightforward: open an ssh connection to the server and pass the URL to the wget command. The file is promptly downloaded directly to the server at several gigabits per second.
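
In its simplest form, that direct route looks something like the sketch below; the hostname and URL are placeholders, not real endpoints:

```bash
# Connect to the server in the datacentre...
ssh user@server.example.com

# ...then, on the server, fetch the file directly at datacentre speeds
wget https://downloads.example.com/large-file.iso
```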

More …