Using nofollow links in Hugo's Markdown

I seem to have more posts about nofollow links than actual nofollow links, but here’s one more. Markdown doesn’t support nofollow links by default, so you either have to write them in HTML or tweak your template to handle them. Update: after reviewing the options, I ended up using a bounce-URL instead.

Nofollow links with HTML: this is kinda simple. Just write the HTML directly in your Markdown, e.g. `<a href="https://example.com" rel="nofollow">anchor</a>`.

Nofollow links with Markdown: there are probably easier ways to do this, but I didn’t spot any off-hand.
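
If you want to sanity-check the result, a quick shell sketch like this lists external anchors in the generated site that still lack rel="nofollow" (it assumes Hugo's default public/ output directory; the patterns are illustrative, not from the original post):

```sh
# list external <a> tags in the generated site that lack rel="nofollow"
# (assumes the site was built into Hugo's default public/ directory)
grep -rhoE '<a [^>]*href="https?://[^"]*"[^>]*>' public/ \
  | grep -v 'rel="nofollow"' \
  | sort | uniq -c | sort -nr
```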

Using nofollow links in Hugo's Markdown »

Converting ancient content into markdown

So you want to take that ancient blog post or article that you put up somewhere and convert it into Markdown? This is what has worked for me. YMMV.

Find the content: crawl and mirror the site to get everything. This is kinda easy, and gets you all of the content that’s linked locally, though it doesn’t make viewing the files in a browser easy. This uses wget.
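
The excerpt doesn't show the exact invocation, but a common wget starting point for a local mirror looks like this (the URL is a placeholder):

```sh
# mirror a site for offline conversion; tweak flags to taste
wget --mirror --convert-links --adjust-extension \
     --page-requisites --no-parent https://old.example.com/blog/
```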

Converting ancient content into markdown »

Crawl a website to get a list of all URLs

Sometimes you just need a list of URLs for tracking. There are loud & fancy tools to crawl your site, but you can also just use wget:

wget --mirror --delete-after --no-directories http://your.website.com 2>&1 | grep '^--' | awk '{print $3}' | sort >urls.txt

Or, if you just want the URL paths (so /path/filename.htm instead of http://example.com/path/filename.htm), strip the scheme and host with sed:

wget --mirror --delete-after --no-directories http://your.website.com 2>&1 | grep '^--' | awk '{print $3}' | sort | sed 's/^.*\/\/[^\/]*//'
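
As a variant (not from the original post), wget's --spider mode walks the site without saving anything, which gets you the same list with less disk churn:

```sh
# crawl without downloading; extract URLs from the log afterwards
wget --spider --recursive --no-verbose --output-file=spider.log http://your.website.com
grep -oE 'https?://[^ "]+' spider.log | sort -u >urls.txt
```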

Crawl a website to get a list of all URLs »

Command lines for the weirdest things

Just a collection of command-line tweaks. These work on Linux, probably mostly on macOS too (and who knows, maybe even on Windows).

Basics / general command-line tips (todo: find some other sources). Pipe tricks:

| more - show output paginated (use space to get to the next page)
| less - similar to more, but with scroll up/down
| sort - sort the output lines alphabetically
| uniq - only unique lines (needs "sort" first)
| uniq -c - only unique lines, with the number of times each line was found (needs sort first)
| sort -nr - sort numerically ("n") in reverse order ("r", highest number first)
| wc -l - count the number of lines

Searching for things (more grep options)
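
To make the pipe tricks concrete, here's a small combined example (the filename is made up): count the most frequent words in a file, highest count first:

```sh
# one word per line, then count duplicates and rank them
tr -s ' ' '\n' < notes.txt | sort | uniq -c | sort -nr | head
```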

Command lines for the weirdest things »

CO2 in your meeting room / office

Want to know what level of CO2 you have in your meeting room or office space? Want to know when you need to start ventilating to stay productive? Here’s a simple calculator that works out how much CO2 people produce (when they’re not overly active), and what level that reaches in a closed space. [Interactive calculator: room size in meters (wide x long x high), number of people.]
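
The interactive calculator lives on the post itself, but the underlying arithmetic is simple. A back-of-the-envelope sketch, assuming roughly 18 liters of CO2 exhaled per hour per sedentary person (a commonly cited ballpark; the room numbers are made up):

```sh
# ppm of CO2 added per hour in a closed room
# assumed rate: ~18 L CO2/hour per sedentary person (ballpark figure)
awk -v w=5 -v l=6 -v h=2.5 -v people=4 'BEGIN {
  vol_l = w * l * h * 1000               # room volume in liters
  rise  = people * 18 / vol_l * 1000000  # added ppm per hour
  printf "CO2 rise: ~%.0f ppm per hour\n", rise
}'
```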

CO2 in your meeting room / office »

Tiny USB keyboard with ATMEGA 32u4 - it works!

After a USB keyboard with an ATTINY85 and a first try at one with the ATMEGA 32u4, I’m now at revision 2/3 of the ATMEGA 32u4 single-key USB keyboard.

Overview, same as before: a simple USB keyboard with 1 key, a reprogrammable tiny mechanical keyboard key, cheap enough to give away, actually works, debuggable. The “cheap” aspect was mostly to justify making & buying some :).

Hardware design: since the previous design mostly worked, I tweaked it to make two versions.

Tiny USB keyboard with ATMEGA 32u4 - it works! »

Crawling all (most) of the web's robots.txt comments

Starting from this tweet, I hacked together a few-line robots.txt comment parser. I thought it was fun enough to drop here.

Crawling the web for all robots.txt file comments:

curl -s -N http://s3.amazonaws.com/alexa-static/top-1m.csv.zip >top1m.zip && unzip -p top1m.zip >top1m.csv
while read li; do
  d=$(echo $li | cut -f2 -d,)
  curl -Ls -m10 $d/robots.txt | grep "^#" | sed "s/^/$d: /" | tee -a allrobots.txt
done < top1m.csv
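
Fetching a million domains sequentially takes a while. As a variant (not from the original post), GNU xargs can run the same fetch in parallel:

```sh
# fetch robots.txt comments in parallel (20 workers); needs GNU xargs
cut -f2 -d, top1m.csv \
  | xargs -P20 -I{} sh -c 'curl -Ls -m10 "{}/robots.txt" | grep "^#" | sed "s|^|{}: |"' \
  >>allrobots.txt
```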

Crawling all (most) of the web's robots.txt comments »