Website
What it's for
This site is a knowledge repository for topics I'm interested in. I could just make notes and store them privately, but by sharing I end up holding myself to a higher standard. There's also a chance someone out there might find something useful, which is a rewarding notion even if I don't actually find out about it.
How it's made
This website is currently built with a custom static site generator consisting of a few shell scripts. The scripts handle all of the file manipulation; there's no template language.
- img.sh - I run this whenever new images are added; it uses ImageMagick to convert each source image into four sizes (dimensions shown here are examples; a sketch of the commands follows this list):
- Greyscale dithered GIF: 407 x 305 px
- Full color JPG: 814 x 611 px
- Full color WebP: 814 x 611 px
- Full color high resolution JPG: 1628 x 1221 px
- topic.sh - this prompts me with questions and then creates a topic folder and its starter files
- build.sh - this generates all of the site's HTML files; it takes only milliseconds to run
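The conversion step in img.sh looks roughly like the following sketch. The sizes come from the list above, but the ImageMagick flags, file names, and directory layout are illustrative assumptions rather than the actual script:

#!/bin/sh
# Sketch only: sizes match the examples above, flags and naming are assumptions
src="$1"                      # e.g. originals/photo.jpg
name=$(basename "${src%.*}")  # photo

# Greyscale dithered GIF (small, used as the default inline image)
convert "$src" -resize 407x305 -colorspace Gray -dither FloydSteinberg -colors 16 "images/${name}.gif"

# Full color JPG and WebP (swapped in by the optional enhancement script)
convert "$src" -resize 814x611 -quality 75 "images/${name}.jpg"
convert "$src" -resize 814x611 -quality 75 "images/${name}.webp"

# Full color high resolution JPG (linked from the figcaption)
convert "$src" -resize 1628x1221 -quality 80 "images/${name}-hires.jpg"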
I store the code and content for the website in a git repo. I make updates locally and commit the changes; to make them live, I SSH into the server and do a git pull. The server is a Raspberry Pi 2 Model B in my home.
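Publishing is then just a couple of commands. The hostname, path, and remote setup below are placeholders, not my actual configuration:

# locally: commit (and push to whatever remote the server pulls from)
git add -A
git commit -m "Add new topic"
git push

# on the Raspberry Pi, over SSH
ssh pi@raspberrypi.local
cd ~/website
git pull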
Optimizations
Previously I was just loading oversized images directly in img tags. This works well in all browsers, but is very unfriendly to low-bandwidth visitors. To improve the site's performance in low-bandwidth situations, I started testing with Chrome's bandwidth throttling feature. I picked "Slow 3G" and immediately saw that the page images took over 40 seconds to load.
To improve the site rendering I made the following changes:
- Changed all img tags to picture tags (each with an img inside) pointing to the new low-resolution GIFs
- Added the CSS image-rendering: pixelated; to keep the scaled-up pixels sharp
- Added loading="lazy" to img tags, unless the image is at the top of the page
- Added figcaption tags with alternate text (a duplicate of the image's alt attribute) and a link to a high-resolution version of the image
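Putting those changes together, the markup for a single image now looks something like this; the file names, dimensions, and text are placeholders, and I'm assuming the pixelated rule lives in the stylesheet rather than inline:

<figure>
  <picture>
    <!-- no source tags by default; the optional JS adds them later -->
    <img src="images/photo.gif" width="814" height="611"
         loading="lazy" alt="A placeholder description">
  </picture>
  <figcaption>
    A placeholder description
    (<a href="images/photo-hires.jpg">high resolution</a>)
  </figcaption>
</figure>

Because the 407 x 305 px GIF is displayed at roughly twice its size, the image-rendering: pixelated rule keeps it crisp rather than blurry.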
These changes benefit all users, even those with old browsers or JavaScript disabled, but I also wanted to offer an enhanced experience for anyone on higher bandwidth with a newer browser. I added a completely optional JS script which tests for session storage access and basic event listener support, then adds a button (styled to look like a link) to the top right of the page. When clicked, it adds two source tags within each picture tag: one pointing to a WebP version of the image, the other to a JPG version. If WebP is supported, the browser favors it for its lower file size. The presence of these source tags effectively swaps the image inline from the low-resolution GIF to the higher-resolution WebP or JPG.
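A minimal sketch of how such an enhancement script could work, assuming the WebP and JPG files share their base name with the GIF; the button text, placement, and naming convention are assumptions rather than the actual code:

// Only offer the enhancement if the basics are available
function storageWorks() {
  try {
    sessionStorage.setItem('hd-test', '1');
    sessionStorage.removeItem('hd-test');
    return true;
  } catch (e) {
    return false;
  }
}

if (storageWorks() && 'addEventListener' in document) {
  var button = document.createElement('button');
  button.textContent = 'High quality images';
  button.className = 'hd-toggle'; // styled via CSS to look like a link, top right
  document.body.insertBefore(button, document.body.firstChild);

  button.addEventListener('click', function () {
    var pictures = document.querySelectorAll('picture');
    for (var i = 0; i < pictures.length; i++) {
      var img = pictures[i].querySelector('img');
      if (!img) continue;

      // Derive the WebP/JPG names from the GIF name (naming is an assumption)
      var base = img.getAttribute('src').replace(/\.gif$/, '');

      var jpg = document.createElement('source');
      jpg.srcset = base + '.jpg';
      jpg.type = 'image/jpeg';

      var webp = document.createElement('source');
      webp.srcset = base + '.webp';
      webp.type = 'image/webp';

      // source tags must precede the img; WebP first so it wins when supported
      pictures[i].insertBefore(jpg, img);
      pictures[i].insertBefore(webp, jpg);
    }
  });
}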
These changes reduced the load time over Slow 3G to 7 seconds. The added responsiveness helps high-bandwidth visitors too, just less noticeably.
I also like to check other aspects of the site's performance, and there are a number of ways to do this. The quickest is to run a Lighthouse performance test (also part of Chrome's developer tools). The site currently scores very well on these automated tests.
I intend to figure out how to do more retro-browser testing, but to start I like testing my site in Lynx. Lynx is text-based and runs from the command line. It nicely reveals whether the HTML structure of your content is well considered.
One of the nice side effects of the current image markup approach that I didn't anticipate is that even in text-based browsers, visitors can actually download the images if they want to.
Drawbacks
I'm mostly pleased with this approach, but there are tradeoffs too. The biggest one is that small scaled-up and pixelated versions of images work ok for general photography, but make less sense for charts and graphs (ironically like the images on this page).
Future considerations
The picture element and its children could also be used to serve specific images for high-dpi screens, so I may look into that kind of enhancement in the future.
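For example, a source element with density descriptors in its srcset could hand the larger JPG only to 2x screens; the file names here are placeholders:

<picture>
  <source srcset="images/photo.jpg 1x, images/photo-hires.jpg 2x" type="image/jpeg">
  <img src="images/photo.gif" alt="A placeholder description">
</picture>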
Additionally, once I figure out how to test the JavaScript on older computers/browsers, I'll aim to make it as foolproof as possible. I don't currently know how well it works, so I'm just hoping my tests are functional and don't produce extra errors.