
· 5 min read
Marcus Cemes

My inspiration for this project came from using the amazing gatsby-image around four years ago. By writing a GraphQL query, you could specify how imported images should be processed at build time:

```graphql
query {
  banner: file(relativePath: { eq: "images/background.jpg" }) {
    childImageSharp {
      fluid(maxWidth: 1920, quality: 80) {
        ...GatsbyImageSharpFluid
      }
    }
  }
}
```

This was great: it created several size variants in different codecs, and also generated a traced SVG preview that could be hard-coded into the HTML using Base64 encoding.
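The inlining trick boils down to encoding the SVG preview as a `data:` URI that can live directly in the page markup, so the placeholder renders before any network request. A minimal sketch of the encoding step (the SVG string below is a stand-in, not Gatsby's actual output):

```javascript
// Encode a (hypothetical) traced-SVG preview as a Base64 data URI.
// Gatsby generates the real preview at build time; this only shows the encoding.
const tracedSvg =
  '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 4 3"><path d="M0 3h4V1L2 2 0 0z"/></svg>';

const dataUri = `data:image/svg+xml;base64,${Buffer.from(tracedSvg).toString("base64")}`;

// The result can be hard-coded into the HTML as a placeholder:
// <img src="data:image/svg+xml;base64,PHN2ZyB4bWxucz0i..." />
```

Because the URI is just a string, it costs no extra request and swaps out seamlessly once the full-resolution image has loaded.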

What really sold me was coming across gatsby-transformer-sqip: with just a little extra boilerplate, it was possible to replace the preview image with one generated by Michael Fogleman's primitive library, written in Go:

```graphql
childImageSharp {
  sqip(mode: 1, numberOfPrimitives: 16, blur: 0) {
    dataURI
  }
  fluid(maxWidth: 960, quality: 70) {
    ...GatsbyImageSharpFluid_withWebp_noBase64
  }
}
```

```jsx
<Picture
  fluid={{
    ...image.childImageSharp.fluid,
    base64: image.childImageSharp.sqip.dataURI,
  }}
/>
```

It was very hackable! While the build times increased by a few orders of magnitude, the results were stunning. Using as few as 16 triangles, it was possible to approximate the image with only a few hundred bytes of SVG. I highly recommend checking out the GitHub README for some examples. This technique worked exceptionally well on my homepage, which featured mountains: it produced grey triangles that morphed into detailed peaks.

The time came when I wanted to move to a different stack, and I realised that I couldn't take gatsby-image with me. So, as any sane person would do, I decided to create my own library to empower build-time image processing!

It was a fantastic learning experience to get to know the nitty-gritty details of Node.js: I experimented with threading, separate processes, and a broker for distributed image processing, and over time this evolved into IPP. It was a self-built tool that I would extend or patch whenever I needed it to do something just a little more.

Did I need a CLI tool for batch processing (thumbnails I could upload to a backend)? A webpack plugin (to simulate gatsby-image)? Or perhaps a serverless λ function that uses @ipp/core to implement a cheap, infinitely scalable image processing backend? It was also (relatively) easy to mock up a Docusaurus plugin that integrates IPP for the demo on the front page of this website.

All of this came, however, at the cost of time. Around the time I started IPP, I also started my university studies. Some of you may have experienced that while this is the time you feel the most inspired, it is also the worst time to be taking on daily time sinks.

While I would love to be able to continue and polish IPP (it has already attracted some attention with, at the time of writing, 40 stars without any promotion), I simply lack the time. The biggest missing feature at the moment is solid documentation.

It truly was a pleasure to have a simple npm package with pre-compiled multi-platform binaries that could be pulled into any Node.js project; it's the thing I miss the most when trying out Rust, Go or Elixir for any web-related development (involving images!).

So what's next?

My vision for this project was to take it to a lower-level platform such as Rust. The main advantage would be platform agnosticism: most language ecosystems already provide some way of safely interfacing with WASM. Is it a bad idea to start a new project before the old one is even complete? Yes. But IPP taught me a lot, and I would like to apply that knowledge to start again and Do It Right This Time™️, in such a way that any backend stack could benefit from it.

The main hurdles that would need to be addressed off the top of my head are:

  • Compiling libraries from other languages such as Go (or rewriting them?)
  • Creating a modular plugin system in a statically-linked language
  • Distribution of WASM/binaries that are compatible with different ecosystems

While just the thought of working on a new project excites me, this will have to stay in my head for now.

If you would like to take IPP for a spin despite the pause in development, take a look at the source code; the core is very simple. There is also an example implementation in my Gatsby website repository. For me, it was a Swiss-army knife that helped make images look just a little bit better over the network, and I would definitely use it again!

IPP provides code-based, CLI-based and webpack-based interfaces, a few pipes out of the box for image resizing, conversion, tracing and primitive generation, and you can even create your own pipes by publishing a package locally or on npm!

For now, I'm signing off here, and I wish you all the best of luck with your own projects and endeavours.

· 6 min read
Marcus Cemes

The first question you need to ask yourself is:

Why bother? Why is website image optimisation so important?

If you are not even thinking about resizing images, then the answer is probably that you don't need to bother. You could just serve the original images to your users. It may not have an immediate effect, and you will probably get away with it. But what sets truly good websites apart is their attention to detail.

In short, it provides bandwidth savings (meaning fewer servers and less egress data, which is expensive in the cloud) and makes your website faster to load and more responsive (less work for the browser to downscale and paint the image), while also saving your users' data allowance and battery. The only downside is the added complexity of creating and serving these optimised images.
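In practice, "serving optimised images" usually means generating several size variants at build time and letting the browser pick the smallest one that fits via `srcset`. A hypothetical sketch of building that markup — the widths and file-naming scheme here are assumptions for illustration, not any real tool's output:

```javascript
// Build a srcset attribute for pre-generated size variants of an image.
// Assumes variants were written out as e.g. background-480.webp, background-960.webp, ...
const widths = [480, 960, 1440, 1920];

function buildSrcSet(baseName, ext) {
  return widths.map((w) => `/images/${baseName}-${w}.${ext} ${w}w`).join(", ");
}

const srcSet = buildSrcSet("background", "webp");
// e.g. "/images/background-480.webp 480w, /images/background-960.webp 960w, ..."

// The browser then downloads only the variant it needs:
// <img srcset="..." sizes="100vw" src="/images/background-960.webp" />
```

The downscaling itself would be done once at build time (Gatsby uses the sharp library for this), so the per-request cost is just choosing a URL.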

It's up to you to decide based on your particular needs; there are some [other guides][images-guide] that go into more detail. There is a recent movement towards providing better online experiences (you may have heard of Progressive Web Apps), and I believe that images play a large role in keeping the experience fast and immersive.