It’s interesting how a few optimisations can easily give us 20-30% speedups and a 10-20% reduction in those AWS charges, yet a lot of sites don’t use any of them.

Lazy Loading

Lazy loading is not hard to implement and there are vast resources out there explaining how to do it. The basic idea: if the user is only seeing the first 10% of the site, why load 100% of it immediately? We can load, say, the first 30% now, show it to the user, and lazily load the rest while they are looking at those images. We can also do priority fetching, where we fetch images based on their position in a priority queue and lazy-load the rest in the background.
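
A minimal sketch of this in the browser, assuming each image carries a hypothetical data-src attribute holding the real URL, uses an IntersectionObserver to start the download only when the image is close to the viewport (modern browsers also offer a native loading="lazy" attribute):

// Lazy-load images that carry a data-src attribute (sketch).
// Assumed markup: <img data-src="/photos/cat.jpg" alt="a cat">
const lazyImages = document.querySelectorAll('img[data-src]');

const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach(entry => {
    if (!entry.isIntersecting) return;  // still far from the viewport
    const img = entry.target;
    img.src = img.dataset.src;          // kick off the real download
    img.removeAttribute('data-src');
    obs.unobserve(img);                 // this image is done
  });
}, { rootMargin: '200px' });            // start fetching a bit before it scrolls in

lazyImages.forEach(img => observer.observe(img));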

Serving GIFs as Videos

Every popular social networking site heavily embeds animated GIFs. The interesting part is that the GIF89a spec notes clearly state the format was never designed for video storage or animation. Animated GIFs store each frame as a lossless GIF, and the degraded quality comes from GIFs being limited to a 256-color palette; MP4 video stores each key frame as a lossy JPEG. Delivering the same file as an MP4 video can often reduce file size by 85% without noticeable quality loss. In production, animated GIFs uploaded to Twitter are actually converted to video in order to improve user experience and reduce bandwidth consumption, and Imgur does something similar. We can use a tool like ffmpeg to convert our GIFs to MP4.

ffmpeg -i animated.gif -movflags faststart -pix_fmt yuv420p -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" video.mp4
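
Once the MP4 exists, one way to serve it (a sketch, assuming our markup exposes the converted file through a hypothetical data-mp4 attribute) is to swap the GIF <img> for a muted, looping, autoplaying <video>, which behaves just like the GIF:

// Replace GIF <img> elements with equivalent <video> elements (sketch).
// Assumed markup: <img src="animated.gif" data-mp4="video.mp4" width="480" height="270">
document.querySelectorAll('img[data-mp4]').forEach(img => {
  const video = document.createElement('video');
  video.src = img.dataset.mp4;
  video.width = img.width;
  video.height = img.height;
  // These flags make the video look and behave like an animated GIF.
  video.autoplay = true;
  video.loop = true;
  video.muted = true;
  video.playsInline = true;
  img.replaceWith(video);
});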

Caching

We can store images in an on-device cache so that the next time the user requests the same image it is served from the cache first, and only on a cache miss does the request go back to the server.
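
A rough sketch of this with a service worker and the browser Cache API (the cache name and the cache-first strategy here are assumptions, not a fixed recipe):

// sw.js — cache-first strategy for image requests (sketch)
self.addEventListener('fetch', event => {
  if (event.request.destination !== 'image') return;   // only handle images

  event.respondWith(
    caches.open('image-cache-v1').then(cache =>
      cache.match(event.request).then(cached => {
        if (cached) return cached;                      // cache hit: serve locally
        return fetch(event.request).then(response => {  // cache miss: go to network
          cache.put(event.request, response.clone());   // store a copy for next time
          return response;
        });
      })
    )
  );
});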

Image Sprites?

This might be a little controversial, but I believe using image sprites is a bad choice. Many developers used image spriting to reduce the number of HTTP requests by combining many images into a single larger image that is then sliced up where needed. It works, but it has a nasty cache-invalidation worst case: a change to a very tiny part of the image invalidates the whole image in the cache.

Baseline (Sequential) Images

Instead of waiting for the complete image, we can load it from top to bottom and show more of it to the user as more of it arrives.

Progressive Images

A better approach than baseline images is progressive images. They divide a given image into a number of scans: the first scan shows a blurry, low-quality image, and each subsequent scan adds an increasing level of detail. A lot of companies like Facebook, Yelp, Pinterest and Twitter use them. Facebook used them in their iOS app and saw data usage drop by 10% and images load 15% faster; for Yelp it reduced image size by 4.5%. The problems are that progressive images are slower to decode, and for some images it can be hard to tell when the image has completely loaded. We can use something like imagemin to build them.

//  Using gulp-imagemin to emit progressive (multi-scan) JPEGs
const gulp = require('gulp');
const imagemin = require('gulp-imagemin');

gulp.task('images', function () {
    return gulp.src('images/*.jpg')      // pick up the source JPEGs
        .pipe(imagemin({
            progressive: true            // write progressive scans instead of baseline
        }))
        .pipe(gulp.dest('dist'));        // write the optimised copies to dist/
});

Blurring Chroma

Our eyes are more forgiving of a loss of color detail (chroma) than of luminance (brightness). On average this yields a 15-50% reduction in file size. The simplest way to do it is to convert the image to the CIELAB color space and smooth out the transitions in the A and B channels.
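
A full CIELAB round trip is usually done in an image editor or an offline processing step; as a simpler, related sketch in Node, the sharp library can re-encode a JPEG with 4:2:0 chroma subsampling, which stores color at reduced resolution while keeping full-resolution luminance (library choice, paths and quality value are assumptions):

// Re-encode a JPEG with 4:2:0 chroma subsampling via sharp (sketch).
// Color channels are stored at half resolution in each dimension,
// while luminance stays full resolution — a simpler cousin of
// smoothing the A/B channels in CIELAB.
const sharp = require('sharp');

sharp('input.jpg')
  .jpeg({
    quality: 80,                // assumed quality setting
    chromaSubsampling: '4:2:0'  // subsample the color channels
  })
  .toFile('output.jpg')
  .then(info => console.log('written', info.size, 'bytes'));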

Lossless Compression

Lossless compression can be achieved by optimizing an image’s Huffman tables or removing the EXIF data added by digital cameras. ImgBot and mozjpeg are a few tools which support lossless JPEG compression.
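
A sketch of the same idea as a build step, using imagemin with the imagemin-jpegtran plugin to recompress JPEGs losslessly (paths and the CommonJS-style imagemin API shown are assumptions):

// Lossless JPEG optimisation: rebuild Huffman tables, no pixel changes (sketch).
const imagemin = require('imagemin');
const imageminJpegtran = require('imagemin-jpegtran');

(async () => {
  const files = await imagemin(['images/*.jpg'], {
    destination: 'dist/lossless',
    plugins: [imageminJpegtran()]   // jpegtran recompresses without touching pixels
  });
  console.log(`optimised ${files.length} images`);
})();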

Google’s Butteraugli and Guetzli

Butteraugli is an interesting tool that measures the perceived difference between two images. Guetzli is a JPEG encoder from Google that uses Butteraugli to find the smallest JPEG that is perceptually indistinguishable from the original to the human eye. Guetzli aims for a 20-30% reduction in data size and can be used in an ETL pipeline to shrink our image data. For reference, Facebook serves full-size images at around 85% quality, Google’s YouTube front page uses around 70-82%, and Wikipedia uses around 80%.
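
A sketch of wiring Guetzli into an imagemin-based pipeline via the imagemin-guetzli plugin (plugin choice, quality value and paths are assumptions; Guetzli is very CPU-hungry, so it belongs in an offline pipeline rather than on the request path):

// Perceptually-tuned JPEG encoding with Guetzli (sketch).
const imagemin = require('imagemin');
const imageminGuetzli = require('imagemin-guetzli');

(async () => {
  await imagemin(['uploads/*.jpg'], {
    destination: 'dist/guetzli',
    plugins: [imageminGuetzli({ quality: 84 })]  // Guetzli expects quality >= 84
  });
})();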

Remove Color Profiles

A color profile is the information describing the color space of a device. Serving an image with colors the user’s device can’t display is a waste of bandwidth; depending on the device, we can present different versions of the image.
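
A rough sketch with sharp, relying on the library’s default behaviour of converting output to sRGB and stripping metadata, including the embedded ICC profile (library choice and quality value are assumptions):

// Normalise to sRGB and drop the embedded color profile (sketch).
// sharp converts output to sRGB and strips metadata (EXIF, ICC profile)
// unless .withMetadata() is called, so simply re-encoding is enough here.
const sharp = require('sharp');

sharp('camera-original.jpg')
  .jpeg({ quality: 85 })   // assumed quality setting
  .toFile('stripped.jpg');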

Resizing Images Properly

Giving a small device like a smartphone an image sized for a large monitor and then resizing it on the device is a huge waste of bandwidth and computation. On low-end devices it is also easy to trigger memory swapping, which can eventually lead to swap death and crash the browser. In the long run it’s worth building an ETL pipeline that produces several sizes of every new image and then serves the version closest to the size of the device our application is being viewed on. In production, Twitter resized their images properly to improve user experience and saw decode time drop from ~400ms to a mere ~19ms.
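
A sketch of such a pipeline step with sharp, generating a handful of assumed breakpoint widths per uploaded image (paths and widths are hypothetical):

// Pre-generate several sizes of each image for different devices (sketch).
const sharp = require('sharp');

const widths = [320, 640, 1024, 1920];    // assumed device breakpoints

async function buildSizes(inputPath) {
  await Promise.all(widths.map(width =>
    sharp(inputPath)
      .resize({ width })                  // height scales to keep the aspect ratio
      .toFile(`dist/${width}/photo.jpg`)  // hypothetical output layout (dirs assumed to exist)
  ));
}

buildSizes('uploads/photo.jpg');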

Choosing the Right Format

A higher file size doesn’t always imply higher image quality; for a given image, some formats produce much larger files than others. A large portion of images on the internet is JPEG, a lossy compression format that discards information in order to save space. We can use newer formats like HEIF (announced by Apple), which is similar to JPEG but goes one step further: JPEG breaks an image up into blocks, and if one block is similar to another, HEIF records just the difference, which requires less storage space. We can also use SVG (Scalable Vector Graphics); the original Google logo was around 14,000 bytes, which with a few optimisations can be brought under 262 bytes and gzipped further down to 172 bytes. SVGs can be optimised by simplifying curves and removing hidden layers. We can also use a tool like SVGO, which helps by lowering the precision of the numbers in your file definitions. Each digit after the decimal point adds a byte, which is why changing the number of digits can heavily influence file size, but lowering precision can also visually affect how your shapes look.

npm i -g svgo
svgo input.svg --precision=1 -o output.svg