
Copyright © Anshul 2019

This book is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license.

Preface

A fast site is a good user experience (UX), and a satisfying UX leads to higher conversion rates. People like fast sites. Loading time is a major contributing factor to page abandonment. Along the way I will provide sample code and quick tutorials wherever possible, with explanations as detailed as possible. At its core, this book focuses on improving the quality of production builds.

This book showcases real-world trends and practices to improve the performance of websites and web apps.

Target Audience

This book is targeted at professional developers wishing to improve the performance of their websites and web apps. I’ll try my best to keep everything simple and cover as many aspects as possible.

Prerequisite

While this book is written for both beginners and senior developers, a basic understanding of JavaScript fundamentals is assumed. I will try to provide links for advanced topics wherever possible.

Note To Readers

I am neither affiliated with nor endorsed by any organisation or entity to promote their services or products. There may be better alternatives available that I am not aware of; I always welcome feedback and suggestions. All chapters are mostly independent, so you can start from whichever you find most interesting. I intend to publish all my books digitally for free; of course, I would appreciate it if you helped me buy a coffee.


Table Of Contents


Introduction

The web has evolved a lot in the past decade, from viewing static sites to playing AAA 3D games in the browser. Over 4.33 billion people were active internet users as of July 2019. In this ever-growing economy, website visibility and high conversion rates are highly important. Search engines like Google, among various other criteria, also rank websites on the basis of site speed. The study “The Need for Mobile Speed” found that 53% of mobile site visits are abandoned if the page takes longer than 3 seconds to load. Other research showed that a 2-second delay during a transaction resulted in shopping cart abandonment rates of up to 87%, and that 79% of shoppers dissatisfied with a website’s performance are less likely to buy from the same site again.

A 1-second delay in page response can result in a 7% reduction in conversions [if an e-commerce site is making $100K per day, a mere 1-second delay could potentially cost $2,500,000 in lost sales every year]. So there is a constant need to make the web faster and more accessible to everyone.
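The arithmetic behind that bracketed figure is straightforward; here is a quick sketch using the example numbers from the text:

```javascript
// Estimate yearly revenue lost to a 1-second delay, assuming a
// 7% conversion drop on $100K/day revenue (the figures from the text).
const dailyRevenue = 100000;   // $ per day
const conversionDrop = 0.07;   // 7% fewer conversions

const dailyLoss = Math.round(dailyRevenue * conversionDrop); // $7,000/day
const yearlyLoss = dailyLoss * 365;                          // $2,555,000/year

console.log(yearlyLoss); // 2555000, roughly the $2.5M quoted
```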

Images

A picture is worth a thousand words. According to the HTTP Archive, 60% of the data transferred to fetch a web page is images, composed of JPEGs, PNGs and GIFs. As of July 2017, images accounted for approximately 0.45MB of the content loaded for the average 1.0MB site.

Image optimization consists of several techniques, covered in the sections below.

Chrome Lighthouse and PageSpeed Insights are a few tools that will help in auditing a website for performance best practices.

Choosing Right Format

Larger files carry more information, but that information may simply be noise, or undetectable to the human eye.

Remember: a higher file size doesn’t always imply higher image quality.

Raster vs Vector Graphics

Raster (or bitmap) graphics represent images by encoding the values of each pixel within a rectangular grid of pixels. They are used where photorealism is necessary. Examples: JPEG, PNG

Vector graphics represent images using points, lines and polygons, offering high resolution and zoom independence. Example: SVG

JPEG:

JPEG originated in 1992 and has left its footprint in phones, digital cameras and webcams. 45% of the images seen on sites crawled by the HTTP Archive are JPEGs. JPEG uses lossy compression, discarding information in order to save space.

JPEG Compression Modes:

Baseline [sequential] JPEG: users see the top of the image first, with more of it revealed as the image loads.

Progressive JPEG: progressive JPEGs, or PJPEGs, divide a given image into a number of scans. The first scan shows a blurry, low-quality image, and each subsequent scan progressively adds an increasing level of detail.

Lossless: this can be achieved by optimizing an image’s Huffman tables or removing EXIF data added by digital cameras. Jpegoptim, ImgBot and MozJPEG are a few tools which support lossless JPEG compression.

Progressive JPEGs

These are currently widely used in the industry, so I decided to cover them specially. PJPEGs allow users to see roughly what an image is when only part of the file has been received, letting them decide whether to wait for it to fully load.

PJPEGs compress better than baseline JPEGs, consuming 3-10% less bandwidth for images over 10KB. This is because in baseline JPEGs blocks are encoded one at a time, whereas in PJPEGs the Discrete Cosine Transform coefficients across more than one block can be encoded together; in addition, each scan can have its own dedicated optional Huffman table, which leads to better compression.

PJPEGs in Production

Disadvantages Of PJPEGs

PJPEGs are slower to decode than baseline JPEGs, which is a concern on low-end mobile devices with limited resources. For smaller images, PJPEGs can also be larger than their baseline counterparts.

For some users PJPEGs can be disadvantageous, as it can become hard to tell when an image has completely loaded.

Creating PJPEGs

ImageMagick and imagemin support exporting progressive JPEGs.

// Using imagemin via gulp to export progressive JPEGs
const gulp = require('gulp');
const imagemin = require('gulp-imagemin');

gulp.task('images', function () {
    return gulp.src('images/*.jpg')   // all JPEGs in images/
        .pipe(imagemin({
            progressive: true         // re-encode as progressive
        }))
        .pipe(gulp.dest('dist'));     // write results to dist/
});

Butteraugli: It is a model for measuring the difference between two images based on human perception.

Blurring Chroma

This technique exploits a limitation of the human eye: we are more forgiving of the loss of color detail in an image than of the loss of luminance (brightness) detail. In the average case this results in a 15-50% reduction in file size. The simplest way to do this is to convert the image to the CIELAB color space and smooth out the transitions in the A and B channels.
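The text describes smoothing in CIELAB; as a simplified illustration of the same eye limitation, here is a toy sketch using the more common luma/chroma (YCbCr) split, blurring only the chroma channels while leaving luminance untouched. This is not a production encoder, just the idea:

```javascript
// Toy illustration: split RGB into luma (Y) + chroma (Cb, Cr),
// then blur only the chroma. Real encoders work per-block;
// this simply averages chroma across a tiny "image".
function rgbToYCbCr([r, g, b]) {
  return [
    0.299 * r + 0.587 * g + 0.114 * b,           // Y  (luminance)
    128 - 0.168736 * r - 0.331264 * g + 0.5 * b, // Cb (blue chroma)
    128 + 0.5 * r - 0.418688 * g - 0.081312 * b, // Cr (red chroma)
  ];
}

const pixels = [[200, 30, 40], [190, 35, 45], [210, 25, 35]];
const ycc = pixels.map(rgbToYCbCr);

// Average Cb and Cr across pixels, but keep each pixel's own Y.
const avg = (i) => ycc.reduce((s, p) => s + p[i], 0) / ycc.length;
const blurred = ycc.map(([y]) => [y, avg(1), avg(2)]);

console.log(blurred[0][0] === ycc[0][0]); // true: luma is untouched
```

Because the brightness channel is preserved exactly, the result still looks sharp to the eye even though color detail was thrown away.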

Guetzli

Modern JPEG encoders attempt to produce smaller, higher-fidelity JPEG files while maintaining compatibility with existing browsers. Guetzli is a JPEG encoder from Google that tries to find the smallest JPEG that is perceptually indistinguishable from the original to the human eye. At its core it uses Butteraugli to measure differences. Guetzli aims to achieve a 20-30% reduction in data size. The only downside is that Guetzli generates only sequential (non-progressive) JPEGs, because of the faster decompression speed they offer.

Guetzli is therefore more suitable when you’re optimizing images as part of a build process for a static site, or in situations where image optimization is not performed on demand.

Remember: image file size is much more dependent on the choice of quality than on the choice of codec, so always ship images at a quality below 100.

Industry Trends:

Facebook full-size images: 85

Windows live background: 82

Wikipedia images: 80

Google Images thumbnails: 74–76

YouTube frontpage JPGs: 70–82

Yahoo frontpage JPGs: 69–91

Twitter images: 30–100

HEIF:

High Efficiency Image File Format, or HEIF, is a relatively new format for images and image sequences, storing HEVC-encoded images. At WWDC 2017 Apple announced it would switch to HEIF over JPEG for iOS, citing up to 2× savings on file size. HEIF uses a video compression technology called HEVC (High Efficiency Video Coding). HEIF is similar to JPEG but goes one step further: like JPEG, it breaks an image up into blocks, but if one block is similar to another, HEIF records just the difference, which requires less storage space. Unlike JPEG, which is always a single image, HEIF can hold a single image or a sequence of images. HEIF supports 16-bit color whereas JPEG uses only 8-bit; smaller size and higher quality give HEIF a real edge over JPEG.

Industry

In 2019 Canon adopted the HEIF format in its flagship professional camera, the Canon EOS-1D X Mark III; it may only be a matter of time before companies like Nikon, Sony and Hasselblad fall in line, burying JPEG and making HEIF the standard.

As of October 2019, no browser supports HEIF natively. I personally think HEIF is highly promising; let’s see how it turns out :)

SVG:

Scalable Vector Graphics, or SVG, is an Extensible Markup Language (XML) based vector image format for two-dimensional graphics, with support for interactivity and animation. The original famous Google logo was around 14,000 bytes; with a few optimizations I managed to build a similar one in just 262 bytes, and the gzipped version is just 172 bytes.

SVG Optimization

Instead of paths, use predefined SVG shapes like <rect>, <circle> and <polygon>. If you must use paths, try to reduce your curves and paths; simplifying will reduce the overall size. Delete layers that are invisible. Avoid Illustrator or Photoshop effects, as they get converted to large raster images.

SVGO is a tool which helps optimize SVGs, for example by lowering the precision of the numbers in your file definitions. Each digit after the decimal point adds a byte, which is why changing the number of digits can heavily influence file size; however, changing precision can also visually impact how your shapes look.

SVGO command line tutorial:

npm i -g svgo
svgo input.svg --precision=1 -o output.svg
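To see what lowering precision does, here is a toy sketch of the idea (not SVGO's actual implementation): round every number in a path string to a given number of decimal digits.

```javascript
// Toy version of SVGO's precision option: round every number in an
// SVG path string to `precision` decimal digits.
function roundPath(d, precision) {
  return d.replace(/-?\d*\.?\d+/g, (n) =>
    String(Number(Number(n).toFixed(precision))));
}

const d = 'M10.12345 20.98765 L30.55555 40.44444';
console.log(roundPath(d, 1)); // "M10.1 21 L30.6 40.4"
```

Fewer digits means fewer bytes, at the cost of slightly shifted coordinates, which is exactly the trade-off `--precision` controls.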

SVG is text-based, so make sure to minify your SVG files, and use gzip or Brotli to compress them further. All major browsers can decompress gzip responses natively.

Remove Color Profiles:

A color profile is information describing the color space of a device. Profiles ensure an image looks as similar as possible on different kinds of screens. Images can have an embedded color profile, as described by the International Color Consortium (ICC), to represent precisely how colors should appear. Embedded color profiles can also heavily increase the size of images (>100KB). Tools like ImageOptim automatically remove them.

Note: removing the color profile may cause the image to look different on some screens, so do experiment with the trade-off beforehand.

Resize your Images Properly:

Devices come in varying sizes. When a browser fetches an image, it has to decode it from the original source format into a bitmap in memory. Larger images also come with increased memory costs. Loading a large image and then resizing it is highly inefficient and resource-intensive; on low-end devices it can trigger memory swapping, which could eventually lead to swap death and crash the browser.
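The memory cost is easy to estimate: a decoded bitmap typically takes about 4 bytes per pixel (RGBA), regardless of how small the compressed file was on disk. A quick sketch:

```javascript
// Decoded-bitmap memory cost: ~4 bytes per pixel (RGBA),
// independent of the compressed file size on disk.
function decodedSizeMB(width, height, bytesPerPixel = 4) {
  return (width * height * bytesPerPixel) / (1024 * 1024);
}

// A 4000x3000 photo decoded just to fill a 400x300 slot:
console.log(decodedSizeMB(4000, 3000).toFixed(1)); // "45.8" MB in memory
console.log(decodedSizeMB(400, 300).toFixed(2));   // "0.46" MB in memory
```

Serving an image already sized for its slot avoids paying that ~100× memory difference on every page view.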

srcset allows us to serve image with the correct size:

<img srcset="myphoto_320w.jpg,
             myphoto_640w.jpg 2x,
             myphoto_960w.jpg 3x"
     src="myphoto_960w.jpg" alt="Image Here">

Industry Practices:

GIF vs Video:

GIFs are all over the internet; everyone is sending them, including me ^_^. Every popular social networking site heavily embeds animated GIFs. Interestingly, the GIF89a spec clearly states that the format was never designed for video storage or animation. Animated GIFs store each frame as a lossless GIF, and their degraded quality comes from GIFs being limited to a 256-color palette.

MP4 video stores each key frame as a lossy JPEG. Delivering the same file as an MP4 video can often reduce file size by 85% without noticeable quality loss.

You can use ffmpeg to convert animated GIFs to H.264 MP4s:

ffmpeg -i animated.gif -movflags faststart -pix_fmt yuv420p -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" video.mp4

Industry Practices:

Fun Facts

In the past, many people have argued with me in favor of image sprites. Developers used image spriting to reduce the number of HTTP requests by combining many small images into a single larger image that is then sliced as needed. It works, but it has a cache-invalidation worst case: a change to a tiny part of the image invalidates the whole sprite in the cache.

Javascript and CSS

Refactor your Codebase

Dead code is still downloaded, parsed, and compiled by the browser. Refactor your codebase periodically: every time you make changes, there is a chance of leaving behind dead code that does absolutely nothing, unused CSS rules, or libraries which are no longer used. You can use Chrome Lighthouse to check for dead code in your website or webapp.

Always minify CSS and JavaScript. Minification significantly reduces the size of text-based resources, which in turn decreases loading time.
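In practice you would use a tool like Terser or cssnano in your build pipeline; as a rough illustration of what minification does, here is a naive CSS comment/whitespace stripper (a sketch only, real minifiers do far more):

```javascript
// Naive illustration of CSS minification: strip comments and
// collapse whitespace. Real minifiers (cssnano, csso) also rewrite
// values, merge rules, and more.
function naiveMinifyCss(css) {
  return css
    .replace(/\/\*[\s\S]*?\*\//g, '')   // remove /* comments */
    .replace(/\s+/g, ' ')               // collapse runs of whitespace
    .replace(/\s*([{}:;,])\s*/g, '$1')  // drop spaces around punctuation
    .trim();
}

const css = `
/* header styles */
.header {
  color: red;
  margin: 0 auto;
}
`;
console.log(naiveMinifyCss(css)); // ".header{color:red;margin:0 auto;}"
```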

To be continued…

Last updated on: 07/12/2019