SVG Low Quality Image Placeholders

Created on 11 May 2025

Problem/Motivation

Drupal has supported configurable lazy vs eager image loading for a couple of years, since 📌 Leverage the 'loading' html attribute to enable lazy-load by default for images in Drupal core (fixed) and related issues.

There can be a trade-off in a couple of situations:

1. Large 'hero' images that are known to be above the fold (e.g. a 'main image' field on nodes) can be configured to load eager, but then it's potentially quite a large file to download.

2. With media embeds in ckeditor5 or views lists, we don't really want editors to have to make the decision. Similarly we don't really have a way in views to show the first six images in a list eager and the next 24 lazy, especially when rendering view modes because it would mess up render caching.

For a lot of sites, things are predictable enough on most pages that what we have so far is a big step beyond everything being loaded eagerly, and it covers the majority of situations. But when things get less predictable, you can still end up with lazy images above the fold and eager images below the fold, and eagerly loaded images still compete for bandwidth.

A possible solution is Low Quality Image Placeholders (LQIP), a technique that has been around for a while but, as far as I know, not used much in Drupal - the only thing I've found so far is https://www.drupal.org/project/imageapi_optimize_lqip.

The idea of LQIP is to eagerly load a very small (about 1kb) low quality image placeholder, and then lazily load the full image, which replaces the placeholder without impacting the Largest Contentful Paint score (because it has the same dimensions).

For Lighthouse and other LCP tools, because the full image renders at the same dimensions as the placeholder, only the first (placeholder) image is counted towards LCP.

For actual site visitors (who are the people we should prioritise, rather than tricking Lighthouse algorithms), they see a pixelated or blurry version of the image at first, which comes into focus when the full image has loaded - like a re-implementation of progressive JPEG rendering in HTML. If it happens quickly enough, you won't even notice.

This would give us a third option beyond eager and lazy - lazy with LQIP. If it works well enough, we could make it the default, because compared to plain eager or lazy it can't go too badly wrong in either direction.

However LQIP has its own trade-offs.

The oldest article I can find on LQIP (and the first result on Google) is from 2017: https://medium.com/@imgix/lqip-your-images-for-fast-loading-2523d9ee4a62

It loads the smaller image as its own file, and while it talks about 'low bandwidth' connections it doesn't discuss latency. Latency can be 500ms or more (slow 4g or middling 3g connections), at which point the latency of the extra request can be more of an issue than the actual file size. Additionally, adding more eagerly loaded files could mean the lazily loaded files are loaded even later.

So I think implementing LQIP like this would be counter-productive in a lot of situations - it could make things worse.

There are also problems with the actual placeholder images - highly compressed PNGs or JPEGs look very blocky and 'wrong' to users, so if they actually look at the placeholder before the real image loads, it will seem off.

So I had a thought - what if, instead of an image, we used CSS or SVG to approximate the contours of the image, without loading a separate file? The placeholdering would be more obvious, but it would be extremely fast, and potentially a more 'honest' placeholder than a super-low-res JPEG. It would load without any additional HTTP requests, and might still be 'good enough' at approximating the image - arguably better, because SVG can blur the image without a large file size.

And it turns out, of course someone else already had the same idea:

https://github.com/denisbrodbeck/sqip

But... it's only implemented in Node and Go. It would be possible to write something with a Node and/or Go dependency, but that limits its applicability. There are also web services that support SQIP creation, but that's an external dependency that could go down at any time.

I think this gives us two options:

1. Base64-encode a low-res image style directly in the image tag (sketched below).

2. See if we can implement SQIP in PHP.
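For illustration, option 1 boils down to something like this - a minimal sketch, assuming $style_uri points at an already-generated low-res derivative:

```php
<?php
// Option 1 in miniature: read an already-generated low-res derivative off
// disk and inline it as a data URI. $style_uri is a stand-in path, not an
// existing API.
$data = base64_encode(file_get_contents($style_uri));
echo '<img src="data:image/webp;base64,' . $data . '" alt="" width="1200" height="800">';
```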

Steps to reproduce

Proposed resolution

I looked for a PHP implementation of SQIP and couldn't find one. However, both GD and ImageMagick support reading the colour at a particular pixel location: https://www.php.net/manual/en/function.imagecolorat.php / https://www.php.net/manual/en/imagick.getimagepixelcolor.php

So... I think we could do something like this:

1. Aggressively downsize the original image so that it's highly pixelated, e.g. we could try 25x25 for a 250x250 image; the ratio could be configurable.

2. Build an SVG based on the above - at 25x25 that would be 625 squares.

2a. We might want to 'compress' the image further by sorting all of the colours and generating a colour palette from a subset of them, say 64 colours. That way, adjacent squares with the same colour could be merged into a single rectangle, which should allow for a higher initial resolution.

3. Apply a gaussian blur to the SVG. This would bring back some of the contours and colour depth that we've removed, and it will look like 'definitely a placeholder' instead of a suspiciously low-res image - which is the idea behind SQIP. (A rough sketch of these steps follows this list.)
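As a very rough sketch of steps 1-3 using GD only - the function name and defaults are illustrative, not an existing API, and the rectangle-merging from step 2a is omitted:

```php
<?php

// Hedged sketch of the pixelate-then-blur SVG placeholder described in
// steps 1-3. generate_svg_placeholder() is a hypothetical name.
function generate_svg_placeholder(string $source_path, int $columns = 25): string {
  // Step 1: aggressively downsize the original so it's highly pixelated.
  $source = imagecreatefromstring(file_get_contents($source_path));
  [$width, $height] = [imagesx($source), imagesy($source)];
  $rows = max(1, (int) round($columns * $height / $width));
  $small = imagecreatetruecolor($columns, $rows);
  imagecopyresampled($small, $source, 0, 0, 0, 0, $columns, $rows, $width, $height);

  // Step 2: one SVG rect per pixel of the downsized image (625 at 25x25).
  $rects = '';
  for ($y = 0; $y < $rows; $y++) {
    for ($x = 0; $x < $columns; $x++) {
      $rgb = imagecolorat($small, $x, $y);
      $rects .= sprintf('<rect x="%d" y="%d" width="1" height="1" fill="#%06x"/>', $x, $y, $rgb & 0xFFFFFF);
    }
  }

  // Step 3: gaussian blur over the whole grid, so it reads as "definitely
  // a placeholder" rather than a suspiciously low-res image.
  return sprintf(
    '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 %1$d %2$d" preserveAspectRatio="none">'
    . '<filter id="b"><feGaussianBlur stdDeviation="0.8"/></filter>'
    . '<g filter="url(#b)">%3$s</g></svg>',
    $columns, $rows, $rects
  );
}
```

At 625 rects the raw SVG is tens of kilobytes, so the palette/rectangle-merging idea from 2a (or a smaller grid) would matter for keeping the inlined payload small.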

This can all be done in an image style, but we'd need to queue the image style creation because it won't generally be HTTP-requested - instead we'd inline the SVG in the image tag, reading it off disk when generating the responsive image element.

This makes the initial HTML document a bit bigger, but should only be a few kb on most pages.

It may be that there's a PHP-native way to do this logic, but I haven't found it yet.

Another approach would be to generate a ver

Remaining tasks

There are two separate things to look at:

1. Adding support for inline LQIP to responsive images; this could be a base64-encoded image or an SVG, but the config would be the same.

2. Trying an SQIP implementation to see if we can make something that works.

User interface changes

Introduced terminology

SVG Low Quality Image Placeholder.

API changes

Data model changes

Release notes snippet

✨ Feature request
Status

Active

Version

11.0 🔥

Component

image system

Created by

🇬🇧 United Kingdom catch

  • Performance

Comments & Activities

  • Issue created by @catch
  • 🇬🇧 United Kingdom catch

    Having written it up, the most important thing here is the 'inline' part - if we can make that work, then SVG hopefully lets us make better placeholders, but a tiny WebP would work too.

    For the image style, we don't actually need a queue. When rendering the HTML, if the placeholder derivative file exists on disk we can load it and inline it; if it doesn't exist, we can render the URL, set a max-age of 0 (or 30s), and disable the placeholder. When the URL is visited, it'll create the file on disk, and the next time it's rendered it'll get inlined. This should only happen once in the lifecycle of an individual image, and often immediately after the content is created.
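    A hedged sketch of that render-time flow - the function and helper names are hypothetical, not existing Drupal APIs:

    ```php
    <?php

    // Returns inline SVG markup when the derivative exists, NULL otherwise.
    // lqip_placeholder() and placeholder_derivative_url() are hypothetical.
    function lqip_placeholder(string $derivative_uri, array &$build): ?string {
      if (file_exists($derivative_uri)) {
        // Derivative already on disk: read and inline it, no extra request.
        return file_get_contents($derivative_uri);
      }
      // Not generated yet: disable the placeholder for this render, emit the
      // derivative URL (visiting it creates the file on disk), and set
      // max-age 0 (or 30s) so a later render picks up the inlined version.
      $build['#cache']['max-age'] = 0;
      $build['#placeholder_url'] = placeholder_derivative_url($derivative_uri);
      return NULL;
    }
    ```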

  • 🇬🇧 United Kingdom catch

    https://github.com/Otamay/potracio is a PHP port of potrace that's GPL licensed.

    Some good discussion of the various placeholder generation approaches in https://github.com/axe312ger/sqip/issues/116

  • 🇬🇧 United Kingdom catch

    https://leanrada.com/notes/css-only-lqip looks interesting and potentially adaptable.

    https://csswizardry.com/2023/09/the-ultimate-lqip-lcp-technique/ explains how a bad implementation can do nothing or worse than nothing.

  • 🇪🇨 Ecuador jwilson3

    Taking https://csswizardry.com/2023/09/the-ultimate-lqip-lcp-technique/ into account, if a goal is to have the LQIP counted as the LCP and avoid the full image overriding it, then it seems like the CSS-only version might not work, since it is only a 3x2 pixel image. On the other hand, I wonder how the inline SVG approach would fare in the LCP calculation: SQIP relies on a client-side CSS blur technique, which is processor intensive. It seems there are tradeoffs all around; the comment on https://github.com/axe312ger/sqip/issues/116 seems to suggest that doing the image blur on the server side would be beneficial, since lots of images with client-side blur is CPU/GPU intensive.

    Big fan of SVG here, but thinking generally, PHP and Drupal seem better positioned for a server-side raster-image LQIP approach, as long as we can figure out an algorithm that reaches 0.055 BPP and, ideally, a WebP conversion step, assuming it's available server-side.

    The choices and tradeoffs come down to where to put the burden:

    • One time server-side in memory raster with blur using tools natively available to PHP.
    • One time server-side raster image without blur using tools natively available to PHP + CSS client-side blur.
    • One time server-side vector image generation using non-native tooling (or writing a PHP library) + CSS client-side blur. NEEDS LCP validation.


  • 🇬🇧 United Kingdom catch

    if a goal is to have the LQIP counted as the LCP and avoid the full image overriding it, then it seems like the CSS-only version might not work since it is only a 3x2 pixel image (though it's uncertain if CSS blur affects the bits-per-pixel calculation -- though I doubt it).

    This ought to be testable with Chrome/Chromium itself - the performance log shows the LCP candidates (this is the basis of the LCP calculations for the core performance graphs: https://gander.tag1.io/). It definitely needs to be tested like that, but I don't think we should rule the CSS approach out until we have. We'd need to check that the img tag with the CSS approach is treated as an LCP candidate, and that when the actual image has loaded, it's not a new LCP candidate.

    if we're trying to get a sufficient BPP ratio for really large above-the-fold hero images maybe it is better to just generate and store an image server side?

    We'd still need to embed the image as a base64-encoded string in the HTML to avoid doubling the HTTP requests. Once you get to the point of loading the LQIP from disk it undermines the entire point IMO - latency is usually a bigger problem than file size overall, especially since we already WebP- or AVIF-compress (and resize) the final image.

  • 🇨🇭 Switzerland 4aficiona2

    Thanks for addressing this and moving this forward! Would be really nice to have the core option "lazy with LQIP" like you proposed.

    Technique-wise, I'm not sure the base64 variant/option is the most performant and sustainable one.

    We'd still need to embed the image as a base64 encoded string in the HTML to avoid doubling the http requests. Once you get to the point of loading the LQIP from disk it undermines the entire point IMO

    Also referencing Harry here, even though it's from 2017: https://csswizardry.com/2017/02/base64-encoding-and-performance/

    I'd favor the one-time server-side generation of the blurred LQIP or SQIP image over a client-side blur, which will consume more energy (since it runs on each request) and depends on the capabilities of the user's device.

    Having picture / srcset / sizes in mind (https://developer.mozilla.org/en-US/docs/Web/HTML/Guides/Responsive_imag...), using an actual remote image src also leaves more freedom when handling this in image styles.

  • 🇪🇨 Ecuador jwilson3

    Good point about latency. Thanks for clarifying it. I read that in the IS, but didn't understand, or at least it didn't sink in.

    Thinking a bit more on server-side blur: to get a decent-looking blur baked into a raster image, the placeholder needs to be relatively large (e.g., 600x400), unlike a tiny 15x10 that works fine when upscaled and blurred with CSS client-side. That larger size means more bytes, which makes base64 inlining less appealing due to HTML bloat. And even then, a 600x400 server-side blurred placeholder might still look blocky when upscaled to the final display size, especially on high-DPI screens.

    For server-side blur, there's a point of diminishing returns at around ~10% of the real image size, or ~5KB in payload. A 600x400 image with heavy blur applied could be in the 10-40KB range.

  • 🇬🇧 United Kingdom catch

    using an actual remote image src IMO also leaves more freedom when handling this in image styles.

    I don't think an actual remote image is a good option though, because it doubles the HTTP requests - once for the placeholder, once for the image itself. The placeholder has to be loaded eagerly, which would undermine a default of 'LQIP + lazy load' for views listings and similar, where a lot of content might be below the fold - then it could be a lot more than double the requests.

    I'd favor the one-time serverside generation of the blurred LQIP or SQIP image over a client-side blur which will consume more energy (since its for each request) than doing this once on the serverside and does not depend on the capabilities of the users device.

    This depends though - a client-side blur of a bitmap is going to take more energy than just serving a pre-blurred bitmap; it's hard to tell exactly how much, but it'll be more.

    But rendering a blurred SVG or pure-CSS placeholder, especially if there are zero additional HTTP requests and it's a small addition to HTML page weight, might be very cheap - especially in comparison to the rest of the page.

    The more I think about this, the more the options seem counter-productive in one way or another (bloated HTML, extra HTTP requests etc.), but either SQIP or the CSS-only LQIP seems like it might be viable - as long as we're happy with the actual end-user experience, and as long as they don't fail to get registered as the LCP.

  • 🇪🇨 Ecuador jwilson3

    While the server-side blur is probably feasible, I don't think this precludes us from considering tiny inline images (smaller than 20x20px).

    Maybe next steps could be to create a few simple HTML page examples using the different approaches to see how Lighthouse LCP behaves.

    Proposed tests:

    • Baseline 1: JPG Hero image with lazy loading (Drupal's default option for all images).
    • Baseline 2: JPG Hero image with eager loading.
    • LQIP w/o CSS blur: Hero image with the basic LQIP technique using a tiny base64 inline WebP placeholder (a scaled-down version of the hero, sized to no more than 20×20 pixels).
    • LQIP w/ CSS blur: Hero image with LQIP technique using a tiny base64 inline WebP placeholder image with client-side blur applied.
    • SQIP
    • Pure CSS LQIP.

    Points to consider for fair comparison:

    1. payload size: inline WebP base64 versus inline SVG versus CSS (+ JS) size.
    2. placeholder render quality: a CSS blur will look good, but could the <20×20px base64 LQIP image with no blur applied work (saving client-side processing power and shaving bytes off CSS)?
    3. load experience: is there any jank in the visual shift from placeholder to full-res image? Is it annoying enough to need a JS fade-in effect?
    4. Largest Contentful Paint score: does each technique get an efficient Lighthouse LCP based on the placeholder, or do any of them get a longer LCP due to the full-res image loading in later?
    5. processing requirements: this will be the hardest to confirm outright, but a tiny inline image with no CSS blur and minimal to no JS is more efficient than more complex solutions, and that should count for something unless any of the previous points disqualify it.
  • 🇦🇺 Australia mstrelan

    Whenever I see inline styles, scripts or images, the first thing that comes to mind is Content Security Policy (CSP). For base64 images we need img-src data:; not entirely sure about SVG.
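    For reference, a policy allowing data: URIs in img-src would look something like this - a minimal illustration, not a recommended production policy:

    ```php
    <?php
    // Minimal example of a CSP that permits data: URIs for images; the
    // data: source also covers data:image/svg+xml URIs used in <img> tags.
    header("Content-Security-Policy: img-src 'self' data:");
    ```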

  • 🇪🇨 Ecuador jwilson3

    I set up a test site with a few LQIP approaches to be able to test the visual load transitions (using a poor man's "delay" dropdown parameter to simulate latency and actually see the LQIP for more than a brief second).

    1. A couple baselines (OOTB Drupal eager/lazy load settings).
    2. A couple of basic LQIPs based on an inline square 8x8 thumbnail, inspired by how Unsplash does it. Unsplash uses an inline BMP, but I also tested an inline PNG (same size as the BMP) and an inline WebP, which produced a much smaller inline payload as well as a slightly different visual blur (the BMP and PNG were visually equivalent). The key here is to add a simple box blur to the 8x8 thumbnail, to avoid browsers rendering jagged edges between adjacent high-contrast pixels when scaling the thumbnail up to full-res size. I also tested without blur, and with larger thumbnails like 16x9, but none of those options looked as visually appealing as the simple, blurred 8x8 square image.
    3. An LQIP WebP Smooth variant using an 8x8 blurred WebP inline thumbnail with a smooth fade-in effect, requiring "onload" JS to transition from low- to high-res. IMO this is the clear winner on the visual front, architectural simplicity (no 3rd-party deps aside from GD), and resource usage both client- and server-side (see the GD sketch after this list).
    4. The Ultimate LQIP technique, which depends on 2 LQIPs and suffers from twice the number of HTTP requests.
    5. The Blurhash technique. Blurhash has an existing Drupal module, but calculating the blurhash is fairly resource-intensive on the server side, and the Drupal module doesn't do any caching. It also depends on the clever base83 hash being decoded with JavaScript on the client side, and the 3rd-party library is a JS module, which complicates usage in Drupal, requiring the use of import and knowing the path to the library's JS file.
    6. A couple CSS-blur techniques including:
      • a client-side blur of a small thumbnail. CSS blur applied to an image looks really bad around the edges of the image and is a non-starter.
      • the CSS-only LQIP technique. This has horrible CSS complexity to create the integer hash and the client-side CSS gradient code to "decode" it. Also, the resulting effect of the grayscale gradient applied onto the image's calculated average color (which must be calculated server-side from the source image) looks extremely simplistic compared to the 90-byte WebP thumbnail.
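    A rough GD-only sketch of the blurred 8x8 inline WebP from points 2-3 above - it assumes GD built with WebP support, and uses IMG_FILTER_GAUSSIAN_BLUR as a stand-in for the box blur:

    ```php
    <?php

    // Sketch of an 8x8 blurred inline WebP placeholder. GD's gaussian
    // filter stands in for the box blur described above.
    $source = imagecreatefromjpeg('hero.jpg');
    $thumb = imagescale($source, 8, 8, IMG_BILINEAR_FIXED);
    // The blur avoids jagged edges between adjacent high-contrast pixels
    // when the browser scales the 8x8 image back up to full size.
    imagefilter($thumb, IMG_FILTER_GAUSSIAN_BLUR);
    ob_start();
    imagewebp($thumb, null, 50);
    $payload = base64_encode(ob_get_clean());
    // On the order of 100 bytes of WebP, inlined with no extra request.
    echo '<img src="data:image/webp;base64,' . $payload . '" alt="">';
    ```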

    I had a look at the LCP for each of these with WebPageTest and PageSpeed Insights, but couldn't find a solid winner. In my tests, the "Ultimate LQIP" option had the worst LCP of them all on WebPageTest, so it appears the LCP algorithm may vary based on which tool is being used. YMMV.

    Ultimately, I think the LCP goal is possible but difficult to achieve for hero images with an LQIP approach alone, since you need twice the requests, and the >0.055 BPP ratio ends up requiring a fairly large placeholder for the LCP (44 kilobytes) versus a simple 90-byte 8x8 thumbnail. However, it is also worth noting that none of the other approaches I've found that depend on a low-res blurred image or CSS gradient will positively affect LCP, since they inherently do not meet the minimum BPP ratio.

    Looking forward to having others' thoughts, insights, and reviews.

    Code here: https://github.com/jameswilson/3523781-Drupal-LQIP
    Site here: https://3523781-drupal-lqip.elementalidad.com/

  • 🇪🇨 Ecuador jwilson3

    For base64 images we need img-src data:, not entirely sure about SVG.

    If you load SVG via a data URI (e.g., src="data:image/svg+xml;base64,..."), then it would be covered by img-src data:. But we'd also need to ensure any generated SVGs (especially via 3rd parties) are additionally sanitized against XSS.

    It's a good point to consider; maybe we'd need a site-level configuration to choose between inline data URIs and additional requests for the thumbnails.

  • 🇬🇧 United Kingdom catch

    On the demo, while the delay is useful for seeing what the placeholders look like, adding the .loaded CSS class in inline JS with a CSS rule is causing the full image to become the LCP again. So I don't think it's showing what the LCP would be for the different approaches.

    Of the actual placeholders, I looked at these four:

    https://3523781-drupal-lqip.elementalidad.com/lqip-webp-smooth.php
    https://3523781-drupal-lqip.elementalidad.com/blurhash.php?delay=3000
    https://3523781-drupal-lqip.elementalidad.com/css-lqip.php?delay=0
    https://3523781-drupal-lqip.elementalidad.com/sqip.php?delay=3000

    ...and SQIP manages to most closely resemble the original image. But as you point out, there's currently no PHP implementation of SQIP, so we'd have to write one... css-lqip does particularly badly out of the four, though on the original css-lqip demo it seemed to do a bit better - maybe it's a harder image for it to approximate?

  • 🇬🇧 United Kingdom catch

    Regarding #14, if we generate SVGs via a library then we don't need SVG sanitization; we'd only need that for uploads.

  • 🇪🇨 Ecuador jwilson3

    Re: #15 (.loaded class LCP issue) Thank you. I'll try to make sense of what you're saying and get to the bottom of this soon. But happy to have a PR if you know offhand what the fix would be. (The project is set up for DDEV running locally.)

    Re: #16 (SQIP) While SQIP produces a "superior" image, there are two major reasons it is problematic: it is resource-intensive on both the server side and the client side. On the server side, even if we were to reimplement `sqip` in PHP, I expect it would be a fair bit more resource-intensive than simply scaling a raster image down to 8x8 and applying a simple box blur, which Drupal image styles can do for us OOTB today. On the client side, the problem is the <g filter="blur(12px)">, which is applied via browser rendering. You can inspect https://3523781-drupal-lqip.elementalidad.com/images/hero.sqip.svg to see the file that was generated by the npm library.

    The sqip command took about 2.6 seconds to run and used about 4 CPU cores (that's >10s of total compute time) for the 400kb image.

    If you look at the animated GIF demonstrating the processing progression of the underlying binary used by sqip, it becomes fairly obvious this is intensive work: https://github.com/fogleman/primitive?tab=readme-ov-file#progression

  • 🇪🇨 Ecuador jwilson3

    Re: #15

    adding the css .loaded class in inline js with a CSS rule is causing it to be the LCP again. So I don't think it's showing what the LCP would be for the different approaches.
    • The .loaded class is used on several of the examples (SQIP, BlurHash, Ultimate LQIP, CSS-only LQIP) to change opacity with a smooth transition between the blurred placeholder and the full-res version.
    • For contrast, the LQIP WebP Smooth example uses a slightly different technique to change opacity smoothly, via an inline style opacity:1 set by inline JS onload, but the smoothing transition on the opacity is still defined in CSS (I don't believe the distinction between inline vs CSS ultimately matters for performance, other than that if your page has many images you end up sending a lot of duplicate bytes).
    • Finally, the basic examples (LQIP BMP, LQIP PNG, and LQIP WEBP) do not use any smoothing transition and just rely on browser loading to show the full-res image and overlay the low-res placeholder.

    I don't think LCP is affected by the opacity change (with or without the smoothing effect). Rather, the problem with the LCP is that all examples except "Ultimate LQIP" use placeholders that have a bits-per-pixel ratio far below the recommended 0.05, which means LCP will always consider the repaint of the full-res image.

    The takeaway is that we cannot effectively reduce LCP with the LQIP technique unless the placeholder image is large enough (>0.05 BPP). And for the image to be "large enough" to take over the LCP for the 1200-pixel-wide hero on the example site, it has to be on the order of 40k in size, which IMO rules out base64 inlining. This implies the LQIP must be a reference to another image file, and yet another request (with potential latency) to download the placeholder image, somewhat defeating the other intended goal of the LQIP (having something on screen fast, at page load time, ideally piggy-backed inline via the HTML request).
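    As a back-of-envelope check of those numbers - the 1200x800 rendered size is an assumption for illustration; BPP here is file size in bits divided by rendered pixel area:

    ```php
    <?php

    // Bits-per-pixel sanity check. The 1200x800 render size is assumed.
    $bpp = fn (int $bytes, int $w, int $h): float => ($bytes * 8) / ($w * $h);

    echo $bpp(90, 1200, 800), "\n";     // 90-byte 8x8 thumbnail: 0.00075 BPP, far below the threshold
    echo $bpp(44_000, 1200, 800), "\n"; // ~44KB placeholder: ~0.37 BPP, comfortably above it
    ```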

  • 🇪🇨 Ecuador jwilson3

    I added a way to run Lighthouse (locally, via the npm library) against all of the pages on the test site. I ran it against the pages with a 1s delay and without any delay at all. Each run gives a slightly different performance score, but generally stays within a 2-point range in the mid-nineties on each page, except for one: the "Ultimate LQIP" page can score 100 on one run and then consistently below 90 (as low as 87) on others. This must have something to do with the extra intermediate low-res image latency being what amounts to a double-edged sword.

    https://3523781-drupal-lqip.elementalidad.com/results.php

    https://github.com/jameswilson/3523781-Drupal-LQIP/commit/245cbbc1d787a2...
