- Issue created by @catch
- 🇬🇧 United Kingdom catch
Having written it up, the most important thing here is the 'inline' part - if we can make that work, then SVG hopefully lets us make better placeholders, but a tiny webp would work too.
For the image style, we don't actually need a queue. When rendering the HTML, if the placeholder derivative file exists on disk we can load it and inline it; if it doesn't exist, we can render the URL, set a max-age of 0 (or 30s), and disable the placeholder. When the URL is visited, it'll create the file on disk, and the next time it's rendered it'll get inlined. This should only happen once in the lifecycle of an individual image, and often immediately after the content is created.
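A minimal sketch of that render-time check, assuming a standard image style; the surrounding render-array handling is hypothetical:

```php
<?php
// Sketch only: decide between inlining the placeholder derivative and
// deferring it. $style is an ImageStyle entity, $uri the source image
// URI; the surrounding render array ($build) is hypothetical.
$derivative_uri = $style->buildUri($uri);

if (file_exists($derivative_uri)) {
  // Derivative already generated: inline it as a data URI, so the
  // placeholder costs zero extra HTTP requests.
  $placeholder_src = 'data:image/webp;base64,'
    . base64_encode(file_get_contents($derivative_uri));
}
else {
  // Not generated yet: skip the placeholder this time and point at the
  // derivative URL (visiting it creates the file on disk). A low
  // max-age means a later render picks up the inlined version.
  $placeholder_src = NULL;
  $placeholder_url = $style->buildUrl($uri);
  $build['#cache']['max-age'] = 30;
}
```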
- 🇬🇧 United Kingdom catch
https://github.com/Otamay/potracio is a PHP port of potrace that's GPL licensed.
Some good discussion of the various placeholder generation approaches in https://github.com/axe312ger/sqip/issues/116
- 🇬🇧 United Kingdom catch
https://leanrada.com/notes/css-only-lqip looks interesting and potentially adaptable.
https://csswizardry.com/2023/09/the-ultimate-lqip-lcp-technique/ explains how a bad implementation can do nothing or worse than nothing.
- 🇪🇨 Ecuador jwilson3
Taking https://csswizardry.com/2023/09/the-ultimate-lqip-lcp-technique/ into account, if a goal is to have the LQIP get counted as the LCP and avoid the full image overriding that, then it seems like the CSS-only version might not work, since it is only a 3x2 pixel image. On the other hand, I wonder how the inline SVG approach would work for the LCP calculation. SQIP relies on a client-side CSS blur technique, which is processor intensive. There are tradeoffs all around; the comment on https://github.com/axe312ger/sqip/issues/116 suggests that doing the image blur on the server side would be beneficial, since lots of images with blur applied is CPU/GPU intensive client-side.
Big fan of SVG here, but thinking generally, PHP and Drupal seem better positioned for a server-side raster-image LQIP approach, as long as we can figure out an algorithm that hits the 0.05 BPP threshold and, ideally, a WebP conversion step, assuming that's available server-side.
The choices and tradeoffs come down to where to put the burden:
- One-time server-side in-memory raster with blur, using tools natively available to PHP.
- One-time server-side raster image without blur, using tools natively available to PHP + CSS client-side blur.
- One-time server-side vector image generation using non-native tooling (or writing a PHP library) + CSS client-side blur. NEEDS LCP validation.
Some references:
- the Go library underpinning SQIP: https://github.com/fogleman/primitive
- the node SQIP implementation: https://github.com/axe312ger/sqip
- a modern node LQIP implementation: https://transitive-bullshit.github.io/lqip-modern/
- a PHP LQIP implementation: https://gist.github.com/voduytuan/4a46e2ba5dcb353e0f60bdc483b1a5f3, which generates a base64 encoded string. I'm suspicious of that if we're trying to get a sufficient BPP ratio for really large above-the-fold hero images; maybe it is better to just generate an image server side? Possibly the base64 vs image derivative choice could be a configuration option we do not expose in the UI but allow from config in code, since figuring out which option to go with is fairly complicated.
- 🇬🇧 United Kingdom catch
if a goal is to have the LQIP get counted as the LCP and avoid the full image overriding that, then it seems like the CSS-only version might not work since it is only a 3x2 pixel image (though it's uncertain whether CSS blur affects the bits-per-pixel calculation - I doubt it).
This ought to be testable with chrome/chromium itself - the performance log shows the LCP candidates (this is the basis of the LCP calculations for core performance graphs: https://gander.tag1.io/). It definitely needs to be tested like that, but I don't think we should rule it out until doing so. We'd need to check that the img tag with the CSS approach is treated as an LCP candidate, and that when the actual image loads, it's not a new LCP candidate.
if we're trying to get a sufficient BPP ratio for really large above-the-fold hero images maybe it is better to just generate and store an image server side?
We'd still need to embed the image as a base64 encoded string in the HTML to avoid doubling the http requests. Once you get to the point of loading the LQIP from disk it undermines the entire point IMO - latency is usually a bigger problem than file size overall, especially since we already webp- or avif-compress (and resize) the final image.
- 🇨🇭 Switzerland 4aficiona2
Thanks for addressing this and moving this forward! Would be really nice to have the core option "lazy with LQIP" like you proposed.
Technique-wise I'm not sure if the base64 variant/option is the most performant and sustainable one.
We'd still need to embed the image as a base64 encoded string in the HTML to avoid doubling the http requests. Once you get to the point of loading the LQIP from disk it undermines the entire point IMO
Also referencing Harry here, even though it's from 2017: https://csswizardry.com/2017/02/base64-encoding-and-performance/
I'd favor the one-time server-side generation of the blurred LQIP or SQIP image over a client-side blur: the latter consumes more energy (since it runs on each request), while doing it once on the server side doesn't depend on the capabilities of the user's device.
Having in mind picture / srcset / sizes (https://developer.mozilla.org/en-US/docs/Web/HTML/Guides/Responsive_imag...), using an actual remote image src also leaves more freedom when handling this in image styles.
- 🇪🇨 Ecuador jwilson3
Good point about latency. Thanks for clarifying it. I read that in the IS, but didn't understand, or at least it didn't sink in.
Thinking a bit more on server-side blur: to get a decent-looking blur baked into a raster image, the placeholder needs to be relatively large (e.g., 600x400), unlike a tiny 15x10 that works fine when upscaled and blurred with CSS client-side. That larger size means more bytes, which makes base64 inlining less appealing due to HTML bloat. And even then, a 600x400 server-side blurred placeholder might still look blocky when upscaled to the final display size, especially on high-DPI screens.
For server-side blur, there's a point of diminishing returns at around ~10% of the real image size, or ~5KB in payload. A 600x400 image with heavy blur applied could be in the 10k to 40k range.
- 🇬🇧 United Kingdom catch
using an actual remote image src also leaves more freedom when handling this in image styles.
I don't think an actual remote image is a good option though, because it doubles the http requests - once for the placeholder, once for the image itself. The placeholder has to be loaded eager, which would undermine a default of 'LQIP + lazy load'; for views listings and similar, where a lot of content might be below the fold, it could be a lot more than double the requests.
I'd favor the one-time server-side generation of the blurred LQIP or SQIP image over a client-side blur: the latter consumes more energy (since it runs on each request), while doing it once on the server side doesn't depend on the capabilities of the user's device.
This depends though - a client-side blur of a bitmap is going to take more energy than just serving a pre-blurred bitmap; it's hard to tell exactly how much, but it'll be more.
But rendering a blurred SVG or pure-CSS placeholder - especially if there are zero additional http requests and they add little to the HTML page weight - might be very cheap, especially in comparison to the rest of the page.
The more I think about this, the more the options seem counter-productive in one way or another (bloated HTML, extra http requests, etc.), but either SQIP or CSSIP seems like it might be viable - as long as we're happy with the actual end user experience, and as long as they don't actually fail to get registered as the LCP.
- 🇪🇨 Ecuador jwilson3
While the server-side blur is probably feasible, I don't think this precludes us from considering tiny inline images (smaller than 20x20px).
Maybe next steps could be to create a few simple HTML page examples using the different approaches to see how Lighthouse LCP behaves.
Proposed tests:
- Baseline 1: JPG Hero image with lazy loading (Drupal's default option for all images).
- Baseline 2: JPG Hero image with eager loading.
- LQIP w/o CSS blur: Hero image with the basic LQIP technique using a tiny base64 inline WebP placeholder image (a scaled-down version of the hero, sized to no more than 20×20 pixels); see the markup sketch at the end of this comment.
- LQIP w/ CSS blur: Hero image with the LQIP technique using a tiny base64 inline WebP placeholder image with client-side blur applied.
- SQIP
- Pure CSS LQIP.
Points to consider for fair comparison:
- payload size: inline WebP base64 versus inline SVG versus CSS (+ JS) size.
- placeholder render quality: a CSS blur will look good, but could the <20×20px LQIP base64 payload image with no blur applied work (for reduced client-side processor power and shaving bytes off CSS)?
- load experience: is there any jank in the visual shift from placeholder to full-res image? Is it annoying enough to need a JS fade-in effect?
- Largest Contentful Paint score: does each technique produce an efficient Lighthouse LCP based on the placeholder, or do any of them get a longer LCP due to the full-res image loading in later?
- processing requirements: this will be hardest to confirm outright, but a tiny inline image with no CSS blur and minimal to no JS is more efficient than the more complex solutions, and that should count for something unless any of the previous points disqualify it.
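For concreteness, the 'LQIP w/o CSS blur' markup I have in mind is roughly the following (a sketch only; the file name, dimensions, and elided base64 payload are placeholders):

```php
<?php
// Sketch of the basic LQIP markup under test: the tiny inline WebP sits
// behind the full-res image as a cover background, so the placeholder
// costs no extra HTTP request. The base64 payload is elided.
echo <<<HTML
<div style="background-image: url('data:image/webp;base64,...');
            background-size: cover;">
  <img src="images/hero.jpg" width="1200" height="800"
       loading="lazy" alt="Hero image">
</div>
HTML;
```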
- 🇦🇺 Australia mstrelan
Whenever I see inline styles, scripts or images, the first thing that comes to mind is Content Security Policy (CSP). For base64 images we need `img-src data:`; not entirely sure about SVG.
- 🇪🇨 Ecuador jwilson3
I set up a test site with a few LQIP approaches to be able to test the visual load transitions (using a poor man's "delay" dropdown parameter to simulate latency and actually see the LQIP for more than a brief second).
- A couple baselines (OOTB Drupal eager/lazy load settings).
- A couple basic LQIPs based on an inline square 8x8 thumbnail, inspired by how Unsplash does it. Unsplash uses an inline BMP, but I also tested with an inline PNG (same size as the BMP) and an inline WebP, which produced a much smaller inline payload as well as a slightly different visual blur (the BMP and PNG were visually equivalent). The key here is to add a simple box blur to the 8x8 thumbnail to avoid browsers rendering jagged edges between adjacent high-contrast pixels when scaling the thumbnail up to full-res size; a rough GD sketch of this follows the list. I also tested without blur, and with larger thumbnails like 16x9, but none of these options look as visually appealing as the simple, blurred 8x8 square image.
- A LQIP WebP Smooth variant using an 8x8 blurred WebP inline thumbnail with a smooth fade-in effect, requiring an `onload` JS handler to transition from low- to high-res. IMO this is the clear winner on visual quality, architectural simplicity (no 3rd party deps aside from GD), and resource usage both client- and server-side.
- The Ultimate LQIP technique, which depends on 2 LQIPs and suffers from twice the number of http requests.
- The Blurhash technique. Blurhash has an existing Drupal module, but calculating the blurhash is fairly resource intensive on the server side, and the Drupal module doesn't have any caching. It also depends on the clever base83 hash being decoded with JavaScript on the client side, but the 3rd party library is a JS module, which complicates usage for Drupal, requiring the use of `import` and knowing the path to the library JS file.
- A couple CSS-blur techniques, including:
- a client-side blur of a small thumbnail. CSS blur applied to an image looks really bad around the edges of the image and is a non-starter.
- the CSS-only LQIP technique. This has horrible CSS complexity to create the integer hash and the client-side CSS gradient code to "decode" it. Also, the resulting effect of the grayscale gradient applied onto the image's calculated average color (which must be calculated server-side from the source image) looks extremely simplistic compared to the 90-byte WebP thumbnail.
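For reference, the blurred 8x8 WebP placeholder needs nothing beyond GD. A rough sketch of the idea (GD has no built-in box blur, so a couple of Gaussian passes stand in for it; the file name and quality value are just examples):

```php
<?php
// Scale the source down to an 8x8 thumbnail, blur it so the browser
// doesn't render jagged edges when scaling it back up, and inline it
// as a base64 WebP data URI.
$src = imagecreatefromjpeg('images/hero.jpg');
$thumb = imagescale($src, 8, 8, IMG_BICUBIC);

// Approximate a box blur with two Gaussian passes.
imagefilter($thumb, IMG_FILTER_GAUSSIAN_BLUR);
imagefilter($thumb, IMG_FILTER_GAUSSIAN_BLUR);

// Capture the WebP bytes (~90 bytes for 8x8) and build the data URI.
ob_start();
imagewebp($thumb, NULL, 80);
$placeholder = 'data:image/webp;base64,' . base64_encode(ob_get_clean());
```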
I had a look at the LCP for each of these with WebPageTest and PageSpeed Insights, but no solid winner emerged. In my tests, the "Ultimate LQIP" option had the worst LCP of them all on WebPageTest, so it appears the LCP algorithm may vary based on which tool is being used. YMMV.
Ultimately, I think the LCP goal is possible but difficult to achieve for hero images with an LQIP approach alone, since you need twice the requests, and the >0.05 BPP ratio ends up requiring a fairly large image for the LCP (~44 kilobytes) versus a simple 90-byte 8x8 thumbnail. However, it is also worth noting that none of the other approaches I've found that depend on a low-res blurred image or CSS gradient will positively affect LCP, since they inherently do not meet the minimum BPP ratio.
Looking forward to having others' thoughts, insights, and reviews.
Code here: https://github.com/jameswilson/3523781-Drupal-LQIP
Site here: https://3523781-drupal-lqip.elementalidad.com/
- 🇪🇨 Ecuador jwilson3
For base64 images we need `img-src data:`; not entirely sure about SVG.
If you load SVG via a data URI (e.g., src="data:image/svg+xml;base64,..."), then it would be covered by `img-src data:`. But we'll also need to ensure any generated SVGs (especially via 3rd parties) are additionally XSS sanitized. It's a good point to consider, and maybe we'd have to have a site-level configuration to pick between inline data URIs and additional requests for the thumbnails.
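To illustrate (an assumed site-level header, not something Drupal core sends today):

```php
<?php
// Hypothetical CSP for a site using inline placeholders: the data:
// scheme must be allowed for images before any base64 placeholder
// (WebP or SVG alike) can render; without it the browser blocks it.
header("Content-Security-Policy: img-src 'self' data:");
```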
- 🇬🇧 United Kingdom catch
On the demo, while the delay is useful for seeing what the placeholders look like, adding the CSS .loaded class via inline JS together with a CSS rule is causing the full image to be the LCP again. So I don't think it's showing what the LCP would be for the different approaches.
Of the actual placeholders, I looked at these four:
https://3523781-drupal-lqip.elementalidad.com/lqip-webp-smooth.php
https://3523781-drupal-lqip.elementalidad.com/blurhash.php?delay=3000
https://3523781-drupal-lqip.elementalidad.com/css-lqip.php?delay=0
https://3523781-drupal-lqip.elementalidad.com/sqip.php?delay=3000
...and SQIP manages to most closely resemble the original image. But as you point out, there's currently no PHP implementation of SQIP, so we'd have to write one... css-lqip is doing particularly badly out of the four, but on the original css-lqip demo it seemed to do a bit better - maybe this is a harder image for it to approximate?
- 🇬🇧 United Kingdom catch
Regarding #14, if we generate SVGs via a library, then we don't need SVG sanitization; we'd only need that for uploads.
- 🇪🇨 Ecuador jwilson3
Re: #15 (.loaded class LCP issue) Thank you. I'll try to make sense of what you're saying and get to the bottom of this soon. But happy to have a PR if you know offhand what the fix would be. (The project is set up for DDEV running locally.)
Re: #16 (SQIP) While SQIP is a "superior" image, there are two major reasons it is problematic: it is resource intensive on both the server side and the client side. For the server side, even if we were to reimplement `sqip` in PHP, I expect it would be a fair bit more resource intensive than simply scaling down a raster image to 8x8 and applying a simple box blur, which Drupal image styles can do for us OOTB today. On the client side, the problem is the `<g filter="blur(12px)">`, which is applied via browser rendering. You can inspect https://3523781-drupal-lqip.elementalidad.com/images/hero.sqip.svg to see the file that was generated by the npm library. The sqip command took about 2.6 seconds to run and used about 4 CPU cores (that's >10s of total compute time) for the 400kb image.
If you look at the animated GIF demonstrating the processing progression of the underlying binary used by sqip, it becomes fairly obvious this is intensive work: https://github.com/fogleman/primitive?tab=readme-ov-file#progression
- 🇪🇨 Ecuador jwilson3
Re: #15
adding the CSS .loaded class via inline JS together with a CSS rule is causing the full image to be the LCP again. So I don't think it's showing what the LCP would be for the different approaches.
- The `.loaded` class is used on several of the examples (SQIP, BlurHash, Ultimate LQIP, CSS-only LQIP) to change opacity with a smooth transition between the blurred placeholder and the full-res version.
- For contrast, the LQIP WebP Smooth example uses a slightly different technique to change opacity smoothly, via an inline style `opacity:1` set from an inline JS `onload` handler, but the smoothing transition effect on the opacity is still defined in CSS. (I don't believe the distinction between inline and CSS ultimately matters for performance, other than that if your page has many images, at some point you're sending down a lot of duplicate bytes.) A sketch of this markup follows below.
- Finally, the basic examples (LQIP BMP, LQIP PNG, and LQIP WEBP) do not use any smoothing transition and just rely on browser loading to paint the full-res image over the low-res placeholder.
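Paraphrasing the LQIP WebP Smooth markup (not copied verbatim from the demo; the base64 payload is elided):

```php
<?php
// The full-res image starts transparent over the inline placeholder
// and fades in from an inline onload handler; the transition itself
// is defined in CSS.
echo <<<HTML
<style>
  .lqip { background-size: cover; }
  .lqip img { opacity: 0; transition: opacity 0.3s ease-in; }
</style>
<div class="lqip" style="background-image: url('data:image/webp;base64,...')">
  <img src="images/hero.jpg" width="1200" height="800" alt="Hero"
       loading="lazy" onload="this.style.opacity = 1">
</div>
HTML;
```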
I don't think LCP is affected by the opacity change (with or without the smoothing effect). Rather, the problem with the LCP is that all examples except "Ultimate LQIP" use placeholders that have a bits-per-pixel ratio far less than the recommended 0.05, which means LCP will always consider the repaint of the full-res image.
The takeaway is that we cannot effectively reduce LCP with the LQIP technique unless the placeholder image is large enough (>0.05 BPP). And for the image to be "large enough" to take over the LCP for the 1200-pixel-wide hero in the example site, it has to be on the order of 40k in size, which IMO rules out the option of base64 inlining. This implies the LQIP must be a reference to another image file, and yet another request (with potential latency) to download the placeholder image, somewhat defeating the other intended goal of the LQIP (having something on screen fast, at page load time, ideally piggy-backed inline via the HTML request).
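To make the ratio concrete (my arithmetic, assuming the heuristic divides the encoded file size by the displayed pixel area):

```latex
% 90-byte 8x8 placeholder displayed at 1200 x 800 CSS pixels:
\frac{90 \times 8\ \text{bits}}{1200 \times 800\ \text{px}} \approx 0.00075\ \text{BPP} \ll 0.05\ \text{BPP}
```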
- 🇪🇨 Ecuador jwilson3
I added a way to run Lighthouse (locally, via the npm library) against all of the pages in the test site. I ran it against the pages using a 1s delay and without any delay at all. Each run gives a slightly different performance score, but generally stays within a 2-point range in the mid-nineties on each page, except for one: the "Ultimate LQIP" page can score 100 in one run and then consistently below 90 (as low as 87) in others. This must have something to do with the extra intermediate low-res image latency being what amounts to a double-edged sword.
https://3523781-drupal-lqip.elementalidad.com/results.php
https://github.com/jameswilson/3523781-Drupal-LQIP/commit/245cbbc1d787a2...