Account created on 26 December 2007, over 17 years ago
  • Web Architect, Developer, Themer at Bluespark 
Recent comments

🇪🇨Ecuador jwilson3

It is probably also useful to see when Webform Nodes exist but do not reference any webform. A report going in both directions is closer to a proper correspondence or bijection check, where we're trying to ensure each entity has a matching partner, a natural 1-to-1 mapping.

The basic information can be obtained, roughly, with a simple query like this:

SELECT `webform`.`webform_id`, `node__webform`.`entity_id` as node_id FROM `webform`
LEFT JOIN `node__webform` ON `node__webform`.`webform_target_id` = `webform`.`webform_id`
ORDER BY node_id;

But this only accounts for webforms that either have or don't have Webform Nodes; it doesn't account for Webform Nodes that don't link to a webform, or webforms referenced by multiple Webform Nodes.

For a more complete picture, we need a more complex query:

-- Show pairs + orphans + cardinality checks in one result.
SELECT
  x.webform_id,
  x.node_id,
  CASE
    WHEN x.webform_id IS NULL THEN '1_NODE_WITHOUT_WEBFORM'
    WHEN x.node_id   IS NULL THEN '2_UNREFERENCED_WEBFORM'
    WHEN wn.cnt_nodes_for_form = 1 AND nw.cnt_forms_for_node = 1
         THEN '5_PAIRED_1_TO_1'
    WHEN wn.cnt_nodes_for_form > 1 THEN '3_FORM_REFERENCED_BY_MULTIPLE_NODES'
    WHEN nw.cnt_forms_for_node > 1 THEN '4_NODE_REFERENCES_MULTIPLE_FORMS'
    ELSE 'CHECK'
  END AS status,
  wn.cnt_nodes_for_form,
  nw.cnt_forms_for_node
FROM (
  -- A) all webforms, with their referencing node if any
  SELECT w.webform_id, n.entity_id AS node_id
  FROM webform AS w
  LEFT JOIN node__webform AS n
    ON n.webform_target_id = w.webform_id
   AND n.deleted = 0
  /* optionally constrain to the node bundle that should carry the field:
     AND n.bundle = 'webform_node' */
  UNION
  -- C) nodes whose field points to a webform id that doesn't exist
  SELECT w.webform_id, n.entity_id AS node_id
  FROM node__webform AS n
  LEFT JOIN webform AS w
    ON w.webform_id = n.webform_target_id
  WHERE w.webform_id IS NULL
    AND n.deleted = 0
  /* optionally: AND n.bundle = 'webform_node' */
) AS x
LEFT JOIN (
  -- count how many nodes reference each webform
  SELECT webform_target_id, COUNT(*) AS cnt_nodes_for_form
  FROM node__webform
  WHERE deleted = 0
  GROUP BY webform_target_id
) AS wn
  ON wn.webform_target_id = x.webform_id
LEFT JOIN (
  -- count how many webforms each node references (should be 0 or 1)
  SELECT entity_id, COUNT(*) AS cnt_forms_for_node
  FROM node__webform
  WHERE deleted = 0
  GROUP BY entity_id
) AS nw
  ON nw.entity_id = x.node_id
ORDER BY status, x.node_id, x.webform_id;
🇪🇨Ecuador jwilson3

Since we're removing email due to the limitations above, it makes sense to also not use 'Co-authored-by' — in addition to the other arguments made above — because other platforms typically expect a username + email in that format.

In #23 my proposal was: if the consensus is that we're not trying to satisfy machines by using a standard format, then I'd take it one step further than just the technical email limitations written in the issue summary, and make the 'By:' trailer readable as a single comma-separated list of usernames, which aligns more closely with how we used to do it.

Before:

Issue #999999 by user1, user2, user3, user4, user5, user6, user7, user8, user9: Convert MediaSource plugin discovery to attributes

After:

[#999999] task: Convert MediaSource plugin discovery to attributes

By: user1, user2, user3, user4, user5, user6, user7, user8, user9

Instead of:

[#999999] task: Convert MediaSource plugin discovery to attributes

By: user1
By: user2
By: user3
By: user4
By: user5
By: user6
By: user7
By: user8
By: user9

I haven't seen anyone address this point yet, and it was the primary point I was trying to make. Sorry for what sounds like it devolved into a bikeshed, but this goes directly to the point of considering how the data is used, so we can get the format right once and not change it again for 10+ years.

🇪🇨Ecuador jwilson3

I'm surprised no one has mentioned wildcard support here.

webform_submission_*

Were this MR to have wildcard support, we could likely close other third-party module integration issues like "Honeypot Facets 3 submodule?" and "Add 'Protect all Webforms' options to Honeypot's global module config". Additionally, 🐛 "Conflicting ID when honeypot is rendered more than once on a page" could potentially be solved via wildcard support as well.

The Honeypot module already has a dev dependency on Webform and tests that enable it, so we could use that module in a wildcard test on the MR here.
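
For illustration, wildcard matching on form IDs could be as simple as PHP's fnmatch(). This is a sketch of the idea only, not the MR's actual implementation; the variable and function names here are hypothetical:

$protected_forms = ['webform_submission_*', 'user_login_form'];

function is_protected(string $form_id, array $patterns): bool {
  // fnmatch() treats '*' as a wildcard matching any run of characters.
  foreach ($patterns as $pattern) {
    if (fnmatch($pattern, $form_id)) {
      return TRUE;
    }
  }
  return FALSE;
}

var_dump(is_protected('webform_submission_contact_add_form', $protected_forms)); // bool(true)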

🇪🇨Ecuador jwilson3

As @longwave pointed out in previous comments, GitLab, GitHub, and others seem to have reached some level of semi-standardization around the Co-authored-by: username <email> trailer, though this is not official in any way. So if the goal is getting off the island, I'd go with that. On the other hand, if the goal is just to make the old "byline" info in the git commit message available (and easily human-scannable) for casual users, then a custom By: [username1]\nBy: [username2] is okay, and By: [comma-separated list of users] is even better, since IDE tools that show inline Git blame popups, like GitLens or Better Git Line Blame (which I use in Cursor), vertically truncate really long multi-line commit messages, making them less scannable.

I think it comes down to what problem we're trying to solve. Since we use GitLab, it seems smart to make the effort and kill 2 birds with 1 stone by going with something that GitLab supports. https://github.blog/news-insights/product-news/commit-together-with-co-a...

From https://gitlab.com/gitlab-org/gitlab-foss/-/issues/31640

Gitlab should recognize Co-Authored-By: [name] <[email]> in the message body, and should display user icons on UI related to the commit

they don't actually link to the user from the commit message, as is suggested here, but they list the co-authors along with the main author in the interface

🇪🇨Ecuador jwilson3

Follow! My understanding is that AI bots are not actually submitting forms, but that the bots love clicking through links and the Facets issue boils down to the fact that Facet blocks aren't really checkboxes, but are in fact just links styled to look like checkboxes. Facets 3 switches to using actual checkboxes and forms instead of lists of links, which is promising and will probably save us for now.

But as time wears on and bots get more advanced, there might be real opportunity here. My suggestion would be to first try adding the form IDs for the Facets 3 block forms to the existing Honeypot module settings on your site, to gauge whether Honeypot on Facets forms has any effect in rebuffing bot requests. "Allow for more flexible form configuration" seems like a promising solution, if you're not opposed to manually entering facet form IDs. If this does help, then maybe adding a UI option to manually enter custom form IDs would be a good first step. A more custom UI just for Facet blocks could be possible, but probably complex, when the same result is achievable with a few lines of custom form_alter code.

Here is what I'm doing to add Honeypot to all Webforms. Presumably it could be adapted for Facet blocks as well assuming they follow some kind of standard form id prefix in their naming conventions.


use Drupal\Core\Form\FormStateInterface;

function mymodule_form_alter(&$form, FormStateInterface $form_state, $form_id) {
  // Add honeypot protection to all Webforms.
  // @todo remove when https://drupal.org/i/3544510 lands
  if (strpos($form_id, 'webform_submission') === 0) {
    if (\Drupal::moduleHandler()->moduleExists('honeypot')) {
      // There is no reason not to enable time restriction here, since
      // Honeypot respects the time_limit set in the configuration.
      \Drupal::service('honeypot')->addFormProtection($form, $form_state, ['honeypot', 'time_restriction']);
    }
  }
}
🇪🇨Ecuador jwilson3

The form-specific "Protect [FORMNAME] webform with Honeypot" feature is provided by the Webform module, so we'll need a follow-up created in the Webform module issue queue to respect the value of the honeypot module's global setting, and disable the per-form option accordingly.

🇪🇨Ecuador jwilson3

Okay, that all makes sense, but now I'm trying to understand whether the (fallible, potentially inaccurate) author credits in the commit message are intended to serve machines or humans?

🇪🇨Ecuador jwilson3

I still prefer the simplicity of the less wordy By:.

If an issue has 20+ authors, then a newline-plus-By: per author ends up getting really long itself. If we're back to showing only username instead of username <email>, then maybe we can also go back to a single-line format like: By: author1, author2, author3, author4.

🇪🇨Ecuador jwilson3

I've come here via a link to this issue, currently buried inside a collapsed FAQ item on the new contrib record page (https://new.drupal.org/contribution-record/:id ), which module maintainers are currently being encouraged to use.

If you are committing, you are encouraged to use the formatting here.

Is it really safe to recommend people start using a new commit message format if the part about multiple authors is not yet settled? Will issue credit be properly picked up, if the format ends up changing?

🇪🇨Ecuador jwilson3

Thank you for creating this issue to improve the Shortcut per Role module.

I am working through modules I help maintain to decide if this task is still relevant to the currently supported version of the module. There hasn't been any follow-up here for over 9 years which suggests that this is no longer relevant. Your thoughts on this will allow a decision to be made.

Since we need more information to move forward with this issue, the status is now Postponed (maintainer needs more info). If we don't receive additional information as to whether this issue still applies, it may be closed after three months.

Thanks!

🇪🇨Ecuador jwilson3

Thank you for creating this issue to improve the Shortcut per Role module.

I am working through modules I help maintain to decide if this task is still relevant to the currently supported version of the module. There hasn't been any follow-up here for over 9 years which suggests that this is no longer relevant. Your thoughts on this will allow a decision to be made.

Since we need more information to move forward with this issue, the status is now Postponed (maintainer needs more info). If we don't receive additional information as to whether this issue still applies, it may be closed after three months.

Thanks!

🇪🇨Ecuador jwilson3

Thank you for creating this issue to improve the Shortcut per Role module.

I am working through modules I help maintain to decide if this task is still relevant to the currently supported version of the module. There hasn't been any discussion here for over 9 years which suggests that this has either been implemented or is no longer relevant. Your thoughts on this will allow a decision to be made.

Since we need more information to move forward with this issue, the status is now Postponed (maintainer needs more info). If we don't receive additional information to help with the issue, it may be closed after three months.

Thanks!

🇪🇨Ecuador jwilson3
  • @drupal_infra on Twitter/X - last updated 3 months ago, lately only shows planned outages, has intuitive threading (first report shown first, follow-ups threaded underneath).
  • @drupalinfra on Mastodon - last updated 3 days ago, shows both planned and unplanned outages, has an unintuitive threading (follow-ups show up first, hard to find the original incident report).
  • #drupal-infrastructure on Slack - last updated 12 hours ago, shows early incident reports, direct support from infra maintainers, is intuitively threaded (first report shown first, follow-ups threaded within).

Having a third-party hosted incident report / status page would be super nice, and there are plenty of services to choose from.

But there are several issues:

  • There are too many platforms to choose from, leading to bikeshedding (see: this issue).
  • Ultimately, the implementation requires assessment and buy-in from maintainers, which takes even more time.
  • Implementing an entirely new channel just to replace the embed on the 500 error page seems like the wrong angle to come at the problem.
  • A new dedicated channel will upset existing and expected outage workflows for both community users and maintainers.
  • Setting the embed aside, it seems that most bases are covered with Mastodon and Slack, so introducing yet another channel adds noise — not to mention more busywork for the Infra team to post updates across channels — unless steps are also taken to pull back from all the other platforms and centralize comms.

Reframing this problem from an end-user perspective: when I see a 500 error page, the first thing I want is to see whether the incident is already known, and if not, where to go to report it. I typically check https://x.com/drupal_infra first, but only because that's where I'm directed to go. If I don't see anything reported there — and usually there isn't — I head over to the #drupal-infrastructure channel on Drupal Slack to check for a report or discussion — and usually there is one. From there, I can finally see the link in the channel description to https://mastodon.social/@drupalinfra, which might have different outage information than what is reported in Slack.

As an interim solution, could we just point to Mastodon (and optionally the Drupal Slack channel) instead of Twitter/X on the 500 error page? Do we really even need to replace the embed on the 500 page?

🇪🇨Ecuador jwilson3

Thanks @aficiona2 for catching the math error. A 5 KB image is way more reasonable for inlining with a data URI.

Here are some things we can try to reduce the LQIP-LCP image size even further:

1. Lower the WebP quality parameter to its minimum (quality=1). However, in my testing, even at this setting, the generated file size is still much larger than the theoretical minimum bits-per-pixel limit.

  • Full-res image: ~40 KB
  • Theoretical LQIP-LCP image: ~5 KB
  • Actual LQIP with WebP configured to use quality=1: ~14 KB

Other potential approaches:

2. Downscale the image before converting to WebP at quality=1.

3. Apply a blur (box or Gaussian, with varying strengths) before converting to WebP at quality=1.

4. Downscale, then upscale the image before converting to WebP at quality=1.

5. Downscale, convert to WebP at quality=1, then upscale client-side (in the browser).

6. Experiment with other lossy formats/algorithms (e.g., JPEG, AVIF). However, some formats may not achieve smaller sizes than WebP at their lowest quality settings.

It’s worth noting that achieving the theoretical bits-per-pixel limit is highly dependent on the visual complexity and entropy of the source image. Simple images compress much better than complex, detailed ones. Finding a reliable algorithm or processing pipeline that consistently approaches this limit across a variety of images and sizes will likely be more challenging than it appears, and may require a multi-pass approach (adding to algorithmic time). Extensive testing with diverse images will be necessary to identify a robust solution. For that reason, I humbly also add a 7th option:

7. Look to see if someone has already worked on an "LCP" algorithm to do this.
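
To compare options 1 through 3 empirically, a quick GD harness like the following could measure the encoded sizes. A sketch only; it assumes PHP GD compiled with WebP support, and the file name is hypothetical:

$src = imagecreatefromjpeg('hero.jpg');
foreach ([1, 2, 4, 8] as $factor) {
  // Option 2: downscale by a factor before encoding.
  $im = imagescale($src, (int) (imagesx($src) / $factor));
  // Option 3: soften hard edges before lossy encoding.
  imagefilter($im, IMG_FILTER_GAUSSIAN_BLUR);
  ob_start();
  imagewebp($im, null, 1); // Option 1: minimum quality.
  printf("1/%d scale: %.1f KB\n", $factor, strlen(ob_get_clean()) / 1024);
}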

🇪🇨Ecuador jwilson3

Why not use a single DDEV environment for a core branch and clone contrib projects into it?

Two main reasons:

  • DDEV environments are cheap to spin up. It’s better to isolate contrib modules in their own environments to avoid conflicts and ensure clean, reproducible setups. Sharing a single site across unrelated modules can create unnecessary complexity around config, database, and dependency management.
  • Composer isn’t great at working from local composer.json files across multiple contrib modules. It typically tries to fetch packages from the web, making it hard to test local changes to a module’s dependencies. By contrast, ddev-drupal-contrib builds directly from a module’s local composer.json, including require-dev dependencies for optional integrations.

This approach avoids local dependency tree hell and ensures all contributors run the same clean, isolated, and predictable environment.

🇪🇨Ecuador jwilson3

It would be nice if there were an automated solution to always minify SVGs in core in the same way, because different tools optimize in different ways, using different default options, often stripping out important attributes. Applications that generate SVGs also structure the SVG in their own different ways, so every time you pull an "optimized" file into an SVG editor and save it again, it may make unrelated changes on the round trip back to optimized SVG. Therefore, the SVGs require manual edits after optimization to restore the important attributes.

I think what is missing here is clear agreement to treat SVGs as implementation assets, not hand-curated visual code.

🇪🇨Ecuador jwilson3

Also worth noting: running SVGs through optimizers like SVGO can make them harder to review and maintain. For example,

path="M11 0L14 3V16H2V0H11ZM3 15H13V4H10V1H3V15ZM11 3H12.5L11 1.5V3Z"

becomes

path="m11 0 3 3v13H2V0zM3 15h10V4h-3V1H3zm8-12h1.5L11 1.5z",

which is more compact but much less readable for quick diffs or manual review.

🇪🇨Ecuador jwilson3

Isn't this kind of micro-optimization mostly obviated by gzip or brotli? I'm happy to be wrong, but these savings seem negligible in practice—especially considering that Drupal core’s own HTML output doesn’t aggressively strip whitespace, newlines, or indentation either.

🇪🇨Ecuador jwilson3

Here is a partial config export for what I've tested:

    ckeditor5_paste_filter_pasteFilter:
      enabled: true
      filters:
        -
          enabled: true
          weight: -18
          search: '<o:p><\/o:p>'
          replace: ''
        -
          enabled: true
          weight: -17
          search: '(<[^>]*) (style="[^"]*")'
          replace: $1
        -
          enabled: true
          weight: -16
          search: '(<[^>]*) (face="[^"]*")'
          replace: $1
        -
          enabled: true
          weight: -15
          search: '(<[^>]*) (class="[^"]*")'
          replace: $1
        -
          enabled: true
          weight: -14
          search: '(<[^>]*) (valign="[^"]*")'
          replace: $1
        -
          enabled: true
          weight: -12
          search: '<font[^>]*>'
          replace: ''
        -
          enabled: true
          weight: -11
          search: '<\/font>'
          replace: ''
        -
          enabled: true
          weight: -10
          search: '<span[^>]*>'
          replace: ''
        -
          enabled: true
          weight: -9
          search: '<\/span>'
          replace: ''
        -
          enabled: true
          weight: -8
          search: '<p>&nbsp;<\/p>'
          replace: ''
        -
          enabled: true
          weight: -7
          search: '<p><\/p>'
          replace: ''
        -
          enabled: true
          weight: -6
          search: '<b><\/b>'
          replace: ''
        -
          enabled: true
          weight: -5
          search: '<i><\/i>'
          replace: ''
        -
          enabled: true
          weight: -4
          search: '<a name="OLE_LINK[^"]*">(.*?)<\/a>'
          replace: $1
        -
          enabled: true
          weight: -3
          search: '<p>\W*<br>\W*&nbsp;\W*<\/p>'
          replace: ''
        -
          enabled: true
          weight: -2
          search: '<p>\W*<br>\W*<\/p>'
          replace: ''
        -
          enabled: true
          weight: -13
          search: '(<[^>]*) (dir="ltr")'
          replace: $1
        -
          enabled: true
          weight: -1
          search: (<br>)+
          replace: '<br>'
        -
          enabled: true
          weight: 0
          search: (<p><br></p>)+
          replace: '<p><br></p>'
        -
          enabled: true
          weight: 1
          search: '<br>\n'
          replace: '<br>'
        -
          enabled: true
          weight: 2
          search: \n+
          replace: '<br>'
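
To illustrate what one of these filters does (the weight -17 rule), here is the same pattern applied with PHP's preg_replace. This only demonstrates the regex itself, not how the plugin executes it:

// Strip inline style attributes from pasted markup (the weight -17 rule).
$pasted = '<p style="mso-fareast-language:EN-US">Pasted from Word</p>';
echo preg_replace('/(<[^>]*) (style="[^"]*")/', '$1', $pasted);
// Prints: <p>Pasted from Word</p>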
🇪🇨Ecuador jwilson3

I'm not certain if I should change the $use_case to a new number

The use_case needs to be incremented to the lowest unused number. (Currently that would be 18).

🇪🇨Ecuador jwilson3

Maintainer here. Thank you for the excellently written issue complete with MR!

I get the sense from the resulting markup that _label_help_append_title() is not ideal from an accessibility standpoint

You're probably right here. _label_help_append_title() is kind of the "worst case" scenario when nothing else works.

So what I recommend is replacing that call with each of the other options, to see if any of them works better and puts the label help in the correct place, visually, on the form element.

_label_help_prepend_field_prefix($element, $content, $use_case);
_label_help_prepend_description($element, $content, $use_case);
_label_help_append_label_suffix($element, $content, $use_case);

If any of these works better than append_title, please update the MR. If not, leave a comment to that effect.

Thank you!

🇪🇨Ecuador jwilson3

I added ddev and ddev-drupal-contrib to the Svg Image Field contrib module and to the Rivet contrib theme, but since ddev-drupal-contrib advocates also committing the custom ddev commands it provides, it does become burdensome to keep things updated as the drop keeps moving (between ddev updates, Drupal updates, and ddev-drupal-contrib updates).

instead of putting a complete ddev config inside each module, a single config file should be sufficient.

Ideally, in the future, boilerplate code for providing local development environments will be reduced.

Strong +1. This seems like the ideal goal to shoot for. For example, it would be fantastic if DDEV add-ons (like ddev-drupal-contrib) could be declared in a single .ddev/config.contrib.yml file so we don't have to commit all those custom commands. Users could run ddev config for their environment, and ddev would pull in the contrib file. Configuration overrides and customizations would go in .ddev/config.local.yml. Then, if the contrib module or theme itself needs custom DDEV commands, Docker settings(?), or other things, it could either commit them directly to the module's .ddev folder or create a separate ddev-addon-template and depend on its own ddev add-on maintained in a separate codebase.

🇪🇨Ecuador jwilson3

I added a way to run Lighthouse (locally, via the npm library) against all of the pages in the test site. I ran against the pages using a 1s delay and without any delay at all. Each run gives a slightly different performance score but generally stays within a 2-point range in the mid-nineties on each page, except for one: the "Ultimate LQIP" page can score 100 in one run and then consistently below 90 (as low as 87) in others. This must have something to do with the extra intermediate low-res image latency being what amounts to a double-edged sword.

https://3523781-drupal-lqip.elementalidad.com/results.php

https://github.com/jameswilson/3523781-Drupal-LQIP/commit/245cbbc1d787a2...

🇪🇨Ecuador jwilson3

Any chance we could get a 3.1 (or 3.0.1) release that includes this fix? This generates a lot of unnecessary noise in logs.

We are seeing deprecation warnings on cron run after updating to PHP 8.3.21 and clearing caches. These are only notice errors, I think, so we should still be fine. But there are about 500 of them after running cron from the admin UI.

    Deprecated function: Creation of dynamic property 
    Drupal\feeds\Feeds\Item\SyndicationItem::$blog 
    is deprecated in Drupal\feeds\Feeds\Item\BaseItem->set() 
    (line 21 of modules/contrib/feeds/src/Feeds/Item/BaseItem.php).

    Deprecated function: Creation of dynamic property 
    Drupal\feeds\Feeds\Item\SyndicationItem::$parent:field_group 
    is deprecated in Drupal\Core\Queue\DatabaseQueue->claimItem() 
    (line 150 of core/lib/Drupal/Core/Queue/DatabaseQueue.php).
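
For reference, the usual mitigation for PHP 8.2+ dynamic-property deprecations is the #[\AllowDynamicProperties] attribute on the class that assigns arbitrary properties. A sketch only; I haven't verified this is the fix that landed in Feeds:

namespace Drupal\feeds\Feeds\Item;

/**
 * Sketch: opt the item base class back into dynamic properties, since
 * feed items assign arbitrary source fields as object properties.
 */
#[\AllowDynamicProperties]
abstract class BaseItem implements ItemInterface {
  // set()/get() continue to assign and read dynamic properties.
}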
🇪🇨Ecuador jwilson3

Re: #15

adding the css .loaded class in inline js with a CSS rule is causing it to be the LCP again. So I don't think it's showing what the LCP would be for the different approaches.
  • The .loaded class is used on several of the examples (SQIP, BlurHash, Ultimate LQIP, CSS-only LQIP) to change opacity with a smooth transition between the blurred placeholder and the full-res version.
  • For contrast, the LQIP WebP Smooth example uses a slightly different technique to change opacity smoothly, setting an inline style of opacity:1 from an inline JS onload handler, but the smoothing transition on the opacity is still defined in CSS. (I don't believe the inline-vs-CSS distinction ultimately matters for performance, except that if your page has many images you end up sending down a lot of duplicated bytes.)
  • Finally, the basic examples (LQIP BMP, LQIP PNG, and LQIP WEBP) don't use any smoothing transition; they just rely on the browser to paint the full-res image over the low-res placeholder as soon as it loads.

I don't think LCP is affected by the opacity change (with or without the smoothing effect). Rather, the problem with LCP is that all examples except "Ultimate LQIP" use placeholders with a bits-per-pixel ratio far below the recommended 0.055, which means LCP will always consider the repaint of the full-res image.

The takeaway is that we cannot effectively reduce LCP with the LQIP technique unless the placeholder image is large enough (>0.055 BPP). And for the placeholder to be "large enough" to take over the LCP for the 1200-pixel-wide hero on the example site, it has to be on the order of 40 KB, which IMO rules out base64 inlining. This implies the LQIP must be a reference to another image file, and yet another request (with potential latency) to download the placeholder, somewhat defeating the other intended goal of the LQIP: having something on screen fast, at page load time, ideally piggybacked inline in the HTML response.
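
For concreteness, here's a quick sanity check of the bits-per-pixel math (my reading of the heuristic: encoded size in bits divided by rendered pixel count; the render dimensions below are illustrative):

function bpp(int $file_size_bytes, int $width, int $height): float {
  return ($file_size_bytes * 8) / ($width * $height);
}
// A 90-byte 8x8 placeholder rendered at 1200x675 is nowhere near 0.055:
printf("%.5f\n", bpp(90, 1200, 675));    // 0.00089
// A ~40 KB placeholder clears the threshold comfortably:
printf("%.3f\n", bpp(40000, 1200, 675)); // 0.395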

🇪🇨Ecuador jwilson3

The data model and schema should support all fetchpriority values — no question there. But I’d question the utility of exposing them at the image field formatter level.

Take the carousel example: there are (at least) two common patterns in Drupal:

  1. A multi-cardinality image field used as the image source, or
  2. A series of entities (e.g. paragraphs, ECK) each with a single image field.

In both cases, there's no reliable context at the field formatter or view mode level to determine which images are visible at page load. Exposing fetch priority there risks encouraging awkward architectures — e.g., splitting fields just to differentiate high vs low.

The complexity only increases with variations like random first slides or responsive carousels showing N slides. Handling these correctly almost always requires custom logic (alter hooks, preprocess).

A UI trying to cover this would have to offer nuanced options like:

  • Unspecified (let browser decide)
  • High (for LCP-critical images)
  • Low (for images above the fold but not visible initially)

For multi-cardinality:

  • First is high, others unspecified
  • First is high, others low
  • First N are high, rest low/unspecified

…which quickly gets unwieldy and still doesn’t address entity-based carousels.

In short, this might be better solved downstream (e.g., in preprocess) where real context exists.

🇪🇨Ecuador jwilson3

Re: #15 (.loading class LCP issue) Thank you. I'll try to make sense of what you're saying and get to the bottom of this soon. But I'm happy to have a PR if you know what the fix would be offhand. (The project is set up for DDEV running locally.)

Re: #16 (SQIP) While SQIP is a "superior" image, there are two major reasons it is problematic: it is resource intensive on both the server side and the client side. On the server side, even if we were to reimplement `sqip` in PHP, I expect it would be a fair bit more resource intensive than simply scaling down a raster image to 8x8 and applying a simple box blur, which Drupal image styles can do for us OOTB today. On the client side, the problem is the <g filter="blur(12px)">, which is applied via browser rendering. You can inspect https://3523781-drupal-lqip.elementalidad.com/images/hero.sqip.svg to see the file that was generated by the npm library.

The sqip command took about 2.6 seconds to run and used about 4 CPU cores (that's >10s of total compute time) for the 400 KB image.

If you look at the animated GIF demonstrating the processing progression of the underlying binary used by sqip, it becomes fairly obvious this is intensive work: https://github.com/fogleman/primitive?tab=readme-ov-file#progression

🇪🇨Ecuador jwilson3

For base64 images we need img-src data:, not entirely sure about SVG.

If you load SVG via a data URI (e.g., src="data:image/svg+xml;base64,..."), then it would be covered by img-src data:. But we'll also need to ensure any generated SVGs (especially those from third parties) are additionally sanitized against XSS.

It's a good point to consider, and maybe we'd need a site-level configuration to choose between inline data URIs and additional requests for the thumbnails.

🇪🇨Ecuador jwilson3

I set up a test site with a few LQIP approaches to be able to test the visual load transitions (using a poor man's "delay" dropdown parameter to simulate latency and actually see the LQIP for more than a brief second).

  1. A couple baselines (OOTB Drupal eager/lazy load settings).
  2. A couple basic LQIPs based on an inline square 8x8 thumbnail inspired by how Unsplash does it. Unsplash uses an inline BMP, but I also tested with inline PNG (same size as BMP), and an inline WebP which produced a much smaller inline payload as well as a slightly different visual blur (the BMP and PNG were visually equivalent). The key here is to add a simple box blur to the 8x8 thumbnail to avoid browsers rendering jagged edges between adjacent high-contrast pixels when scaling up the thumbnail to full-res size in the browser. I also tested without blur, and with larger thumbnails like 16x9, but none of these options look as visually appealing as the simple, blurred 8x8 square image.
  3. An LQIP WebP Smooth approach, using an 8x8 blurred WebP inline thumbnail with a smooth fade-in effect, requiring an onload JS handler to transition from low- to high-res (see the sketch after this list). IMO this is the clear winner on visual quality, architectural simplicity (no third-party deps aside from GD), and resource usage both client- and server-side.
  4. The Ultimate LQIP technique, which depends on two LQIPs and suffers from twice the number of HTTP requests.
  5. The BlurHash technique. BlurHash has an existing Drupal module, but calculating the blurhash is fairly resource intensive on the server side, and the Drupal module doesn't do any caching. It also depends on the clever base83 hash being decoded with JavaScript on the client side, but the third-party library is a JS module, which complicates usage in Drupal, requiring the use of import and knowing the path to the library's JS file.
  6. A couple CSS-blur techniques including:
    • a client-side blur of a small thumbnail. CSS blur applied to an image looks really bad around the edges of the image and is a non-starter.
    • The CSS-only LQIP technique. This has horrible CSS complexity to create the integer hash and the client-side CSS gradient code to "decode" it. Also, the resulting effect of a grayscale gradient applied over the image's calculated average color (which must be computed server-side from the source image) looks extremely simplistic compared to the 90-byte WebP thumbnail.
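
Here is a minimal sketch of how the 8x8 blurred inline-WebP placeholder from option 3 could be generated with GD (the file name is hypothetical, and GD's Gaussian filter stands in for the box blur mentioned above):

$src = imagecreatefromjpeg('hero.jpg');
$tiny = imagescale($src, 8, 8);
imagefilter($tiny, IMG_FILTER_GAUSSIAN_BLUR); // avoid jagged edges on upscale
ob_start();
imagewebp($tiny, null, 80);
$data_uri = 'data:image/webp;base64,' . base64_encode(ob_get_clean());
// Emit as <img src="$data_uri"> and swap in the full-res image onload.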

I had a look at LCP for each of these with WebPageTest and PageSpeed Insights, but no solid winner emerged. In my tests, the "Ultimate LQIP" option had the worst LCP of them all on WebPageTest, so the LCP algorithm may vary based on which tool is used. YMMV.

Ultimately, I think the LCP goal is possible but difficult to achieve for hero images with an LQIP approach alone, since you need twice the requests, and the >0.055 BPP ratio ends up requiring a fairly large image for the LCP (~44 KB) versus a simple 90-byte 8x8 thumbnail. However, it is also worth noting that none of the other approaches I've found that depend on a low-res blurred image or CSS gradient will positively affect LCP, since they inherently do not meet the minimum BPP ratio.

Looking forward to others' thoughts, insights, and reviews.

Code here: https://github.com/jameswilson/3523781-Drupal-LQIP
Site here: https://3523781-drupal-lqip.elementalidad.com/

🇪🇨Ecuador jwilson3

Because the blurhash is calculated inside a hook_preprocess_image() function, unnecessary and costly regeneration of the hash happens after a Drupal cache rebuild, or any time the hook is executed, such as during theme development when Twig caching is disabled. This severely hinders the theming process, making it completely unusable on any page with an image of significant size.

Therefore, I would reiterate the importance of caching the generated blurhash either in the database (as issue summary suggests) or at the very least in the filesystem.
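
A minimal sketch of the caching idea, using Drupal's cache API from a preprocess hook (the function and cache key names here are illustrative, not the module's actual API):

/**
 * Implements hook_preprocess_image().
 */
function mymodule_preprocess_image(array &$variables) {
  if (empty($variables['uri'])) {
    return;
  }
  $cid = 'mymodule:blurhash:' . md5($variables['uri']);
  $cache = \Drupal::cache();
  if ($cached = $cache->get($cid)) {
    $hash = $cached->data;
  }
  else {
    // Hypothetical expensive step this issue wants to avoid repeating.
    $hash = _mymodule_calculate_blurhash($variables['uri']);
    $cache->set($cid, $hash);
  }
  $variables['attributes']['data-blurhash'] = $hash;
}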

🇪🇨Ecuador jwilson3

While the server-side blur is probably feasible, I don't think this precludes us considering tiny inline images (smaller than 20x20px).

Maybe next steps could be to create a few simple HTML page examples using the different approaches to see how Lighthouse LCP behaves.

Proposed tests:

  • Baseline 1: JPG Hero image with lazy loading (Drupal's default option for all images).
  • Baseline 2: JPG Hero image with eager loading.
  • LQIP w/o CSS blur: Hero image with the basic LQIP technique using a tiny base64 inline WebP placeholder image (a scaled-down version of the hero, sized to no more than 20×20 pixels).
  • LQIP w/ CSS blur: Hero image with LQIP technique using a tiny base64 inline WebP placeholder image with client-side blur applied.
  • SQIP
  • Pure CSS LQIP.

Points to consider for fair comparison:

  1. payload size: inline WebP base64 versus inline SVG versus CSS (+ JS) size.
  2. placeholder render quality: a CSS blur will look good, but could the <20×20px base64 LQIP image work with no blur applied (reducing client-side processing and shaving bytes off the CSS)?
  3. load experience: is there any jank in the visual shift from placeholder to full-res image? Is it annoying enough to need a JS fade-in effect?
  4. Largest Contentful Paint score: does each technique produce an efficient Lighthouse LCP based on the placeholder, or do any of them get a longer LCP due to the full-res image loading in later?
  5. processing requirements: This will be hardest to confirm outright, but a tiny inline image with no CSS blur and minimal to no JS is more efficient than more complex solutions, and should count for something unless any of the previous points disqualify it.
🇪🇨Ecuador jwilson3

I've rebased the MR on the latest 11.x in an attempt to clear unrelated CI failures...

🇪🇨Ecuador jwilson3

Why not just inherit the one from Drupal core?

🇪🇨Ecuador jwilson3

Said another way, the concept of feature flags in Drupal is implemented as $config['module.settings']['enabled'] = FALSE;. Those feature flags can then also optionally be exposed in the admin UI, to facilitate quick debugging. Ultimately though, adding a module settings configuration is the only realistic approach to solving this.

The patch in #29 does exactly this. If I had to criticize it at all, it would be to rename the setting 'purge' to 'enabled', to match the property name introduced in the FastlyPurger class.

🇪🇨Ecuador jwilson3

I don't think it makes sense to have a label for an input that is hidden.

I totally agree with you. And my prior suggestion to convert to a fieldset w/ legend was rooted in this thinking. But a fieldset that only contains a hidden form element with a list of links also makes no sense semantically.

Semantically, the only thing I can think of that makes sense here is a <nav> wrapper with a span or div tag around the label. Of course, at that point we would lose the default Drupal form element label styling, which makes this issue somewhat of a breaking change.

And sadly this is why I ended up with the current workaround to just add the id attribute to the hidden input.

🇪🇨Ecuador jwilson3

Good point about latency. Thanks for clarifying it. I read that in the IS, but didn't understand, or at least it didn't sink in.

Thinking a bit more on server-side blur — to get a decent-looking blur baked into a raster image, the placeholder needs to be relatively large — e.g., 600x400 — unlike a tiny 15x10 that works fine when upscaled and blurred with CSS client-side. That larger size means more bytes, which makes base64 inlining less appealing due to HTML bloat. And even then, a 600x400 server-side blurred placeholder might still look blocky when upscaled to the final display size, especially on high-DPI screens.

For server-side blur, there’s a point of diminishing returns at around ~10% of the real image size, or ~5 KB in payload. A 600x400 image with heavy blur applied could be in the 10 KB to 40 KB range.

🇪🇨Ecuador jwilson3

Taking https://csswizardry.com/2023/09/the-ultimate-lqip-lcp-technique/ into account, if a goal is to have the LQIP counted as the LCP and avoid the full image overriding it, then the CSS-only version might not work, since it is only a 3x2 pixel image. On the other hand, I wonder how the inline SVG approach would fare in the LCP calculation. SQIP relies on a client-side CSS blur technique, which is processor intensive. There are tradeoffs all around; the comment on https://github.com/axe312ger/sqip/issues/116 seems to suggest that doing the image blur on the server side would be beneficial, since lots of images with client-side blur is CPU/GPU intensive.

Big fan of SVG here, but thinking generally, PHP and Drupal seem better positioned for a server-side raster-image LQIP approach, as long as we can figure out an algorithm that hits the 0.055 BPP threshold and, ideally, a WebP conversion step, assuming it's available server-side.

The choices and tradeoffs come down to where to put the burden:

  • One time server-side in memory raster with blur using tools natively available to PHP.
  • One time server-side raster image without blur using tools natively available to PHP + CSS client-side blur.
  • One time server-side vector image generation using non-native tooling (or writing a PHP library) + CSS client-side blur. NEEDS LCP validation.

Some references:

🇪🇨Ecuador jwilson3

With Lighthouse transitioning to "insights" instead of legacy "audits", fetchpriority is a key new signal for image delivery and LCP optimization. I hope this will stoke renewed interest and forward momentum on this issue.

It seems to me that there's not really a use case for specifying anything other than fetchpriority="high", because the default is no explicit fetch priority, falling back to source order. Explicitly setting images located earlier in source order to "low" seems like an anti-pattern and a true edge case, since accessibility best practices dictate that DOM order should match rendered order.

🇪🇨Ecuador jwilson3

I agree that preload should be kept separate from eager/lazy, because preload adds a new HTML tag to the HEAD, as opposed to an attribute on the IMG tag.

Also, with Lighthouse now in the process of switching over to "insights" instead of legacy "audits", there will also be forward momentum on the sister issue 📌 "Add option for fetchpriority attribute within 'image loading' area on image field formatter", which uses a similar checkbox toggle in the image field formatter UI. So when evaluating UI patterns, it would be good to take into account that an ideal "final" product here would have two checkboxes, one for Preload and another for Fetch Priority; the sketch below shows how the two differ in markup.
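
To make the distinction concrete, a sketch of how the two end up in markup from a render array (the html_head_link attachment structure is core's; the build array itself is hypothetical):

// Preload becomes a <link> tag in <head>:
$build['#attached']['html_head_link'][] = [
  ['rel' => 'preload', 'as' => 'image', 'href' => '/hero.webp', 'fetchpriority' => 'high'],
  FALSE,
];
// ...whereas loading and fetchpriority are attributes on the <img> itself:
$build['hero'] = [
  '#theme' => 'image',
  '#uri' => 'public://hero.webp',
  '#attributes' => ['loading' => 'eager', 'fetchpriority' => 'high'],
];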

🇪🇨Ecuador jwilson3

I was about to pick this up, but neither throbber-active.gif nor throbber-inactive.png from core has been copied into the Seven theme, meaning it inherits whatever core already has.

The only icon I see in the seven theme codebase is https://git.drupalcode.org/project/seven/-/blob/2.0.x/images/loading-sma... referenced only by jQuery UI Dialog.

https://git.drupalcode.org/project/seven/-/blob/2.0.x/css/components/dia...

The loading-small.gif is out of scope for this issue; therefore, I suggest closing this one and opening a follow-up issue to pull in the animated SVG I did for core in 📌 "Update loading icon and use SVG".

🇪🇨Ecuador jwilson3

I suggest starting over with a clean MR that just pulls in the SVG throbber that ended up getting committed to core in #1974928: Update Drupal's default throbber icons, using the same approach from that issue for the Seven theme. No modernizer needed.

🇪🇨Ecuador jwilson3

I'm clarifying what I understand to be "the ask" in the issue title.

I'm just curious why you couldn't do this with CSS though. Is there some default drupal behavior that looks broken without such an option built into the module?

🇪🇨Ecuador jwilson3

Thanks for the review and rebase. Merge pipeline to 2.3.x in progress.

🇪🇨Ecuador jwilson3

jwilson3 changed the visibility of the branch 3480807-update-dev-dependencies to hidden.

🇪🇨Ecuador jwilson3

jwilson3 changed the visibility of the branch 3480807-update-ddev to hidden.

🇪🇨Ecuador jwilson3

@lincoln-batsirayi Try step debugging with Xdebug, ideally configured to break on exceptions. You can also try disabling modules one by one to find a possible culprit. Sidenote: this kind of support question is somewhat out of scope for this issue, so please take any follow-up questions elsewhere (e.g. #drupal-support in Slack). I'm still not really convinced that having an uncaught exception for a 403 end up in the logs is useful in this case either.

🇪🇨Ecuador jwilson3

The only thing I can think of is that the label element is displayed here for visually consistent styling when this element appears alongside other exposed filter elements that *do* rely on the form element label.
