Recent comments

🇷🇺Russia niklan Russia, Perm

Re-roll for Drupal 11.1.x

🇷🇺Russia niklan Russia, Perm

Checked. Works as expected. The close button is appended last in the modal body container and does not conflict with the sticky footer.

🇷🇺Russia niklan Russia, Perm

@rkoller visually, yes, it’s in the header. But if you try to apply styles to .cm-footer, like position: sticky; bottom: 0, the close button also becomes sticky at the bottom of the page, because it is inside .cm-footer, which is clearly wrong. This is not an action button with an explicit label, which would be fine for the footer; it’s just a UI element that is expected to always be at the top. Its current position in the DOM makes styling it correctly difficult or, in some cases, impossible.

🇷🇺Russia niklan Russia, Perm

Shouldn't the Klaro module replace the Klaro JS markup in this case? Removing these conditions ensures the close button is consistent across all scenarios.

The conditions control when the Drupal module has to add its own close button; otherwise the close button from Klaro JS is used.

In that case, shouldn't these conditions affect all cases, not just the consent modal in a specific state?

  • Drupal.behaviors.klaro.manager.confirmed — as I understand it, this becomes TRUE if consent information is present (in a cookie or localStorage), no matter what consent was chosen.
  • Drupal.behaviors.klaro.config.mustConsent — becomes TRUE when a user must choose consent and confirm it. Basically, this is used for the "Consent dialog" that is opened by default.

if ((Drupal.behaviors.klaro.manager.confirmed) && (!Drupal.behaviors.klaro.config.mustConsent)) {

So, this condition can be explained as follows: if the user has already performed some actions with consent (accepted, customized, or declined) AND there is no force for the "Consent dialog", disable the close button and let Klaro JS handle it.
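Restated as a tiny truth table (a sketch using the names above; moduleAddsCloseButton is a hypothetical helper, not module code):

```javascript
// Sketch: when does the Drupal module add its own close button?
// It does so unless the user has confirmed AND mustConsent is off,
// i.e. the negation of the condition quoted above.
function moduleAddsCloseButton(confirmed, mustConsent) {
  return !(confirmed && !mustConsent);
}

console.log(moduleAddsCloseButton(false, false)); // true (not confirmed yet)
console.log(moduleAddsCloseButton(true, false));  // false (Klaro JS handles it)
console.log(moduleAddsCloseButton(true, true));   // true (dialog is forced)
```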

But the “Consent dialog” from the rel="open-consent-manager" trigger is exactly the same. So, it sounds like the conditions are useless or wrong? The one with confirmed doesn’t make any sense. If it is confirmed, why would you try to hide the close button, or if it’s not mustConsent, again, why would you hide it?

I see the logic for the close button, which disables all the consents on click. But shouldn’t it be that way only when Drupal.behaviors.klaro.manager.confirmed is FALSE? So, if it is not yet confirmed, disable everything and close, but if it is confirmed, we should preserve the settings and just close the window?

      // Handle close button X.
      if (drupalSettings.klaro.show_close_button) {
        if (document.querySelector('#klaro-cookie-notice')) {
          var elem = document.querySelector('#klaro-cookie-notice');
        }
        else if (document.querySelector('.cm-modal.cm-klaro')) {
          var elem = document.querySelector('.cm-modal.cm-klaro .cm-header');
        }

        if (elem && !document.querySelector('.klaro-close')) {
          var close_label = Drupal.t("Close dialog and decline all", {}, {context: 'klaro'});
          if (Drupal.behaviors.klaro.manager.confirmed) {
            close_label = Drupal.t("Close dialog and keep existing consents", {}, {context: 'klaro'});
          }
          var close_html = '<button title="' + close_label + '" type="button" class="cn-decline klaro-close" tabindex="0" aria-label="' + close_label + '"></button>';
          elem.insertAdjacentHTML('beforeend', close_html);
          document.querySelector('.klaro-close')?.addEventListener('click', (e) => {
            if (!Drupal.behaviors.klaro.manager.confirmed) {
              Drupal.behaviors.klaro.manager.changeAll(false);
              Drupal.behaviors.klaro.manager.saveAndApplyConsents();
            }
          }, false);
          document.querySelector('.klaro').classList.add('klaro-close-enabled');
          document.querySelector('.klaro .cookie-modal .cm-modal .hide')?.remove();
        }
      }

Take a look at the .klaro-close event listener. It only changes all consents to a disabled state and applies the changes if the user has still not interacted with the consent dialogs in any way. However, if something has already been selected, it does nothing and simply closes the window.

Another approach could be to remove the close button added by Klaro JS when it meets the current conditions, instead of setting elem to false. In that case, the “Consent modal” would have no close button at all, which I think is also acceptable.

🇷🇺Russia niklan Russia, Perm

Not sure what these conditions are meant for, but removing them fixes the problem. In that case, everything works as expected.

🇷🇺Russia niklan Russia, Perm

If the order must be preserved for backward compatibility and we don't want to overcomplicate this (by providing a setting for it), maybe we can provide some kind of API switch?

Something like:

Drupal.behaviors.klaro.config.closeButtonLocation = Drupal.behaviors.klaro.config.closeButtonLocation || '.cm-modal.cm-klaro .cm-footer';
var elem = document.querySelector(Drupal.behaviors.klaro.config.closeButtonLocation);

This would preserve existing installations and keep the element in the footer, but those who need it in the header can push this value through hook_page_attachments_alter() if needed and mention that in the README.

I don't see why this element needs to be in the footer in the first place, but since it's already there, let's not break other projects. This can be changed in the next major release; for now, the module can provide a documented workaround without explicit settings or UI.

🇷🇺Russia niklan Russia, Perm

Yes, we can't update the existing JS snippet. But, there are other problems related to the new properties:

  1. Shouldn't we provide a proper update for klaro_app entities to set default values for the newly created properties? I have tested this locally, and it looks like Drupal detects this automatically and adds the missing properties on the next configuration export.
  2. Shouldn't we update the default klaro_app entities (./config/install) to match the schema? Drupal seems to do this at some point by merging missing properties, but currently they are getting out of sync with the actual entities, which might be a problem in certain scenarios.
  3. The description of on_decline should be checked and made more specific about when exactly the callback is called. At the moment I'm unsure whether the description or the behavior is correct, but the behavior seems to be, so the description needs updating.
🇷🇺Russia niklan Russia, Perm

I think the update for existing Klaro app entities should also be provided, since the entities now have new properties and the existing configurations are invalid for the new schema. I'm curious about whether this update should be more cautious, considering how long it has been since this issue was created. In its current state, it will overwrite all existing callback data after the update, which could be a bit intrusive. Perhaps only set a default value if no value exists at the time of the update? This way, projects that relied on this merge request until the official release will retain their configuration.
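The cautious approach boils down to this generic sketch (setDefaultIfMissing is illustrative, not actual module code; a real implementation would live in a hook_update_N()):

```javascript
// Sketch: only set a default when no value exists yet, so projects
// that relied on the merge request keep their configuration.
function setDefaultIfMissing(settings, key, defaultValue) {
  if (settings[key] === undefined) {
    settings[key] = defaultValue;
  }
  return settings;
}

console.log(setDefaultIfMissing({ on_decline: 'custom' }, 'on_decline', ''));
// { on_decline: 'custom' }  (existing value preserved)
console.log(setDefaultIfMissing({}, 'on_decline', ''));
// { on_decline: '' }  (default applied)
```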

🇷🇺Russia niklan Russia, Perm

Hi, can you please create an MR for the patch changes so I can review them? There are clearly issues with the STARTER.info.yml changes. In most cases they are definitely wrong: the paths are truncated at 80 characters, which is incorrect. This is not actually a .txt file; it's a YAML template and should not be treated like a TXT file, so the 80-character hard limit doesn't apply here. It just breaks things.

🇷🇺Russia niklan Russia, Perm

+1 for this change. PHPStan annotations allow more explicit return types and help detect more bugs and issues before they go live.

🇷🇺Russia niklan Russia, Perm

Thank you. I've tested it and it fixed the issue.

Now I can even see that the parsed keys do not contain any leading zeroes, which is consistent with the data inside the index itself.

Index: search
Keys: 'FOO-012345'
Parsed keys: 'foo 12345'
Searched fields: [ALL]
🇷🇺Russia niklan Russia, Perm

I've prepared a patch for the tokenizer which basically "emulates" the behavior of the database server. This might be helpful for someone, but it's a dirty solution that only covers the common cases. It hasn't been tested with other search server types, and it will probably require reindexing the indexes. However, since it's only relevant to database servers, which are typically used when there is no dedicated search engine, it shouldn't cause any problems unless multiple servers are used together.
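As a rough illustration of what such "emulation" means (a sketch; the actual patch is PHP, and the regexes here are assumptions): lowercase the keys, split on non-alphanumeric characters, and strip leading zeros from numeric tokens, the way the database backend normalizes them.

```javascript
// Sketch: normalize search keys the way the database backend would.
function tokenize(keys) {
  return keys
    .toLowerCase()
    .split(/[^a-z0-9]+/)                     // split on non-alphanumerics
    .filter(Boolean)                         // drop empty tokens
    .map((t) => t.replace(/^0+(?=\d)/, ''))  // strip leading zeros
    .join(' ');
}

console.log(tokenize('FOO-012345')); // 'foo 12345', as in the log above
```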

🇷🇺Russia niklan Russia, Perm

OK, actually, see file_file_download(): it returns void in all cases instead of NULL.

But we can't use void as the return type and combine it with other types. Does that mean all hook implementations must explicitly return NULL and void return should be deprecated somehow?

🇷🇺Russia niklan Russia, Perm

Hi, it should be:

  properties:
    is_active:
      type:
        - 'null'
        - boolean

Other types are basically cast to strings by the YAML parser, and boolean becomes 'boolean' internally like any other type, because they are both strings. However, the null type is special in YAML and is not cast to anything; it just becomes an actual NULL. In fact, this has never worked, and your previous example should also have thrown an exception when an actual NULL value was passed.

Previously, if a component could actually have a null value dynamically (mostly used as a fail-safe: for example, a field was empty but you expected it to have data in most cases), it would fail in production without a clear explanation of why. That was a much more dangerous approach, and I have faced it multiple times. Without other patches (see the issue "Unhelpful error message with enumeration problem"), you won't even know which component failed on the page; it's just a WSOD.

As proposed initially, we could add some kind of protection by casting all null types to 'null' internally to improve the DX. Or we could simply add a note to the change record mentioning this. I am not sure which is better. I think we should get input from the core maintainers here.
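The "protection" idea boils down to normalizing the parsed type list. A minimal sketch (normalizeTypes is hypothetical; core would do this in PHP inside the validator):

```javascript
// Sketch: a YAML parser turns an unquoted null into an actual null,
// while other scalars arrive as strings; cast null back to 'null'.
function normalizeTypes(types) {
  return types.map((t) => (t === null ? 'null' : t));
}

console.log(normalizeTypes([null, 'boolean'])); // [ 'null', 'boolean' ]
```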

From a component developer's perspective, the requirement to wrap null in a string seems unnecessary, because other data types do not require it. This is what the exception added in that issue is all about: it surfaces problems in the component that were previously hidden until a NULL value was passed, leaving it confusing why the component failed when the type was clearly listed. At least now an exception is triggered before the code is deployed, but apparently it is still unclear what is actually wrong. Maybe we should improve the exception message by adding extra information about null.

From the code developer's point of view, the component validator is already difficult to understand, in my opinion. Adding workarounds to fix these issues automatically will not solve the problem and may even make it worse. We are also on a slippery slope: there are likely other cases handled differently by the YAML parser that would require their own workarounds. And a workaround at the validator level would only "fix" the value for the validation process; it would still be null internally, so other parts of the component code might still fail. As it stands, the validator detects the problem very early and throws an exception, and then we don't have to worry about it; the component developer either fixes the type or removes it.

As another suggestion, we could add a best practice/tip/hint/guideline to wrap all type values in strings. This would make things consistent and safer. Internally, the component code expects all type values to be strings, so it makes sense. However, this guideline would add quotes everywhere, even though strictly speaking only null needs them. This is what it should actually look like from the code's perspective:

  properties:
    is_active:
      type:
        - 'null'
        - 'boolean'

^ This is the "proper" example and usage, based on how the YAML parser treats it.

🇷🇺Russia niklan Russia, Perm

It's just my assumption, but I don't think it can affect Google Analytics in any way. Prefetching basically loads the HTML page from the server. Since Google Analytics is a client-side library, it will not run until the page has been rendered. I also seriously doubt that Google Analytics mishandles prefetches, because they are easy to detect: not only do they use a special link format, they also include the "Purpose: prefetch" request header. Not to mention, the library itself is built and maintained by Google; if this were an issue, they would surely mention it in their repo. I think it is just a coincidence.

But, as far as I understand, the library itself (not the Drupal module) can work with the Speculation Rules API. In theory, this could cause an issue because it can also prerender pages, instead of simply preloading, and pre-rendering also means that some JavaScript might be executed. (But I'm not sure; I haven't dug into it too deeply.)

You might want to check the Speculation Rules on the website. They can be in the response headers or on the page.

<script type="speculationrules">…
Speculation-Rules: "…."

If you are using Cloudflare or a similar service, please double-check that. For example, Cloudflare has an option called "Speed Brain" which is essentially the Speculation Rules API.
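For reference, an inline speculation rules block looks roughly like this (a hypothetical example with placeholder URLs, following the Speculation Rules API format):

```html
<script type="speculationrules">
{
  "prerender": [
    { "source": "list", "urls": ["/next-page"] }
  ]
}
</script>
```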

Knowing that Quicklink (the library) can interact with speculation rules, and that prerendering runs pages in the background, this could in theory be a problem that affects Google Analytics statistics.

🇷🇺Russia niklan Russia, Perm

With the given fix, the element is always taken from the context instead of a global reference via drupalSettings.

🇷🇺Russia niklan Russia, Perm

Hi, I think your concerns are already covered by these settings:

You also have the option to ignore links that use specific CSS selectors. If you hide links with CSS, consider using a class like "visually-hidden" and adding it to this option. In this case, they will not be pre-fetched.

It is also possible to use selectors such as :not(.prefetch) to disable pre-fetching for everything except links with the prefetch class, allowing you to control what should and shouldn't be pre-fetched; for example, product pages on an e-commerce site.

Also, it is better to configure the 'Cache-Control' setting with this module enabled. That way, pages the user has already visited will be pre-fetched from the memory cache instead of making a request to the server.

I agree that this module and library should be used with caution; they are not a silver bullet and will not magically improve performance. They have drawbacks and should be used wisely. If your website consists primarily of static pages without user sessions, this is a great tool to improve overall page responsiveness. With the right Drupal setup, these issues are minor for most hosting providers.

🇷🇺Russia niklan Russia, Perm

niklan changed the visibility of the branch 11.1.x to hidden.

🇷🇺Russia niklan Russia, Perm

Tested locally and it seems to have resolved the issue as well.

🇷🇺Russia niklan Russia, Perm

Updated the MR and improved the data provider.

P.S. Sorry for the delay, I did not receive notifications for the MR's comments.

🇷🇺Russia niklan Russia, Perm

Basically, the validation itself is covered by \Drupal\Tests\Core\Theme\Component\ComponentValidatorTest::dataProviderValidatePropsInvalid, where ctaTarget violates the allowed properties in the enum. What is not covered is the message itself.

Should it be a dedicated test method for that case or should the \Drupal\Tests\Core\Theme\Component\ComponentValidatorTest::testValidatePropsInvalid assert the exception message as well?

What is the best way to cover it?

🇷🇺Russia niklan Russia, Perm

Added a comment and adjusted the issue title.

🇷🇺Russia niklan Russia, Perm

Updated the summary and simplified it a little, since there is a failing test and a possible solution exists.

🇷🇺Russia niklan Russia, Perm

Changed message to be:

[foo:foo/example] Does not have a value in the enumeration ["foo","bar"]. "baz" provided

🇷🇺Russia niklan Russia, Perm

Well, the pipeline confirms that it fixes the problem and doesn't break anything else (at least among what is tested). On my project, everything now works as expected as well. Honestly, it is hard for me to explain why this helps.

Just an assumption from debugging: it looks like such components are represented twice in the Twig node tree. One of them is some kind of container (wrapper) without a payload of variables, but \Drupal\Core\Template\ComponentNodeVisitor::leaveNode still adds a validation function to it, which leads to an exception.

🇷🇺Russia niklan Russia, Perm

I had almost given up, but I found a major clue. It seems that \Drupal\Core\Template\ComponentNodeVisitor::leaveNode should exit early if the node has a parent.

It seems that when you render a node with a parent set, the validate_component_props filter receives the wrong context, which leads to an exception during validation.

I've tested locally, and everything seems to be working fine now.

🇷🇺Russia niklan Russia, Perm

Did a quick and dirty test with just Twig, and it works fine.

script.php:

<?php

use Twig\Environment;
use Twig\Loader\FilesystemLoader;

$loader = new FilesystemLoader(__DIR__ . '/templates');
$twig = new Environment($loader, [
  'cache' => __DIR__ . '/cache',
]);
echo $twig->render('foo.twig');

./templates/foo.twig:

<div class="wrapper">
  {% embed 'bar.twig' with {} only %}{% endembed %}
</div>

./templates/bar.twig:

Hello from Bar!

Output:

$ drush scr var/scripts/twig/test.php 
<div class="wrapper">
  Hello from Bar!
</div>
🇷🇺Russia niklan Russia, Perm

I'm not sure what I should add or change in the MR at this point. I can't figure out where this bug comes from; it seems to me that it is potentially a bug in Twig itself. I'm not familiar with Twig's internals, but I think I should trace where the template is compiled to understand why it is compiled with a wrong template reference that leads to a recursion.

I just don't know where to start with this. It's definitely not a ComponentValidator problem; everything hints at and leads to Twig. Basically, the 'example:foo' component ends up being rendered twice even though there is a single call, and the second call comes from a wrongly compiled Twig template, which is a little hard to follow. So I want to find out where and how these templates are built in the first place.

Maybe I'll try to make a failing test later.

🇷🇺Russia niklan Russia, Perm

Also, I'm thinking: can this message be improved? I personally don't like the current result, [foo:foo] [example]. Maybe something like [provider:component/properties/property] would be better, e.g. [foo:foo/properties/example]?

🇷🇺Russia niklan Russia, Perm

Added a condition to append information about the value only if one is provided.

🇷🇺Russia niklan Russia, Perm

I also added the provided value to the message, which helps to find the issue even faster:

Drupal\Core\Render\Component\Exception\InvalidComponentException: [foo:foo] [example] Does not have a value in the enumeration ["foo","bar"]. "baz" provided. in Drupal\Core\Theme\Component\ComponentValidator->validateProps() (line 203 of core/lib/Drupal/Core/Theme/Component/ComponentValidator.php).

🇷🇺Russia niklan Russia, Perm

With the merge request, the error message will be:

Drupal\Core\Render\Component\Exception\InvalidComponentException: [foo:foo] [example] Does not have a value in the enumeration ["foo","bar"] in Drupal\Core\Theme\Component\ComponentValidator->validateProps() (line 203 of core/lib/Drupal/Core/Theme/Component/ComponentValidator.php).

🇷🇺Russia niklan Russia, Perm

Niklan changed the visibility of the branch 8.x-2.x to hidden.

🇷🇺Russia niklan Russia, Perm

Added a check for property type definition values: if any value is a non-string, an exception is thrown. In my case, it is now clear where the problem is:

Drupal\Core\Render\Component\Exception\InvalidComponentException: The component "foo:bar" uses non-string types for properties: required. in Drupal\Core\Theme\Component\ComponentValidator->validateDefinition() (line 96 of core/lib/Drupal/Core/Theme/Component/ComponentValidator.php).

$schema: https://git.drupalcode.org/project/drupal/-/raw/HEAD/core/assets/schemas/v1/metadata.schema.json
name: Bar
props:
  type: object
  properties:
    required:
      type: [null, boolean]
      title: Required
🇷🇺Russia niklan Russia, Perm

I propose to throw an exception if the property's types contain values that are not strings, which covers floats and special YAML types. We definitely do not want to "correct" a value like !!float 12.3 in any way. We should throw a meaningful exception indicating the component ID and property name; then developers can easily identify and fix the problem.

If we are going to fix NULL for developers automatically, we still have to check for other types and throw an exception, so it's just more code with basically the same result.
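The proposed check could be as small as this sketch (assertStringTypes and the message wording are illustrative; the real validator is PHP):

```javascript
// Sketch: reject any non-string entry in a property's "type" list,
// naming the component and property so the developer can fix it.
function assertStringTypes(componentId, propName, types) {
  const invalid = types.filter((t) => typeof t !== 'string');
  if (invalid.length > 0) {
    throw new Error(
      `The component "${componentId}" uses non-string types for property "${propName}".`
    );
  }
}

assertStringTypes('foo:bar', 'required', ['null', 'boolean']); // passes
// assertStringTypes('foo:bar', 'required', [null, true]);     // throws
```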

🇷🇺Russia niklan Russia, Perm

Added a test that fails with such a definition. It is practically useless, I think, but it shows that the input and the expected data are not the same. If you try to render it, a PHP error occurs. Maybe we should throw an exception in that case; I don't know.

🇷🇺Russia niklan Russia, Perm

It also fails for libraries with external dependencies, but with a slightly different exception.

Only file JavaScript assets can be optimized.

An example of such a library:

example:
  version: 1
  remote: https://example.com
  license:
    name: Example
    url: https://example.com/license
    gpl-compatible: true
  js:
    js/init.js: { }
    //example.com/script.js: { preprocess: false }

The suggested solution won't work in this case because the type here is external, which is handled in the method \Drupal\Core\Asset\JsCollectionOptimizer::optimize, but not in the method \Drupal\Core\Asset\JsCollectionOptimizerLazy::optimizeGroup. But for now, I’m out of ideas on how to solve this issue for external libraries. The result of the lazy method should be optimized JavaScript.

For those who have encountered this problem, I suggest explicitly setting { preprocess: false, minified: true } in code for such JavaScript files.
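For example (a sketch of the workaround in the library definition, with a placeholder URL):

```yaml
js:
  //example.com/script.js: { preprocess: false, minified: true }
```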

It's also worth mentioning that neither the solution from the MR nor Drupal\Core\Asset\JsCollectionOptimizer::optimize injects the library license information.

🇷🇺Russia niklan Russia, Perm

I created the MR with the fix, and it works with the google_tag module. However, I'm still not sure about this part of the code: $data = file_get_contents($group['items'][0]['data']);. The same code occurs in the method \Drupal\Core\Asset\JsCollectionOptimizer::optimize. It seems that there may be more than one item in the group, in which case some content would be missing. I need some advice here: is that safe or not?

🇷🇺Russia niklan Russia, Perm

I'm facing the same issue. I don't understand the current logic: if a library asks not to be preprocessed, why is it minified by default anyway? In that case, preprocess: false makes no sense.

I believe the issue lies in the logic implemented in two methods: \Drupal\Core\Asset\JsCollectionOptimizerLazy::optimizeGroup and \Drupal\Core\Asset\JsCollectionOptimizer::optimize.

The \Drupal\Core\Asset\JsCollectionOptimizer::optimize respects asset group preprocess status:

          if (!$js_group['preprocess']) {
            $uri = $js_group['items'][0]['data'];
            $js_assets[$order]['data'] = $uri;
          }

The \Drupal\Core\Asset\JsCollectionOptimizerLazy::optimizeGroup method, by contrast, optimizes the group unconditionally: it has no such check and simply proceeds with the optimization.

This method should look like:

  public function optimizeGroup(array $group): string {
    $data = '';
    $current_license = FALSE;

    // No preprocessing, single JS asset: just use the existing URI.
    if ($group['type'] === 'file' && !$group['preprocess']) {
      $data = file_get_contents($group['items'][0]['data']);
    }
    else {
      foreach ($group['items'] as $js_asset) {
        // Ensure license information is available as a comment after
        // optimization.
        if ($js_asset['license'] !== $current_license) {
          $data .= "/* @license " . $js_asset['license']['name'] . " " . $js_asset['license']['url'] . " */\n";
        }
        $current_license = $js_asset['license'];
        // Optimize this JS file, but only if it's not yet minified.
        if (isset($js_asset['minified']) && $js_asset['minified']) {
          $data .= file_get_contents($js_asset['data']);
        }
        else {
          $data .= $this->optimizer->optimize($js_asset);
        }
        // Append a ';' and a newline after each JS file to prevent them from
        // running together.
        $data .= ";\n";
      }
    }

    // Remove unwanted JS code that causes issues.
    return $this->optimizer->clean($data);
  }
🇷🇺Russia niklan Russia, Perm

I encountered the same issue again, and there was a lot of noise about it in the logs. If you just want to fix the images and avoid patching core, you can use the following script (adapt it to your needs):

#!/usr/bin/env bash
# Fixes iCCP: known incorrect sRGB profile
cd web/sites/default/files || exit
find . -type f -name '*.png' -exec mogrify \{\} \;

This will go through all your PNG images and re-save them, fixing any issues found. However, it fixes more than just the iCCP profile, so use it with caution. I haven't run into any problems yet.

If you know exactly which images have problems, it's best to run the command directly on those files, like this (you can play with broken files from that issue):

mogrify filename.png

I recommend testing this tool locally before applying it to your live files. Alternatively, you could run the entire process locally and then sync the changes using rsync.

Please note that this command may take some time to complete, because it opens and saves every image regardless of whether it has any issues. The more files you have, and the larger they are, the slower the process will be.

It uses the mogrify binary provided by the ImageMagick package.

🇷🇺Russia niklan Russia, Perm

Having the same issue.

Call to undefined method Drupal\Core\Entity\Sql\SqlContentEntityStorage::findMatchingRedirect() in Drupal\easy_breadcrumb\EasyBreadcrumbBuilder->getRequestForPath()

I don't understand how it's supposed to work. Previously, the findMatchingRedirect() method of the redirect.repository service was called:

$redirect = \Drupal::service('redirect.repository')
  ->findMatchingRedirect($redirect_path, [], $this->languageManager->getCurrentLanguage()
  ->getId());

As mentioned in the original post, the code was changed:

$redirect = $this->entityTypeManager->getStorage('redirect')->findMatchingRedirect($redirect_path, [], $this->languageManager->getCurrentLanguage()->getId());

It loads the storage for the redirect entity and calls ::findMatchingRedirect() on it. Maybe I'm missing something, but the redirect entity doesn't define a custom storage that would implement that method, so Drupal falls back to the default storage, which is SqlContentEntityStorage for content entity types. That storage clearly does not implement the method, which leads to the error.

🇷🇺Russia niklan Russia, Perm

Hello again. I see some folks are not very happy with silencing all errors from that call. I understand that; it can be helpful sometimes, and without it I wouldn't have found this issue either. Maybe we wrap the $image = @$function($this->getSource()); call in a condition?

We already have the `system.logging` configuration with its `error_level` setting. Maybe we do something like:

if ($error_level === 'verbose') {
  $image = $function($this->getSource());
}
else {
  $image = @$function($this->getSource());
}

This lets a developer find out that there is a problem, while nothing is logged or shown on production instances. It might look ugly, but it addresses most of the concerns here. Currently, if I remember correctly (I've already fixed those images), this gets logged on production on every page with a broken image when the image is processed for the first time (if no style derivative exists yet). In some cases, that creates a lot of useless noise in the logging system.

🇷🇺Russia niklan Russia, Perm

Hi folks, I just created a similar issue and then found this one. I'm not an a11y expert, but I expected all these field types in the grid to be tabbable. I didn't even think you were supposed to navigate them with the arrow keys. Do the two conflict with each other, or what? I think all of them should be reachable just by tabbing to them, no?

🇷🇺Russia niklan Russia, Perm

I think #3390914 📌 "Grid-style field type keyboard navigation" (Needs work) is directly related, but it solves a different problem, as far as I can see.

🇷🇺Russia niklan Russia, Perm

These are profiling results for drush site:install --existing-config with a specific website configuration. It is not an issue on the default profiles (they are simple and small) or on smaller websites (which have fewer configs). But on a multilingual website with something like 10 languages and a lot of content types, fields, etc., the configuration grows very fast, and this is where the issue starts. The profiler clearly shows that the more configuration you have, the more calls there are to Drupal\Core\Config\FileStorage->getAllCollectionNamesHelper(), which is bottlenecked by php::SplFileInfo->isDir(); the number of unique calls is hundreds of times smaller than the actual number. In my example, this check is called 587 times per unique file.

This issue reproduced everywhere, on different developers PCs and CI's.

🇷🇺Russia niklan Russia, Perm

I think the plugin should be made automatic via automatic = TRUE:

 * @EmailAdjuster(
 *   id = "maillog",
 *   label = @Translation("Maillog"),
 *   description = @Translation("Use Maillog handling."),
 *   weight = 100000,
 *   automatic = TRUE,
 * )

Currently, to make it work you have to add the adjuster manually, and that is stored in configuration, which means you have to bring maillog to production just because of this plugin. By making it automatic, Symfony Mailer will include it for every email without any configuration changes. Whether it does anything is controlled by the Maillog settings anyway, not by the adjuster.

🇷🇺Russia niklan Russia, Perm

It looks like some code will break. Can we decorate that service specifically for the installation process? I don't think this caching is actually needed at regular runtime, but it is a very significant improvement for installation. I don't see any case where configurations change on disk during installation, so this caching should not do any harm.
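The decoration would essentially memoize the scan for the duration of the install. A sketch of the idea (in JavaScript for brevity; the real decorator would be a PHP service, and the names here are illustrative):

```javascript
// Sketch: cache the result of an expensive directory scan so repeated
// calls within one (install) run do not hit the filesystem again.
function memoize(fn) {
  const cache = new Map();
  return (arg) => {
    if (!cache.has(arg)) {
      cache.set(arg, fn(arg));
    }
    return cache.get(arg);
  };
}

let scans = 0;
const listCollections = memoize((dir) => { scans += 1; return [dir]; });
listCollections('sync');
listCollections('sync');
console.log(scans); // 1: the second call was served from the cache
```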

🇷🇺Russia niklan Russia, Perm

This is a very dirty patch to see how the tests react to it.

These are the top 10 calls during installation after the fix is applied.

You can see that both Drupal\Core\Config\FileStorage->getAllCollectionNamesHelper() and php::SplFileInfo->isDir() disappear from the list.

  • Drupal\Core\Config\FileStorage->getAllCollectionNamesHelper() calls went from 36'392 → 4'551
  • php::SplFileInfo->isDir() calls went from 13'937'541 → 23'710

I measured installation time with time drush site:install --existing-config; the results are:

  • Before: real 8m 38.06s
  • After: real 7m 9.09s

That is roughly a 17% reduction in installation time.
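Double-checking the arithmetic from the timings above:

```javascript
// 8m 38.06s before, 7m 9.09s after.
const before = 8 * 60 + 38.06; // 518.06 s
const after = 7 * 60 + 9.09;   // 429.09 s
const reduction = (before - after) / before;
console.log(`${(reduction * 100).toFixed(1)}%`); // 17.2%
```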

🇷🇺Russia niklan Russia, Perm

OK this is a nice simple/safe patch and I believe it fixes the bug. Please can other people test/review?

The patch in #24 contains the test; why do it manually? With it you can test all the Drupal versions supported by the module. Most people coming here are likely already on Drupal 10.1+, but it should still work on Drupal 10.0 and earlier.

🇷🇺Russia niklan Russia, Perm

I'm not sure, especially considering that PrimitiveBase is an abstract class extending another abstract class, TypedData, with the exact same methods and logic inside.

Maybe we should do the opposite and remove the logic of these methods from TypedData, making them simply abstract and empty? Right now they are simply broken (which the test shows): they assume the $value property is present, but TypedData does not define it, and I don't think it should. In that case, though, we would have to refactor some other classes that extend TypedData.

For example, \Drupal\Core\Config\Schema\Element extends TypedData directly and, because of this problem, also defines a $value property, which makes no sense and is just extra code. It should extend PrimitiveBase or provide its own setter and getter, because if TypedData changes at some point, it will break. This is all a little confusing: TypedData uses a property it doesn't define, other code defines it for it, and we also have PrimitiveBase, which defines it and duplicates the methods as well.

🇷🇺Russia niklan Russia, Perm

Attached a test that simply shows this can lead to a problem with the undefined $value property.

🇷🇺Russia niklan Russia, Perm

@Mingsong, you are right: #24 doesn't cache CSS in any way for the 10.1+ workaround (on 10.0 and below, Drupal handles it itself). So the CSS is always generated at runtime for each email (which can also be a bottleneck). It just doesn't feel right to me to save it into cache.render, because if I want to override it for some reason, it will affect the whole render cache for no reason. I'd prefer to have control over it.

Thank you, @catch, for your input regarding Cache API usage in that case.

🇷🇺Russia niklan Russia, Perm

I can't see anywhere those content would be cached on disk (file).

Yes, because in 10.1 this logic moved to \Drupal\system\Controller\AssetControllerBase::deliver() plus \Drupal\Core\Asset\AssetDumperUriInterface::dumpToUri().

Drupal caches all page contents that have ever been accessed in the database.
The cache.render bin is designed for caching large content, such as HTML markup for entire pages across the website.

For big Drupal websites, the database is in most cases the bottleneck, and introducing additional I/O during email sending, which in bulk runs can mean thousands of emails per minute, wastes resources and slows the database further. If an email has 10 assets, that is at least 10 DB requests per email.

Also, storing CSS is not cache.render's responsibility; it is intended to store rendered HTML markup from render arrays. At the very least this should use the cache.data bin. If we want to store it in cache, I think it is better to introduce a dedicated bin for it (e.g. cache.symfony_mailer.css), so developers have control over the CSS saved there. For example, I could switch it to cache.backend.null so each email generates CSS at runtime every time, or to something like cache.backend.php to save it on disk and avoid database usage. At least it could be controlled.
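A dedicated bin is just a few lines of service definition (a sketch; the bin name cache.symfony_mailer.css and the symfony_mailer_css argument are illustrative):

```yaml
# Hypothetical addition to symfony_mailer.services.yml:
cache.symfony_mailer.css:
  class: Drupal\Core\Cache\CacheBackendInterface
  tags:
    - { name: cache.bin }
  factory: ['@cache_factory', 'get']
  arguments: [symfony_mailer_css]
```

A site could then swap the backend in settings.php, e.g. $settings['cache']['bins']['symfony_mailer_css'] = 'cache.backend.null';.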

But I still think that if we are going to "cache" the result, it should mimic the core behavior on 10.1+ and use the asset dumper for that.

Production build 0.71.5 2024