Account created on 13 October 2005, almost 19 years ago

Recent comments

🇬🇧United Kingdom catch

Double checked the performance pipeline on 11.x and it's down to 8m30 now with the combination of this and 📌 Consolidate Umami performance tests Fixed , was previously closer to 13 minutes.

https://git.drupalcode.org/project/drupal/-/pipelines/235402

🇬🇧United Kingdom catch

Set up an MR in a sandbox issue to change the cache target to point back to the MR's own jobs - this is because the actual change here restricts the artifact download to only core branches, which means the cache is always empty until we actually commit this.

phpstan: https://git.drupalcode.org/project/drupal/-/jobs/2245377
cspell: https://git.drupalcode.org/project/drupal/-/jobs/2245381

And the actual cache warming job itself. https://git.drupalcode.org/project/drupal/-/jobs/2245376

The cache warming job also fetches its own cache, because this will save CPU cycles on the commit runs - then it'll write a new cache based on the changes. We don't have to do this, but it'll mean the updated files are available much quicker.

We will probably want a follow-up to remove the verbose output from the jobs, but I think it's worth keeping that until we see it actually working after commit.
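
A minimal sketch of how this kind of setup could look in .gitlab-ci.yml (job names, branch regex, and cache paths are illustrative, not the actual core configuration):

```yaml
# Hypothetical cache-warming job: runs only on core branches and publishes
# the lint tool caches as artifacts. MR pipelines never publish, only consume,
# so MRs can't poison the shared cache.
'warm lint caches':
  stage: build
  rules:
    - if: $CI_COMMIT_BRANCH =~ /^(11\.x|10\.\d+\.x)$/
  script:
    - vendor/bin/phpstan analyse --no-progress
    - yarn run cspell --cache --cache-location=.cspellcache .
  artifacts:
    expire_in: 1 week
    paths:
      - .phpstan-cache/
      - .cspellcache

# MR lint jobs could then fetch the latest warm cache before running, e.g. via
# the jobs artifacts API (the job name must be URL-encoded):
phpstan:
  before_script:
    - curl --location --output cache.zip
      "$CI_API_V4_URL/projects/$CI_PROJECT_ID/jobs/artifacts/11.x/download?job=warm%20lint%20caches" || true
    - unzip -o cache.zip || true
```

The `|| true` guards mean a missing or expired artifact degrades to a cold-cache run rather than failing the job.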

🇬🇧United Kingdom catch

catch changed the visibility of the branch 3462763-cache-warming-job to hidden.

🇬🇧United Kingdom catch

Rebased and merged 📌 [PP-1] Use artifacts to share the cspell cache between runs Postponed into here because the cspell changes are very small relative to the MR size, and I think it's easier to see how it'll work with multiple jobs when there's two examples in here.

🇬🇧United Kingdom catch

OK the commit conflicts are because we haven't done 📌 Resync .gitlab-ci.yml and .gitignore following Yarn 4 in 11.x Needs review yet. Put up an MR there. Once that's in, this should be a clean(er) cherry-pick.

🇬🇧United Kingdom catch

catch changed the visibility of the branch 3428614-resync-.gitlab-ci.yml-and to hidden.

🇬🇧United Kingdom catch

catch made their first commit to this issue’s fork.

🇬🇧United Kingdom catch

Although it might cost you a little in feedback after commit.

This should hopefully be OK because the on-commit jobs (at least as the current MR stands) will continue to run the separate lint jobs, they'll just run this extra cache warming job on top. So if we introduce a lint failure, we should still be able to see what went wrong in those individual jobs. However it definitely introduces some level of uncertainty because any jobs that don't run on MRs are hard to test - we should possibly add a when: manual rule to make that easier?
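
A sketch of what that could look like (job name and rule conditions are illustrative):

```yaml
# Hypothetical: run the cache warming job automatically after commit, but also
# allow it to be triggered by hand on MR pipelines so changes to the job itself
# can be tested before they land.
'warm lint caches':
  rules:
    - if: $CI_COMMIT_BRANCH          # runs automatically on branch pipelines
    - if: $CI_MERGE_REQUEST_IID      # manual trigger on MR pipelines
      when: manual
      allow_failure: true            # without this, the manual job blocks the pipeline
```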

🇬🇧United Kingdom catch

@pameela it's not a blocker to starshot contrib efforts, but I think it's at least a soft blocker to core integration. We originally decided to remove text with summary about eight years ago in 🌱 Deprecate text_with_summary Closed: duplicate but it didn't happen at the time.

To fully deprecate install profiles in core and provide recipe selection in the installer, we need to convert Umami and Standard profiles to recipes. When we convert them to recipes, the constituent parts of those recipes may/should also be re-usable recipes, but then they need to make sense.

Umami was using the text + summary field type despite not using the summary anywhere (as in, none of the content populates the field, and the special formatter for it is never used); this was recently changed in 🐛 Don't use text_with_summary in Umami Needs review .

Standard also uses text with summary, but it's not a good basis for building a site from (IMO, I always make a completely separate summary field if needed). So if we want the current core install profiles to work nicely together based on common recipes for an article content type etc., we need to clean this up.

🇬🇧United Kingdom catch

Actually no. We should split the change out to here since it's tricky.

🇬🇧United Kingdom catch

Shouldn't affect the nightwatch test at all but re-ran it just in case.

🇬🇧United Kingdom catch

Since this takes a full minute off every core MR pipeline, bumping to major.

Combining this with some other changes, I was able to get the minimum time for a full MR pipeline run down to 5m37s: https://git.drupalcode.org/project/drupal/-/pipelines/234735

🇬🇧United Kingdom catch

catch changed the visibility of the branch 3462763-try-to-store to hidden.

🇬🇧United Kingdom catch

catch changed the visibility of the branch 3462763-cache-warming-job to hidden.

🇬🇧United Kingdom catch

Opened 📌 [PP-1] Use artifacts to share the cspell cache between runs Postponed for cspell. Using the cache the entire cspell job completes in about a minute which means we can remove the git diff logic entirely. Postponed on this one because it uses the same lint cache warming job as this.
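
A sketch of what the cspell job could look like once the cache is restored from the warming job's artifact (flags shown are real cspell CLI options; the cache path is illustrative):

```yaml
# Hypothetical: with a warm cache restored, cspell can check the whole tree in
# about a minute, so the "only check files changed by git diff" logic can go.
cspell:
  stage: validate
  script:
    # --cache skips any file whose content hash is already in the cache file.
    - yarn run cspell --no-progress --cache --cache-location=.cspellcache .
```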

🇬🇧United Kingdom catch

At some point we could ease in to encourage testing at D11 by simply switching the default for OPT_IN_TEST_NEXT_MAJOR to 1 as a temporary measure during the transition. Then we set it back to 0 when D11 becomes current and we have the 'real' next major.

This also seems like a great idea - maybe a couple of weeks after 11.0.0 for that one?
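
That transition could be as small as flipping one default in the CI variables (sketch; variable handling is simplified):

```yaml
variables:
  # Temporarily default next-major testing on during the D11 transition;
  # set back to 0 once D11 is the current major and there's a 'real' next major.
  OPT_IN_TEST_NEXT_MAJOR: 1
```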

🇬🇧United Kingdom catch

This will also help the performance test run finish quicker. At the moment we're building composer and yarn to use the cache in performance tests, but it's much quicker to just run performance install in the actual test job before running the tests. Should save ~90s on those jobs.

🇬🇧United Kingdom catch

Tried removing the data provider but there's a lot of field/entity state to keep track of if we do that.

So trying the brute force approach of splitting this into two classes - one for the data provider test, one for the other test methods.

🇬🇧United Kingdom catch

Makes sense to me to postpone this for a few weeks. We could pick a half-way point between 11.0 and 11.1 like October 1st maybe?

🇬🇧United Kingdom catch

Added verbose logging; the logging is indeed more verbose, but it's not giving me any more hints about what's going wrong.

🇬🇧United Kingdom catch

Good point - I think I opened this before I found that one. Closing.

🇬🇧United Kingdom catch

Is it possible that exporting the DB to test on local can mess with the hashes?

If you're doing a manual database export/import then no.

If you're using drush sql:sanitize, that resets all passwords on the database (either to something random, or to a password you give it on the cli for easier testing).

However you should also check that $settings['hash_salt'] in your local settings.php matches the value on the production website. This is what ensures that hashes on different websites are unique, even with the same password, and if you change that, when Drupal calculates the hash on login, it won't match the one stored in the database.

Even users created after the migration can't log in.

Is this on your local install, or on production, or both?

🇬🇧United Kingdom catch

Core developers apparently knew for a very long time that this change would cause major disruption (#3400121: [policy, no patch] Allow both annotations and attributes in Drupal 11), but deprecation warnings were never added (#3265945: Triggering deprecations for plugins using annotations when core plugin type has been converted to attributes) and, most importantly, this change was made BETWEEN MINOR POINT RELEASES OF CORE.

This is not a fair representation of how this change is being made.

We know that it will be disruptive, eventually all plugin implementations will need to convert from annotations to attributes. It's a big change.

However that is exactly the reason we aren't issuing deprecations yet. We decided that core would need to be fully converted before we issue deprecations for contrib plugins still using annotations.

This isn't an oversight, it's an attempt to reduce disruption. Adding the ability to issue deprecations would not have flushed out the issue in rules earlier because it's about a completely different thing (the plugin implementations themselves, not the plugin managers). Eventually we will need to deprecate annotation discovery altogether, that will be a further step after enabling plugin managers to issue their own deprecations. Again, doing that now would be premature and more disruptive.

Every single API addition (which is what this is, until we start triggering deprecations) is made in minor releases of core. We don't make any API additions in major releases of core because that would be more disruptive - completely breaking compatibility like Drupal used to in the 6/7/8 transitions. Putting MINOR in block capitals does not make a normal thing more scandalous.

In this case an API addition to core turned out to result in breaking a contrib module. The comments here about how rules does things are not 'blaming' rules, but pointing out that it does something unusual. If we'd known about the rules breakage before 10.3.0 we possibly could have done something - maybe leaving annotations alongside attributes, maybe something else. However it's impossible to predict every way that every change will affect every contrib module.

As you point out, gitlab makes it easier to test with multiple versions of core, so should help to find issues like this earlier in the future, i.e. add that level of prediction that we currently don't have.

🇬🇧United Kingdom catch

Committing from needs review to unblock core pipelines.

🇬🇧United Kingdom catch

Just a straightforward rebase.

However I noticed that since we added when: manual to the performance job, the job is blocked - it looks like it needs allow_failure too.
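
For reference, a manual job defined via `rules` defaults to blocking the pipeline unless it is also allowed to fail, so the fix is roughly:

```yaml
# Hypothetical rules entry: a when: manual job inside rules defaults to
# allow_failure: false, which blocks the pipeline until someone runs it.
'performance test':
  rules:
    - when: manual
      allow_failure: true   # pipeline proceeds even if the job is never run
```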

🇬🇧United Kingdom catch

So on module (un)install all the plugin files will be scanned in the directories, but only those missing from the file cache effectively also parsed? Sounds good - the file cache will have more but will not be retrieved.

Yes - it'll find all the files, but only actually parse the ones that aren't in the cache (my MR should stop the broken ones going in), or whose mtime has changed.

I think I realised more what you meant in #21 though, scenario would be something like this:

1. Module A has a plugin that relies on an attribute defined by an uninstalled module B -> doesn't enter into FileCache or discovery.
2. Module B is installed - now the plugin is in both FileCache and discovery.
3. Module B is uninstalled again - the discovery cache will be cleared, but the FileCache won't.

So now we have a file in FileCache that uses an attribute defined by an uninstalled module.

I guess that leaves a couple of questions:

1. Should we be clearing the file cache for attributes on module uninstall (just uninstall, since the MR should cover install)?

2. Or is it OK, given that this is how annotations used to work anyway?

🇬🇧United Kingdom catch

Plugin discovery caches should be cleared on module install/uninstall, so that part ought to be OK I think?

🇬🇧United Kingdom catch

Looks good and let's backport to 10.3 yeah.

🇬🇧United Kingdom catch

catch made their first commit to this issue’s fork.

🇬🇧United Kingdom catch

Yeah I think we could go back to 10.3.x with this, just method overrides to fix a bug.

🇬🇧United Kingdom catch

I don't think we should get rid of the file cache here without performance numbers - but even worse we don't know what attribute discovery even looks like on a large site yet because not all of core and contrib is converted yet, so any numbers we did get would be artificially better than they might otherwise be.

However, I think I found a way to fix the error, avoid writing to the cache just in this case, and leave the file cache for everything that currently would be successfully found and cached. Because this is an alternate approach, I've made the change in a separate MR. I restored the test that was removed here, and made the minimal changes to the new test so that it still parses - I think it might be enough, but maybe we need to explicitly test the missing attribute class with file caching too. At least when I initially did the fix wrong, the test caught the error.

🇬🇧United Kingdom catch

Couple of comments on the MR.

🇬🇧United Kingdom catch

Committed/pushed to 11.x, thanks!

🇬🇧United Kingdom catch

Reviewed the MR and it all looks good to me.

Retitling because the latest approach doesn't make it available for all entity reference fields yet, that bit will happen in Configurable views filters to allow for different widgets Active .

Leaving the framework manager review and subsystem review tags because the latest approach was my idea and this issue has a lot of twists and turns.

🇬🇧United Kingdom catch

@pifagor in what kind of context is the link being rendered? This sounds like a render caching issue.

CSRF tokens are rendered via a placeholder/lazy builder

See this snippet from RouteProcessorCsrf:

    $placeholder_render_array = [
      '#lazy_builder' => ['route_processor_csrf:renderPlaceholderCsrfToken', [$path]],
    ];

However, if the placeholder rendering itself gets cached somehow, or is used in a different context like e-mail, then you end up with invalid CSRF links.

🇬🇧United Kingdom catch

This sounds similar-ish to 🐛 Avoid TypeError if config entity dependencies are NULL Fixed .

Can you double check your editor config entities, specifically:

format: the_format_name

And that these formats correspond to a text format i.e. filter.format.* in config/sync

All I can think of is an editor is referring to a text format that no longer exists. If that's the error, we probably need to handle this and log an error instead of the fatal or similar.

🇬🇧United Kingdom catch

Just opened 📌 Add service tag(s) for cache clearing Active which would mean a lot of these direct service calls go.

🇬🇧United Kingdom catch

This doesn't need a CR since it's just a bug fix.

We might want test coverage, but my attempt to add that in a kernel test for hook_library_info_alter() failed. It would probably be possible to set up a kernel or functional test with a test module that provides an SDC, similar to the one provided for manual testing here, but that's quite a lot of test infrastructure for a one line cache clear.

Overall I think it would be a good idea if we could invert the relationship here - i.e. the library.discovery service should be able to tag itself as needing a cache clear on module install, and so should plugin.cache_clearer, so that we can get rid of lines like \Drupal::getContainer()->get('plugin.cache_clearer')->clearCachedDefinitions(); altogether from module install and just have a single line that loops over the tagged services and calls an interface method. That would also allow us to replace similar code in drupal_flush_all_caches().
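
A sketch of what the tagged-service approach could look like in a services.yml (the tag name is hypothetical; the class names are the existing core services):

```yaml
# Hypothetical: services tag themselves as needing a cache clear on module
# install/uninstall; a single collector loops over everything with the tag and
# calls an interface method, replacing the direct clearCachedDefinitions() calls.
services:
  library.discovery:
    class: Drupal\Core\Asset\LibraryDiscovery
    tags:
      - { name: install_cache_clearer }   # tag name is illustrative
  plugin.cache_clearer:
    class: Drupal\Core\Plugin\CachedDiscoveryClearer
    tags:
      - { name: install_cache_clearer }
```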

Went ahead and opened 📌 Add service tag(s) for cache clearing Active .

🇬🇧United Kingdom catch

we'd use #3440578: JSON-based data storage proposal for component-based page building, which means Drupal field types will then contain values for SDC props

This still has major unresolved questions around it. It may or may not affect the editing experience, but for example the 'field union JSON' proposal might well provide alternatives, while fulfilling the existing product requirements.

🇬🇧United Kingdom catch

Put up an additional MR that moves the performance test back to the legacy driver:

https://git.drupalcode.org/project/drupal/-/jobs/2221126

I think we should do that, then open a follow-up (or maybe continue here for the commit history on the MR) to figure out how to get things onto selenium standalone. Also noticed the test only job is still using the legacy driver which I assume was an oversight in the original issue, so we should move that one too (although that'll be on k8s so should be straightforward).

🇬🇧United Kingdom catch

Are you suggesting that we follow that same approach wholesale, in which case this issue is not necessary?

No not really, just the config management part. 📌 Holistic logo image insertion and dimensions in themes Active is an existing issue with the theme logo that we shouldn't re-introduce to navigation.

🇬🇧United Kingdom catch

One possible difference between this and the usual runner is that the performance-test runner uses docker whereas the main runners use k8s. Pretty sure this is why we point to localhost instead of the image name.

However we went through all of that before, so no idea why changing the image would break it now and I can't see a problem with the arguments being sent.

I tried adding --shm-size=2g because not having that had broken things locally for me, and it might have been a difference between docker and k8s, but that also hasn't helped. (https://git.drupalcode.org/project/drupal/-/jobs/2220842 is the most recent run).

🇬🇧United Kingdom catch

Only realised this after working on 📌 Use artifacts to share the phpstan result cache from core to MRs Needs work and seeing just how much time we were spending on spinning up vs doing things. With the combination of the two issues, the newly combined lint stage finishes in around a minute. Once we add cspell/phpcs/eslint cache support, it should be consistently under a minute hopefully.

🇬🇧United Kingdom catch

So you have the phpass module, provided by core, enabled?

🇬🇧United Kingdom catch

FormElementsLabelTest is a potential candidate here - at least the last method doesn't make any HTTP requests.

🇬🇧United Kingdom catch

Self-explanatory. Pretty sure this is a very old test from the early days of simpletest.

🇬🇧United Kingdom catch

As far as we know, there's nothing left to do except update to the stable release when it comes out.

There are various non-blocking follow-ups like 📌 Remove jQuery Form dependency from misc/ajax.js Needs work but that issue already existed, just would have made the jQuery 4 update a lot easier if we'd finished it.

🇬🇧United Kingdom catch

@DamienMcKenna if it was just a week or so then we probably would, but indications are that it'll be a few weeks before there's a stable jQuery 4 release, but also that this isn't due to changes/bugs in jQuery itself but updating documentation and etc. Given that, we're planning to release 11.x on the beta, and update to a stable release in an 11.0.x patch release once it's available. Will need something in the release notes, but feels like the least bad option.

🇬🇧United Kingdom catch

It's too late to backport to 10.3.x but we could still consider backporting to 10.4.x.

🇬🇧United Kingdom catch

We should fork or, better, replace TFT, but until then we might as well keep things working.

🇬🇧United Kingdom catch

Performance test job is broken since this went in 🐛 Performance test gitlab job is broken Active .

🇬🇧United Kingdom catch

It will be easier to see the impact of this and maybe bring concurrency down once 📌 Consolidate ckeditor5's FunctionalJavascript tests Needs review and 📌 Consolidate Umami performance tests Fixed are done, because those tests artificially extend the job times of the first two jobs with or without the w3c change.

🇬🇧United Kingdom catch

Committed/pushed to 11.x, thanks!

If this does indeed make functional js tests a lot slower, it may break the performance tests dashboard. I've opened 📌 Consolidate Umami performance tests Fixed which should cover that.

Alongside that, I also opened 📌 Consolidate ckeditor5's FunctionalJavascript tests Needs review for core's other longest running functional js tests.

I think we should open a follow-up to revisit concurrency once those two issues are in, since they may be enough to lower it again, I'll open it postponed now.

I am assuming that we don't want to try to backport this to 10.4.x, but if we do, please re-open with a backport MR.

🇬🇧United Kingdom catch

SourceEditingTest completes in about 1/6 of the time with a foreach loop instead of a data provider.

Before:

../vendor/bin/phpunit modules/ckeditor5/tests/src/FunctionalJavascript/SourceEditingTest.php 
PHPUnit 10.5.26 by Sebastian Bergmann and contributors.

Runtime:       PHP 8.3.8
Configuration: /var/www/html/core/phpunit.xml

......................                                            22 / 22 (100%)

Time: 02:24.161, Memory: 6.00 MB

After:

../vendor/bin/phpunit modules/ckeditor5/tests/src/FunctionalJavascript/SourceEditingTest.php 
PHPUnit 10.5.26 by Sebastian Bergmann and contributors.

Runtime:       PHP 8.3.8
Configuration: /var/www/html/core/phpunit.xml

..                                                                  2 / 2 (100%)

Time: 00:23.510, Memory: 6.00 MB
🇬🇧United Kingdom catch

Doesn't reduce the overall job time yet, but finished seventh from last on https://git.drupalcode.org/project/drupal/-/jobs/2210968

🇬🇧United Kingdom catch

This moves around half the methods in the test to protected methods - mostly the ones that don't change any configuration. On my local ddev it reduces runtime from just under 3 minutes to around 2m 30s, but it's very different on gitlab with concurrency - hopefully can get an idea from the test run.

🇬🇧United Kingdom catch

This gets the second of two FunctionalJavascript jobs down to ~3m20s from about ~4m10s usually and will allow the special performance test runner job to finish a fair bit quicker too - although won't be able to see by how much until after it's committed.

https://git.drupalcode.org/project/drupal/-/jobs/2210833

🇬🇧United Kingdom catch

Had test failures that only happened on gitlab and not locally; tried various sleep() attempts to see if it was a race condition, all of which failed. What worked was changing a full ::rebuildAll() to just emptying cache bins (i.e. mostly skipping the router, twig and container rebuilds). Given we have a successful cold cache test which really does start just after a full rebuild, it's a pretty minimal change to the test to get the timings down.

🇬🇧United Kingdom catch

An example of it not working is 📌 Consolidate Umami performance tests Fixed , which I am currently struggling with. It passes locally but fails with what looks like a race condition in the chrome driver on gitlab.

🇬🇧United Kingdom catch

Does it matter how old the .0 is? Or should we assume there will always be a .1 soonish because there will always be something to fix?
Like what if .0 is 1 month old but also it is the latest in the branch?
..
so maybe instead of warning based on .0 should it be based on minor updates to the site within 1 or 2 months after that minor has been released?
Like it is updating 12.0.7 to 12.1.4 but 12.1.0 just came out a week ago should still have a warning?

tbh I thought using wall time might be more difficult than version numbers because it's more things to look at, but I guess if it's in the metadata it's available for this.

In that case, delaying a minor release update for say two months sounds really good - would handle both of the cases you mentioned - four paper bag releases in one week, vs a .0 release then nothing for months, as well as everything in between.

This would potentially let us do a longer grace period for major releases too (say three months), although contrib compatibility will often do that anyway.

Regarding the .0 warning is this not the same or almost the same if you were going from say 12.0.7 to 12.1.1 if you happen to miss the 12.1.0 because you use the form and not the background updates?

I'm not sure what the question is, but yeah if someone misses the .0 update by accident, then they end up in the same situation as if we make them skip it.

I would hope we could still do the core alpha version without this.

Definitely not alpha blocking!

🇬🇧United Kingdom catch

I guess you can already leave the page when a batch is running but now that it would be less evidently running, people may leave the page even more. That said, it would be fantastic if we would not block the user doing something just because we also need Drupal to do a lot of things in the background.

Yes this is going to be one of the harder pieces. We can probably cover a lot if we keep track in the user's session that they started a batch (probably the batch ID), check the session on every page request to see if we need to show the batch element, and then the element resumes where it left off if there's a batch still running. Then finishing a batch needs to clear $_SESSION['active_batches'] or similar.

🇬🇧United Kingdom catch

This feels like it might unblock things. Short version of how we get to the original end result assuming everything currently being worked on comes together:

1. Add the new filter here, but not automatically to views data yet.

2. Do 🐛 Views handler loading should respect configuration Active so configuration is respected in views.

3. After #2, do Configurable views filters to allow for different widgets Active to allow the filter plugin to be selected in the UI. This would be an extra UI element in views, plus some kind of API for plugins or fields to declare compatibility with each other tbd.

Then for entity reference fields, there'd be a choice of numeric and entity reference, and site builders can choose which one, and we don't have an upgrade path or bc concern specific to the entity reference change - although there is still quite a lot of that in #2 and #3 overall.

Production build 0.69.0 2024