This is still a good idea, and more possible now that we have a classloader that supports class renames.
@zaporylie it might be possible to add recipes to a queue in the installer, and then run through the queue on the first cron run, whether update status is applied or not.
I don't think we want recipe application itself to trigger an http request to Drupal.org, so also going via a queue might be a good idea anyway. We can make it clear in the documentation for the hook/event that it's triggered an indeterminate amount of time after the recipe is applied.
🌱 [meta] composer require / install module discrepancy issues Active is open against project browser for a UI to be able to remove unused dependencies. We should possibly open a core issue for the detection and removal bits themselves which could maybe live in package_manager?
Agreed it's fine to ignore that here, lots of ways that sites can end up with unused composer dependencies already.
I think this might be covered by 📌 Add CacheOptionalInterface to more blocks Needs review , moving to postponed until that's done, then we can revisit whether there's anything missing after that.
Back to green after ✨ Optimize redirect chain retrieval in VariationCache Active
#84 isn't the right approach - the files are created in system module's js and css asset controllers, not the collection optimizers, and this is by design.
I'm pretty sure this is a side-effect of web vs. cli permissions/ownership, but as above, it ought to be fixed by 📌 Bring back the asset stale file threshold Active which would avoid deleting the directories.
Moving back to needs more info because we still don't have steps to reproduce here.
I made the 'you can' more passive on commit, but kept the 'we can', which for me is different (e.g. it's about what the code and the people working on the code can do, not the person reading the comment, and it's a common formulation: 'we can see' etc.). Agreed generally that verbosity and potential duplication are virtues for this one.
Committed/pushed to 11.x, thanks!
Tagged for 11.2.0 release highlights, not because we should try to explain exactly what we did here, but it would be good to include in the general 'render caching improvements' section the kinds of things we optimized.
I think we should do this, and it would be better to do it prior to a stable release because it will involve an upgrade path to add the columns.
I think that we can hide publishing status from the UI by default, so that it would normally only be used by workspaces.
The tricky part that I'm not sure about is what to do with lessons/activities that are unpublished in course navigation; probably we should behave as if they don't exist.
The big advantage of this is that when combined with workspaces, it would be possible to make draft edits to activities or entire lessons, and only have to clear student progress when those changes are published.
, because we don't differentiate between DB gets and memory gets. Maybe we should?
We initially counted all database queries in performance tests, but this caused random test failures with the chained fast backend, because it uses timestamps to determine whether an item is valid, so millisecond rounding issues can result in an item being valid or not. Personally I think the current split by cache bins is good: we know implicitly which bins use chained fast and which don't.
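Roughly, the timing sensitivity works like this sketch (simplified; the real ChainedFastBackend compares the fast item's created time against a last-write timestamp kept in the consistent backend, and the function name here is invented for illustration):

```php
<?php

/**
 * Sketch of why chained fast validity checks are timing sensitive.
 *
 * A fast-backend item only counts as valid if it was written at or after
 * the last write to the consistent backend, so sub-second rounding of
 * either timestamp can flip the comparison, and with it the number of
 * database queries a request performs.
 */
function fast_item_is_valid_sketch(float $item_created, float $last_write_timestamp): bool {
  return $item_created >= $last_write_timestamp;
}
```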
The new approach looks really good to me and the comments explain what's going on well.
For the entity clone example I think more use could be made of the 'short description' field. From looking at a couple of different contrib pages, it looks like people tend to use that as a heading for the specific tag, whereas it could/should be used more as a description for the entire branch.
For core's active branches we use:
Actively maintained with new features and backwards-compatible improvements every six months. Use this version for the best compatibility with future releases.
For security branches, variations on:
Drupal 11.0.x will receive security coverage until June 2025 when Drupal 11.2.0 is released.
Contrib short descriptions aren't as prominent as on the core download page, but that seems like the right place to put the information.
There has been similar code in Umami since it was committed to core, most recent change was 🐛 Umami page.tpl.php breaks block placeholders Needs review .
Also just found 🐛 Umami theme ignores placeholders and HTML replaced elements when checking for empty regions Needs work which looks similar.
This was partly changed in 🐛 Umami page.tpl.php breaks block placeholders Needs review which was similar but I think more problems were identified here.
@berdir just opened 🐛 Umami page template renders header regions multiple times Active too.
I've reverted 🐛 Lists with items containing non-ascii characters are not sorted correctly Needs review , let's fix that over there and re-commit. Marking this as duplicate.
Reverted the commit and marked 🐛 Symfony\Polyfill\Intl\Icu\Collator::compare() is not implemented Active as duplicate. Let's fix that here and re-commit.
I think we should also open an upstream bug report against Symfony to implement the method, then we wouldn't need the checks added by that MR and could go back to the original code committed here.
There shouldn't be a change record for this issue because the API already exists. Not sure what to do with a draft change record that should be abandoned; can we just delete it?
Just seen 🐛 Symfony\Polyfill\Intl\Icu\Collator::compare() is not implemented Active let's revert this and recommit with that fixed.
One drawback to recipes calling home is that it kinda assumes that recipes will be added to a site and not removed. That might not be a best practice as many recipes will not continue to apply as a site develops over time.
I think it would only ever be sent to d.o once, not like modules, which are sent on every update status check. So it would be possible to track the cumulative times a recipe has been applied, and month-by-month comparisons, but that's different from current project usage stats.
@acbramley yes it would fix that, would just be able to rely on the full view mode.
In ::checkAccess(), if ::checkViewAccess() returns null, it falls back to node grants. The only situation in which that doesn't happen is when the user is allowed to view their own unpublished content.
In EntityAccessControlHandler::access(), it calls checkViewAccess as long as access hasn't been forbidden already.
So you can forbid access and prevent node access running (it can't undo a forbid), but you can't bypass it with a grant except for the view own unpublished permissions check.
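A minimal sketch of that flow (purely illustrative, not core's actual code; the real logic lives in EntityAccessControlHandler and the node access control handler, and the function here is invented to show the ordering described above):

```php
<?php

/**
 * Illustrative sketch of the access flow described above.
 *
 * A FALSE from the view-access check is a hard forbid that node grants
 * cannot undo; TRUE combined with 'view own unpublished content' is the
 * one case that bypasses grants entirely; a NULL (neutral) result falls
 * back to the node grants system.
 */
function check_access_sketch(?bool $view_access, bool $own_unpublished): string {
  if ($view_access === FALSE) {
    // Node grants never run and cannot undo a forbid.
    return 'forbidden';
  }
  if ($view_access === TRUE && $own_unpublished) {
    // The only grant that bypasses the node grants system.
    return 'allowed';
  }
  // Neutral result: fall back to node grants.
  return 'node grants';
}
```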
Committed/pushed to 11.x, thanks!
The new JSON:API performance test is failing but in a good way.
Spun out the quick fix to 🐛 Cache redirect error displayed when using 'view own unpublished content' permission alongside node grants Active .
I also think we need to consider whether this case actually ought to check node grants too e.g. if you're able to view your own unpublished nodes, then maybe there is some reason node grants would still want to deny that? Seems unlikely but theoretically possible.
catch → created an issue.
Committed/pushed to 11.x, thanks!
Needs a rebase.
Rebased 📌 Recursively replace placeholders in CachedStrategy Active which was postponed on this issue.
Tagging for release highlights because we should summarise the overall rendering performance gains between 11.1 and 11.2 somehow.
This could probably be done with configuration + ECA module.
1. Create a product that can be referenced from a course.
2. When the product is purchased, find the courses that reference it and then add the user as a member to the course (this bit would require either ECA or custom code).
This would allow for selling one course at a time, or potentially a 'course bundle' - buy one product and get added to all the courses that reference it.
It wouldn't handle the situation where someone buys a course, leaves the course, then wants to join it again, that would require additional validation at the join end.
We wouldn't add direct support for this to the core LMS module, but if it can be done with pure configuration and ECA similar to the above, it would make a good recipe.
Implemented the recursion. We don't have double-nested cached placeholders in core, or at least none that I can think of or that are covered by performance tests, but this does show the improvement for the single-nested cached placeholders in the navigation bar - e.g. the shortcut and user menu placeholders are now one multi-get instead of two separate gets.
Committed/pushed to 10.5.x, thanks!
@smustgrave we can update dependencies in patch releases, but we don't tend to do it routinely. Partly because it would be endless work, partly because patch releases of dependencies have caused regressions multiple times in the past so it's better to find this out in a minor release.
The more limited approach looks good to me, however there are performance test failures - do we need to add a static cache to the path alias lookup? Or if the path alias has already been resolved shouldn't we able to determine that somehow without looking it up again?
After all that, committed/pushed to 11.x, thanks!
OK I think this is what I was missing:
/**
 * Renders a placeholder into markup.
 *
 * @param array $placeholder_element
 *   The placeholder element by reference.
 *
 * @return \Drupal\Component\Render\MarkupInterface|string
 *   The rendered HTML.
 */
protected function doRenderPlaceholder(array &$placeholder_element): MarkupInterface|string {
  // Prevent the render array from being auto-placeholdered again.
  $placeholder_element['#create_placeholder'] = FALSE;
  // Render the placeholder into markup.
  $markup = $this->renderInIsolation($placeholder_element);
  return $markup;
}
When we get to the code being changed, because it goes through doRenderPlaceholder(), even though we originally set #create_placeholder to TRUE, by the time we actually check the placeholder item #create_placeholder is FALSE again, so it gets from cache anyway.
So the bug I thought we were introducing is not in fact introduced, everything is fine. The split between placeholder strategies and the renderer makes this hard to follow though.
I nearly committed this, then went down a bit of a rabbit hole.
Because we're not fetching from cache when #create_placeholder is TRUE, it might be possible for a nested item that has both cache keys and #create_placeholder set to skip caching altogether. The idea I had was to then only skip things depending on $is_root_call, but that undoes the change here.
So... I'm going to go ahead here, but we should continue in 📌 Use multiple get for #pre_render / #cache where possible Active and 📌 Recursively replace placeholders in CachedStrategy Active to make sure that nested cacheable placeholders are optimized as well as the top-level ones.
Moving 📌 Local task name expectation in getFeaturedPageActions is fragile Active to should have. That leaves one current stable blocker, which needs reviews.
https://git.drupalcode.org/project/drupal/-/jobs/4824652 is the repeat test class job running with that change, and even 10 or so tests in, the test is taking longer and longer each iteration to complete.
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 429s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 445s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 457s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 497s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 509s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 516s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 517s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 525s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 558s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 584s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 595s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 620s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 638s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 663s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 668s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 673s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 674s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 676s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 677s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 677s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 677s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 678s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 679s
Drupal\Tests\package_manager\Build\PackageUpdateTest 1 passes 679s
I wonder if we are hitting GitHub rate limits or something like that.
This looks good to me, couldn't find anything to complain about. Committed/pushed to 11.x, thanks!
Committed/pushed to 11.x, thanks!
Ahh thanks this is very confusing.
I changed the max execution time to 30 and it passed locally; pushed an MR for this.
I think we need two spin-off issues:
1. We should try to optimise the test and/or package_manager so we don't need a 30 second timeout.
2. We should try to make sure that fatal errors result in a 500 instead of 200 so we don't get false negatives on 200 assertions.
I don't think there's anything left to do here; the RTBC queue has been very busy (hard to keep under 100 issues even with over 100 commits/month and many more reviews).
Committed/pushed to 11.x, thanks!
We could maybe split my 'quick fix' MR out to a side issue, try to get that committed (with the test coverage), and then continue here?
Committed/pushed to 11.x, thanks!
Unfortunately this needs another rebase, the MR looks good to me.
Committed/pushed to 11.x, thanks!
We definitely need to backport this to 10.5.x.
11.1 and 10.4 are possible, but this will need a backport MR for each branch.
Committed/pushed to 11.x, thanks!
Needs a rebase.
This test fails for me locally with:
1) Drupal\Tests\package_manager\Build\PackageUpdateTest::testPackageUpdate
Failed asserting that two strings are identical.
--- Expected
+++ Actual
@@ @@
-'1.1.0'
+'1.0.0'
/var/www/html/core/modules/package_manager/tests/src/Build/PackageUpdateTest.php:63
which looks unrelated to the random test failure but also means I can't get a green test run to then try to make it fail the same way from there. Also tried reducing TemplateProjectTestBase::MAX_EXECUTION_TIME to 1 and that doesn't change anything for me.
Looks like this needs a manual rebase.
Looking at this again, it feels almost impossible to explain the difference between depending on a cacheable dependency and varying by a property of a dependency.
Just to check, I extracted one of the previous quick fixes here, which compensates for the short circuit when the node is unpublished. I'm not 100% sure we should even be bypassing the node grants system when access is granted via 'view own unpublished'.
Actually moving to duplicate now. Trying out the new 'assign credit on closed issues' rather than transferring over.
Wrong status, but I think it probably is RTBC once there's screenshots.
Marking duplicate of 🐛 Olivero pager template creates additional grid columns Needs work
There's no way to test this, it's a visual issue and we don't have visual regression testing for core.
What this does need though is before/after screenshots with/without the fix following the steps to reproduce in the issue summary.
This looks like the right fix to me, so marking 🐛 Views incorrect display with filters, table and full pager Active as duplicate.
Actually, I would argue that a better approach is to update everything to the latest version and run /update.php -- both for core and contrib modules -- within Ludwig, getting everything ready first, and only then switch from Ludwig to Composer. At that point there ought to be no changes except deleting the contrib module folders and downloading them again with Composer.
Yeah I'd generally recommend that for almost any situation - update as far as possible with whatever you currently have, then try to move on from there. This way it's easier to know which step has introduced any particular error because they can be tested independently of each other.
Even if we need a declarative API eventually, we could still add that post-stable I think, just would take longer for contrib to be able to rely on it.
Could we try to remove the CSS rules and check whether there's any change in Claro or Gin? It would be good to remove unnecessary CSS if we can.
I think #239715: UMN Usability: Where do forms redirect to? → is still a problem in that there's no consistency in where entity forms redirect to after saving. However, if this is the agreed solution to that, it could use an issue summary update.
Fatal errors can often be 200s, I think we have an issue somewhere.
The timeout is not good; it probably reflects something that could also happen during actual operation.
Do we know where the 20 seconds max execution time is coming from? Bit surprised it's not 30 seconds. We could increase that to see if it helps the random failures and open a separate issue for the timeout.
This one looks valid to me, slightly updated the issue summary.
Automatic updates has stable releases now (although not included in core yet, but package_manager is) so I think it's a good time to revisit this.
Run update.php just means running Drupal database updates normally - on the basis that core/contrib may have been updated in the process of switching from ludwig to composer, e.g. same as the last step on https://www.drupal.org/docs/updating-drupal/updating-drupal-core-via-com... →
Clean URLs were completely reworked in 8.x, so this is properly outdated now.
Not sure what to do about the classnames though, will at least need another follow-up so back to review for that.
"catch Any concern about possibly having multiple entries, based on combination of modules installed, for just about every file in the plugin directories for every plugin type? Though this would be mitigated once #3490484: [meta] Lots of non-plugin PHP classes in plugin directories is addressed."
So yes, but I have an issue open for a database-backed FileCache backend which needs a lot of work; plugin discovery would be a good candidate. We could put it in memcache/redis in contrib too. It would be accessible to all processes and free up APCu for frequently accessed things.
@mikelutz we can't rely on clearing the apcu cache because e.g. if apcu is cleared from the command line it won't affect web requests and vice versa.
However, the cache key is set in AttributeDiscoveryWithAnnotations::getFileCacheSuffix(), and I think we could key that by the root namespaces, which IIRC is more or less the module list; that way it will always depend on the combination of modules enabled.
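A sketch of that keying idea (the function name echoes getFileCacheSuffix() but the hashing shown is an assumption for illustration, not the current implementation):

```php
<?php

/**
 * Sketch: derive a FileCache suffix from the discovery root namespaces.
 *
 * Sorting first makes the suffix stable regardless of the order modules
 * were enabled in, so the cached plugin definitions only vary with the
 * actual set of installed modules.
 */
function file_cache_suffix_sketch(array $root_namespaces): string {
  ksort($root_namespaces);
  return hash('md5', serialize($root_namespaces));
}
```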
I was about to write that this might slow down the installer, but we barely ever discover plugins during install, so it shouldn't make much difference at all.
So 📌 Block status code visibility condition should use a status code cache context Active will make the performance issue fixable with the status code block condition.
But I think it would be good to fix this directly in easy breadcrumb too - it can look for the exception on the request (see code in that issue), and if it's there, early return and only add the status code cache context instead of the URL one - probably best to do in the ::applies() method? Not familiar with the code so take with a pinch of salt but should be enough to get started. Only problem is that cache context will be in 11.2, maybe 10.5, but not a patch release, so either needs to wait or maybe copy it for a while.
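Something like this sketch of the early-return idea (the 'http_status_code' context name is a placeholder for whatever the new core cache context ends up being called, and the function is invented to show the shape, not easy_breadcrumb's real code):

```php
<?php

/**
 * Sketch of the 404/403 early return described above.
 *
 * If the request carries a 404/403 exception, skip breadcrumb building
 * and vary only by the (hypothetical) status code context; otherwise
 * build breadcrumbs as normal and vary by URL.
 */
function breadcrumb_cacheability_sketch(?int $exception_status): array {
  if (in_array($exception_status, [403, 404], TRUE)) {
    // Early return: cheap, and avoids one cache variation per URL.
    return ['applies' => FALSE, 'contexts' => ['http_status_code']];
  }
  return ['applies' => TRUE, 'contexts' => ['url.path']];
}
```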
It's really cool to see how all these performance issues don't just add their improvements, but they actively combine to result in even nicer improvements.
When we originally landed the #pre_render render cache, cache tags, cache collectors, cache contexts, placeholdering, dynamic page cache, and big pipe, mostly in Drupal 8 with a few in Drupal 7, it felt like the cache system was close to feature complete.
But variation cache + performance tests over the past year or so have really shown that we hadn't previously been using those features to their maximum in core, and every enhancement or tweak we've made has built on the last. And then once a batch of issues goes in, it opens up another new possibility that was not obvious before.
The reason it's a regression on HEAD is that previously, they would have been part of dynamic page cache, even though we explicitly asked for them to be placeholdered. This might sound like a regression, but it's actually good, because we want them to be separate from dynamic page cache. I think the current behavior also means that auto-placeholdering is pretty busted, because on cache hits on the inner elements they still bubble up their expensive cacheability metadata to dynamic page cache. Yes, and although it's more cache tag lookups compared to HEAD, it should still be down from 11.1 overall, which is the real comparison.
Just in case anyone lands here due to large cache tables, while this should fix the dynamic page cache, it's very easy to end up with a large internal page cache database too.
One reason is 🐛 Page cache creates vast amounts of unneeded cache data Active for query strings.
For 404s/403s the page cache supports much shorter TTLs, which means 404/403 pages can be expired on cron or set to not cache at all. This was added in #2699613: Set a shorter TTL for 404 responses in page_cache module → .
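For reference, the setting that issue added, as it appears in default.settings.php (3600 seconds is the shipped default):

```php
<?php

// In settings.php: cache 404/403 responses for an hour instead of
// indefinitely; set to 0 to skip caching 4xx responses altogether.
$settings['cache_ttl_4xx'] = 3600;
```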
When we get a 404 or 403, HttpExceptionSubscriberBase sets the exception on the request.
By the time we reach the rendering of the 404 or 403 page, this is available, so the cache context will work.
It won't do anything if someone has a response subscriber that adds a 418 or 201 status code or weird things like that, those would all count as 200.
We could rename it to ExceptionStatusCodeCacheContext, and then when there is no exception status code, return '0' instead of '200'. This would match how the visibility condition works much more closely.
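A sketch of what that context value could look like (illustrative only; a real cache context implements CacheContextInterface::getContext(), and this standalone function is invented to show the mapping):

```php
<?php

/**
 * Sketch: context value for a renamed ExceptionStatusCodeCacheContext.
 *
 * Returns the exception's status code as a string when the request
 * carries one, and '0' otherwise, mirroring how the status code
 * visibility condition treats non-exception responses.
 */
function exception_status_code_context_sketch(?int $exception_status): string {
  return $exception_status === NULL ? '0' : (string) $exception_status;
}
```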
Opened 📌 Block status code visibility condition should use a status code cache context Active to core, but just that issue would require sites to configure the breadcrumb block not to show on 404 pages.
However once the cache context is there, it might be possible to check for a 404/403 first, early return there while adding the status code cache context, then only add the url cache context if it gets past that. Or set the breadcrumb builder not to apply on 404 pages at all.
I'm not sure how to test this without committing it, but how sure are you that this will successfully work with core updates where the code of package_manager itself can change?
The actual condition plugin lives in system module.
Duplicate of 🐛 [random test failure] PackageUpdateTest::testPackageUpdate Active
Could we change the exception to an assert() maybe? This seems like it should be a libraries.yml validation step (which we don't really have).
The MR looks reasonable to me though.
This one is still valid I think. Adding related issues. Might possibly be a duplicate of #3061535: Add settings from settings.php to the container as parameters → though.
We can, but I'm starting to question whether we should instead do this in one step with 📌 Views table column align options don't take into account RTL langaguages - remove? Active , because adding two sets of upgrade paths with their own test coverage is not ideal.
Or, if we might do that issue, or rename the classes again to match logical CSS rules or similar, go back to the original approach here as a quick fix, which didn't require an upgrade path, and revisit the approach in a follow-up.
@smustgrave they can happen independently; the other issue doesn't change the CSS file here at all, it just removes views' own version.
Committed/pushed to 11.x, thanks!
Not the same thing, but given they both affect SystemMessagesBlock, adding the placeholder strategy denylist issue too; both are based on the system messages block being extremely cheap to render overall.
Workspaces in core actually provides this functionality now wholesale.
The remaining bits are 📌 Allow for / implement simplified content workflow with workspaces Active and 📌 Rewrite the node preview functionality on top of workspaces Postponed: needs info , so we can safely close this one as outdated.
Moreover, AVIF is specifically good for big sizes with fallback.
Not necessarily in this issue, could maybe be a follow-up, would we want to consider some kind of configurable filesize threshold where below a certain point we always skip AVIF and use webp instead? Have seen various discussions that decoding AVIF in the browser is a lot more expensive than webp, which can undo the bandwidth savings with smaller images.
Either way, it doesn't really seem like a stable blocker to me? Extending this to support more cases won't require API/structure changes or anything like that.
Yeah I was also wondering this - if it's a 'normal' bug and the fix is self-contained (e.g. doesn't require adding some kind of declarative API) then it doesn't feel like a blocker.
The module is still experimental and templates are considered @internal, so I don't think we need a CR here - doesn't hurt to have one but not a blocking issue, whereas it looks like an accessibility review still is.
Code in question is this:
if (preg_match($canonical_pattern, $current_route_name, $matches)) {
  $entity_type = $matches[1];
  $edit_route = "entity.$entity_type.edit_form";
  // For core entities, the local task name matches the route name. If
  // needed, we could iterate over the items and check the actual route.
  if (isset($page_actions['page_actions'][$edit_route]) && $page_actions['page_actions'][$edit_route]['#access']?->isAllowed()) {
    $featured_page_actions[$edit_route] = [
      'page_action' => $page_actions['page_actions'][$edit_route],
      'icon' => 'pencil',
    ];
  }
}
So is the fix to resolve the 'if needed' comment and iterate over the items?
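If so, that iteration might look roughly like this sketch (the 'route_name' key on each item is an assumption about what the page action render arrays expose; the real fix would need to check the actual structure):

```php
<?php

/**
 * Sketch: find the edit page action by its route instead of by name.
 *
 * Iterates over the page action items and matches on each item's actual
 * route, rather than assuming the local task plugin name equals the
 * route name (which only holds for core entities).
 */
function find_edit_action_sketch(array $page_actions, string $edit_route): ?string {
  foreach ($page_actions as $name => $action) {
    if (($action['route_name'] ?? NULL) === $edit_route) {
      return $name;
    }
  }
  return NULL;
}
```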
Space for one more?
I think this is reasonable to backport to 10.5.x so that behaviour is consistent between the two major branches, although it will need a backport MR at least for the performance test changes due to merge conflicts. Moving to 'to be ported' for that.