MR is 100% removals but looks right to me.
If someone figures out a safe way to provide a pre-beta XB demo that does not result in either un-updatable code or un-updatable sites, then it could be added back. But I think it should only be included when all the various considerations have actually been fully thought through, solved, and tested, which will be easier to demonstrate from a clean slate.
I strongly think the recipe should be removed as well. The whole problem is trying to provide a demo of something that has no stable data model and therefore cannot be updated, within production sites that by definition need to be updated; the composer script is just a symptom of trying to work around that. But I can open another issue if necessary.
@volanar see 🌱 [policy] Impact on contributed projects and its maintainers Active .
I think the specific point you're raising is partly because Drupal CMS was released not very long after Drupal 11.0.0 - so while there are over 3,000 projects with stable Drupal 11-compatible releases, there are 6,000 without.
However, it will be even more pronounced on the day 12.0.0 is released when you would still want new Drupal CMS sites to start on 12.0.0, and hence any dependencies to also have stable Drupal 12 compatible releases, but I think that's better discussed in the issue above.
📌 Remove the automatic cron run from the installer Needs work is the installer issue.
Additionally, are we considering that the automated_cron module isn’t included in some installation profiles, such as minimal?
Yes, it was considered. If you install minimal, you can install automated_cron or set up a cron job to run from the CLI; the only consequence of not immediately running cron is that you get a reminder in the status report to set it up.
Starting from Drupal 10.1, JS and CSS file loads are handled by two specific routes - system.css_asset and system.js_asset (reference: https://www.drupal.org/node/3358091 → ). Should we consider preventing automated_cron from being triggered if the route is one of these two?
There is no particular reason to avoid running cron on those requests, it happens after the content is served, same as any other request.
The installer itself should not be requesting aggregated assets, so there must still have been an initial AJAX request to some path, which in turn loaded js/css, which then ended up requesting those routes - e.g. this situation should only happen when the installer calls out to the not-yet-installed site. However this is based on core's installer, it could be that the Drupal CMS installer is itself doing something that could be causing this, or that a contrib module is.
It would not necessarily be a bad idea to limit automated cron to HTML GET responses in general, but when it was added it was intentionally wide-ranging so it would have the maximum chance of running frequently - e.g. if core page caching is on and a site has very little authenticated traffic, then potentially only contact form or search submissions might trigger it for days. Would not hurt to open a core issue though.
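To make that concrete, roughly what such a check could look like in a kernel.terminate subscriber like automated_cron's - illustrative only, this is not what core currently does and the helper name is made up:

use Symfony\Component\HttpKernel\Event\TerminateEvent;

/**
 * Illustrative helper: only let cron piggy-back on HTML GET responses.
 */
function _example_automated_cron_applies(TerminateEvent $event): bool {
  $request = $event->getRequest();
  $response = $event->getResponse();
  // Skip anything that is not a GET request (form submissions, AJAX POSTs).
  if (!$request->isMethod('GET')) {
    return FALSE;
  }
  // Skip non-HTML responses, e.g. the css/js asset routes or JSON replies.
  if (!str_contains((string) $response->headers->get('Content-Type', ''), 'text/html')) {
    return FALSE;
  }
  return TRUE;
}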
Updated the issue summary with a slightly more thought through proposed resolution.
So, we need that cron reset if the search recipe gets applied afterwards, but not if it was selected during the initial site installer.
Yes this ought to be enough.
In 11.1, cron runs at the end of the installer (didn't confirm that the Drupal CMS installer definitely calls that same code but seems likely); in 11.2 we removed the hard-coded cron run, but it's executed on the first 'real' request after install (with core's installer, this is the last page of the installer) via automated cron.
Having said all this, the installer should not be executing AJAX requests against the actual site at all, since not just cron but many other things could go wrong. @hfernandes, can you check which page might have triggered this?
We're working on a number of similar issues over at https://www.drupal.org/project/project_browser/issues/3453713 📌 [META] Test project browser on popular hosting providers Active and I am starting to think we may be duplicating efforts here.
Yes me too, I didn't realise that issue was already open.
But also fully agreed with Gabor's points - just Drupal core + project browser might not surface potentially showstopping issues with things like memory usage that could be exposed by Drupal CMS, purely down to more projects being in the code base/installed, as well as the recipe system being in use. So it would be good to expand the scope of the original issues if we can?
The script still runs when you have a stable release; it just short-circuits. So if any of the Composer or Symfony APIs it relies on change (e.g. a class being renamed and deprecated, then removed in a major release), it could very easily start throwing fatal errors.
Also the first version of the script that I reviewed did not have that check. I pointed this out on the original issue only a couple of days or so before the CMS 1.0 release - so there was a non-zero chance of it being released with 1.0 without that check in place, in which case it would not have been possible to upgrade it to add the check.
Opened 📌 Remove ExperienceDemoBuilder from the composer template Active .
When you say "this code", are you referring to the ExperienceBuilderDemo.php script?
The current situation in HEAD is that it would not be upgrade-able at all
Yes exactly this, and this is what I'm worried about - so as soon as it's in a release, there's no way to fix bugs in it for existing sites or to remove it once XB is stable.
The current situation in HEAD is that it would not be upgrade-able at all, but you could remove it by just deleting it outright.
How will Drupal CMS site owners, potentially tens of thousands of them, some of whom never touch the CLI, know how to do this?
I'm open to other approaches, for sure
I don't think there needs to be a viable different approach to rethink this - it should be removed, and then something that is not unmaintainable could be added if someone comes up with an idea. Or all the effort that's going into this could instead go towards getting XB to beta stability or into a web-hosted demo where it can't affect real sites.
I haven't tried to experiment yet, but can you confirm whether this code is ever going to be upgradeable/removable via automatic updates/normal composer commands without any manual intervention? I really think that trying to include this in the default download is asking for trouble (as evidenced by the at least two catastrophic bugs that have been narrowly avoided) and different approaches should be looked at.
catch → created an issue. See original summary → .
Since this is a user facing fatal error, bumping to critical.
We might need a second issue to investigate further why the executable can't be found, too.
For detail like cache tags, I was mostly relying on the grafana/tempo integration (via https://github.com/tag1consulting/ddev-gander) for this kind of granular information, so that writing tests wouldn't be too overwhelming for people. However, as with database queries, once the tests are written and you can see what's going on much more easily, it is all worth it and really helps to understand the impact of a change.
Where it's not that useful to keep track of something, we can just not assert on it, so why not?
Haven't reviewed the MR properly yet, but general +1 for getting more granular.
as it will be present at the top of the module's info
So there would be no indication via update status that there is a secure release that can be updated to from an unsupported project, which feels like a severe regression from the current situation.
This policy would just prevent transferring a namespace to another user without the approval of the current namespace “owner”.
Current namespace 'owners' are always given at least a month to respond to any project ownership requests, which is quite a long time - especially considering that in practice the actual period of unresponsiveness is usually a lot longer than a month.
Why is it preferable to be able to strip the namespace from the current owner, something you apparently agree with, than for e.g. a member of the security team, a core committer, or a well-known contrib maintainer of other high-usage modules to be able to make security releases for what could be tens of thousands of sites using a project?
This should use ::assertMetrics() now, see 📌 Add PerformanceTestTrait::assertMetrics() so it is easier to write performance tests Active .
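For reference, a minimal sketch of what an ::assertMetrics() call looks like in a performance test - argument order and metric key names here are from memory of core HEAD, and the numbers are placeholders rather than values for this MR:

$performance_data = $this->collectPerformanceData(function () {
  $this->drupalGet('<front>');
}, 'frontPageWarmCache');

// One call per baseline instead of individual assertSame() calls, so the
// failure output shows every metric that moved at once.
$this->assertMetrics([
  'QueryCount' => 5,
  'CacheGetCount' => 20,
  'CacheSetCount' => 0,
  'CacheDeleteCount' => 0,
], $performance_data);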
Suggestion: how about I pull the performance test parts that assert the behavior into #3500739: Aggregate cache operations per bin to allow for more specific asserts? Then we can document the current baseline and it makes it easier to compare improvements.
Yes that sounds good - easier to show the diff in this issue that way.
and in that message they could recommend an alternative.
In which message?
Did some very quick testing to see what was loading jQuery for anonymous users.
Drupal CMS out of the box does not load jQuery for anonymous users, this is good.
Drupal CMS with the search recipe enabled does. This is almost certainly due to search_api_autocomplete. We don't have performance tests with the optional recipes installed yet but should add some. I didn't check the other recipes yet for jQuery dependencies on the anon front page.
The search recipe takes the js loaded on the front page from about 10kb to about 284kb which seems like a lot of overhead for something that users won't interact with on the vast majority of requests.
One way to deal with that would be to find a way to load the extra js on demand; opened 📌 Try to load autocomplete and other js when the search button is clicked Active for that.
While I was there, I also noticed 📌 Preload lora-v14-latin-regular Active in core.
Could use performance tests for the optional recipes probably, at least the ones that have module dependencies.
Moving back to active but making it a 'plan' because so far at least, this is more about tracking core/contrib issues than Drupal CMS itself.
Next steps here:
📌 Switch the performance job to fail Active
Also we should add performance tests for the optional recipes - either all at once or individually, need to talk to @phenaproxima about how to organise that.
catch → created an issue.
Drupal core supports OOP hooks now, so is there any reason to do this?
At the moment, when there is an unfixed security issue reported to s.d.o and the maintainers aren't responsive, the security team can intervene to mark the project unsupported and issue an SA at the same time (without disclosing specifics; obviously the reporter could decide to do so at that point, though often they don't). This process alerts site owners (via update status) that there is an unfixed issue in the project.
It is possible to resurrect a project marked unsupported in this way by fixing the security issue, and then asking the security team to re-instate it and add a maintainer - this can be done by either the previous maintainers, or a new one if someone volunteers to take it over and also fixes the issue.
See the process documented on https://www.drupal.org/drupal-security-team/security-team-procedures/mar... →
As I read this issue, it would prevent any of this from happening, so what would happen in that case?
Yeah should be able to do array_diff() or array_intersect() on a specific list of redirect queries, so it only fails if they're there or missing. My plan is to filter and only assert on redirect queries.
Was going to suggest a performance test, to my knowledge it would be the first ever performance test for a contrib module (outside core or Drupal CMS) - could hit one page where the redirect query is expected, then another page where it's not?
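Something like this could work, assuming core's PerformanceTestTrait (collectPerformanceData() plus PerformanceData::getQueries(), as used in core's own performance tests) - the query string below is a placeholder, not the real redirect lookup query:

// Placeholder for whatever the actual redirect lookup query ends up being.
$redirect_queries = [
  'SELECT ... FROM "redirect" ...',
];

// Page where the redirect lookup is expected to run.
$performance_data = $this->collectPerformanceData(function () {
  $this->drupalGet('some-old-path');
}, 'redirectLookup');
$this->assertNotEmpty(array_intersect($redirect_queries, $performance_data->getQueries()));

// Page where no redirect lookup should happen.
$performance_data = $this->collectPerformanceData(function () {
  $this->drupalGet('node/1');
}, 'noRedirectLookup');
$this->assertEmpty(array_intersect($redirect_queries, $performance_data->getQueries()));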
We added performance tests to Drupal CMS around alpha, and I also did some manual review around that. I think all the major issues I found were fixed in time for the 1.0 release, so what lighthouse reports is mostly issues we haven't fixed in core yet, and maybe some contrib.
Serve static assets with an efficient cache policy (12 resources found)
I think this might be ddev's default nginx config; it won't happen with apache2 because of Drupal's default .htaccess. Worth looking for/opening an issue against ddev to see if they'd consider adding some static asset caching, possibly the answer will be 'no' because it's for local development. Mostly I would ignore this unless running against an actual production site. You could also switch ddev to use apache (I think there's a command or config option) to confirm.
Consider using a module to inline critical CSS and JavaScript, and use the defer attribute for non-critical CSS or JavaScript.
See ✨ Support inlining critical CSS for faster core web vitals Active for a core issue to add support for this. There are contrib modules, but only the theme author knows which CSS is critical or not, so it would be better to get the API into core, adjust Olivero to use it, then adjust the Drupal CMS subtheme to use it - or whichever theme Drupal CMS ships with by the time that happens. Adding CSS inline means it's not cacheable by the browser, so it's always a trade-off and could be counter-productive if not done very carefully.
Consider removing unused CSS rules and only attach the needed Drupal libraries to the relevant page or component in a page.
📌 Refactor system/base library Needs work removes up to 6-7kb unused CSS from every page served by Drupal, however the MR has numerous bugs and it should probably be split into smaller issues for easier manual review/testing. I would love some help getting that issue over the line, then we can see what's left.
Passive listeners, issue is here: ✨ Implement passive listeners in jQuery to improve scrolling performance Needs work . However this one might be a Drupal CMS issue indirectly. It would be good to turn off js aggregation, see which js files are loaded, and whether that includes jQuery, and if so which library depends on jQuery. Then we could open an issue to try to either avoid loading that library for anonymous users or to remove the jQuery dependency, or something like that.
That looks pretty nice to me as an API. We'd have to handle any additions/removals manually but maybe that is fine.
There are several manually skipped tests in core, if we want to fail on skip we'll have to do those differently.
However the tests-only job should definitely fail on skip?
Thanks for testing! This is really useful.
- After running composer create-project I'm instructed to run ./launch-drupal-cms.sh. I ignored it as that's just for ddev... isn't it?
Yes it is. The language should be removed as a prompt I think - it can stay in a README.md and online docs. Could you open an issue?
- Trusted Host Settings error. Not able to change it directly in cpanel, had to use ftp to download the file, change locally and upload to get around permission errors even after the file was chmod 777. Easy to fix but breaks the idea of everything possible from the browser so I'm mentioning it here.
I think this needs a core issue to discuss, e.g. can we still tell people they could do this, but make it 'info' rather than 'error'.
Visiting Extend->Update Extensions tab triggers an error. "The website encountered an unexpected error. Try again later."
The Exception is: PhpTuf\ComposerStager\API\Exception\LogicException: The composer executable cannot be found. Make sure it's installed and in the $PATH in PhpTuf\ComposerStager\Internal\Finder\Service\ExecutableFinder->find() (line 34 of [path-to]/vendor/php-tuf/composer-stager/src/Internal/Finder/Service/ExecutableFinder.php).
This is quite serious. First of all, the error should be handled by either the package_manager or automatic_updates module so it's not fatal. Secondly, we would want people on shared hosting to be able to use the module - could you open an issue against core package_manager to start with? It might need to be moved/split elsewhere, but package_manager is probably in the middle of wherever it would need to end up.
quietone → credited catch → .
https://git.drupalcode.org/project/drupal/-/merge_requests/10746/diffs is removing test coverage which looks unintentional?
A bit concerning that we're getting different results locally vs. gitlab - there's an existing issue about functional js test false negatives on gitlab. However this issue makes sense in itself and is a straight regression from when we removed the cron run in the installer, so let's do it.
Committed/pushed to 11.x, thanks!
Looks like we could move some logic around in AdminNegotiator to do the cheapest checks first at least.
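Roughly the kind of reordering I mean in AdminNegotiator::applies() - an untested sketch, not a patch:

public function applies(RouteMatchInterface $route_match) {
  // isAdminRoute() only looks at route options, so check it first and bail
  // out early on the vast majority of non-admin requests.
  if (!$this->adminContext->isAdminRoute($route_match->getRouteObject())) {
    return FALSE;
  }
  // The permission check can trigger role/access policy loading, so only do
  // it once we know we're on an admin route.
  return $this->entityTypeManager->hasHandler('user_role', 'storage')
    && $this->user->hasPermission('view the administration theme');
}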
@pendragn this issue was more for Drupal CMS contributors to try installing Drupal CMS on shared hosting and report back, hopefully to pre-empt some of the issues you're having on different hosts. For actual support, you would probably get better answers in #drupal-cms-support on Drupal slack.
Having said that, this error you pasted shows that the installer didn't complete successfully. You should try wiping your database and re-installing, I would do so without any of the optional recipes to minimise what needs to be installed. If you end up in exactly the same place again, it'd be good to report that back here but real-time (or closer to real time) help on slack will probably help you get going faster. Also if you're able to share details of which shared host you're on that would be really useful for the purposes of this issue.
Symfony\Component\HttpFoundation\Session\Storage\NativeSessionStorage->start() (Line: 128)
Drupal\Core\Session\SessionManager->startNow() (Line: 93)
Drupal\Core\Session\SessionManager->start() (Line: 59)
Symfony\Component\HttpFoundation\Session\Session->start() (Line: 1566)
install_bootstrap_full() (Line: 695)
install_run_task() (Line: 572)
install_run_tasks() (Line: 121)
install_drupal() (Line: 53)
Performance test/cron issue is https://www.drupal.org/project/drupal/issues/3500489 🐛 Performance tests need to run cron Active and needs review.
Assertions bin sounds good and better than fast vs non fast - could you open an issue?
Shortcut currently isn't being used in Drupal CMS (since you can do most of what it's used for with menu UI + navigation anyway) so could be a candidate to remove from core.
Just directly invalidating access policies on role save seems reasonable but I think we need Kristiaan's input on that.
The interface seems worth a try. In that case would we keep the separate bin for when it's used?
I think it definitely makes sense to put this in the bootstrap bin - it's called on every HTML request.
LFU would be much more complex, yes - you have to keep a count of how often an item has been accessed and then evict the least frequently accessed item. And the current implementation is being careful to add as little CPU overhead as possible.
In general on a normal web request, we've set the limits high enough that items should never be evicted, so it will only kick in for migrations and similar tasks, or pages that load an extreme number of entities for some reason.
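Very rough sketch of the bookkeeping difference, not code from the MR ($items, $cid and $max_items stand in for whatever the real implementation uses):

// LRU: move the accessed key to the end, evict from the front.
unset($lru_order[$cid]);
$lru_order[$cid] = TRUE;
if (count($lru_order) > $max_items) {
  $oldest = array_key_first($lru_order);
  unset($lru_order[$oldest], $items[$oldest]);
}

// LFU: maintain a hit counter per item and find the least-used one to evict,
// which means either a scan/sort like this on every eviction or a more
// complex data structure.
$hit_counts[$cid] = ($hit_counts[$cid] ?? 0) + 1;
if (count($hit_counts) > $max_items) {
  asort($hit_counts);
  $least_used = array_key_first($hit_counts);
  unset($hit_counts[$least_used], $items[$least_used]);
}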
Committed/pushed to 11.x, thanks!
@godotislate it's not actually fixing the race condition at the moment so very much needs work - although review of the approach would probably still be useful.
Cache gets are probably the automated_cron configuration.
Updated the issue summary and title. Hoping someone else can RTBC so I can still commit this.
Or another option here - we could uninstall automated_cron. We don't need it for anything in the tests, we just need it not to run. That definitely doesn't need explicit test coverage then - would need to call ModuleInstaller::uninstall() in ::setUp() where it would otherwise run cron.
I think it's more important that the tests test what they're supposed to, than that we test that the tests test what they're supposed to.
We could probably check cron_run_last and make sure it's not 0 - would mean doing an assertion inside ::setUp() but probably we do that elsewhere already.
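Something like this in ::setUp() would cover both suggestions - a sketch only, assuming the state key is 'system.cron_last' (what's referred to as cron_run_last above):

protected function setUp(): void {
  parent::setUp();
  // Make sure automated_cron can't fire during the requests we measure, and
  // stop its configuration showing up as cache gets.
  \Drupal::service('module_installer')->uninstall(['automated_cron']);
  // Run cron once so every test starts from a post-cron baseline.
  \Drupal::service('cron')->run();
  $this->assertNotEquals(0, \Drupal::state()->get('system.cron_last', 0));
}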
I think this was mentioned elsewhere last week but can't find an issue, so I opened one 🐛 Performance tests need to run cron Active .
For fonts there is ✨ Support prefetch/preload/dns-prefetch in the libraries API Active , but that wouldn't allow rel="preload" on any arbitrary asset - it would be much closer to what you suggest in #22. I'm going to close this as a duplicate of that issue. Would be great if you could copy your comment over.
Hmm, actually looking at the queries, apcu shouldn't make a difference there anyway.
Can you try making the sleep(1) a sleep(5) and/or adding a third ::drupalGet() warm up request in there? I think what is probably happening is a timing issue between aggregate requests setting the state cache and the main request setting the cache, so that it's not warm enough by the time we get to the request we collect data for.
There is various configuration in update.module, like which server to contact, how frequently, etc., that would need to move to system.module if this was in core, which is the opposite of what we've been trying to do for years - moving things out of core/system into dedicated modules (path aliases, page cache etc.). So -1 to baking it in.
We could have a 'telemetry' module that does the phone home; this would require moving the config from update.module into the telemetry module (and adding a dependency), or duplicating the URL config (which could be confusing/go wrong). And the telemetry module wouldn't do anything except phone home, so there is not really any reason to enable it. And it's a reasonable amount of work to move things around.
On the other hand, we've got 🌱 Drupal 10 Core Roadmap for Automatic Updates Active and 🌱 Roadmap to experimental Project Browser in Drupal Core Active - those two modules depend on update module, and at some point might completely replace the UI of the existing update module.
If we have 'update manager' and 'project browser' as two new modules, and 'update module' retains the phone home functionality but no longer provides the update status report UI and notification e-mails, then update module could end up holding all the phone home code that the other modules depend on, without an extra step of moving things around compared to what will happen anyway.
Locale also depends on update module (because it depends on the project listing API).
So I think we should postpone this on update manager/automatic updates + project browser in core, and the UX consolidation that needs to happen around that, and see if we end up with an update API module by default, called update.module, which already exists. And then we can look at providing an option in update.module to send requests weekly even if the other modules aren't installed and that would allow sites to opt-in to telemetry if they want.
Marking postponed and adding those two issues as related.
Yeah this comes up with 🌱 [meta] composer require / install module discrepancy issues Active too.
I haven't reviewed the MR in depth yet and this doesn't respond to everything in the comment, because I'd probably need to do that review to have a useful opinion on the rest.
One reason for this is that it attempts to provide BC for hook_theme_registry_alter() implementations that mess with preprocess functions. I guess the question is whether or not we actually need to provide BC for that. I think that is very rarely needed - never had that use case myself. Unlike regular hooks, what preprocess does isn't actually final and you can usually just undo it in your own preprocess.
[...] The feasibility of both the short-term and long-term ideas depends on the question of whether or not the preprocess function list/registry is something we consider an API and provide BC for.
I also can't remember ever having or wanting to do this, or even thinking about it, whereas I've used hook_module_implements_alter() dozens of times probably. Also, even if modules are using it, we can and do break 'data structures' in minor releases. Is there a way to stop things from running with the new system though - I guess you could use #[Remove]?
To deal with the special template_ prefixed preprocess callbacks, this introduces the concept of a special "template" module: template callbacks are converted to hooks registered in the name of that module. It's unlikely but not impossible that this will cause some super weird conflicts, such as a real-world template module that someone has in a custom project.
Couldn't this already be a problem if there's a 'template' module?
Or even go further and support multiple hooks in invokeAllWith(), like we already do with alter(), just call that. Not sure how much slower that would be.
IME performance problems with preprocess come from the amount of stuff we do in the preprocess hooks themselves, like the monster that is template_preprocess_node(). #649530: Creating the cool graph hangs until PHP times out → was one idea to reduce this, but it hasn't gone anywhere. Not sure hook invocation itself is going to make much of a dent.
@plopsec do you have apcu enabled locally? We should make all of these tests require apcu to be enabled and skip if it isn't.
This is the original issue that added the checkbox #178581: beta 3 breaker: update.module: opt-in, not opt-out → . Unfortunately it does not go into details about what the WordPress FUD was.
I think this is probably still valid, but we only really care about it in the critical path and it's not straightforward when checking for things that might be directories, so we can't enforce it.
So I think we should change DrupalKernel::findSitePath() to use is_file(), since that runs on every single request to Drupal. And not worry too much about anywhere else. Asset aggregation used to use file_exists() on every request 12 years ago but doesn't any more.
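i.e. something like this in DrupalKernel::findSitePath() (simplified - the real method has a couple of these checks):

// Before: file_exists() also returns TRUE for directories, which we never
// want for settings.php.
if (file_exists($app_root . '/sites/' . $dir . '/settings.php')) {
  return "sites/$dir";
}

// After: settings.php can only ever be a file.
if (is_file($app_root . '/sites/' . $dir . '/settings.php')) {
  return "sites/$dir";
}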
Project browser and automatic updates are enabled out of the box, and at least automatic updates depends on update status. Project browser at a minimum needs to make http requests to d.o even if it doesn't rely on automatic updates (not sure about that bit). Drupal CMS doesn't really exist without project browser, because otherwise you can't install recipes and other modules etc. - it may end up getting integrated into the installer itself.
So I think this would probably need to be a required checkbox (e.g. you have to click it to continue through the installer), which I normally would not like, but we could include a sentence that this can be changed after install (e.g. you can uninstall those three modules).
I would imagine this would be conceptually similar to the old “web of trust” system where a person (ideally local) who knows and collaborates with a person over a period of time vets them, reducing the burden and limiting the need to travel.
I can think of one long-term and fairly prolific Drupal core contributor who is in Nigeria and doesn't list any Drupal events on their d.o profile. I can't think of any other Drupal contributors in Nigeria off the top of my head - there will be some, I just don't know any by name. It would be at least very challenging to arrange an in-person verification.
On the other hand they have a very identifiable contribution record over a long time - infrequent but very useful contributions to tricky issues usually. Otherwise I would not have remembered them enough to be able to look up their profile and check the details for this discussion.
A bigger issue is that long term and prolific contribution to core doesn't help you get the git vetted role, but that's 🌱 More flexible language for git vetted status for co-maintainers of existing projects Active , not this issue.
Updated the issue summary to be a bit more generic, I think that shows that both examples can be valid.
I'll note it shows we finally got vendors to adopt standardized training processes with formal material.
No, it doesn't show that at all. The only reason to apply for co-maintainership of a project is because you use or need it, often with a history of working on issues. That is not the case here, not even the pretence.
@penyaskito I think the first case could be simplified to 'any release of any module that removes a composer dependency on another Drupal project'. Not at computer but can update later.
It runs on the testbot, but at a huge cost: 36 minutes
OMG definitely not, let's remove the test from the MR.
Could the max_input_vars error get hidden by a subsequent error?
That seems possible, but I don't think we know exactly when it will get triggered, so we would need to hard-code support for it in core's error handler probably - maybe that would be OK though.
Committed/pushed to 11.x and cherry-picked to 11.1.x, thanks!
Quick fix is fine but also we should use 📌 Add PerformanceTestTrait::assertMetrics() so it is easier to write performance tests Active in this test - it was the one performance test that didn't get converted in that issue.
catch → created an issue. See original summary → .
📌 Provide a Workflow FieldType that references a workflow entity and tracks the current state of an entity Needs work might help with the content moderation follow-up (or a follow-up after both of those issues land).
A sample:
Hello Team,
Currently, we have good team for managing this module.
Also, we are working on Image Widget Crop of Drupal 10 to Image Widget Crop of Drupal 11 project. In future, we will plan to upgrade in Drupal 12.
Can you please grant me access of co-maintainer so we will do good work in this module.
Hello Team,
We currently have a strong team managing this module. Additionally, we are working on upgrading verf from Drupal 10 to Drupal 11, and we plan to upgrade to Drupal 12 in the future.
Could you please grant me co-maintainer access so we can continue to do good work on this module?
Hello Team,
We currently have a strong team managing this module. Additionally, we are working on upgrading image_url_formatter from Drupal 10 to Drupal 11, and we plan to upgrade to Drupal 12 in the future.
Could you please grant me co-maintainer access so we can continue to do good work on this module?
This is pure copypasta and I think it should be treated as spam.
We should check whether the current logic works with taxonomy/term/n pages which are overridden by views by default. I have a feeling that it won't because of ✨ Upcast named arguments/named parameters in views Needs work , but also that it might just start working once that issue is fixed. Either way not commit blocking here but good to verify what happens and maybe open a follow-up to track it.
Changed is a state we would use when the page has been published but there's a forward revision for the entity. This way it becomes clear that what the user is seeing is not actually what is visible on the published site.
I might be missing something, but I'm not sure this is a problem with content_moderation as such.
Wouldn't this only happen if the user is reviewing the actual revision page? Other entity pages should only show the default revision anyway so it will either be published or not, and it'll accurately reflect what they see.
Showing the workflow state is different - that also applies to unpublished entities in a draft state, even if they're also the default revision. So it's about something more granular than 'unpublished' then. On the edit form they'll see the draft, but that's clearly indicated there.
Workspaces will behave differently but we already have the workspace indicator, so users theoretically should already have an indication they're seeing things as they are in the workspace.
If the user is viewing the actual revision page, then they ought to know they're on it, although it could still be confusing if there's a mis-match.
If the above is right, I think we could special case the 'revision' entity link route (either here or in a follow-up), and then handle customisation in a further issue? But it doesn't seem like an issue with forward revisions specifically to me - more about workflow state indication, regardless of whether we're dealing with a forward revision or not.
Discussed with @thejimbirch in slack and decided to move this to core. If there's a really good reason to keep it against the recipes and distributions project that's fine, but it seems more historical that it was posted there - recipes is now in core and we've made several changes directly.
I don't really know what the process is for these kinds of pages (which feels like a general problem, and why we have such a huge quantity of under-maintained documentation); I would assume either marking them deprecated or unpublishing them. My preference would be for unpublishing the obviously outdated pages and adding redirects for those paths to a canonical page.
I don't think we should ever actually delete, because that's irreversible - unless it's spam or something.
I vaguely remember there was an archive section in the handbook, but that might not be the right name.
Adding some rough ideas on how to collect data to the issue summary.
One minor nit on the MR, otherwise this looks great! Hopefully there's an existing test that almost tests this which we can extend.
An MR with test coverage would be good.
The solution in #5 sounds preferable if that works.
Tagging as a Drupal CMS release target. If we were to implement this in update module, would that save an additional opt-in? It's almost exactly the same data that we already send from update module so might be fine. Then update module needs to do something with the event - would suggest adding it to a queue item, and then having the queue runner actually send the data so it's as light as possible when applying a recipe.
If it works on gitlab, could we check max_input_vars first, and if it's over a certain number, skip the test? That would mean the test doesn't time out locally and give people a hint how they can run it locally if it's likely to fail.
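e.g. something like this at the top of the test method (the 1000 threshold is PHP's default for max_input_vars and just a placeholder):

$max_input_vars = (int) ini_get('max_input_vars');
if ($max_input_vars > 1000) {
  $this->markTestSkipped(sprintf(
    'max_input_vars is %d; run PHP with -d max_input_vars=1000 to exercise this test locally.',
    $max_input_vars
  ));
}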
Yeah that sounds good to me.
Haven't reviewed the MR yet, but now wondering if we might need something like this for hook discovery too, e.g. if a hook class uses a trait from a module that's not installed or similar?
Replying to the bc question from... this time last year.
The main case for block titles being altered would be previewing forward revisions, but workspaces will handle that when you're in a workspace via entity query rewrites.
For trash module I think that it will rewrite entity queries to not include the entity when it's trashed - but we should ideally check with @amateescu.
I think this is a sufficiently serious issue that we should not put the fix behind a feature flag but would be good to confirm the trash case first.
We discussed this in slack - the 'soft' deprecation so that people don't jump the gun here is good.
I also wondered about a 'hard' (normal) deprecation in a future release so that we eventually tell people to remove it, but the risk of people still jumping the gun (e.g. breaking support with Drupal < 11.2 prematurely) will be there, and then when we finally get to a core release where that's not a problem, we might as well have just removed the supporting code anyway instead of formally deprecating the hook. That means some modules could still ship with crufty hook_hook_info() implementations, but when we eventually deprecate .module files altogether, it will get deleted with those.
@flyke yes we should add validation for the library name, maybe via an assert() so it impacts developers rather than live sites, but could also log on top of that.
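e.g. something along these lines wherever the attached library names get processed - a sketch only, the exact location and wording are open:

// Fails loudly for developers with assertions enabled, no-op in production.
assert(
  is_string($library) && substr_count($library, '/') === 1,
  sprintf("Invalid library name '%s': expected the 'extension/library' format.", is_string($library) ? $library : gettype($library))
);
// Optionally also log, so misconfigured live sites leave a trace even with
// assertions disabled.
if (!is_string($library) || substr_count($library, '/') !== 1) {
  \Drupal::logger('library')->warning('Invalid library name: @library', ['@library' => print_r($library, TRUE)]);
}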
Yes we might be able to delete this code altogether after 📌 Use LRU Cache for static entity cache Postponed .
Don't think this makes sense - the contents of 404 pages are usually pretty cacheable, unless you're using search_404 or something, but that can set its own max-age.
This all changed in the meantime, we have policy exceptions for entities with base classes, the 1-1 rule for one-class interfaces etc.
This is still valid - we could add a sentence to https://api.drupal.org/api/drupal/core%21lib%21Drupal%21Core%21Form%21fo... and maybe to entity build/view alter hooks too.
This was mainly for Drupal.org which handles cross-posting a lot better, and will soon use gitlab for issues anyway.