Well yes, because there's a complex workaround in place to counter the performance problem. But even without that, it is very unlikely to have an impact on CI execution time. But it's been tested and shown a long time ago in comments around #60. I doubt things are fundamentally different now.
I'm definitely open to dropping the create part; it's not really creating a new entity, it's duplicating one, so just that seems enough. I also considered making it an alter hook: hook_entity_duplicate_alter()?
I also think the arguments should be switched: have the to-be-duplicated entity first and the original second as context. That's common with alter hooks especially; the main thing that you alter comes first.
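To illustrate the proposed signature and argument order (a sketch only; the hook name is my suggestion above and nothing is decided, and the module/field names here are made up):

```php
use Drupal\Core\Entity\ContentEntityInterface;
use Drupal\Core\Entity\EntityInterface;

/**
 * Implements hook_entity_duplicate_alter().
 *
 * The duplicate comes first (the thing being altered), the original
 * second as context, matching the usual alter hook convention.
 */
function mymodule_entity_duplicate_alter(EntityInterface $duplicate, EntityInterface $original): void {
  // Example: unpublish duplicated content entities so editors review
  // the copy before it goes live.
  if ($duplicate instanceof ContentEntityInterface && $duplicate->hasField('status')) {
    $duplicate->set('status', 0);
  }
}
```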
Unsure on the entity prefix. Yes, it would simplify a custom class, but it also goes against every single other hook, and I'm not convinced it's worth changing them all at this point.
The problem is that search_api implements EntityInterface without extending from EntityBase, so new methods on the interface result in a fatal error.
This is a pattern copied from the views module in core. I'm still not quite sure why it's necessary, and it would be nice to avoid doing that in the future, but that's out of scope; this is just about preparing for 11.2.
No, it doesn't. This is a hard break on 11.2, but you can add it now; it's just an extra method that isn't actually on the interface yet.
Tests are failing with relay because that constant then doesn't exist.
Re #85: We'll deprecate all legacy hooks once we have replacements for the remaining special cases, no need to do something special about new ones IMHO.
The current user is an object that implements AccountInterface, not UserInterface. The core issue will not change that; it will only change the core implementation, and contrib can do whatever it wants. You cannot rely on alternative current user implementations being user entities.
The performance aspect of this is a tradeoff, that's also why this issue has been stuck so long.
Yes, we save two queries, but the other side of the coin is that accessing the information we need (status, roles) requires loading the field definitions and creating the field objects, which is much slower than the UserSession object.
(FWIW, core has been doing that anyway for years due to 🐛 Many calls to ContextRepository::getAvailableContexts() due to entity upcasting Needs review, but that's fixed now.)
Especially when profiling, that cost is clearly visible and it's hard to compare that with the cost of database queries. On high load sites, possibly with multiple servers and redis/memcache, that cost is easier to balance against extra database queries, but that's not the default setup.
I tried to address that with the getFieldValue() method. But that's complex and I'm not convinced it's entirely reliable, and it shouldn't be mixed up with this issue IMHO. So either we accept that downside of the change, or we postpone it on an issue to improve that performance cost, which might or might not ever happen.
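To make the cost comparison above concrete, here is the difference between the two paths (a sketch; the cheap path uses AccountInterface methods on the lightweight UserSession object, the expensive one forces a full entity load):

```php
// Cheap: AccountInterface methods work on the UserSession object
// without loading field definitions or creating field objects.
$account = \Drupal::currentUser();
$roles = $account->getRoles();
$authenticated = $account->isAuthenticated();

// Expensive: loading the full user entity builds all field objects
// just to read the same status/roles information from fields.
$user = \Drupal\user\Entity\User::load($account->id());
$active = $user->isActive();
$roles = $user->getRoles();
```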
Created 📌 Support duplicate hook Active and confirmed that works well after I duplicated the hooks into the ContentEntityBase class (so a kernel test with a real implementation might not be a stupid thing).
The two added tests are 1:1 copies of the existing replicate tests with only one relevant change:
- $replicated_node = $this->container->get('replicate.replicator')
- ->replicateEntity($node);
+ $replicated_node = $node->createDuplicate();
Works, but postponed on the core issue now.
I created a new minimal merge request that does nothing but add the hook and some unit test coverage. A real test might not hurt; what I plan to do as well, as a proof of concept, is create an issue in paragraphs to verify that this is sufficient to solve our use cases around this.
Hi, thanks for working on this, it certainly looks much better. I discussed it a bit with a colleague; our thoughts:
* The colored pill text on a background color is not accessible, and especially green is barely readable. Based on our testing, I'd prefer to keep the text color black; then the contrast is fine.
* It looks bigger in your screenshots, but on Claro the text is tiny, and on larger screens it's also far off to the right:
Thinking about it, I think the important parts are actually the status summary (the pills) and the buttons; the cache/execution time is not the most important thing to see at first glance. So maybe we could invert the positions compared to what they are now? And instead of completely right-aligning, I think we should have the two on the right just follow with a bit of margin. A quick fix is to not do a text-align right; it ends up somewhere in the middle then.
On text size, I prefer to have them in regular text size.
I even wondered about moving the cache/execution time info to after the table and not doing any flex at all. It is kind of related to the execute-all button though.
It would then look something like this for me, but there might be better ways than my quick hacks, especially for the position. And instead of a pure CSS order, maybe we should rearrange the render array?
Keeping the non-injected logger for now.
Merging this. If people are interested in doing a similar implementation but pushing to OpenTelemetry, that can be done in a follow-up.
Merged.
I converted the old patch here into a merge request, fixing up some really obvious errors I made on the way, but this is still far from being unblocked. There's #2896474: Provide an API to temporarily associate data with an entity → and follow-ups of that to convert to that API, we'll need an issue to deal with the Twig access stuff, this will need to be split into sensible chunks, and more.
I thought I replied, but apparently didn't submit?
Yes, taxonomy tokens are defined in drupal core, the issue queue you're looking for is https://www.drupal.org/project/issues/drupal →
Fixed the rebuild test, still working on the demo test fail on next minor.
berdir → created an issue.
According to the field definition, PaymentMethod entities do support tokens/payment methods that don't expire:
$fields['expires'] = BaseFieldDefinition::create('timestamp')
  ->setLabel(t('Expires'))
  ->setDescription(t('The time when the payment method expires. 0 for never.'))
  ..
I see that \Drupal\commerce_wallee\Services\PaymentSdk::createPaymentMethod checks if the expiration date is a valid date and only converts it then. Maybe it should use 0 as the default value and not null, but it's unlikely that commerce does a type safe check here.
So without further checking, I'd say that should work.
The issue landed in core now, this will need to be updated per #3 and also the return types.
This has been merged now into core.
I updated setOriginal() directly in the merge request but didn't manually verify the definition is valid.
berdir → made their first commit to this issue’s fork.
> I believe part of the reason that the Annotation class references weren't changed to ::class strings was that they were going to be gone with the removal of annotation discovery anyway, but since not all the plugins have been converted, it's probably going to be a while for that.
Exactly this. That was done on purpose, as we otherwise have two classes with the same name that we have to alias. That means we would also need to adjust the use statements when we remove them, which would make the cleanup more complex, with more risk of conflicts with other issues.
Thanks for the reference. I would expect that this simplifies this issue a lot then, might be enough to normalize/export the structure into the correct format, without special import handling.
Contributions need to be merge requests.
This needs to be a merge request.
Merged.
The definition is the annotation and the MR updates the annotation.
This would need to update the attribute class now as well.
I'm not really convinced this is a good idea. I see that there are use cases where you implement a custom queue plugin and want to use a specific storage at the same time, and that currently requires an extra line in settings.php. But I'd argue it's at least as common to use a certain backend for a queue plugin that you didn't implement yourself, and that now requires a module with custom code, which might need its own set of environment-specific controls to only do that on certain environments. Maybe we support both without deprecating the existing approach; there's not really a reason not to support both? The cache backend is similar in that cache bins can have a default backend that can be overridden.
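For reference, the existing settings.php approach I mean (the per-queue override settings that core's QueueFactory checks; the queue name and the `queue.redis` service are assumptions for illustration):

```php
// settings.php: route one specific queue to an alternative backend
// without any custom code. The suffix is the queue name.
$settings['queue_service_my_import_queue'] = 'queue.redis';

// Separate override for callers that request a reliable queue.
$settings['queue_reliable_service_my_import_queue'] = 'queue.database';

// Site-wide default backend for all queues.
$settings['queue_default'] = 'queue.database';
```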
The relevant core issue is the first, 📌 Convert QueueFactory to use a service locator Fixed , and it's missing a change record that it's not required to have queue factory services tagged.
The stuff about $settings is not related, that's only a proposed change that is not actually in core yet.
Created a merge request with the necessary changes and updated the test to properly configure the queue backend and verify it's used.
Definitely shouldn't be an invalid route. Not sure if we should have that at all, there isn't really anything to configure there, but doesn't hurt I guess.
Packages aren't really that standardized; plenty of projects use just Performance, according to https://git.drupalcode.org/search?group_id=2&scope=blobs&search=%22packa...
That said, memcache uses Performance and Scalability and I'm happy to switch to that as well if you create a MR.
UI classes are a relic of the Drupal 7 ctools plugin system this is originally based on. I think the most feasible way forward, instead of more workarounds, is to just deprecate UI classes and inline them into the main plugin class. In some cases, that will result in large classes, but this is the common approach for plugins; from blocks to field types, they manage both the data and their settings forms.
We could deprecate that in 8.x-1.x, and allow plugin classes to directly implement the UI interface and in a future 2.x branch, merge and require it.
@catch: Yes, I'm evaluating that project as well. For us, being able to pull the metrics is currently easier to manage, but it should be pretty easy to adapt what I've implemented here to push the metrics to an opentelemetry receiver.
This now implements the endpoint, inspired by the prometheus_exporter project. I've introduced two new flags on sensor plugin definitions that allow them to opt in to being metrics, and those are then exported. That's done to avoid exposing lots of requirements checks and other sensors that can't be expressed as a metric in a meaningful way. It could just be a value for ok/warning/critical, I guess, but I don't think that's very useful.
Instead, I added an extra metric for the total sensor count per status, which allows setting up alerts on there not being any critical/warning sensors. Additionally, I opened 📌 Allow to log status changes to watchdog as well Active, which allows, for example, using the opentelemetry module to collect those logs too, so they become visible again there. I haven't found any other way to handle non-metric checks/sensors (like maintenance mode on/off or Twig configured for production) with OpenTelemetry.
Remaining tasks that we need for us, then I'll merge and close this and additional features can be done in new issues:
* Allow restricting access to the /metrics page by IP. Should be a textarea setting that stores the list as a sequence in config, and the controller should check it if not empty. Similar logic to the redirect_404 excludes in the UI.
* Add the ability to add custom labels/attributes, so that for example each Drupal instance can have a name. Similar UI: a textarea, stored as a sequence, e.g. "service_name=Foo\nenvironment=production". Should be stored as key/value pairs; run a token replace on the value.
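A rough sketch of the parsing I have in mind for that second point (variable names are made up and validation is skipped; just the shape of the logic):

```php
// Parse "key=value" lines from the textarea into label pairs and run
// token replacement on each value. $input is the raw textarea value.
$labels = [];
foreach (preg_split('/\R/', $input, -1, PREG_SPLIT_NO_EMPTY) as $line) {
  if (!str_contains($line, '=')) {
    // Skip malformed lines without a key/value separator.
    continue;
  }
  [$key, $value] = array_map('trim', explode('=', $line, 2));
  $labels[$key] = \Drupal::token()->replace($value);
}
// $labels can then be attached to every exported metric.
```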
@dewalt: Yes, that's exactly the problem and why the fix isn't as simple as just changing it to integer. Converting to string doesn't fix join/query performance. A solution is needed, but it's fairly complex and not a quick fix.
The token module does not define that token; it is defined in core. I didn't check whether it changed, but we have no power over that. This module just provides a UI and additional tokens.
Views AJAX happens on a separate route, not the one that the view is originally displayed on. That route likely has no title.
Just for testing, try using a token that doesn't depend on the current page, that will likely work. There isn't really much this module can do about that, we don't provide that token and even if we would, we couldn't figure out the title from the available information.
Also tried runtime and update now. Especially runtime was more work, but the duplication between the two methods isn't as bad as I thought. There was one 300-line chunk that I split into a shared method. An option would also be to define that as a separate hook that both implement, but that will break using ->invoke() at the moment. I do wonder if we should change invoke() to support multiple hooks and merge the results together, a bit like invokeAll()? That would work well for requirements, and then we could split the still-huge method up into logical chunks simply by having multiple hook implementations.
There are several chunks between 20 and 100 lines of code that are shared between the two methods.
I expect reviewing the conversion will be more challenging than actually making the changes. A lot of the stuff going on in those requirements checks isn't tested, because it's about infrastructure and things that we can't really simulate at the moment.
One concern about this approach is dependency injection in combination with update requirements. Not as severe as install-time requirements, but update also runs in a reduced and possibly fragile environment. If we inject 10 classes there, then possibly a future update could break that. We could put them in separate classes but then the sharing part won't really work that well anymore.
I was wondering if it really makes sense to split runtime and update; the minimal fix for the problem here would be to keep $phase and just move install out. But I expect system is going to be pretty much the only case with more complex variations between the phases; pretty much all other core and contrib implementations will either just run or not on specific phases, and if it's both runtime and update, it's easy to register both for a single method. One downside is that rector support will be much more complex: it will not be able to untangle conditional early returns or switch statements or things like that. We could fake it by keeping an existing function with $phase and registering two calls to it with the respective phase. Legacy support is also tricky for this. But I also suspect we won't support that at all in rector; there aren't that many requirements hooks compared to others.
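To illustrate what "register both for a single method" would look like with the repeatable Hook attribute (the phase-specific hook names here are hypothetical, depending on how the split ends up; the class and check are made up):

```php
use Drupal\Core\Hook\Attribute\Hook;

class MyModuleRequirements {

  /**
   * One shared check registered for two phases.
   *
   * Only possible if hook_requirements() is split by phase instead of
   * passing $phase into a single implementation.
   */
  #[Hook('runtime_requirements')]
  #[Hook('update_requirements')]
  public function phpVersion(): array {
    $requirements['php'] = [
      'title' => t('PHP'),
      'value' => phpversion(),
    ];
    return $requirements;
  }

}
```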
One thing I noticed is that the merging of the return values is inconsistent: install hooks seem to use a regular array_merge(), while others use mergeDeep(). But I'm not sure we really want to do that; you should not attempt to merge specific definitions, use the alter hook for that.
@berdir: I warmly welcome you (or anyone else) to start sketching out a plan at #3493718: [pp-3] Split up and refactor system_requirements() into OOP hooks.
I decided to skip the sketching and plans for now and just try it, with the install class as a starting point. What I'm looking for is to get a feel for what that code will look like, how much code duplication there is, and so on. It didn't seem that a sketch/plan could really give me an impression of that. system_requirements() is a monster, but 80-90% of it is large chunks of checks that apply to 1, 2 or 3 of the phases.
It took me a few minutes to copy & paste system_requirements() to the Install\SystemRequirements class, go through those chunks, and delete all that didn't apply, partially thanks to how PhpStorm code collapsing works: I can just collapse an if () and delete it without having to match curly brackets manually. The result is ~500 lines of code in that class. I expect this is a throwaway branch; I took some shortcuts (stale comments and so on). There are a few parts that are trickier because they're mostly the same, but some descriptions differ, or it's an error at runtime and a warning on install. This is where we'll end up with code duplication.
Opened a MR in the system issue, might also do runtime/update later but it's getting late now.
I'm a fan of incremental improvement.
Same, and I've been frustrated in the past when what I saw as incremental improvements wasn't good enough and issues got stuck.
The other side of the coin to incremental improvements is BC. The chances that we'll make further improvements to the requirements stuff are slim; we'd need additional BC layers and have multiple deprecations in overlapping minor versions. Might be a bit of a now-or-never.
I have some more thoughts on whether or not we should split both the hooks and the work to implement them, but I need more time to properly write that down.
Created a merge request that throws an exception when the field type category is not valid.
berdir → made their first commit to this issue’s fork.
Create a new issue. The value is a tradeoff between memory and time. If only 1 works for you but you otherwise get an error, then this seems like an indication that something very weird is going on with your cache entries; they might be extremely large, which would possibly also cause runtime issues outside of the report.
The issue I mentioned is ✨ Report page takes ages to load - maybe cache it? Active , but that won't help you.
When you create an issue, mention which client you're using, how much memory redis and PHP has and whatever else you think might be important.
Yes, there are multiple issues about this and one should be closed as a duplicate. This is a big change, we can _not_ just change to integer because that will cause problems on existing sites, especially if they happen to flag things with string ids.
#14 is the direction we have to go, similar to what entity_usage is doing on its own and what the dynamic_entity_reference project is doing.
Either way, this will be something for a 2.x version as it's a big change with a complex upgrade path and will impact views and queries and what not.
> Yeah system requirements will be it's own issue, I'm not really looking forward to that.
Certainly a separate issue. But I also think we need to have a plan for it now, to be able to verify the proposed architecture. As mentioned, a possible answer can be that system_requirements() does not fit into that architecture and we move at least parts of it out into the installer or so. Again, system is also special in that it's not just a huge amount of code; it's also called super early in the installer. Not sure how that works exactly, but I wouldn't be surprised if we run into issues there calling it as a class like that.
I'm torn. On one hand, a minimal change would move this as quickly as possible to the new system, as it's the only remaining regular hook not supported. On the other hand, this also seems like a chance to actually improve the API around it. The array keys and constants used here are hard to remember; I always have to look things up. Converting that into a class would be a big DX improvement, for example. I suppose we could still introduce such a class later, but it's also going to break more stuff (which might be OK, since directly invoking a hook isn't really an API that we provide BC for).
Drive-by review without actually using this module.
IMHO, bundle classes should be used by the module that "owns" a certain bundle. E.g. forum could be a use case in contrib, or book. And mostly in custom distributions and custom modules, because it's also a BC problem. And a custom module may customize it.
> - A extends B extends Node
This is then a valid scenario, for example a custom module that adds another method to a book bundle class, or a distribution providing a default and a specific project extending it.
> - A & B extend Node. But they dont extend eachother.
This however is not. This can't work. Both module A and module B expect their class and their methods to be there; one will be broken. This is similar to two modules replacing a service, where one of them is going to be broken: 🐛 Fatal error after installing the module (incompatibility with Context & Menu position module) Needs work. Either subtly, in that features just don't work because an instanceof check is FALSE, or with a fatal error.
A priority system won't solve that. In theory, a decorator pattern could be used, but properly using a decorator pattern is hard and complex, and it really only works if there is a decorator trait/base class that by default passes all methods through to the inner one; otherwise, adding a new method on the parent is going to break.
That's why this should IMHO not be supported, and we should fail early with an exception. You can still use an alter hook; that's on you then, we can't prevent that except in the specific alter hook.
Shouldn't require any extra steps, but I'm also not sure I fully understand your problem. I'm assuming you are using the default stable widget and the default entity reference revisions formatter to display paragraphs. If you are using custom code or views, then the problem is likely there. There is an issue in ERR about the views integration not working correctly. I would suggest you try it out on a plain Drupal install with a minimal setup and see if it works there.
With Hooks, we now have an implementation in core that doesn't use plugin discovery to find things; we could do something like that here as well? It could then also be inlined into EntityTypeBundleInfo, which would also make it clear that you're not meant to use this directly as an API.
The discovery should probably make sure that it's not possible to register multiple classes for the same bundle. That can also happen with the alter hook, where it's much harder to detect, but at least here we could?
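A sketch of what failing early in the discovery could look like (all array structure, variable and exception choices here are invented for illustration):

```php
// During bundle info collection: refuse a second bundle class for the
// same entity type / bundle pair instead of silently letting the last
// registration win.
if (isset($bundle_info[$entity_type_id][$bundle]['class'])
    && $bundle_info[$entity_type_id][$bundle]['class'] !== $class) {
  throw new \LogicException(sprintf(
    'Multiple bundle classes registered for %s bundle %s: %s and %s.',
    $entity_type_id,
    $bundle,
    $bundle_info[$entity_type_id][$bundle]['class'],
    $class,
  ));
}
$bundle_info[$entity_type_id][$bundle]['class'] = $class;
```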
What about subclasses? Let's say a contrib module defines a node type and adds a bundle class for it, and in a custom project you want to subclass and change that. I guess you still have the option to use the alter hook then and set it there.
I'm not sure about this.
I get the benefit to core merge requests, but to me as a contrib maintainer, the minor version it is deprecated in is important. To clarify: it's not the deprecation that's important, it's when it is safe to use the replacement.
Sometimes we deprecate stuff in favor of something that's already available. For example, the entity query access stuff has existed since 8.0, so people sometimes got confused and thought using it would require the minor version it was deprecated in, but it didn't.
But in most cases, the deprecation and the introduction of the replacement API happen in the same minor version. That minor version is an important factor in what minor version a certain contrib module requires, and in whether or not it's worth using the DeprecationHelper or custom code for something similar. Or just postponing it, for example 📌 Replace usages of deprecated RendererInterface::renderPlain() Active, which again also depends on when it will be removed.
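The renderPlain() case is a good example of how that plays out with DeprecationHelper (a sketch; I'm assuming here that the deprecating minor is 10.3 and that $renderer/$build are already available in the calling code):

```php
use Drupal\Component\Utility\DeprecationHelper;

// Call the replacement on new core and the deprecated method on old
// core, without triggering deprecation notices either way.
$html = DeprecationHelper::backwardsCompatibleCall(
  currentVersion: \Drupal::VERSION,
  deprecatedVersion: '10.3',
  currentCallable: fn () => $renderer->renderInIsolation($build),
  deprecatedCallable: fn () => $renderer->renderPlain($build),
);
```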
Paragraphs supports Content Moderation, make sure you are using an Entity Reference Revisions field. Only the host entity needs to have the workflow enabled.
I have not yet reviewed the changes since I provided feedback in #147/#148 but my opinion on this has not changed. I understand that many people are interested in this and use this patch, but for everyone who does use the patch, there are literally thousands of sites out there who don't. Those sites might have inactive redirects for some reason that they do _not_ expect to work and with this, their content might suddenly become inaccessible.
At this point, I have no plans to merge this change.
I proposed an alternative solution in #148 that guides users through creating a valid redirect when creating them. I haven't managed to find time to get back to that, but I'd be much more open to merging that instead. Sites that rely on this patch will then need an update path to convert aliases to the system path.
I think it would be useful to convert system_requirements(), by far the most complex implementation of this hook, to understand the implications of this.
It's a massive function (1500 lines), and one thing is that there is a large overlap between the different phases, and code is often shared. E.g. the PHP version is always reported for runtime, but all phases then complain if it doesn't match. Update and runtime can share code fairly easily, but install is separate.
That said, system_requirements() is also an extreme edge case, and all other implementations are far simpler. So maybe we should just split that differently anyway, move it into a component or something. It's really not module requirements; they are the requirements of Drupal as a whole. We could also consider multiple hook implementations to split it up, but that will then completely break the ability to use it with invoke(). There are also 20 calls to \Drupal::service() in there, although not all of them are to different services.
IMHO, if we just split those 1500 lines into 3x 800 lines or something, then we won't really gain that much and keeping those things in sync will be more work in the future.
We specifically implemented features to make this work in Drupal core, I don't know the details of your setup, but this is possible and tested. Make sure you're using entity reference revisions fields.
Paragraphs is fully D11-compatible; those are just documentation issues because the static parser doesn't understand which storage class is used. Just ignore this.
This is passing now.
I had to allow hook_hook_info() and hook_module_implements_alter(); they can't be new OOP hooks, but they need to run through the legacy processing to properly register and work. That means there are a few additional functions to process compared to what I wrote in #4.
@dww didn't follow up here on my comment about the name of this flag; I hope he isn't too unhappy if we stick with this.
With the follow-ups and my remarks in #31, I think this is now ready.
My proposal is that rector adds this to YOURMODULE.services.yml when doing the hook conversion, if it's not already there:
parameters:
  # Verify that all hooks have been converted, then set to true.
  YOURMODULE.hooks_converted: false
Or maybe even default to TRUE and change the comment slightly, but that's maybe too risky.
IMHO, this is the only way we'll get sensible adoption of this flag in contrib and custom code. Nobody will otherwise find it, actively look it up, and remember to add it.
Interesting, I didn't know that's possible. So yes, starting with 8001 is the best we can do for a 0 major version, I guess. It is very rarely used, so it shouldn't be a huge deal. I would probably hide that a bit and not start with this scenario; it should remain an exception, and Experience Builder very much is one: it's an experimental major new thing that's in development and expected to take months of work. Most projects, by the time they are released on drupal.org, are probably reasonably stable and usable and should IMHO in most cases start with 1.x and alpha/beta releases.
See also the related slack discussion recently: https://drupal.slack.com/archives/C1BMUQ9U6/p1732887634276909
Re #10, from that thread from me:
Yes, I think the core compatibility thing should be completely dropped; that doesn't mean anything anymore.
That dates back to when Drupal core major versions increased the minimum version number, but that is still at 8000 and cannot be increased, because all modules would then suddenly have invalid updates.
And lowering is also tricky, because Drupal stores that number as the last update for newly installed modules. So the easiest option is IMHO to document using 6 digits and just forget about that weird 8000 thing.
Self-correction on the last part: 5-6 digits, and we can forget about the 8000 thing except for 0.x major branches.
D12 will not increase the number because it cannot. And we also can't easily lower it without update functions that lower the initial value for currently installed modules, and modules could then only rely on that once they require at least D12 or whatever version would make that change.
The technical constraints are pretty simple: the first update must be at least 8001, and each following update must use a higher number (insert "Numbers must go brrr" meme). That's it.
IMHO, this is a documentation task to define a common standard within the current technical constraints, and the proposal makes sense to me. If [# #3106712] ever gets resolved and that minimum number requirement is removed, we can just update it, and you can then start with 1 or 1000 or whatever core will do.
There are also various similar issues for the locale module that were stuck for a long time, so we likely don't want to wait on them. I'd say we shouldn't wait unless the update module one gets into 11.1. We won't be able to completely skip it anyway, not until we replace/split hook_requirements(), that is.
Created a merge request. Haven't tested yet at all whether this works correctly, other than the reduced number of procedural hooks.
In my install profile, I counted 1040 registered procedural hooks. That is without 📌 Mark core modules as converted to OOP hooks Active, so it should go down by around 200 with that.
With this change, it's now 887, so that's 153 fewer functions stored in the container that need to be checked.
While testing, I noticed a strange thing with the regular expressions: they didn't catch some of the update functions. It turns out that with the magic regular expression, update functions in submodules that share a prefix with another module are somehow attributed to the wrong module/hook.
if (in_array($function, ['redirect_404_cron', 'redirect_404_update_8101'])) {
var_dump($matches);
}
results in:
array(7) {
[0]=>
string(17) "redirect_404_cron"
["function"]=>
string(17) "redirect_404_cron"
[1]=>
string(17) "redirect_404_cron"
["module"]=>
string(12) "redirect_404"
[2]=>
string(12) "redirect_404"
["hook"]=>
string(4) "cron"
[3]=>
string(4) "cron"
}
array(7) {
[0]=>
string(24) "redirect_404_update_8101"
["function"]=>
string(24) "redirect_404_update_8101"
[1]=>
string(24) "redirect_404_update_8101"
["module"]=>
string(8) "redirect"
[2]=>
string(8) "redirect"
["hook"]=>
string(15) "404_update_8101"
[3]=>
string(15) "404_update_8101"
}
So while the cron hook is correctly attributed to redirect_404 and not redirect, redirect_404_update_8101 is not.
I only realized what the problem is while writing this comment. $module_reg already attempts to filter out all update functions, but it does not account for the prefix/length thing, so it will simply match the above on the non-existent hook 404_update_8101. I pushed an update that adjusts the original regex, but that definitely needs to be reviewed by someone smarter than me, because that regex is well above my pay grade.
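A stripped-down reproduction of the prefix problem (the real regex in core is more involved; this just shows why "redirect" wins over "redirect_404" and produces the bogus hook above):

```php
// With alternation ordered so the shorter module name comes first,
// "redirect" matches and swallows part of the submodule name, leaving
// a bogus "404_update_8101" hook.
$regex = '/^(?<module>redirect|redirect_404)_(?<hook>.+)$/';
preg_match($regex, 'redirect_404_update_8101', $matches);
// $matches['module'] === 'redirect', $matches['hook'] === '404_update_8101'

// Ordering the alternatives longest-first fixes the attribution.
$regex = '/^(?<module>redirect_404|redirect)_(?<hook>.+)$/';
preg_match($regex, 'redirect_404_update_8101', $matches);
// $matches['module'] === 'redirect_404', $matches['hook'] === 'update_8101'
```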
While doing that, I also noticed that it also excludes preprocess functions, which means that a handful of weird hooks in contrib modules that start with preprocess will be broken, since their legacy hook implementations won't be found anymore. Might be OK, might not be. Should be a separate issue if so: https://git.drupalcode.org/search?group_id=2&scope=blobs&search=%22invok...
berdir → made their first commit to this issue’s fork.
I did some testing around the benefits of this.
While the speed improvements on the parsing side are not huge, they do considerably reduce the number of possibly-legacy-hook functions we have to collect and store:
$ ddev drush cr | wc -l
HEAD: 324
MR without StopProceduralHookScan: 219
MR: 122
While I'm still not a fan of StopProceduralHookScan, this shows that it saves us about 100 entries that we load on every request and have to carry around. I think it's an edge case and contrib should probably not bother with it, but the feature is in core and it's implemented, so I guess we might as well use it.
Out of those remaining 122 functions, 70 are for locale and 24 for update module. And we discussed in slack that we should implement the reverse of checkForProceduralOnlyHooks() to ignore all those functions that we know aren't proper hooks and are invoked differently (install, uninstall, hook_info, module_implements_alter, update functions), that should save us another 10-20 functions, at which point we'll be down to very, very few functions in this list for core.
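The "reverse" of checkForProceduralOnlyHooks() mentioned above could look roughly like this (a sketch with illustrative names, not actual core code): skip functions whose suffix after the module name is one of the known non-hook entry points, plus update functions, since those are all invoked through dedicated mechanisms.

```php
<?php

// Hypothetical filter: TRUE for procedural functions that are known not to
// be regular hooks (install, uninstall, hook_info, module_implements_alter,
// update functions), so they are never added to the legacy-hook BC list.
function is_known_non_hook(string $function, string $module): bool {
  $suffix = substr($function, strlen($module) + 1);
  $non_hooks = ['install', 'uninstall', 'hook_info', 'module_implements_alter'];
  if (in_array($suffix, $non_hooks, TRUE)) {
    return TRUE;
  }
  // Update and post-update functions are discovered separately.
  return (bool) preg_match('/^(update_\d+|post_update_\w+)$/', $suffix);
}

var_dump(is_known_non_hook('locale_install', 'locale'));     // true
var_dump(is_known_non_hook('system_update_8001', 'system')); // true
var_dump(is_known_non_hook('locale_cron', 'locale'));        // false
```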
About the name: to be honest, I don't care too much personally. I think no name will be perfect, and more importantly, the ship has pretty much sailed on that. 11.1.0 will be released this week, and then this is an API and we can't rename it. It's a temporary thing until Drupal 13. What I think we should do, and what is more important, is file a follow-up for rector to add this with a comment that explains it to converted modules; whether it should default to true or false is to be decided. Unless we do that, nobody will remember to add it, no matter the name.
To summarize, I can RTBC this if we have follow-ups for:
* locale and update module, either one or two separate issues
* reverse checkForProceduralOnlyHooks() to not add to the BC layer, might do that later if nobody beats me to it.
* Add this to rector as a suggestion
Looks good to me, hook_process_HOOK() was removed a long time ago and doesn't exist anymore.
This was open for 10 years and it was changed to a task, but it's IMHO unclear what the task would be.
Let's just close it?
Did some manual testing with the UI installer, rushing through the configure page as quickly as I can.
On Standard it works well. When I click on Reports => Status as quickly as I can, it tells me that cron ran 8-10s ago and the update status is there. Also, no message on admin/config. Based on the 8-10s timing, I suspect that cron already runs during the installer?
Umami is a bit slower because it has quite a bit to index. I don't see a message on admin/config, but the status report is then a bit confused: cron is reported as having run some 10s ago, but the update module reports that it has no data. Waiting a bit and refreshing then shows data. Maybe cron runs before update.module is enabled? I first thought that maybe the timestamp is set before cron actually runs, but that doesn't seem to be the case.
It's only a warning and only on the status page and you have to be fairly quick to get there, I feel like that's an acceptable tradeoff?
Should probably also consider the cost: anonymous page responses should be as fast as possible, and you should not use config that early, as that will cause a considerable performance degradation.
A possibility would be to register the service dynamically and inject the necessary configuration directly into it, with an option to enable it.
Multiple saves sounds like its own separate problem. The token stuff is a hack, but I thought we only save when necessary; if you see something else, check for an existing issue or open one. And your own submit could then maybe replace the default menu_ui one.
I don't think #10 can be done in core; maybe in a separate contrib module, where it can document what kind of limitations it has. The access check comparison is not trivial. I'm pretty sure the presave() + update check there doesn't really work like that; it makes a number of assumptions and likely relies on the static cache to already have those access results.
Merged.
MR is green, but the previous run had 5+ random fails, and the JS one I can reproduce locally (sometimes), but also on 11.x I get that error there.
Still, that's a lot of bad luck; wondering if the removed cron somehow increases the chance of those random fails.
I can't see files in submodules actually getting scanned.
When enabling navigation, this is the list of files it actually scans for that module:
core/modules/navigation/src/Hook/NavigationHooks.php
core/modules/navigation/navigation.install
core/modules/navigation/navigation.module
For paragraphs it's
modules/contrib/paragraphs/paragraphs.module
modules/contrib/paragraphs/paragraphs.install
Neither hook classes nor module files of submodules are being scanned. I used navigation because it has the navigation_top_bar module, which AFAIK is actually against the core policy of not having submodules that aren't tests.
They are traversed, but they are filtered out.
*Maybe* it would be faster to use a separate recursive iterator just on the Hooks namespace and a non-recursive one on the module folder, but I suspect that's a micro-optimization and will not result in measurable performance improvements.
@andy_w: menu_uncache_preprocess_block() will need to be more complex than that; there you just add one more tag. You will need to process the finished and bubbled-up result before it gets written into the cache to *remove* all the other cache tags. Will need to look into where exactly that needs to happen.
Just had a quick look at the MR, but as mentioned, this will not change much in practice for most sites with menu links pointing to nodes because of the bubbling of the access cacheability metadata.
To see this, enable render cache debug output ( https://www.drupal.org/node/3162480 → ). It's important to visit the page as an anonymous user, because users with bypass node access won't have the node tags. Then you get this output:
<!-- START RENDERER -->
<!-- CACHE-HIT: No -->
<!-- CACHE TAGS:
* block_view
* config:block.block.BLOCK_ID
* config:system.menu.main
* node:3
* node:4
* node:34
* node:30
-->
<!-- CACHE CONTEXTS:
* route.menu_active_trails:main
* languages:language_interface
* theme
* user.permissions
-->
<!-- CACHE KEYS:
* entity_view
* block
* BLOCK_ID
-->
<!-- CACHE MAX-AGE: -1 -->
<!-- PRE-BUBBLING CACHE TAGS:
* block_view
* config:block.block.BLOCK_ID
* config:system.menu.main
-->
<!-- PRE-BUBBLING CACHE CONTEXTS:
* route.menu_active_trails:main
* languages:language_interface
* theme
* user.permissions
-->
<!-- PRE-BUBBLING CACHE KEYS:
* entity_view
* block
* BLOCK_ID
-->
<!-- PRE-BUBBLING CACHE MAX-AGE: -1 -->
<!-- RENDERING TIME: 0.004674911 -->
As long as these node:ID cache tags are there, the only scenario this solves is manually going to the menu edit form and save without making a change, that's not that common and useful. What's common is saving a node with a referenced menu link, and that will invalidate the node cache tag and with that all menus and pages using that node.
Also, instead of this, I think it would be more interesting to not save the menu link from the node form in the first place if no change is detected. That seems more reliable than this approach, because there are scenarios where things are changed on the menu link entity that influence the menu but aren't stored in the menu tree, like additional fields.
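A minimal sketch of that idea (purely illustrative; the real menu_ui form values and menu link API differ): compare the submitted values against the existing link and skip the save when nothing changed.

```php
<?php

// Hypothetical helper: decide whether the menu link actually needs a save.
// The keys here are illustrative; in menu_ui the submitted values live in
// $form_state->getValue('menu') and the original values come from the
// existing menu link entity.
function example_menu_link_changed(array $original, array $submitted): bool {
  foreach (['title', 'description', 'weight', 'parent', 'expanded'] as $key) {
    if (($original[$key] ?? NULL) !== ($submitted[$key] ?? NULL)) {
      return TRUE;
    }
  }
  return FALSE;
}

$original = ['title' => 'About', 'weight' => 0, 'parent' => ''];

// Identical values: skip the save, nothing to invalidate.
var_dump(example_menu_link_changed($original, $original)); // false

// Changed title: save as before.
$submitted = ['title' => 'About us', 'weight' => 0, 'parent' => ''];
var_dump(example_menu_link_changed($original, $submitted)); // true
```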
> >=2.3.5 <3.0
Yes, that is exactly the same as ^2.3.5, just shorter.
Merged.
Changed the .gitlab-ci.yml to change DRUPAL_CORE variable instead of changing ref, then it updates to the latest D10 version.
Also fixed the version constraint. You almost never want to use >=, because that would also allow any larger version, including 3.0.0 and up. The correct thing to use here is ^, which allows patch and minor updates, while ~ would only allow patch updates.
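As a quick reference, this is how Composer interprets the three operators for this version (standard Composer constraint semantics):

```
>=2.3.5   allows 2.3.5 and anything above, including 3.0.0, 4.0.0, ...
^2.3.5    allows >=2.3.5 <3.0.0   (patch and minor updates)
~2.3.5    allows >=2.3.5 <2.4.0   (patch updates only)
```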
Yes, Drupal 10 is LTS and supported until 2026, for the same reason that SimpleSAMLphp doesn't want to switch earlier to Symfony 7: the Symfony LTS support cycle.
And no, you can't install it separately, not how this module is currently implemented. You can kind of install it without composer, but you're guaranteed to get into version conflicts like this recent issue: 🐛 twig conflict Active . You'll end up loading a mix of different conflicting versions of Twig/Symfony, exactly what composer is designed to prevent. Maybe it could be implemented differently, I don't know.
Yes, that was an important reason I went with this implementation. It works on any core version, a single update function and done; a dependency on a certain core version would be impossible to manage. It also doesn't require updates when you convert your code from legacy hooks to the new ones, or refactor and move things around.
With one important limitation: the new #Hook system allows multiple #Hook implementations, so you could have multiple crons. That does not work with ModuleHandler::invoke() and therefore also not with the implementation done here. So stick to one cron #Hook, or register them explicitly as separate jobs.
Maybe that will be expanded at some point, but I'd still rather do 📌 Support CronSubscribers Active .
See https://drupal.slack.com/archives/CGKLP028K/p1733405684090189:
The simplest way is to pin your gitlab template to the latest version prior to when we made the switch to D11, as per @fjgarlin's comment above. It is a one-line change, to alter 'ref':
include:
- project: $_GITLAB_TEMPLATES_REPO
ref: 1.5.x-latest
> Without categories does not work
This isn't a limitation of this issue, but of the module. There is currently simply no way to configure it for the no-categories mode, as that's the only thing it hooks into.
In that case, because you implement your own script, there is another, possibly even easier mode: use the eu_cookie_compliance settings to completely disable the google_tag scripts before consent is given:
modules/contrib/google_tag/js/gtag.js||google_tag/gtag
modules/contrib/google_tag/js/gtm.js||google_tag/gtm
modules/contrib/google_tag/js/gtag.ajax.js||google_tag/gtag.ajax
If you only use google tag manager for analytics, that might be OK, for other cases it might be too strict. But with the enforced consent V2 basic mode, the result is kinda the same.
I think it's because the validate jobs depend on the composer jobs, we'd need to change them to run on previous major as well.
Can figure out later, but will ask in Slack #gitlab if there's a documentation page.