> Initially I loved this line of thought, but can we reliably do that? What if someone else invokes help? They won't be available.
Yes, I was wondering about that too: what if someone creates a better_help module in contrib that wants to use those hooks? The NodeHelpHooks change was more there to test the system. I think we want to add dedicated test coverage for this in a test module instead. Setting to needs work for that.
We could add a setting, that seems like an interesting idea, but I'm not sure it's actually worth it. A tiny number of sites will bother with that level of micro-optimization. I'd rather work on my other ideas to move the metadata/hook list "services" out of the container and experiment with more efficient caching strategies; that would benefit everyone without custom settings.
I'd prefer a separate, explicit settings key. Usually it doesn't, but hash_salt can change, with only fairly minor consequences, such as invalidating one-time-login hashes.
Oh, that's a good point. This might be why this was not done originally: you'd lose your data after a deployment, and that's obviously not a good idea.
I think we want something else then. We could do it like earlier patches and only use a prefix if explicitly set. But it would still be broken if you forget that, unlike everything else.
What if we just introduce a completely separate setting for queue? We can document that you must set this if you share your redis instance here: https://project.pages.drupalcode.org/redis/#queue-backend (generated automatically from the markdown files) and also in the example settings. Queue is opt-in and must be configured explicitly, so I think it's fair to expect people to read that. And sharing a redis instance is somewhat discouraged anyway.
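A minimal sketch of what that could look like in settings.php; the settings key name here is just a placeholder, the actual name would be decided in the issue:

// Hypothetical new setting, only needed when sharing a redis instance.
$settings['redis_queue_prefix'] = 'myproject_';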
The extra loop is not great, and we're again affecting implementations from a second location that needs a loop. We ran into this with the first pass of the ordering issue, if you recall.
What exactly do you consider the problem with the extra loop (technically two of them, but there will be very few class attributes)?
I cannot imagine that it is an issue in regards to performance. An extra loop with an instanceof check is going to be super, super fast compared to everything else that's going on here. I did consider sorting the attributes so that depends definitions are always first, but I think it's not worth it.
Adding it to Hook would not allow putting it on the class as I did on NodeSearchHooks, so we'd need to duplicate it on every single hook there. Also, I'm pretty sure even without a class, this would be a BC problem. See https://3v4l.org/31a3W, using a named parameter that doesn't exist is a fatal error.
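To illustrate that failure mode (hypothetical function, same behavior as the 3v4l example):

function example_hook(string $module) {}
// Fatal error: Unknown named parameter $order.
example_hook(module: 'node', order: 5);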
Rebased, obviously conflicted heavily, still a useful simplification.
As mentioned in 🐛 HookCollectorPass registers event listeners but they do not follow the required calling convention Active , hook_help() is an extremely common hook that is from an optional module. It would be a little bit tedious to add this to every single help hook; I wonder if it's worth it to special-case a few hooks like this one and optimize them out by default. If we end up with more examples like NodeHelpHooks then we might be able to skip dozens of services from being added to the container.
Started something here. Supports both class attributes (had to extend the file cache info a bit) and methods. Had to add another loop for that as we want to check HookDependsOn first.
Added it to NodeSearchHooks, which allows making the argument there non-optional, and also to NodeHelpHooks on the method; that's also sufficient to skip the class from being registered, as only classes with at least one identified hook are registered.
I don't see how we can explicitly deprecate cache.static and cache.backend.memory. The tag is needed or it's not actually a memory cache, and we need those settings to be injected.
I also checked the core.api.php cache documentation; it doesn't mention memory caches yet. I added a bit for the memory cache bin and also mentioned default_backend and cache.bin.memory.
Only a handful of contrib projects use cache.static, several of which were in the meantime refactored to use something else (old jsonapi and group versions). Most are just generated references in IDE/test helpers.
I think we should just move forward with the change record and then remove it.
I also changed KernelTestBase so we only alter defined cache bins and switch them over to cache.backend.memory instead of replacing the whole factory. Then memory caches work and can still be typed.
Yeah, the main issue is that cache.static is a tagged service, so it's injected into the cache invalidator service. We could remove that for testing purposes, but that does mean that cache tag invalidations wouldn't work there anymore.
You definitely need to do something, if only to update default config to not use this token and adjust tests that rely on it.
The node body field is going away as a standard/always existing field. The proposal in the token module would add a new token for a specific field, so that would be node:body:summary, but you can't assume it's named body. Plus, the text_with_summary field type will move out of core as well. The standard profile/recipes will switch to a separate teaser field, Drupal CMS will use something else. Drupal CMS can and does ship its own metatag configuration. Users who aren't using that will just have to set up the description metatag themselves.
Re #59: Yes, that's exactly what I meant, MemoryBackend is only meant for tests where we want to simulate a persistent cache and have the same behavior.
I deprecated cache.static and MemoryCacheInterface, let's see what this does.
cache.backend.memory should not be using MemoryBackend; that's only referenced by cache.backend.memory.memory - we could probably clean that up here: make cache.backend.memory.memory an alias and deprecate it, so we're not using the testing code at runtime.
I'm not sure. If we renamed it, then other cache bins that specify it as a default backend would also use the MemoryCache. Annoying, but it might be safer to deprecate and remove, and then rename and deprecate memory.memory if we care?
For now, I just deprecated it without renames.
Still pondering the way KernelTestBase alters the cache backend and thinking about a way to improve that. Maybe a custom factory that returns either MemoryBackend or MemoryCache depending on whether it's cache.bin or cache.bin.memory.
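Very rough sketch of that idea; the class is hypothetical and it assumes memory bins can be told apart by a naming convention:

use Drupal\Core\Cache\CacheFactoryInterface;
use Drupal\Core\Cache\MemoryBackend;
use Drupal\Core\Cache\MemoryCache\MemoryCache;

class TestCacheFactory implements CacheFactoryInterface {

  protected array $bins = [];

  public function get($bin) {
    // Memory bins keep real memory cache semantics, everything else gets
    // a MemoryBackend that simulates a persistent backend.
    return $this->bins[$bin] ??= str_ends_with($bin, '.memory')
      ? new MemoryCache(\Drupal::time())
      : new MemoryBackend(\Drupal::time());
  }

}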
I'm not talking about the Hook classes themselves. Yes, there are a ton of them, but they are actually services, they have DI and we call them as services, not just for hooks, but also increasingly for other types of callbacks, such as preprocess/form/batch callbacks. They do increase the container size, but there's value in that.
I'm talking about the HookImplementation value objects. They are just data, not services. They're only in the container because it's convenient, as the container is the one thing that's readily available to HookCollectorPass. I did propose a specific alternative to having this in the container, and I'll try to come up with a proof of concept for that when I have some time and energy.
Replying to a question from @nicxvan to keep the discussion accessible:
> I did some brief investigation early on but didn't go much deeper than I needed to to get the theme hook using this pattern
> Isn't the trade off some up front memory for faster initial execution
For me, it's two things.
a) Performance, specifically on internal page cache. The container is the one thing that every single request needs. Even if we only initialize a small number of services and invoke no hooks, we still need to load and unserialize the whole thing. Based on a quick blackfire profile on 11.x with umami, getCachedContainerDefinition() uses 27% of the total time, 50% of the total memory usage and 80% of the network of an internal page cache response. In other words, the container is 4x as large as the actual page cache entry. Time is split about 50/50 between the cache lookup and initializing the early bootstrap container (which includes connecting to the database); memory and network are basically all the actual container cache lookup. A smaller container directly translates to faster internal page cache responses. Those often make up quite a lot of requests, even on sites with external caching, as max-age is typically relatively low.
For dynamic page cache, it gets a bit more complicated. We definitely need some hooks; anonymous on umami I see about 22 of them, so in optimal cases we need <10% of the full data. I have some vague ideas on how to optimize things further with partial loading, but that's definitely a secondary priority. But it brings me to the second point.
b) We already went through several refactorings of how we store and use the hook info, and I have a gut feeling that it won't be the last. By storing this data in the "open" in the container, BC is much harder. Every time we change stuff we change constructors, change services and so on. The container with parameters, services and service closures is essentially the API and data storage. If a sole service (either internal to module handler or something else) is responsible for providing that information, then it allows us to refactor internals much more easily.
What I'd like to experiment with is what I mentioned in my previous comment: using keyvalue and a cache. That is similar to state, but state is fairly frequently invalidated and needs to be rebuilt, while this data should never change except during a container rebuild. Using services during container rebuild can be a bit tricky, but we have existing cases that do just that (the development settings), and storing fundamental data that Drupal needs in key value is also established, from state to installed entity field definitions, update numbers and so on.
My current idea would be a new collection, store each hook separately, and then we can initially load them all and put them in a single bootstrap cache entry so that it's fast-chained. The update could either be intelligent about it and only update changed entries, or we just wipe and do a new insert. No extra database/redis cache lookup would be needed then compared to now. And we can then experiment later to see if there's a way to cache it more intelligently.
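Rough sketch of the loading side of that idea; the collection and cache entry names are placeholders:

// Load all hook data in one go and keep it in a single bootstrap cache
// entry (fast-chained by default); keyvalue is only hit on a cache miss.
$cache = \Drupal::cache('bootstrap');
if ($cached = $cache->get('hook_implementations')) {
  $implementations = $cached->data;
}
else {
  $implementations = \Drupal::keyValue('hook_implementations')->getAll();
  $cache->set('hook_implementations', $implementations);
}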
Menu templates use a recursive block/function; I think that's the only example of that, so this has to be related to that?
(nothing is ever easy)
Still catching up on the discussion, so not sure which MR is the one to test. Based on a quick test, MR !12311 seems to increase the container size, not decrease it. On umami, from 762749 to 830427. For comparison, the container size on my minimal distribution is 1.3MB, so way larger still, and it will grow too. The other MR is about the same size as before, mostly because the service definitions are smaller, see below.
We now pass 350 service closures to the Module Handler, for relatively trivial "services", I assume that's the reason.
Bit of a braindump, as I go through what's going on here...
I always struggled to put my concerns around this in words and more importantly quantify it, but it just feels like a very bad idea to put this amount of data into the container. We did a ton of work to reduce memory and cache sizes, with cache collectors and similar things around theme registry, views data and so on. This feels like a major step backwards.
Umami now has 1100 service definitions. Then again, on the umami frontpage with disabled render caches, hookImplementationsList contains 193 entries, so a considerable number of those are initialized, at least in such a case. Way fewer with warm caches of course. It was always a single bin, but the old data structures were much smaller. I was thinking about moving this data into keyvalue with a fast chained cache or collector cache, but that's a pretty big change of course.
Trying to analyze a bit more, I created an array with all the hook.list.* "services" and their serialized length:
// $cache is the raw container cache entry fetched from the cache backend.
$lengths = array_map('strlen', array_filter($cache->data['services'], fn ($key) => str_starts_with($key, 'hook.list'), ARRAY_FILTER_USE_KEY));
For MR 12311, array_sum() of that is 282890, that's not including the array keys. The smallest are 554, and hook.list.help is the largest, with 12252. That's a very interesting example, because almost all modules have this, but help is an optional module that may very well be disabled. We still collect and load all this hook information on every request. The issue to declare hooks conditional on a module would help, as all hooks could then be flagged and skipped if help is not enabled, at the cost of extra processing on a cold cache.
For comparison, the size of a service closure is 111: strlen(serialize(unserialize($cache->data['services']['module_handler'])['arguments']->value[2]->value['help'])), so 5x smaller than even the smallest hook list, but it also comes with the extra overhead.
With MR !11980, the smallest are 247, so only twice as large as the closure. I understand that's because of not using ProceduralCall, so we don't need to serialize those objects. I wonder if we could build on that. What if we just store the raw data as a parameter, and then call HookImplementationsList::load() ourselves, instead of the indirection through a service closure? So we'd go back to a parameter, not services.
As you can see in the commit message, I configured credits, I just hadn't saved them. I think the UI needs better feedback on whether or not the credits have been saved. Maybe a warning message similar to various drag and drop interfaces, or possibly even make the commit message only visible after you save, so you can't miss it.
I don't think the reminder in the issue helps, because I don't look at that anymore after it's merged and closed; credits need to be configured and set before closing because of the commit message.
Took quite a lot of time to update my distribution to 11.x, lots of patches to rebase. But yeah, I see, there's no meaningful difference compared to 11.x; in both cases the serialized string is around 1.3MB (which is... not great). I need to have a closer look at this and the other issue, I can see that the things I'm seeing are already being discussed there.
It looks like ✨ Modernize Drupal.ajaxCommands.scrollTop() Active made this obsolete, but leaving open so we can test this in our project.
I haven't caught up with the discussion here, and even less so with 🐛 HookCollectorPass registers event listeners but they do not follow the required calling convention Active , but I have some concerns about the container size from what I saw in the theme OOP issue; all the extra objects and services being serialized separately aren't going to help. I'll try to do some tests on a real project with this.
That makes sense to avoid extra work to revert the changes, but shouldn't result in the error that people see.
There is a new 2.x branch now, let's just fix this there in a clear but breaking way, without workarounds, and do a change record that announces that you will need to either ensure that queues are empty or rename the keys manually or something.
People that need it now can patch it.
This would be interesting to have, but I think extending the guzzle client like that is problematic and not really supposed to be done, as it's marked as final; see the phpstan complaints.
Instead, we should look into whether this can be done using a guzzle middleware.
See \Drupal\announce_feed_test\AnnounceTestHttpClientMiddleware and \Drupal\Core\Test\HttpClientMiddleware\TestHttpClientMiddleware for examples on how it's possible to intercept requests and also responses.
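A minimal sketch of what such a middleware could look like; the class is hypothetical and modeled on the core examples above:

use Psr\Http\Message\RequestInterface;
use Psr\Http\Message\ResponseInterface;

class ExampleHttpClientMiddleware {

  // Called by the handler stack, wraps the next handler.
  public function __invoke() {
    return function (callable $handler) {
      return function (RequestInterface $request, array $options) use ($handler) {
        // Adjust the request here before it is sent.
        return $handler($request, $options)->then(function (ResponseInterface $response) {
          // Inspect or adjust the response here.
          return $response;
        });
      };
    };
  }

}

It would then be registered as a service with the http_client_middleware tag, like the core examples.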
And are you sharing that Redis instance between multiple projects or not? If not, then I would go with a full flush. (FWIW, I think that sharing it, especially between non-related projects, is problematic from a security perspective, as using the prefix is just convention and there are known bugs, such as queue currently not using the prefix.)
Also, if you are mixing this with persistent data (queues) then you'd lose data, both with a scan based approach and a flush.
The fetch data part is a one-time thing per key, and then it will be replaced with valid data. Cache tag based invalidations are the most common way to invalidate stuff for render-caching based things (which are by count/size the majority), so this will happen much more often than a full cache clear. A full cache clear should be a rare occurrence, typically on deployments only. Are you sure that's worth optimizing for?
I agree. Part of the cleanup would be possible without BC breaks, but I don't know your exact approach on things like removing properties, arguments and methods in a plugin base class. So it's probably easier to do that refactoring, which will not have a performance impact, in a separate issue.
I was able to work on this partially in the scope of a performance assessment on a client site; any further work would need to happen outside of that, and I can't say if and when I'll be able to do that.
It would also be very nice to have the already completed improvements and this in a release; it's currently hard to apply those changes as patches due to larger changes.
We can't store a hash in state. Redis is on a way lower level than that; Redis stores the container and state cache, so that's recursive. We could store it in redis directly, but that's an extra lookup per request.
The module already uses a per-bin flag to track deleteAll(); a cache clear does that for all bins. The implementation is absolutely consistent with Drupal. And with the improved TTL configuration (per bin) in the latest release, you can also guide Redis in first cleaning up data from less frequently used and larger bins such as page/dynamic_page_cache/render and store them for a shorter time than other bins.
All caches with a stable key that don't vary as much will replace existing keys and their stored data.
You can also vary the prefix on a deployment, the default prefix includes the deployment identifier.
I'm still struggling to understand use cases why you'd go through that trouble.
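For reference, varying the prefix per deployment can be as simple as the following in settings.php; the environment variable name is just an assumption about your deployment tooling:

// Cache keys change on every deployment, old entries simply become unused.
$settings['cache_prefix'] = 'myproject_' . getenv('DEPLOYMENT_IDENTIFIER');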
Going through some of the kernel test failures, reversed, because reasons. On the main MR this time.
* \Drupal\Tests\block\Kernel\BlockStorageUnitTest: this is just a stale $this->container problem. Storing entity storages in properties and reusing them is just not a good idea. Passes with Block::loadMultiple().
* core/tests/Drupal/KernelTests/Core/Theme/ThemeSettingsTest.php: same, stale $theme_handler object, need to fetch it again from $this->container (see the sketch after this list).
* Drupal\KernelTests\Core\Asset\LibraryDiscoveryIntegrationTest: same, $this->libraryDiscovery is stale. Container rebuilds are tricky. Have I mentioned recently that I dislike $this->container and doing "DI" in tests?
* \Drupal\Tests\block\Kernel\NewDefaultThemeBlocksTest: Yes, stale $theme_installer again, specifically stale injected dependencies. See how \Drupal\Core\Extension\ModuleInstaller::updateKernel re-injects dependencies into itself after the kernel rebuild. DI around container rebuilds is hard and messy.
* \Drupal\Tests\block\Kernel\ConfigActionsTest: This one looks like a new problem. Somehow the whole system seems to forget umami exists after the install. If I split the ThemeInstaller::install() call for umami out into a second call, then I get an "Unknown themes: umami" error. The first time we call install(), $theme_data has 84 entries. The second time, it just has 79. umami is a profile theme; it's not supposed to be possible to install it when not using umami as a profile. So my guess is that doing a proper theme install in a kernel test actually resets some test overrides that allow it to find things within profiles, and then that doesn't work anymore. So the test is kind of doing some illegal things. Maybe we can either update this to do a fake theme install only, since that's not what this is testing, or we switch to a different theme and don't use umami for this.
* \Drupal\Tests\editor\Kernel\EditorValidationTest: this seems to lose the public stream wrapper. I know that things around this can be very weird. It's failing because StreamWrapperManager::$wrappers is empty. getWrappers() only works if the all key is initialized, which happens in \Drupal\Core\StreamWrapper\StreamWrapperManager::registerWrapper(), so we're reinitializing the StreamWrapperManager and not calling register on it. For kernel tests, this is initially done through DrupalKernel::preHandle() in \Drupal\KernelTests\KernelTestBase::bootKernel(). I'm not sure why there aren't more failures like this. ModuleInstaller does call the register method. It feels like that should happen elsewhere, or maybe the stream wrapper manager should be more intelligent about registering if that hasn't been done yet?
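The common fix for the stale service problems above is to re-fetch from the container after anything that triggers a rebuild, instead of holding on to instances stored in properties, roughly:

// After a container rebuild (module/theme install etc.), re-fetch services
// instead of reusing instances stored in test class properties.
$theme_handler = $this->container->get('theme_handler');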
Hm. The code moved around, but is mostly unchanged, so I think there are still some useful bits here. And the split of EntityManager resulted in some weird splits and improper separation of concerns. For example, \Drupal\Core\Field\FieldDefinitionListener::onFieldDefinitionCreate clears the field map and updates the underlying key value store, and calls \Drupal\Core\Entity\EntityFieldManager::setFieldMap(), which doesn't actually persist the field map; that seems pretty weird.
I included my version of this into 📌 Request span name is too specific Needs review , I decided to put the logic directly into \Drupal\opentelemetry\OpentelemetryService::initRootSpan().
We could do the same for the request span; I think for now we could just skip it completely for CLI. We could refocus this issue on that once the other one is in? I didn't want to include that as well, mostly because it would conflict with another issue/patch I need.
I created a merge request, updated the request span name as well and also included a bit for CLI.
> First we can use Hook and throw an error if the module parameter or order parameter are set.
As discussed before I would like to deprecate the module parameter anyway. On order, no objections to start there.
> Do we want to move hooks execution to ThemeHandler
No opinion really. ThemeHandler and ModuleHandler have similar names but I think the functionality they provide is quite different in practice, so I don't think we need to unless it makes things simpler.
Did a first pass through the MR, added some thoughts.
This should help with the tests, I had misremembered the cache tag that's used for the bundle info. Also added tests for the other one.
The use of the entity type bundle info service should enable considerable simplification in SchedulerPluginBase. It will always return a default bundle, so the whole $isBundle stuff can go. getTypes() is unused now. Not new, but it makes no sense that $entity_type is passed to entityFormIdsByType(), because it's already available, as the plugin is for a specific entity type. The protected properties in \Drupal\scheduler\SchedulerPluginBase::entityTypeFormIds and \Drupal\scheduler\SchedulerPluginBase::entityFormIds() are likely pointless as well now; there are no direct calls to these methods. But all that could be a follow-up, and some of it is tricky with BC if you care about that for child plugin classes.
Confirmed that 🐛 Front-end theme styles can bleed into Navigation Active fixes this.
Let's avoid an awkward conflict with 📌 Modernize Drush commands Needs review and do it there.
Should also fix the "all all" reported by 🐛 Wrong Drush command example, uses all all Active
This has multiple conflicts and needs a rebase.
I will not add commerce specific code here. That can be published as its own project.
Not sure about the sync, as syncing kind of has different meanings. You can also skip aliases by setting the flag in a migration, so this feels like a behavior change. Do we really need that? It would just be an optimization for workspaces, no?
This doesn't really seem to be about this module. The command should work, pathauto also provides a bulk delete form.
All contributions must be done using merge requests.
Probably could add the alias type manager as well; it doesn't need to be an interface, but it's probably also not really needed. We can still do that elsewhere. Changing our own services to autowire can also be done separately.
Merged.
Moving or copying the position would be appreciated; even knowing where it is, it can be a lot of scrolling on larger issues, as you usually need it when commenting on/closing an issue. So having an extra link/button around there would be great.
This does too many different things for a single issue, especially with that title, and it seems to be failing tests.
Most of those things are not related to kernel tests at all.
I just merged the dedicated issue for PHP 8.4 nullable, so it will need a rebase anyway.
Annotations/Attributes should be its own issue, see review here.
Also, the token hooks should be a separate issue and, as commented, let's move them to OOP with BC in the .module file. See 📌 Convert hooks to OOP Active . Probably makes sense to do it all in one issue.
The MemoryCacheInterface is certainly weird and wrong. I think we should require that caches are explicitly specified with an attribute when using autowire. Changing it seems harder than just deprecating it. Let's do that and see if that causes any issues with autowired injections?
IMHO caches should be prefixed with cache, especially if they're meant to be an API. Not sure if we should touch entity.memory_cache and if it's worth standardizing that. Now that it's an LRU cache, I think there's very little to no reason to interact with it directly, and it's not really an API. Caches can't be non-public though, or they are not shared between services.
You said not a "real" cache; I think that's not a very meaningful description. It is a real cache when used deliberately, and IMHO a memory cache behaving like that is only useful when it's meant to be a drop-in replacement for a persistent cache, like we do in kernel tests for example. You shouldn't deliberately use cache.static for in-memory caching. Looking at core, the only use seems to be PermissionsHashGenerator, and we can just switch that over and get a tiny improvement out of it. Therefore, I'm considering deprecating cache.static here. I think memory cache is a much better name than static cache; static cache as a concept feels like a leftover of drupal_static() and static properties in functions, which is something we very rarely do these days and want to remove completely.
I want to work on a change record and possibly update some API docs as well.
Actually, the route is also not yet available in \Drupal\opentelemetry\EventSubscriber\RequestTraceEventSubscriber::createRequestSpan; that means the inner request span is just "GET (request)". This needs an update as well then, like the root span.
I tested this, it works well and is a big improvement, but needs to be a merge request.
There is also ✨ Set root trace name to command for CLI invocations Active , which this will certainly conflict with, but we can update either one once the other gets committed.
Created a start for that meta issue now, interested in your thoughts: 🌱 [meta] Improve tracing and available information Active
I've been thinking about this as well, I agree that this would be very useful, it's currently very hard to group/compare requests on the same route, such as commerce checkout.
I'm thinking about this and will open a meta/plan issue with some thoughts on what I think would make the current tracing feature more useful; I'll add this as a child issue.
This is a duplicate of 📌 Replace usages of deprecated RendererInterface::renderPlain() Active , which itself is also a duplicate of the OOP hooks issue. Instead of just moving hooks to .module, they should be fully converted; otherwise every patch that touches this code breaks twice.
I'll need more time to fully process your reply and think about it; some notes while reading:
That might be safer, in theory you could have something like per-domain user permission overrides, although thinking about that hurts my brain.
This is actually not that hard to pull off and a valid use case I've seen implemented using the access policy API.
I'm talking about using the config override system to do that, not the access policy API; if you have a specific provider, then that could define whatever cache contexts are needed. It might even still work with the config override, because we only skip specific cache contexts now.
If we optimize cache redirect following with my suggestion above, we might not need to filter out anything. Each VC lookup will go to every involved cache context only once then, regardless of how long the chain is.
I've been thinking about how to avoid doing the cache context resolution many times over in general. Not sure a per-lookup cache would help that much. It's not even really the cache contexts themselves that are slow, it's the whole overhead around processing them.
It is worth pointing out that a lot of that cost is actually within assert(); it makes a big difference when profiling whether asserts are off or not, so when profiling, it makes sense to work with production settings. The numbers I posted had it enabled; I will need to redo them without to get better data.
PermissionsHashGenerator is also pretty scary because it's almost recursive. It calls \Drupal\Core\Session\AccessPolicyProcessorInterface::processAccessPolicies(), which deals with cache context lookups, but it's also used by the permissions cache context. If someone has an access policy with the user.permissions cache context, then the whole thing would explode. And even without that, this makes it hard to understand profiling data, as it's essentially both calling into and being called from CacheContextsManager, so if you click around in the data, you're going in circles.
Need to think more about things like the external access provider. My first reaction was that this seems like an edge case, but we're working on things that mean one process != one request, so a static cache per user could and will build up over time to problematic levels. But we have that in many places and will need to use more LRU memory caches, I suppose.
An alternative idea I had was to add the user cache context, so that the cache contexts manager can filter it out automatically based on optimization rules, but I need to figure out how much faster that actually is.
Taking care of this in 📌 Convert hooks to OOP Active as I was touching the related code anyway.
Will need to check the failing tests.
No objections, but I'm not a full admin and can't do it.
> Do we want a 2.x branch that drops D9 and unsupported D10 versions?
No. D9 can be dropped whenever we feel like it, and I'm happy to require 10.3 or so, or even higher if useful; that does not require a major update.
Created a merge request with some suggestions.
With the change in the other issue, it's better, but it still shows up in blackfire. After converting to just a property, getPlugins() no longer shows up at all, and after converting getEntityFormIds() to a fast chained cache, scheduler_form_alter() is no longer visible in blackfire either.
Additionally, I changed the getTypes() call to use the bundle info service; that's faster (loading all config entities just to get their IDs is very expensive), and it would also work for entity types that don't use bundle config entities. That could also be skipped for this issue; I didn't do DI and so on.
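For reference, the bundle info lookup is roughly the following (without DI, as mentioned; $entity_type_id stands for the plugin's entity type):

$bundles = \Drupal::service('entity_type.bundle.info')->getBundleInfo($entity_type_id);
$types = array_keys($bundles);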
I see this was already improved recently in ✨ Implement static caching around SchedulerManager::getPlugins() Active but I'd go further and remove the persistent cache completely.
As an additional improvement, I'd add a check to scheduler_form_alter() to see if this is even an entity form ($form_state->getFormObject() instanceof EntityFormInterface), at least for the methods that expect it.
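Something like this at the top of scheduler_form_alter(), as a sketch:

$form_object = $form_state->getFormObject();
if (!$form_object instanceof \Drupal\Core\Entity\EntityFormInterface) {
  return;
}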
Well, that doesn't make sense and isn't a valid thing to do. I've never been able to reproduce that problem, but if something can't be fetched, then I think it should just be ignored and treated as 0, as if it doesn't exist.
If you get that frequently, then it sounds like you have an underlying infrastructure problem. Feel free to reach out to me directly so we can investigate on your specific project.
PHP_INT_MAX does not exist in the redis code base; if a cache tag hasn't been invalidated, then it is 0, not max int. Sounds like you are using some custom patches.
One comment.
I'm not so sure. Render caching is still broken for viewMultiple(), and especially if prepared and split up like this, I don't see the benefit. About the only place I can see a benefit would be loading all users together if multiple comments are rendered.
Because this is called from a route event subscriber that calls it on every route. See \Drupal\Core\EventSubscriber\EntityRouteAlterSubscriber::onRoutingRouteAlterSetType().
Note that this is different from the other issues. This is a fallback, and it does not exclusively support functions, it just also supports functions. While weird and not used in core except for one test, it's not something we have to remove.
So we have 3 options:
1. Deprecate just functions, that's fairly complex although possible as explained above.
2. Deprecate the whole functionality. As mentioned, in my experience it rarely works, is unpredictable and hard to debug, especially when combined with title and access callbacks. Deprecation in the same place, when we actually add the options, but unconditionally. This might actually be beneficial for performance, because this logic requires that on a router rebuild, we load and inspect every controller class/callback. On plain core, blackfire reports that this event subscriber uses about 100ms, and it's more the more routes you have.
3. Won't fix this. It's not blocking anything.
IMHO, I'd either do 2 or 3. We could see just how many routes are affected in core if we trigger a deprecation for that? Many entity routes are generic and if necessary, we could add the options explicitly if not done yet?
Merged.
Merged.
I fixed that by changing the type there, or we'd end up not using this at all, which has its own complications.
Unsure what's in scope for this and how to approach it exactly. I did think for a second about changing KernelTestBase, but that sounds like a dedicated issue that is likely not trivial and could also break stuff.
Ended up having to copy the test and add some version checks to account for the changing core behavior.
Re #9, not sure I follow, but I recommend you test the problem you're seeing with just core. If it happens there as well, open a core issue; if not, open a new token issue. Token should not affect the behavior at all, we just have to apply some pretty ugly workarounds so that saving new entities with menu link based tokens works as expected.
Same for other possible problems. This is now passing the existing test coverage, if there are more issues then please open new issues.
Merged.
I think we explicitly don't want to use Symfony there; if they didn't like Symfony Mailer in contrib, then they also won't like "Mailer Symfony". It's the underlying system, but the current direction is that we will provide our own API on top of it, just like Drupal as a whole is built on top of some Symfony components.
The name is much easier to change than the machine name, so there I think we could use Experimental; I'd go with "Experimental Mail(er) API". We can change it once it's no longer experimental.
We are also not renaming the Core component; there is no conflict here, and things like the "mailer DSN" are a concept of the Symfony Mailer component, not the module name.
Reverted the node service. It's causing problems during install and kernel tests because the type hint there is on MemoryCacheInterface, but during kernel tests, we replace the cache factory, so we always only get MemoryBackend instances out. Would require non-trivial changes to deal with that.
We have the factory already, and in #3035288: Deprecate theme_get_setting() → I suggested adding a generic memory cache bin instead of more one-off ones. We decided to pull that back out into a separate issue, so let's try to do this here. If necessary, we can spin off a child issue, but I hope we can just pull it through here.
I also reviewed the current cases of MemoryCache services:
* entity.memory_cache is now an LRU cache
* cache.asset_memory is part of a memory chain; it's non-public, so while technically you could inject it into something, we can possibly just remove it?
* node.view_all_nodes_memory_cache: kept this because the service stores non-namespaced things and does a deleteAll(). Could use a cache tag and prefix the IDs, but that seems out of scope?
* system.module_admin_links_memory_cache, replaced and deprecated.
Since alexpott detected my attempts to sneak in the cache.memory bin, we agreed in Slack to revive 🌱 [policy] Standardize how we implement in-memory caches Needs work and pull that bit out. It's an old issue, but we essentially already did what was originally proposed there, so it _should_ be simpler.
Addressed that.
The syntax in #13 is neat. FWIW, the following is almost identical in terms of complexity IMHO, and we could use it now already:
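// Lazily resolve the form builder from the injected service closure.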
protected function formBuilder(): FormBuilderInterface {
return ($this->formBuilderClosure)();
}
Only thing is that it might need more docs than a property?
Makes sense to wait a bit, I do plan to use this patch.
> What if... (really hypothetical scenario), a customer temporarily cannot access the order (for example, the order is "locked" and then unlocked), would the 403 page be cached and the save wouldn't invalidate that cached page?
For a regular Drupal site (excluding API/decoupled/... which is very different anyway), you always have a session. As soon as you have a session, pages are no longer cached externally or with internal page cache. The only thing that could possibly cache it is dynamic page cache.
Testing locally, dynamic page cache gives me a
x-drupal-dynamic-cache: UNCACHEABLE (poor cacheability)
This is because the response object has the user and session cache contexts. It is possible to configure this and cache such pages anyway, but it's not recommended, as you'd create a huge number of cached pages that will likely never be used again.
If you access an existing page, then that page includes cache tags. For example, if I manually request a checkout URL for an existing order with a known ID without the necessary session as an anonymous user, then I'm getting a 403, and that can also be cached in page cache. But that response contains the commerce_order:ID cache tag, so if the access denial is because of the order state, any change to that order will invalidate this 403 page. So that should all be fine.