Posted a first stab at this to see which tests fail. Also not sure about the approach and comments re Pathauto. Feels like this needs to be done in a hook or event or something, even if Pathauto won't be the one implementing it.
Going to commit this in its current shape. Can still make changes before release if the meta in core evolves regarding best practices.
I noticed the docs in doRender() were not adjusted:
// Set the bubbleable rendering metadata that has configurable defaults, if:
// - this is the root call, to ensure that the final render array definitely
// has these configurable defaults, even when no subtree is render cached.
// - this is a render cacheable subtree, to ensure that the cached data has
// the configurable defaults (which may affect the ID and invalidation).
- if ($is_root_call || isset($elements['#cache']['keys'])) {
+ if (isset($elements['#cache']['keys'])) {
They still mention that doRender() can be the root call, and from the MR it seems that doRender() should no longer be concerned with any of that. Tackle this in a tiny follow-up?
Having it check for the "access site reports" permission would also be an acceptable solution to me. The essence is that the UID 1 check needs to go and preferably be replaced with a permission check.
Declaring all your hooks as extensions of Hook would definitely solve this as we can then support a "source" property on the base Hook class that we can scan for as described in the IS. The only drawback I see is we'd be creating tons of one-off attributes. They would be contained within namespaces that have Hook in them, so I suppose that could be fine.
So maybe add an optional "source" property to Hook that we can specify in the Hook constructor as a named argument and then extensions of Hook can prefill it for you? The default value could be empty or "drupal" to indicate that it's a core hook.
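Roughly what I have in mind, sketch only: neither the "source" argument nor the PathautoHook subclass below exist anywhere yet, and I'm glossing over Hook's exact constructor signature.

use Drupal\Core\Hook\Attribute\Hook;

// Sketch only: the 'source' argument does not exist on Hook today and this
// subclass is purely hypothetical; it just illustrates the prefilling idea.
#[\Attribute(\Attribute::TARGET_METHOD | \Attribute::IS_REPEATABLE)]
class PathautoHook extends Hook {

  public function __construct(string $hook, string $method = '') {
    // The subclass prefills the source so the collector could skip
    // registering this implementation when pathauto is not installed.
    parent::__construct(hook: $hook, method: $method, source: 'pathauto');
  }

}

// Usage on a hook class method would then be:
// #[PathautoHook('pathauto_alias_alter')]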
Anyways, big +1 to the idea. We shouldn't add classes to the container if we know they will never be called.
The whole point of these issues is to make sure core no longer hard-codes anything based on the idea that user 1 is special. There are plans to remove user 1's special behavior in a future major Drupal release. So we really can't leave this test to rely on UID 1.
This access check explicitly allows only UID 1:
So that is what we need to change: Introduce a new permission and make the route check for said permission. Admins (including UID1) will automatically get the new permission, so a simple change record detailing that non-admins who want access need to be assigned the permission should do.
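In code, the change would look roughly like this (sketch only; the class and permission names are placeholders, not the ones this issue will end up using):

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Access\AccessResultInterface;
use Drupal\Core\Session\AccountInterface;

final class FooReportAccessCheck {

  public function access(AccountInterface $account): AccessResultInterface {
    // Before: only the super user was allowed.
    // return AccessResult::allowedIf((int) $account->id() === 1);

    // After: check a real permission. Admin roles (and therefore UID 1) get
    // it automatically, everyone else needs it assigned explicitly.
    return AccessResult::allowedIfHasPermission($account, 'access foo reports');
  }

}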
You can invalidate it, like you could already before, but I'm not even sure that's necessary or useful, as AFAIK nothing is added to that entity on load.
Right, was too quick in my previous comment. It's indeed not the entity data that triggers any changes, but the group relationship to said entity data. So we're all good on that front, you're right.
Regarding Pathauto, I see your point. Either I trigger it, or I fire an event and expect Pathauto to follow my event. Given how Pathauto has far more reported users, it might be more prudent for me to trigger Pathauto. I'll have to look into that.
Good insights, thanks.
Yes, the approach to be tried here now is the one that introduces the notion of a pseudo cache context.
🐛 Cache redirect error displayed when using 'view own unpublished content' permission alongside node grants Active landed, so the MR here can now be rewritten to remove the workaround committed there.
This is now actionable: We remove the behavior from v4 and use cache tag invalidation instead. When I first wrote Group, the entity layer still used an internal property cache and I was wary of the ramifications of only invalidating cache tags, so I went with the re-save.
Years later, I feel more confident in saying that if anything relies on a given entity, it should add said entity as a dependency and therefore the invalidation of said entity should trigger a recalculation on the consumer's end.
The only thing I'm slightly concerned about is hook_entity_update() and similar entity class methods, as people may use those to trigger certain things when an entity is saved, and said hook does not run on a simple cache tag invalidation. So if people do something there that does not somehow get cached (with the entity as a dependency), we might be in trouble.
Then again, if it persists across requests then it is cached and it should have the dependency. If it doesn't persist, then invalidating won't hurt because the next lookup of said calculations will run them anew with the entity now being recognized as grouped.
So let's go ahead and remove the old behavior in favor of cache tag invalidation.
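For reference, the swap is essentially this (sketch only, the function name is made up):

use Drupal\Core\Cache\Cache;
use Drupal\Core\Entity\EntityInterface;

function group_invalidate_grouped_entity(EntityInterface $entity): void {
  // Old behavior: re-save the entity that just became grouped, which also
  // fires hook_entity_update() and friends.
  // $entity->save();

  // New behavior: only invalidate its cache tags, so anything that listed the
  // entity as a cacheable dependency gets recalculated on the next lookup.
  Cache::invalidateTags($entity->getCacheTagsToInvalidate());
}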
Could look into SDC; is there an example of applying it to entities? I don't think I have anything else in my hook_theme. The permissions UI uses an inline template for some small things.
The top level menu thing we definitely need to look into for D11.
Just re-emphasizing #32 as Nic commented on the MR where I'm about to commit the hook conversion to Group 📌 Convert hooks to OOP hooks with attributes Active :
I will not be prefixing my hook classes with my module name. I will simply be naming them EntityHooks, UserHooks, and so on. It feels really wrong to completely ignore namespaces when naming classes.
My reasoning now (I may have not always done this in the past) for adding the word Group to some classes and not to others is this: Does it have to do with the concept of Group? Then I add it. E.g.: The concept of permissions within a group is specific and not to be confused with the concept of permissions in general. So I name those classes GroupPermissionFoo. Implementing hooks has nothing to do with Group itself, it's implementing third-party API, so I choose not to add Group to the class names.
If I CMD+O in PhpStorm for "UserHooks", I see two results, but if I type "g userho" I already see what I want. Even "group Hooks" gives me everything I need if I were to search for Group's hook implementations. I see no reason to make class names extremely long if we already have all the info we need. Keep in mind some modules have really long names.
All green for next minor, so will commit after one final round of reviewing the code and threads.
While I am still a believer that the final keyword has its place, there is no point in being stubborn for the sake of being stubborn if a perfectly suitable alternative like @final presents itself. So consider me on board with the latest proposal.
Two remarks, but I don't see why this MR can't be RTBC once they are discussed/resolved.
Leaving at NR as there are no actionable items, yet. Depends on where the discussion leads us.
Also, auto-placeholdering doesn't disable caching like max-age 0 does; it would still be cached on its own.
Which is why I added:
Then all we'd have to do is adjust RenderCache a bit to also check for that cache tag
But aside from that detail, I agree that making this rely on config might not be such a great idea. Your further clarifications I also tend to agree with. The suggested implementation here is one of "do not cache", but it doesn't mean that other implementations have to go down that same path.
Okay, in that spirit I'll go over the MR again, but I would cautiously agree with the approach. We need to be very aware of #108, though. Ideally those issues get fixed sooner rather than later so the changes introduced here don't wreak havoc there, even if it's technically not "our fault" if they do.
Just adding this: I like how clean the current MR looks and I can see the reasoning behind it. But flagging the LanguageBlock as "cache optional" does not cover the load here. Because we want it to not be cached, ever. So it's hardly optional. Furthermore, we also want to placeholder it. And I can see that (placeholder & no cache) being the case for more pieces of content in the future.
Hence my suggestion for something that would both indicate we do not want to cache it and we want to placeholder it.
Quoting option B:
B) Use the new \Drupal\Core\Block\BlockPluginInterface::createPlaceholder() to force this to be a placeholder and cache it separately. In our projects, we're seeing 10k+ variants of this block, very rarely reused. That's MR !9798.
However, that MR seems to also involve CacheOptionalInterface? And I'm not sure we want to be using that here.
My reasoning is this: CacheOptionalInterface was introduced with access policies, where multiple smaller pieces will be cached together as a whole. In that context the interface meant: "If all of the smaller pieces indicate they are insignificant, we need not cache the whole."
However, render arrays with cache keys (including the language switcher block) aren't necessarily like that. They get cached individually and as part of a whole (DPC). So using the interface there seems ambiguous as it's not clear in which caching layer they would be considered optional.
Now, with that in mind, LanguageBlock should already be auto-placeholdered by virtue of its max age being 0 and BlockViewBuilder making it use a lazy builder. So setting createPlaceholder() should effectively change nothing about the current situation. The block will be placeholdered and DPC will not cache it.
So it seems the goal is to rid LanguageBlock of its max-age of 0 and then make sure it doesn't poison the page. That part is not fully clear from the issue title and summary.
In that case, option B seems like the most straightforward fix, but we would indeed not want to individually cache all variations of LanguageBlock because it varies too much. So removing its max-age 0 is a good thing, as it would otherwise kill all page caching when combined with 🐛 [pp-3] Bubbling of elements' max-age to the page's headers and the page cache Postponed .
Then again, if we merely want to placeholder it and we don't want the placeholdered HTML to be cached, we have more options than to resort to max age 0. I'm wondering if this was ever explored. Let's look at renderer.config:
renderer.config:
  required_cache_contexts: ['languages:language_interface', 'theme', 'user.permissions']
  auto_placeholder_conditions:
    max-age: 0
    contexts: ['session', 'user']
    tags: []
  debug: false
As you can see, we could achieve the very same result of having LanguageBlock auto-placeholdered by providing a cache tag that indicates the desired behavior. Much like the special "CACHE_MISS_IF_UNCACHEABLE_HTTP_METHOD:" cache tag we have. This tag would also bubble up to FinishResponseSubscriber, but it would have zero effect on the max-age of the page.
So rather than use CacheOptionalInterface in a way that might confuse people, why don't we introduce a "SHOULD_AUTO_PLACEHOLDER_BUT_NOT_CACHE" cache tag and add it to LanguageBlock and to renderer.config.auto_placeholder_conditions.tags?
Then all we'd have to do is adjust RenderCache a bit to also check for that cache tag and we'd have a system that works for ALL render arrays (with a lazy builder) rather than having to figure out ways to apply an interface to them.
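To make that concrete, the block side of the proposal would look something like this (sketch only: the tag name is just the one suggested above and does not exist in core, and the same string would also need to be listed under auto_placeholder_conditions.tags in renderer.config):

// In LanguageBlock::build(), instead of max-age 0:
$build['#cache'] = [
  // Keep whatever contexts the block already declares.
  'contexts' => [/* existing contexts */],
  // Triggers auto-placeholdering via auto_placeholder_conditions.tags and
  // tells RenderCache to skip writing this subtree, without dragging the
  // page's max-age down to 0.
  'tags' => ['SHOULD_AUTO_PLACEHOLDER_BUT_NOT_CACHE'],
];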
Adding my vote as I've personally seen how much he's involved in organizing multiple events for many years now:
Joris Vercammen (borisson_)
longwave → credited kristiaanvandeneynde → .
So essentially you get 1 to 1.5 years to adjust to internal deprecations and 3 to 3.5 years for public API deprecations as of 10.3?
The latter seems a bit on the high end if you ask me, but I agree that an absolute minimum of 1 year is too short for contrib and client projects to adapt to disruptive changes. Ideally, we have a 2-year window as a hard cut-off, but trying to align that with major core release dates is tricky.
Essentially you'd have to deprecate things right out of the gate when a new major release is cut, which seems impossible unless you postponed deprecations from the last major. And that would essentially be the same as deprecating over 2-4 years, but with extra steps.
So with that in mind I suppose RTBC +1, even though I'm not fully a fan of keeping dead weight around for more than 3 years.
Thanks for confirming!
Tests are failing because I forgot to convert tokens.inc and views.inc; I also still need to tackle the submodules.
Hmm, reading more into it, this is exactly what OPT_IN_TEST_NEXT_MINOR is for.
https://git.drupalcode.org/project/gitlab_templates/-/blob/main/includes...
kristiaanvandeneynde → created an issue.
Having to put a manual stop sign in several files in your module seems a bit tedious and counter-intuitive. Nothing seems to happen if you don't, so I can imagine many developers simply not knowing about this memory optimization.
I like that you can flag your module as "I've converted all my hooks" in one place (services file), though. We can scan for said flag when procedural hooks are gone and then give a little nudge saying you can now delete the container param.
So from what little understanding I have from trying to implement hooks in Group and having a cursory read of HookCollectorPass, I tend to agree with Berdir that we can do better here and significantly reduce the fallout if someone were to forget to add the stop signs. Then we should evaluate as a whole whether we can do away with the stop signs altogether.
Pushed a commit that uses hook ordering and the new theme initial preprocess callbacks. The entire module file is gone now and I've bumped the minimum required version to 11.2. That last bit may make tests fail as we don't have an official 11.2 release yet.
kristiaanvandeneynde → changed the visibility of the branch 3522723-remove-plugin-definitions to hidden.
Never mind, this was fixed in HEAD by 📌 Bump PHPStan to version 2.0.0 Active
kristiaanvandeneynde → created an issue.
Thanks all!
Generally, the approach is to commit to the target branch first and then the other active branches, in the same issue. Please don't open new issues for porting MRs as it makes my huge issue queue even larger.
Will do final check and commit on Friday probably, maybe early next week.
All green on the Group issue when using 'main' branch of gitlab_templates :)
Okay, pushed a commit with those instructions.
I managed to make the tests go green using the workaround, so that seems to confirm my findings. Is there a way for me to test this on a specific Group issue MR?
kristiaanvandeneynde → created an issue.
We definitely want to keep the module name as part of it, finding the right EntityHooks file out of dozens in a project will be a massive pain.
I'm not sure about that. I tend to press the shortcut to find classes in PhpStorm and then type e.g.: "group Ent" and it would show me classes starting with Ent in the group namespace. I feel that adding your own module name to your classes is a bit redundant and leads to classes being harder to find. Namespaces are your friend here and I'm sure most IDEs support them in their search.
Gave some preliminary feedback on Slack; if it's not enough then we can quickly have a look in person in Leuven a week and a half from now. My only concern is that overly loose validation would allow config to contain values that the code does not handle well (or at all). So as long as that concern is taken care of, I'm fine with this.
Added follow-up here: 📌 Decide if we need to load #type defaults for #pre_render callbacks Active
So setting back to RTBC.
kristiaanvandeneynde → created an issue.
I just had another look and while I agree that part of ::getValidatedCachedRedirectChain() rehashes what we already commented elsewhere, I don't think it hurts here to be so verbose. VariationCache is becoming quite complex and the comments may help understand it.
As for the "you can" vs "See xy", I'll leave that up to committers to split hairs over :) I'm also not a native speaker but I try to do my best to write proper English in my comments, yet might fail sometimes.
If it's all the same to you, I'll leave the MR as-is now.
We initially counted all database queries in performance tests, but this caused random test failures with the chained fast backend because it uses timestamps for whether an item is valid, so millisecond rounding issues can result in an item being valid or not. For me personally I think the current split by cache bins is good - we know implicitly which bins use chained fast or not.
Right, almost forgot about how annoying we were to everyone else with our performance tests 😅 Agreed that we can leave this as is.
I'll see if I can comb over the MR one more time to find places to improve the comments. VariationCache has become a lot more complex with the addition of getMultiple() and the internal cache now, so maybe having this many comments isn't a bad thing?
Just wanted to add for posterity that this is all a slightly convoluted alternative to making the render cache a chained backend with a memory cache in front. However, that would have several drawbacks:
- We have to use MemoryBackend rather than MemoryCache or otherwise the render cache would no longer be a true cache. MemoryBackend is slow due to all the (un)serializing
- We might not see any benefit in our performance tests, or even worse performance, because we don't differentiate between DB gets and memory gets. Maybe we should?
- It would needlessly check all cache redirects for invalidation, even though they don't need to be invalidated. For now... some might if we land PseudoCacheContext from 🐛 Cache redirect error displayed when using 'view own unpublished content' permission alongside node grants Active
So all in all, given how crucial the render VariationCache is to a page's performance, I think it's a good thing we came up with a custom solution here.
That reduced the cache GETs by 3, probably fixing what you found in #35
Pushing something wild.
Rather than a simple "is the entire chain still valid", I'm swapping it out with "give me the part of the chain that is still valid". This should fix the false invalid result on chains that led to a cache HIT before, but also immediately optimize scenarios where an entire chain was deemed invalid if cache contexts changed value mid-request, even though we may have been able to reuse the start of the chain.
Oh, right. I will amend that, good catch!
But because we don't persist the cache hit in the $chain, this will never be TRUE for any lookups that were a HIT, it only works for misses.
That was by design, though, and discussed previously. We cannot persist the cache HIT as it may get invalidated and our redirect chain cache would have no clue that it did. Tests were even failing at first because I did store cache hits in the cached redirect chain.
Off the top of my head: Don't grants also not apply if an access hook already gave a definitive answer?
Just noticed, fixed it.
catch → credited kristiaanvandeneynde → .
Done
It is. It's the new API that we introduced here.
The CachedStrategy itself isn't that CR-worthy (IMO) as it is merely an optimization.
Sure why not
Looks great to be honest. Comment is also very clear.
The MR you opened still has the following going for it:
1 and 3 will still lead to a VC error for anonymous people with the permission while grant implementations exist because, depending on the order, we will either try to overwrite a redirect for 'node_grants:view' with 'user.roles:authenticated' or vice versa. Anonymous people shouldn't get the permission, but still...
It also means we check the grants cache context where it doesn't apply. But that's only for one out of many configurations of status vs owner vs permissions. So I'd be able to live with that as collateral damage.
However, this is the first time we encountered this bug and we cannot know whether we might encounter it more. We may not always have the luxury of adding an unrelated cache context just to fix cache redirect collisions. So a proper solution is still needed to be safe and that is what I was trying to come up with.
I'd say the following:
- Are you displaying something? --> dependency
- Are you varying based on something? --> cache context if we know the (global) source of something, a pseudo cache context if we don't
More details and edge cases to be added, but that would be the gist of it.
So take the status field for example (a rough sketch follows after this list):
- If it means you show the node background color in pink or not, then IMO that's not a variation and you can simply add the node as a dependency.
- If you vary access based on the status, we do not necessarily care about what will be output in that context. We do care about the status as part of a decision tree on what to return. So here it's a variation.
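Rough illustration of the difference (names are hypothetical, this isn't code from the MR):

use Drupal\Core\Cache\CacheableMetadata;

// Display case: the status only changes what gets rendered, so the node is a
// plain cacheable dependency; saving the node invalidates the cached output.
$build = ['#markup' => $node->isPublished() ? 'Pink background' : 'No background'];
CacheableMetadata::createFromRenderArray($build)
  ->addCacheableDependency($node)
  ->applyTo($build);

// Variation case: the status decides which branch of an access decision we
// take, so the branch itself has to be represented as a cache context, or as
// the pseudo cache context discussed below when no global context exists.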
Just added the cache context and value object for review. Still need to implement in VC and the node access control handler as proof-of-concept.
Could this be called CacheableDependencyCacheContext - a cache context that varies by its cache dependencies?
Sure, we can name it whatever makes most sense.
But it's important to state that it does not vary by the dependency; it's just that we have a dependency that causes variations which we cannot represent with a regular cache context, so we need to be able to invalidate a part of the redirect chain to reflect the dependency.
Is there a foreseeable situation where someone might want to make a similar cache context that also returns something?
I don't see a use case where the cache context would return any value, really. It will essentially serve as a substitute for a redirect chain link that we cannot otherwise declare. By virtue of that it does not make sense to have it return anything other than a fixed value (like 1).
The upside of the above is that the DX isn't nearly as bad.
It's easy to document PseudoCacheContext as:
"When you need to vary based on a cacheable dependency's property but you have no cache context to represent that, perhaps because you don't know where the dependency came from. In that case, add the dependency to a PseudoCacheContext and pass in the PseudoCacheContext as a cacheable dependency of your return value."
But with clearer language than what I could come up with just now.
Okay so moving on from my previous comment, I just thought of something that could make pseudo cache contexts work.
Imagine a value object PseudoCacheContext (sketched after the list below) that uses CacheableDependencyTrait, but:
- On any method that can add contexts it throws an exception so only cache tags and max age are allowed
- Its getters for tags and max age would return "nothing" ([] and -1)
- Its getCacheContexts() method sorts the tags and max-age (for better hit ratio) and puts that into a 'pseudo:some_sorted_string' cache context
- Said cache context would always return 1 (what @catch suggested before)
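Very rough sketch of that value object; nothing here exists in core, the class name, the 'pseudo:' prefix and the context ID format are just the idea from the list above, and I've left out the exception-throwing setters for brevity:

use Drupal\Core\Cache\Cache;
use Drupal\Core\Cache\CacheableDependencyInterface;

final class PseudoCacheContext implements CacheableDependencyInterface {

  public function __construct(
    private array $tags,
    private int $maxAge,
  ) {}

  public function getCacheTags(): array {
    // Report nothing: the dependency's tags only live inside the context ID.
    return [];
  }

  public function getCacheMaxAge(): int {
    // Report nothing (i.e. -1 / permanent) for the same reason.
    return Cache::PERMANENT;
  }

  public function getCacheContexts(): array {
    // Fold the sorted tags and the max-age into a single pseudo context ID so
    // identical dependencies always produce the same string. The matching
    // cache context service would simply return '1' for any such ID.
    $tags = $this->tags;
    sort($tags);
    return ['pseudo:' . implode(',', $tags) . ';' . $this->maxAge];
  }

}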
Now in VariationCache, when we detect such a context it will tag the next chain item after the context with said metadata.
Now we could take the example of my previous comment and tell VariationCache exactly which redirect needs to be tagged. So no matter what redirect the pseudo-context gets added to, that's where we flag the next redirect for receiving the node cache tag. So from the example above, no matter if the chain looks like:
<INIT> --> AXZpseudo --> <CACHED DATA>
<INIT> --> ABpseudo --> <CACHED DATA>
<INIT> --> A --> pseudo --> XZ --> <CACHED DATA>
<INIT> --> A --> pseudo --> X --> Y --> <CACHED DATA>
<INIT> --> A --> pseudo --> X --> Z --> <CACHED DATA>
<INIT> --> A --> pseudo --> B --> <CACHED DATA>
We would always tag the one after the pseudo context with the node status.
The offending code in NACH would then become:
protected function checkViewAccess(NodeInterface $node, AccountInterface $account, CacheableMetadata $cacheability): ?AccessResultInterface {
  // NO LONGER NEEDED!
  // $cacheability->addCacheableDependency($node);

  // We do not know where the node comes from, but we have to vary by its
  // status. We therefore create a pseudo cache context that gets invalidated
  // whenever the node is invalidated.
  $pseudo_context = new PseudoCacheContext($node->getCacheTags(), $node->getCacheMaxAge());
  $cacheability->addCacheableDependency($pseudo_context);

  if ($node->isPublished()) {
    return NULL;
  }

  $cacheability->addCacheContexts(['user.permissions']);
This feels like it could work with minimal effort on changing VariationCache. The only drawback is that the pseudo-context will in some cases introduce an extra redirect, but that would be the case anyway if we had a proper "node status cache context".
I've given this some more thought and I don't see "hinting" at what to tag a redirect with working.
Say we have a scenario where we have three critical decision points:
- First one adds cache context A
- Second one adds B when published or X when unpublished and a pseudo-context (N:1) to indicate we need to tag with node 1
- Third one in the X branch can add Y or Z depending on value of X
Ideally, we want the node cache tag to be added to the redirect for B or X, but how do we know where to add it?
We could come to a cold cache with AB(N:1), AXY(N:1) or AXZ(N:1) and that would be one big fat redirect pointed to by the initial cache contexts (). So the chain would, for example, be:
<INIT> --> AB --> <CACHED DATA>
Now invalidating A whenever the node changes would be silly, but for the sake of argument let's say we write the tag at the last redirect. If we now change the status, the whole chain is flushed, but no real damage is done. However, if we visit a second node page that triggers XY, the redirect path would be rewritten to:
<INIT> --> A --> XY --> <CACHED DATA>
At this point, tagging the last redirect would give us what we want. Only XY and the cached data get flushed when the node status changes and we keep redirect A. But if we now visit a third node that triggers XZ, we get:
<INIT> --> A --> X --> Z --> <CACHED DATA>
Now we're in a pickle again. Because we need to tag X with the node cache tag, but how do we know that? When we get to the cache we only see AXZ(N:1), we do not know which context was added in which order.
So hinting at what to tag a redirect with would only work if we had a chronological order of when cache contexts were added, and I still think it would be really poor DX to expect people to provide pseudo-contexts whenever they vary by something that is not part of the global context.
But that would require a complete rewrite of our cacheable metadata add and merge methods.
So moving on to another option: Source detection.
If we could tell NACH where the node that's being passed in is coming from, we could add the cache context for whatever global source we used to get the node. In this case it could be route.args:node or something. An example of this is the route.group cache context in Group, but a more generic version of it.
But that would require some sort of service where we can pass in an object and get a source in return. And if the node was actually loaded at random and passed into NACH, it would still break because we'd have no cache context to represent the status check.
So that left us with one final solution: A cache context to represent status checks. I am really not a fan of this either because it's just plain dirty. The node in question may be from a global source, but we don't know that. Falling back to loading the node without context is just asking for performance issues and poor cache hit ratio.
This was also touched on in the issue mentioned in #22: To add insult to injury, we had a notion of "current user" vs "any user" being discussed there and that could lead to poor cacheability all around.
Now, I may have got a solution but I'll write that down in a separate comment. This whole comment was to outline why we're stuck.
Did you delete the MigrateControllerTest message by mistake?
I think that's the final one to open a child issue for, but it's probably a bit more complex to fix as I recall it involves changing JavaScript.
Oh wow, for a second there my brain processed the assert as an if-statement. Left one more suggestion, but that can be fixed upon commit.
It is not possible to do this, you can ignore placeholders, you can't remove them.
If CachedStrategy does not find any hits, it will also return an empty array. Isn't that the same as what the test strategy did? BigPipeStrategy can also return an empty array. The only difference I see is that the test did not have SingleFlushStrategy as a fallback, which ends up returning all placeholders anyway.
Might be my sickness talking but if I read ChainedPlaceholderStrategy::processPlaceholders() correctly, we reduce the set of $placeholders whenever a strategy returned some of them back. At the end of the loop we want $placeholders to be empty, no? So does that not mean that the check has to be !empty() to figure out that something was, in fact, not replaced?
ChainedPlaceholderStrategyTest is choking on the new assert, I think because of a small mistake.
Americanizzze the code comments.
😂
Dug up the ::class docs for people stumbling upon this issue: https://www.php.net/manual/en/language.oop5.basic.php#language.oop5.basi...
Found some minor things, but looks good to me otherwise.
Re #10 at first it did that too for me but then when I refreshed it seemed fine. Must have had a hiccup in GitLab on my end, wouldn't set it to RTBC if it had a garbled history like it initially did.
Having said that, MR still looks great to me so RTBC +1.
Setting back to NR and opting out of bot to reduce noise.
@berdir, I don't really mind resolving merge conflicts like these, PhpStorm has a nice tool for it. So whichever issue goes in first, I'm not too fussed about having to resolve a merge conflict on the others. Can ping me to do it if you want.
but if the former returns forbidden can the latter override that decision? It looks like AccessResult::orIf would prevent that?
Returning Forbidden is what I meant by saying "subtract from". The default is neutral and then people can explicitly allow or forbid access; how is that not adding to or subtracting from?
If that is the case, then why would a query be any different at all?
Queries don't have the luxury of running code on them. So if one bad actor messes up the query, everyone else is stuck with that outcome. A processor in between could prevent a lot of that damage.
Providing a mechanism (a hook?) to either enable or disable an existing permission in core would fix the same problem without having to express intent at all, which is why I'm talking about the design decisions.
You are literally describing the Access Policy API. Which is one piece of this puzzle, but query access runs after permission checks.
Okay so I feel like this thread is seriously getting derailed, even if there was no intention to do so.
So I'll answer one more time, but after that I don't think we should still be discussing the design decisions we made in the past regarding how we should allow people to alter other modules' access. Even with the most recent addition to our access layer (Access Policy API) we allowed modules to both add to and subtract from passed in access logic. The goal is not to change that, so why are we discussing it?
Could you expand on this? Why would it be a ball ache and for whom? A lot of systems have access controls like this so I'm confused why it would be such a problem?
If you want a sports editor to only be able to post articles to the sports section of your news website, currently all you need to do is assign them the "editor" role in the sports section. If we take one of your suggestions, we would have to make them an editor on the entire website and then make sure they can't do anything in fashion, politics, economics, etc. which is a lot of extra work.
It would also mean we break with everything Drupal users have grown to expect. We already have an issue floating around somewhere about the massive mindfuck we have when it comes to neutral access results. The route access checks, entity access checks and grants system all treat a neutral result differently.
Now on topic: When you build an entity query, we know that the conditions are filters because access conditions should go into the access check (= why we have the accessCheck() method). For Views it should be the same thing.
So if we build an API that takes an existing query and assumes that all conditions are filters, we could very well create a system that gathers the access logic and allows for easy alteration. Only after everything is processed do we add it all to the query. This would solve all of the problems I explained above because it would be crystal clear whether a status check is a filter or an access rule.
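To illustrate the filter vs. access distinction with today's entity query API (this is just existing core usage, not the new alter API I'm describing):

// Filter: this listing only wants published nodes, regardless of who looks.
$query = \Drupal::entityQuery('node')
  ->condition('status', 1)
  // Access: let the access layer (grants, policies, ...) restrict the result
  // to what the current user may see; never emulate this with conditions.
  ->accessCheck(TRUE);
$nids = $query->execute();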
I'd like to pursue this route when I have a bit more time on my hands.
If we really want to allow modules to loosen permissions (which I don't think is a good idea, but fine)
But it is a good idea? Even node grants supported that for viewing unpublished nodes.
@kindutch put the concept behind this all very well. Thanks for that!
It all ties into the access policy API's concept of scopes and identifiers. One scope should not be concerned with another. It's perfectly valid to not have node edit access in the global scope, but to have said access in a specific domain or group scope. It would be an absolute ball ache to achieve per-domain editor access if it meant you first had to grant general node edit access for the entire website.
Currently it shows 2 fewer cache gets in 'render' after the getMultiple support.
Re #25, I'm fine with either way we commit this. I consider this issue "done" now and the results are amazing. Also they're actual DB gets, not memory gets, so the impact is even bigger.
I'm also considering creating an issue where we can differentiate between DB and memory gets.
Not sure why it's failing on QuickStartTest and RecipeQuickStartTest
Felt useless while sick on the couch, added back support for ::getMultiple(). Back to rest mode now.
I think I've got a clean solution for ::getMultiple() but it'll have to wait a few more days. Sickness got the best of me, tried to commit today but my head is not functioning and I can't guarantee that whatever I commit today is airtight.
Okay, all green :) Now for the final hurdle...
Fixed final test fail locally and the other went green locally. So if all goes green we can focus on reintroducing the static cache in getMultiple().
Pushed some fixes already, you can see the pattern is exactly like how we use refreshVariables()
Yeah I got a fix, but got something urgent to attend to.
I think NodeTest might be failing for the same reasons we need refreshVariablesTrait. One request has a MISS, the other makes it so something is cached, but the cache in the test's thread still thinks there's a MISS and returns as such.
That's looking a lot better already. I suggest we fix these test fails and then introduce the same logic in getMultiple().
Updated the code, we still get good results on performance tests. Just 3 cache gets more than the initial improvement.
I also temporarily reverted the getMultiple() changes so we can first see which tests fail on the single changes. Easier to debug that way.
I think we can do one better in getRedirectChain(). Rather than not setting a cache on a cache HIT, we can still store the path leading up to the hit and reuse that next time we construct a redirect chain. At least that way we can skip requesting all the redirects again.
Refresh status change...
LOL, @berdir beat me to it. I lost sleep last night thinking we're caching too much on cache HITS because we know redirects are permanent but the end of the chain on a hit is not.
So either we switch to a MemoryCache for the redirect chain, which would turn this whole thing into a no-op and a glorified chained backend or we just make sure we pop a cache HIT off the chain and store only the parts leading up to it.
Let's see what testbot has to say.
Spin-offs from recent discussion:
kristiaanvandeneynde → created an issue. See original summary → .
Will fail tests, but that's the idea. If it's only performance tests that fail, we have no collateral damage.
kristiaanvandeneynde → created an issue.
LGTM