- First commit to issue fork.
- @catch opened merge request.
- 🇨🇦Canada mandclu
I will also add that this seems to severely impact the intended reusability of the recipes within Drupal CMS. I can include one such recipe in my own recipe (tested successfully with drupal_cms_admin_ui), but my recipe fails to apply if I add another (tested by adding drupal_cms_anti_spam, which only adds another 5 dependencies, based on the command line output).
- 🇨🇦Canada mandclu
+1 on the need for this. I can install the Event Platform project (which has loads of nested dependencies) in less than 20 seconds via the Drupal UI or drush. If I implement a recipe that installs this project (and does nothing else, even forcing the configuration to load) then the recipe runner tries for several minutes and then inevitably fails.
- 🇩🇪Germany Fabianx
This was also loosely inspired by:
Issue #3463409 by mkalkbrenner: Parallel indexing using concurrent drush processes
which showed that parallel indexing is possible in general.
- @fabianx opened merge request.
- Issue created by @Fabianx
- Issue created by @catch
- 🇺🇸United States smustgrave
Sorry for the delay, the JavaScript test keeps failing even after 2 re-runs. :(
- 🇺🇸United States smustgrave
Thank you for creating this issue to improve Drupal.
We are working to decide if this task is still relevant to a currently supported version of Drupal. There hasn't been any discussion here for over 8 years, which suggests that this has either been implemented or is no longer relevant. Your thoughts on this will allow a decision to be made.
Since we need more information to move forward with this issue, the status is now Postponed (maintainer needs more info). If we don't receive additional information to help with the issue, it may be closed after three months.
Thanks!
- 🇺🇸United States smustgrave
Thank you for creating this issue to improve Drupal.
We are working to decide if this task is still relevant to a currently supported version of Drupal. There hasn't been any discussion here for over 8 years, which suggests that this has either been implemented or is no longer relevant. Your thoughts on this will allow a decision to be made.
Since we need more information to move forward with this issue, the status is now Postponed (maintainer needs more info). If we don't receive additional information to help with the issue, it may be closed after three months.
Thanks!
- 🇬🇷Greece bserem
New MR against 2.2.x; a new patch is attached for use with Composer, in the same state as the MR (from https://git.drupalcode.org/project/honeypot/-/merge_requests/60.patch).
- 🇬🇧United Kingdom catch
The idea here was to ensure that new core assets, like logos, icons, etc., are optimized, so it is something like a step to ensure that's not forgotten. I don't think we can specify particular tooling, since things change all the time.
- 🇺🇸United States phenaproxima Massachusetts
Update to Drupal core 11.2 beta (Active) is committed, so this is unblocked.
- @bserem opened merge request.
- 🇬🇷Greece bserem
bserem β changed the visibility of the branch 2820400-add-possibility-to to hidden.
- 🇬🇷Greece bserem
bserem β changed the visibility of the branch 2820400-use-js-timelimit to hidden.
- 🇬🇷Greece bserem
bserem β changed the visibility of the branch 2.1.x to hidden.
- 🇬🇷Greece bserem
Attaching new patch.
Notes:
- I would love to bump hook_update_N to 8201 (from 8105) to match version 2.2.x
- Can't update the MR, as the forks don't have 2.2.x and won't sync themselves
- Please add https://www.drupal.org/u/jaims-dev to the credits, as he helped with the new patch
- 🇳🇿New Zealand quietone
Adding the template used by the core gates.
What are the criteria to use?
Automatically closed - issue fixed for 2 weeks with no activity.
Automatically closed - issue fixed for 2 weeks with no activity.
- 🇺🇸United States smustgrave
Thank you for creating this issue to improve Drupal.
We are working to decide if this task is still relevant to a currently supported version of Drupal. There hasn't been any discussion here for over 8 years, which suggests that this has either been implemented or is no longer relevant. Your thoughts on this will allow a decision to be made.
Since we need more information to move forward with this issue, the status is now Postponed (maintainer needs more info). If we don't receive additional information to help with the issue, it may be closed after three months.
Thanks!
Automatically closed - issue fixed for 2 weeks with no activity.
Automatically closed - issue fixed for 2 weeks with no activity.
- 🇬🇧United Kingdom catch
Yeah, the gain would mostly be in a single web head + CLI situation, so mostly local environments and smaller hosting setups. As soon as you get to multiple web heads it's going to be marginal.
- 🇧🇪Belgium kristiaanvandeneynde Antwerp, Belgium
- when we retrieve an item from the fast cache, we check if the node uuid matches the node uuid we're on; if it does, then we ignore $last_write_timestamp only for the current node.
Imagine this set-up (with write-through on set):
- Node A writes item X to the persistent cache, we flag item X with the UUID for node A
- Node A immediately writes item X to the fast cache, also flagged with the UUID for node A
- We keep a last write timestamp per node
Now, item X should remain valid indefinitely unless its tags get invalidated, its max age expires, or it gets manually flushed. The first two scenarios are no troublemakers, as both the fast and persistent cache can deal with tags and max ages just fine. What has me worried is the manual clear.
If I trigger a manual clear on node B, that will clear the persistent cache used by all nodes, but only the fast cache of node B (unless otherwise set up). Now node B has to recalculate item X and stores it in the persistent cache as originating from node B's UUID. Cool.
But node A's fast cache hasn't been cleared and it still contains item X as originating from node A. In the meantime, node A's last write timestamp has not been updated. So now node A will happily keep serving a stale cache item because it has no clue that the underlying persistent item now originates from node B. With the current system, this cannot happen as we have one timestamp to rule them all.
This can be cured by making sure markAsOutdated(), when called from invalidate calls, also updates the last write timestamp of all nodes. That would, however, defeat the purpose of having a timestamp per node unless we differentiate between writes and invalidations when calling markAsOutdated().
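To make that concrete, here's a rough sketch of what such a differentiation could look like, assuming the per-node timestamp array keyed by node UUID from the proposal; the $isInvalidation flag and the $this->nodeUuid property are invented for illustration and are not existing ChainedFastBackend API:

// Hypothetical variant of markAsOutdated(): an ordinary write only bumps the
// current node's timestamp, while an invalidation/manual clear bumps every
// node's timestamp so no node keeps serving a stale fast cache item.
protected function markAsOutdated(bool $isInvalidation = FALSE): void {
  $now = round(microtime(TRUE), 3);
  $cache = $this->consistentBackend->get($this->lastWriteTimestampCacheId, TRUE);
  $timestamps = $cache ? $cache->data : [];
  if ($isInvalidation) {
    // Manual clears have to outdate the fast cache on every node.
    foreach ($timestamps as $uuid => $timestamp) {
      $timestamps[$uuid] = $now;
    }
  }
  // An ordinary write only needs to outdate this node's own entry.
  $timestamps[$this->nodeUuid] = $now;
  $this->consistentBackend->set($this->lastWriteTimestampCacheId, $timestamps);
}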
- we also have to compare against the max($timestamps) of all the other nodes though, just in case they've written more recently than the item was written.
Maybe that was your way around the above, but then I wonder if we still gain anything. Checking if any other node recently wrote something seems to undo what we're trying to achieve here. It sounds a lot like one centralized timestamp, with extra steps.
I think the big question is whether the better hit rate here would be worth diluting the grace period approach in #3526080: Reduce write contention to the fast backend in ChainedFastBackend.
We would need to bring back the write-through on save and delete, or at least delete the APCu item if that's better for locking and space usage; we couldn't just ignore it as is done in that issue.
The more I think about this, the more I'm wondering whether we should just pick one road and go down that one. Trying to combine the grace period from the other issue, dropping write-throughs in the process, seems rather incompatible with this one.
And given my thoughts above I'm not entirely sure how much benefit we could gain here. But that might very well be because I'm not fully picturing what you had in mind yet with the timestamps per node.
- 🇬🇧United Kingdom catch
Thinking about this more while working on Add a grace period to ChainedFastBackend when last_write_timestamp is ahead (Active).
For chained fast, we use the $last_write_timestamp to invalidate everything before the item was written. This is so that the cache remains consistent between multiple web heads, and between CLI and web heads too - let's call those 'nodes'. It's unfortunate that it's the same word as Drupal nodes, but my other idea was 'instances', which sounds like a class instance.
If we can identify which node a particular consistent cache item was written on, then we can check if it was on the current node or not. If it was written on the current node, then as long as we always write-through or refresh the local cache item, we can ignore $last_write_timestamp only on our server.
Something like:
- in the local cache - APCu - keep a special key which is a random UUID for the 'node'.
- store last_write_timestamp per-node, so an array like [$uuid1 => $timestamp, $uuid2 => $timestamp2].
- whenever we write to the consistent cache, we add the node uuid to the cache item
['chained_fast_uuid' => $uuid, 'data' => $data]
- we have to add this in on set and remove it on get for the caller, same structure for the fast cache. At the same time, we either have to write the item to the fast backend too, or at minimum delete it. This would undo one of the changes in Add a grace period to ChainedFastBackend when last_write_timestamp is ahead (Active), so it's a fine balance (a rough set() sketch follows this list).
- when we retrieve an item from the fast cache, we check if the node uuid matches the node uuid we're on; if it does, then we ignore $last_write_timestamp only for the current node.
- we also have to compare against the max($timestamps) of all the other nodes though, just in case they've written more recently than the item was written.
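A rough sketch of the set() side under these assumptions (the 'chained_fast_uuid' key, $this->nodeUuid and the per-node markAsOutdated() are made-up names for illustration, not existing API):

// Hypothetical set(): wrap the caller's data with the writing node's UUID
// before storing it, and write through to the fast backend so the local copy
// matches what this node just wrote to the consistent backend.
public function set($cid, $data, $expire = Cache::PERMANENT, array $tags = []) {
  $wrapped = [
    'chained_fast_uuid' => $this->nodeUuid,
    'data' => $data,
  ];
  $this->consistentBackend->set($cid, $wrapped, $expire, $tags);
  $this->fastBackend->set($cid, $wrapped, $expire, $tags);
  // Only this node's entry in the per-node timestamp array needs bumping.
  $this->markAsOutdated();
}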
On a single web-head set-up, with infrequent cache writes from the command line, this would allow $last_write_timestamp to be almost entirely ignored for web requests, which would mean setting an item to chained fast would no longer invalidate the fast backend.
With two web heads, if they both write to the persistent cache in the same second, then it wouldn't help much, but if one web head happens to write to state a minute later, then it would avoid invalidating all the other fast cache items written in the last minute.
With multiple web heads this would likely be no more effective than the current situation, but it also probably wouldn't make it any worse.
I think the big question is whether the better hit rate here would be worth diluting the grace period approach in Add a grace period to ChainedFastBackend when last_write_timestamp is ahead (Active).
We would need to bring back the write-through on save and delete, or at least delete the APCu item if that's better for locking and space usage; we couldn't just ignore it as is done in that issue.
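For the get() side, a sketch under the same assumptions (getLastWriteTimestamps() returning the per-node array and the wrapping key are hypothetical, and every item is assumed to have been stored via the wrapping set() above):

// Hypothetical get(): a fast backend hit written by this node only has to be
// newer than the most recent write from any other node; a hit written
// elsewhere still has to beat every node's last write timestamp.
public function get($cid, $allow_invalid = FALSE) {
  $timestamps = $this->getLastWriteTimestamps();
  $item = $this->fastBackend->get($cid, $allow_invalid);
  if ($item) {
    $written_here = ($item->data['chained_fast_uuid'] ?? NULL) === $this->nodeUuid;
    $relevant = $written_here ? array_diff_key($timestamps, [$this->nodeUuid => TRUE]) : $timestamps;
    if ($item->created > ($relevant ? max($relevant) : 0)) {
      // Unwrap before handing the item back to the caller.
      $item->data = $item->data['data'];
      return $item;
    }
  }
  // Otherwise fall back to the consistent backend and refresh the fast copy.
  $item = $this->consistentBackend->get($cid, $allow_invalid);
  if ($item) {
    $this->fastBackend->set($cid, $item->data, $item->expire);
    $item->data = $item->data['data'];
  }
  return $item;
}

On a single web head this would mean a web request's own writes never invalidate its fast backend; with several web heads it degrades towards the current behaviour, as described above.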
Automatically closed - issue fixed for 2 weeks with no activity.
- 🇺🇸United States smustgrave
Thank you for creating this issue to improve Drupal.
We are working to decide if this task is still relevant to a currently supported version of Drupal. There hasn't been any discussion here for over 8 years, which suggests that this has either been implemented or is no longer relevant. Your thoughts on this will allow a decision to be made.
Since we need more information to move forward with this issue, the status is now Postponed (maintainer needs more info). If we don't receive additional information to help with the issue, it may be closed after three months.
Thanks!