- 🇬🇧United Kingdom Driskell
For 3.5 the patch just needs adjusting into an MR, and the DrushQueueWorkProcessor change needs moving to the main module, as it's no longer in the purge_drush submodule.
I'm looking at creating a plugin that can perform invalidations on each machine in a cluster. I figured it would be useful to utilise the queueing system and simply hide the fact that claimItem() on each machine returns different items, specific to the invalidations needed on that machine (in my case, Varnish invalidations across a fleet). So rather than implementing an external queue, I'm using Purge's.
I ran into an unexpected issue, though. The PurgersService has a processing lock, so even though I'm running my own processor I cannot actually run it in parallel: only one processor can run at any one time.
Is there a specific design consideration behind the lock in the PurgersService? What are the thoughts on a small patch adding a new plugin definition key, so that a processor can request that the purgers lock not be taken and process in parallel?
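To illustrate the idea, here is a minimal sketch of how an opt-out could look inside the service. This is not the module's actual code: the lock name, the exception, and the definition lookup are all assumptions for illustration.

```php
// Hypothetical sketch: before acquiring its processing lock, PurgersService
// could consult the calling processor's plugin definition.
$definition = $processor->getPluginDefinition();
if (!isset($definition['aquire_global_lock']) || $definition['aquire_global_lock']) {
  // Current behaviour: only one processor may run at any one time.
  if (!$this->lock->acquire('purge_purgers_service')) {
    throw new LockException('Could not acquire the processing lock.');
  }
}
// Processors that opted out are expected to manage their own locking.
```

Because the key defaults to TRUE when absent, every existing processor keeps today's behaviour unchanged.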
This requires some fairly extensive plugin customisation, but I'm happy to help describe it. I'm going to try to open source it at some point, but it's not there yet and I need approval.
Assuming I'm not missing something (there's a huge library of plugins to consider, so I may be!), I wonder if a processor could declare that it does not need the lock. It could be done via a new plugin definition statement: "aquire_global_lock = FALSE". The processor can then obtain its own lock and process in parallel if it wants to (our queue and purger plugins support this).
-
-
New plugin definition for processors: aquire_global_lock
Default value: TRUE. Fully backwards compatible as a result.
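A processor opting out of the lock might then be annotated like this. This is only a sketch of the proposal: the id, label, description, and class name are illustrative, and the key spelling follows the proposal above.

```php
/**
 * Hypothetical processor plugin declaring the proposed definition key.
 *
 * @PurgeProcessor(
 *   id = "cluster_varnish_processor",
 *   label = @Translation("Cluster Varnish processor"),
 *   description = @Translation("Processes machine-specific invalidations in parallel."),
 *   aquire_global_lock = FALSE,
 * )
 */
class ClusterVarnishProcessor extends ProcessorBase {}
```

Any processor that omits the key gets the default of TRUE, preserving the current single-processor behaviour.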
-
Active
3.5
Code