- Issue created by @GuillaumeG
- 🇦🇺 Australia GuillaumeG
OK, I ended up triggering the following from a service on hook_entity_insert() / hook_entity_update():
public function purgeCache(EntityInterface $entity) {
  $invalidations[] = $this->purgeInvalidationFactory->get(
    'url',
    $entity->toUrl('canonical', ['absolute' => TRUE])->toString(),
  );
  try {
    $this->purgePurgers->invalidate($this->processor, $invalidations);
  }
  catch (DiagnosticsException | LockException | CapacityException $e) {
    $this->messenger->addError($e->getMessage());
  }
}
- 🇵🇱 Poland szy
Hello Guillaume,
exactly the same happens to me. A small site, only 50 nodes - editing one of them puts all of them in the queue.
Is it necessary? It looks like a waste of resources and of Cloudflare quota to me.
Szy.
- 🇵🇱 Poland szy
Now I see that every cron run fills the queue with the same nodes, without even touching them. Now, with 50 nodes in the database, I have my queue filled with 620 items!
Marking it as a major bug.
Szy.
- 🇵🇱 Poland szy
No, sorry, it's not because of the cron run.
It happens with every install/uninstall operation. Every time, I see:
queue: urlpath: added 52 items
(all my content nodes).
Szy.
- 🇬🇧 United Kingdom Finn Lewis
We have the same issue.
We ended up with a 2.2GB purge_queuer_url table and had to uninstall the module.
Drupal 10.2.4
PHP 8.2.15
purge_queuer_url 8.x-1.0
It would be good to find time to fix this, as without it enabling CloudFront caching is kind of unsustainable.
Any suggestions from the maintainers would be most welcome.
- 🇳🇿 New Zealand ericgsmith
The issue of duplicate items going into the queue is a Purge issue: https://www.drupal.org/project/purge/issues/2851893 (Deduplicate Queued Items).
I have found success with Jonathan's module https://www.drupal.org/project/purge_queues, mentioned in that thread.
For larger sites we still find performance issues, as the duplicate check queries items individually.
I currently use this patch, https://www.drupal.org/project/purge_queues/issues/3379798 (Alternative approach to unique queue using upsert), along with his module, and it has been good for a year or so.
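To illustrate why the upsert approach helps, here is a minimal sketch in Python with SQLite. This is NOT the patch's actual code (the patch targets Drupal's queue backend); it only demonstrates the idea: with a uniqueness constraint plus an upsert, re-queueing the same URLs is a no-op instead of doubling the queue, and no per-item duplicate-check SELECT is needed.

```python
import sqlite3

# Toy model only - not the purge_queues patch itself. A PRIMARY KEY on the
# item plus ON CONFLICT DO NOTHING turns each insert into a single upsert.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE queue (item TEXT PRIMARY KEY)")

def enqueue_unique(items):
    # Duplicates are silently skipped by the database, so no separate
    # per-item SELECT is needed to check for existing entries.
    conn.executemany(
        "INSERT INTO queue (item) VALUES (?) ON CONFLICT(item) DO NOTHING",
        [(i,) for i in items],
    )

urls = ["https://example.com/node/%d" % n for n in range(50)]
enqueue_unique(urls)
enqueue_unique(urls)  # saving the same node again re-queues the same URLs
count = conn.execute("SELECT COUNT(*) FROM queue").fetchone()[0]
print(count)  # stays at 50 instead of doubling to 100
```

Without the constraint, the second `enqueue_unique` call would grow the table to 100 rows, which is exactly the doubling behaviour reported later in this thread.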
The issue of actions like saving one node filling up the queue with all items comes down to your site and its cache tags. Have a look at the cache tags stored against the URLs. Common culprits are broad tags like node_list from views. Modules like https://www.drupal.org/project/views_custom_cache_tag may help if that is the case.
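The fan-out mechanism can be shown with a toy model (the URL-to-tag mapping and function names below are illustrative, not the module's real schema): every page that renders a view of nodes carries the broad node_list tag, so invalidating it when any node is saved selects every such URL.

```python
# Hypothetical data: each URL mapped to the cache tags its response carried.
url_tags = {
    "/node/1": {"node:1", "node_list"},
    "/node/2": {"node:2", "node_list"},
    "/blog":   {"node_list"},
}

def urls_for_tags(invalidated):
    # A URL is queued for purging if it shares any tag with the
    # invalidated set - this is the tag-matching fan-out.
    return sorted(u for u, tags in url_tags.items() if tags & invalidated)

# Saving node 1 invalidates its own tag AND node_list:
print(urls_for_tags({"node:1", "node_list"}))
```

Because every URL here carries node_list, all three are queued. Replacing node_list on listing pages with a narrower custom tag would shrink that set to just `/node/1`.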
As for a 2.2GB URL table, that is huge! I'm sorry I don't have much to offer; try analyzing what is in there and whether you can exclude any of it through the blacklist. Things like facets and search terms can flood the table; if they are cached, consider setting a lower lifetime and ignoring them. Cache invalidation for some of the site may be better than none.
This module also has an issue where pages never expire / drop off the registry, so crawlers and garbage URLs can really pollute the table: https://www.drupal.org/project/purge_queuer_url/issues/3045503 (Registry expiration as opposed to removing it too soon).
- 🇦🇺 Australia GuillaumeG
Hi @ericgsmith,
Thanks for your detailed answer.
I tried your suggested solutions:
- Using the dev version of the purge_queuer_url module to get all the latest fixes, and trying the code from the MR on https://www.drupal.org/project/purge_queuer_url/issues/3045503 (Registry expiration as opposed to removing it too soon).
- Using the purge_queues module along with the mentioned patch.
Unfortunately, I did not see any improvements.
I checked the X-Drupal-Cache-Tags headers and did not notice anything unusual for the nodes I saved during testing.
I found the same cache tags when browsing the SQL table purge_queuer_url_tag. For other developers who need to know how to display Drupal cache tags, you can refer to https://www.drupal.org/docs/8/api/responses/cacheableresponseinterface#d...
However, I did notice that the queue size was almost always equal to the Traffic Registry size, and that saving the node again would simply double the size of the queue.
Example (after training the traffic registry with wget and registering 85 items):
- Traffic Registry size: 85
- Queue size: 0
After saving a node:
- Traffic Registry size: 85
- Queue size: 85
After saving the same node again:
- Traffic Registry size: 85
- Queue size: 170
Is there anything I'm missing here?