johnpicozzi credited wolcen.
Also ran into this over the weekend, and was curious about this ticket while poking around to find those cache limiter settings I'd already forgotten.
In my case, it was as simple as a client update that placed a rather large SVG of their logo into a theme that in-lined the SVG directly into their pages. That grew each page to 500% of its original size...that hurt - fast!
But that's my first point: the number of rows does not directly correlate with space. Having 5000 cache rows does not necessarily mean you'll suddenly have a space issue. Unfortunately, this is probably where a fair number of people first learn that the cache can even be tuned. That was certainly the case for me - the cache tables frequently stick out like a sore thumb when you start running into size issues (if it's not the watchdog table, that is).
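For anyone else who loses track of where that limiter lives, these are the settings.php knobs I mean (the values here are just illustrative, not recommendations):

// Cap on rows per database cache bin (5000 is the core default).
$settings['database_cache_max_rows']['default'] = 5000;
// Per-bin override, e.g. allowing Dynamic Page Cache to grow larger.
$settings['database_cache_max_rows']['bins']['dynamic_page_cache'] = 25000;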
The extra step of adding monitoring such as @ressa shared (and from which I ended up yoinking a good bit of logic - thank you!) is, I'd say, the better practice with regard to space issues. That monitoring approach is certainly what I'll stick with now that it's made it into our Ansible roles.
It's also good to consider things that may affect how quickly these tables can grow, and that's probably the more interesting part to me. For example, I've seen faceted searches that explode these tables faster than anything. Regular notices about thousands of records being purged from cache would be a helpful clue to have at hand.
Frankly, someone specifically focused on tuning may well be fumbling around in these woods already, and hopefully knows (or learns) enough to run SELECT COUNT(*) queries. Overall, I don't think this is particularly low-hanging fruit.
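If it saves anyone a trip to the SQL prompt, here's a rough equivalent from inside Drupal (run via drush php:eval or a throwaway script; the bin list is only an example):

$connection = \Drupal::database();
foreach (['cache_render', 'cache_dynamic_page_cache', 'cache_data'] as $table) {
  // The curly braces let Drupal apply any configured table prefix.
  $count = $connection->query('SELECT COUNT(*) FROM {' . $table . '}')->fetchField();
  echo $table . ': ' . $count . " rows\n";
}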
All that said - given it would be correspondingly low effort - I still think it would be nice to see a notice that the cache was trimmed, and specifically by how much.
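To make that concrete, I'm picturing nothing fancier than a log entry along these lines when the limiter kicks in (purely a sketch of the idea, not existing core code; $bin and $trimmed are hypothetical):

// Hypothetical: $bin is the cache bin name, $trimmed the number of rows removed.
\Drupal::logger('cache')->notice('Trimmed @count rows from the @bin cache bin.', [
  '@count' => $trimmed,
  '@bin' => $bin,
]);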
wolcen created an issue.
wolcen created an issue.
Seeing that fix for the AJAX CI failure (for which I am grateful - I'd had NO clue how to fix that!), I rebased to re-run CI here.
@mparker17 - sorry for the lag in response here. By all means, a merge request or whatever you prefer would be great - feel free to take over on this!
Let me know if I can clarify anything from my perspective, or if you have any questions for me.
I've not had the time recently to dedicate to this - the patch has also been working for our production use case, where it had previously been hanging - but I'll try to free up some time to bring this up again, re-run the tests, and see if I can reproduce your results.
Looks like this proposed solution includes some client-specific code:
+    if ($taxonomy['name'] == 'Residency' && $taxonomy['vid'] == 'campaign_goal') {
+      $parent_terms = $storage->loadByProperties(['uuid' => $parent_uuids]);
+    }
Adding a patch that does basically the same as #3378470, but with less diff noise, so the actual change is more obvious.
So - having been bitten by this again, I dug a little deeper. I can now confirm that this will happen on the "full" side as well and have created a patch that will reproduce the issue in your tests.
This can happen regardless of duplicates - specifically, when the parent term IDs are not correctly sorted.
I did review the other referenced patch in #3378470, and we're seeing the same thing: the issue comes down to cases like this, where you have to be careful not to add things to tidsLeft when they are already in tidsDone - which is exactly what happens with this test.
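Roughly, the kind of guard I mean looks like this (variable names are borrowed from the discussion, not the module's exact code):

foreach ($parent_tids as $tid) {
  // Never re-queue a term that has already been processed or is already queued.
  if (!isset($tidsDone[$tid]) && !isset($tidsLeft[$tid])) {
    $tidsLeft[$tid] = $tid;
  }
}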
Adding a likely-related issue report. Sorry - I hadn't noticed that one before (I wasn't searching for 'deadlock', doh!).
wolcen created an issue.
Excellent point @laborouge! Thanks, that does the trick...for today!
FWIW, given you have curl + jq, you can quickly see the problem section here:
curl -s "https://packages.drupal.org/files/packages/8/p2/drupal/mailchimp.json" | jq '.packages."drupal/mailchimp"[1].dist' -
Contrast this result when using [0] instead of [1].
I am also seeing this after a composer clear-cache. Looking at the reported URL, you will see that while the 2.2.2 release is fine, the second packages entry (2.2.1) has a null reference value.
Heh, whoops - @drutopia was really "me".
To be clear: we were unsure if the new tests added by that commit are the "cause" of the issue - we don't know if the module is perhaps loading the private key improperly, or if phpseclib is doing a new test when it really didn't need to (or should have just exited).
Did not see any complaints on the phpseclib side so far.
Our hosting provider already had an .htaccess file in the .well-known folder (to ensure no other rules were preventing access for LetsEncrypt).
We added the last three lines to punt back to Drupal in the case that e.g. a LetsEncrypt validation file did not exist:
RewriteEngine On
satisfy any
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ ../index.php [L]
Attached is what's working for me.
Produces assertion error:
"PHP message: AssertionError: Failed to assert that "request.path, user.permissions" are valid cache contexts. in /var/www/html/web/core/lib/Drupal/Core/Cache/Cache.php on line 33 #0 /var/www/html/web/core/lib/Dr