- 🇺🇸 United States SamLerner
Could someone describe what the potential negative impact is to clear the entire cachetags table?
Or, how do you determine what tags are no longer in use? Check each one to see if it references deleted stuff? Could we use that to run a cleanup command on a regular basis?
- 🇨🇦 Canada gapple
As I understand it, there is no problem if cachetags are cleared at the same time as a full cache flush - a new cachetags entry will be added to the database as needed when a tag is next invalidated. If done automatically when flushing the cache, it would just make the operation take longer.
If tags are cleared without clearing cache items, there is a specific, probably unlikely, case where a cached item could have the same checksum and be out-of-date but still served.
The checksum is a sum of the invalidations on the item's tags, so if:
- Item saved with: A:1,B:0 (checksum 1)
- tags are cleared, so invalidations are reset to 0
- B is invalidated
- the cache item is valid against the new checksum of A:0,B:1 despite being out of date

The checksum is checked for equality, so a lower checksum from the database will still invalidate the cache item (e.g. an item with two tags has a stored checksum of 12, but the cachetags rows in the database have both been reset to 0; since 0 != 12, the item will be invalid).
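To make the equality check concrete, here is a minimal sketch of the comparison (the helper function is hypothetical, not core code; the cachetags table and invalidations column match the database checksum backend):

function cachetags_checksum_is_valid(int $stored_checksum, array $tags, \Drupal\Core\Database\Connection $connection): bool {
  // Sum the invalidation counters currently stored for the item's tags.
  $current = 0;
  foreach ($tags as $tag) {
    $current += (int) $connection->query('SELECT invalidations FROM {cachetags} WHERE tag = :tag', [':tag' => $tag])->fetchField();
  }
  // The comparison is strict equality: a current sum that is lower or higher
  // than the stored checksum marks the item invalid.
  return $current === $stored_checksum;
}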
- 🇳🇿 New Zealand jweowu
IIUC (see #12) the main problem will be that you cannot (reliably) clear all cachetags "at the same time" as clearing the caches.
Even trusting that all cache backends will reliably behave the same way, clearing caches invokes hooks so that modules can react and clear their data, and I assume there's no guarantee that along the way some of that arbitrary code doesn't cause something to be cached and invalidated.
If all of your caches were in the database and you knew where they all lived and you executed a single transaction which (only) truncated all the cache tables, deleted any other data which was required, and deleted the cachetags then I imagine that would be fine; but in practice I don't think "clearing all caches" is nearly so clean a process.
You can't naively purge cachetags before clearing other caches, because cache entries may be needed in the process of clearing caches.
You can't naively purge cachetags after clearing other caches, because in the process of clearing caches you may have acquired and invalidated new cache entries.
(n.b. This is my speculation -- I don't have a deep understanding of these processes so I might be wrong, but thinking about the issue had led me to those conclusions.)
I expect we need a way to mark the pre-existing cachetags prior to the full purge, so that entries which are unchanged following the purge can be recognised and removed.
- 🇪🇸 Spain eduardo morales alberti Spain, 🇪🇺
Some questions here are related to volatile custom entities. All entities' cache tags are invalidated on post-delete, which makes sense if you are using those entities in cached places (as with nodes), but if those cache tags are not used anywhere, it is better to override the postDelete method to avoid adding new entries to the database.
The postDelete method in EntityBase.php:

public static function postDelete(EntityStorageInterface $storage, array $entities) {
  static::invalidateTagsOnDelete($storage->getEntityType(), $entities);
}
Overriding postDelete so that it skips this call means the tags will not be invalidated on delete.
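For example, a minimal sketch of such an override on a hypothetical custom entity class (the class name is illustrative only):

use Drupal\Core\Entity\ContentEntityBase;
use Drupal\Core\Entity\EntityStorageInterface;

class VolatileItem extends ContentEntityBase {

  /**
   * {@inheritdoc}
   */
  public static function postDelete(EntityStorageInterface $storage, array $entities) {
    // Deliberately do not call the parent implementation, so deleting these
    // entities does not create or bump rows in the cachetags table.
  }

}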
Hi :)
I have used Drupal with OVH for 5 years. Everything is very nice with Drupal, but complex, as you know... But I have had ONE problem for those 5 years, always the same, and for 5 years I have been looking for a solution...
My SQL database keeps getting big. With PHP 8 I need to clear the cache every day or my SQL data grows to more than 8 GB, and I get blocked by my hosting every time... :(
Can someone explain a solution to me, please? I have been trying for 5 years...
I am on Drupal 9.5
Thx. :)
- 🇬🇧 United Kingdom catch
Replying to #26: Cache tags can pretty reliably be cleared after bins are emptied. If a new cache item is created and not invalidated, its cache tag checksum will be 0, which will match the cache tag blank slate.
If it's created and then immediately invalidated, its cache tag checksum will be 1 or more, which will not match 0, and it will then be invalidated when next retrieved.
The only case that is not covered is if an item is invalidated and has a checksum of e.g. 1, the cache tags are wiped, the item is not requested, its tags are then invalidated in such a way that the checksum matches again, and it's only requested after that. This is such an extreme edge case/race condition that it can probably be ignored.
A workaround would be a "last cache tag garbage collection" timestamp to compare against, written alongside the cache items. Then anything created before it would also be treated as wiped, but that adds an extra state or key/value request on every request.
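A rough sketch of that workaround, assuming a made-up state key and a simplified validity check (not core code):

// On purge: record when cache tags were last garbage collected.
\Drupal::state()->set('cachetags_last_purge', \Drupal::time()->getRequestTime());

// On cache get: anything written before the last purge is treated as invalid,
// regardless of whether its checksum happens to match again.
$last_purge = \Drupal::state()->get('cachetags_last_purge', 0);
if ($item->created < $last_purge) {
  $item->valid = FALSE;
}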
- 🇳🇿 New Zealand jweowu
> This is such an extreme edge case/race condition that it can probably be ignored.
I can only assume that wasn't the conclusion when this was implemented, though -- it doesn't seem plausible that this issue never came up in discussion at the time.
> A workaround would be a cache tag last garbage collection timestamp to compare against
Yeah, I was also pondering that approach in #12 and #13. I think it's a pretty reasonable idea.
An alternative would be to have a pre-purge process which copied the cachetags table to a temporary table, and then post-purge deleted all cachetags matching an entry in the temp table. That has its own noteworthy costs, but they're isolated to the time of the purge. If the table was really gargantuan, though, it might not be great (and at the point in time when a fix is deployed, some sites are inevitably going to have tables fitting that description). Offhand I think I'd lean towards the timestamp column.
- 🇳🇿 New Zealand jweowu
Amavi: Is that on account of the cachetags table specifically? If a normal cache rebuild fixes things for you, even if only temporarily, then it's definitely not about cachetags (the entire reason for the present issue is that cache rebuilds do not purge the cachetags table).
Your database will have many different cache tables, and I suspect your problem is something different to this issue. You should start by confirming which specific cache table(s) are getting so large, and then you can look for or post an issue related to that.
- 🇬🇧 United Kingdom catch
#636454: Cache tag support is the original issue. I worked on it at the time and haven't reread it for ages, but it would not surprise me if purging just didn't get discussed, or got deferred to a follow-up that never happened. It was more or less the first API addition in Drupal 8, years before a release, so that particular problem was very abstract for a very long time.
- 🇳🇿 New Zealand jweowu
A more palatable variant of the temporary table suggestion has occurred to me. I don't know how practical it is, but in principle I think it avoids the downsides of the other approaches mentioned.
In essence, while the cache rebuild is taking place, new cache invalidations get written to a temporary table. Then, after the cache rebuild, the cachetags table is truncated, and the rows of the temporary table are inserted.
In order for that to work, cache lookups need to know about the temporary table, something like:
if (a cache rebuild is in progress) {
  check the temporary table for cache validity
  if (the temporary table contained a row for that cache id) {
    return result;
  }
  else {
    // nothing about this ID in the temporary table
    check the regular cachetags table
    return result
  }
}
else {
  // not currently rebuilding the cache
  check the regular cachetags table
  return result
}
And similarly, when invalidating a cache entry, the new invalidations value written to the temporary table would be an increment of the value already in the temporary table if a row existed there, and otherwise an increment of the value from the original cachetags table (a rough sketch of that increment logic follows below).

The lookups during cache rebuilds could be done on a join of the two tables, rather than as two separate look-ups.
No timestamp column needed, and no wholesale copying of cachetags; and outside of cache rebuilds the behaviour can be much the way it is at present.
Is that practical? I won't be surprised if I'm missing something, and I haven't thought through the ramifications of multiple simultaneous cache rebuilds (if that's currently permitted to happen), but it seemed worth suggesting.
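To illustrate the increment step described above, a minimal sketch, assuming a hypothetical temporary table named cachetags_rebuild, with $connection the database connection and $tag the tag being invalidated (this is not code from core or the MR):

// While a rebuild is in progress, invalidations go to the temporary table,
// seeded from the original cachetags value if no temporary row exists yet.
$current = $connection->query('SELECT invalidations FROM {cachetags_rebuild} WHERE tag = :tag', [':tag' => $tag])->fetchField();
if ($current === FALSE) {
  // No row yet in the temporary table: start from the original table's count.
  $current = (int) $connection->query('SELECT invalidations FROM {cachetags} WHERE tag = :tag', [':tag' => $tag])->fetchField();
}
$connection->merge('cachetags_rebuild')
  ->key('tag', $tag)
  ->fields(['invalidations' => $current + 1])
  ->execute();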
- 🇬🇧 United Kingdom catch
I think we should just add cache tag purging as a drupal_flush_all_caches() step, immediately after emptying the bins, and document + open a follow-up for the potential race condition where a cache item is both set and invalidated but then not requested until after a further invalidation. The chances of that happening are minuscule, but the potential issues deriving from storing timestamps or creating temporary tables will affect everyone.
Also I think it's worth looking at starting and ending a database transaction in drupal_flush_all_caches() in case that's viable.
Got the same issue, but on a larger scale, on a site with lots of webform submissions (around 3.5M+); for each submission there's an entry in the cachetags table, which is currently sitting at a bit more than 4M rows.
- 🇬🇷 Greece mariaioann
I have the same problem with Message entities. I am creating messages as notifications for a large number of users, but they are purged when they are 30 days old. Currently we have 400K messages, but the cachetags table has 23M entries for messages, as it includes all deleted messages as well.
Is it safe to delete the relevant cache tag on a message postDelete hook?
What I have not understood is why don't we delete an entity's cache tag when the entity gets deleted in general?
- 🇬🇧 United Kingdom catch
> What I have not understood is why don't we delete an entity's cache tag when the entity gets deleted in general?
Cache tag storage and implementation is swappable, so there's no inherent concept of a cache tag existing as a row in a database with a counter. It would, for example, be possible (but very slow) to store the cache tags with the cache items, query all cache items when a cache tag is invalidated, and have no dedicated cache tag storage at all. Because of this, there's no concept of the tag as a thing existing in itself: the checksum implementation that core uses introduces the counter system on top of string tags, but consumers of the cache API, like the entity system, don't need to know about it - they just get/set/delete cache items and invalidate tags.
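As a concrete illustration of the consumer-facing side (standard Cache API calls; the cache ID is made up):

use Drupal\Core\Cache\Cache;

// A consumer stores an item in the default bin, tagged with node:1 ...
$data = ['anything cacheable'];
\Drupal::cache()->set('my_module:example', $data, Cache::PERMANENT, ['node:1']);

// ... and later something invalidates the tag. How the backend tracks that
// invalidation (a counter row in cachetags, or no dedicated storage at all)
// is invisible to this code.
Cache::invalidateTags(['node:1']);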
If you have a high traffic site with a lot of users/content, you should strongly consider using https://www.drupal.org/project/redis, which won't run into this problem because it evicts items when it runs out of memory. This is a good idea for lots of reasons other than a large cache tags table.
- 🇧🇪 Belgium wim leers Ghent 🇧🇪🇪🇺
@MariaIoann
- Nothing prevents you from saying "my entity does not need cache tags". See \Drupal\Core\Entity\EntityInterface::getCacheTagsToInvalidate() and \Drupal\Core\Cache\CacheableDependencyInterface::getCacheTags(). Message entities, the way that you describe them, appear very ephemeral, so it makes sense to me that they would not use/need cache tags. (A sketch of such an override follows this list.)
- See \Drupal\Core\Datetime\Entity\DateFormat::getCacheTagsToInvalidate() for another example.
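A minimal sketch of what such an override could look like on a hypothetical Message entity class (illustrative only; whether skipping per-entity tags is appropriate depends on where the entities are rendered):

use Drupal\Core\Entity\ContentEntityBase;

class Message extends ContentEntityBase {

  /**
   * {@inheritdoc}
   */
  public function getCacheTagsToInvalidate() {
    // These notification messages are ephemeral and never individually
    // cached, so skip per-entity cache tags entirely; no per-message rows
    // will be created in cachetags when they are saved or deleted.
    return [];
  }

}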
- 🇬🇧 United Kingdom catch
Probably the simplest solution here would be:
1. Add a CacheTagsChecksumPurgableInterface with a ::purge() method and implement it in the database backend (a rough sketch follows this list).
2. Call this method on all cache_tags_invalidator services that implement the interface, as late in drupal_flush_all_caches() as possible; it definitely needs to happen after plugin caches are cleared, though I'm not sure whether it should be before or after the router rebuild.
3. Open a follow-up for the potential race condition described in #29.
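A rough sketch of that shape (interface and wiring named as proposed here, not necessarily what gets committed):

/**
 * Allows stored cache tag invalidation counters to be purged wholesale.
 */
interface CacheTagsChecksumPurgableInterface {

  /**
   * Deletes all stored cache tag invalidations.
   */
  public function purge();

}

// Later, in drupal_flush_all_caches(), after the cache bins are emptied
// (assumes $invalidators holds the collected cache_tags_invalidator
// services; the collection mechanism itself is omitted here):
foreach ($invalidators as $invalidator) {
  if ($invalidator instanceof CacheTagsChecksumPurgableInterface) {
    $invalidator->purge();
  }
}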
- 🇬🇧 United Kingdom catch
When implementing this there's actually not really a race condition as described in #29.
If we implemented cache tag purging outside of drupal_flush_all_caches() then it would be possible for cache entries to exist with an 'incidentally matching' cache tag checksum. However, when we purge immediately after emptying the cache bins, there should be no cache entries created before the tags are purged, everything gets reset at the same time. There would have to be entries written literally during the purging itself for there to be a problem. This is about as likely as an entry being written in one cache bin that's just been emptied based on cached information in a table that's just about to be emptied - e.g. no worse than it is now.
So given that, what's in the MR might be enough.
After 5 years of this problem, I have done this:
-write some Rules in my SQL for purge all CASH each hours.
Pushed a commit to address the issue with the test by moving the assertions to the DatabaseBackendTest subclass.
I think this showed that purge() needs to call reset(), so added that.
- 🇨🇭 Switzerland berdir Switzerland
Posted a review.
> -write some Rules in my SQL for purge all CASH each hours.
I'd recommend not purging all your CASH every few hours, that sounds like a very costly thing to do. (Sorry, could not resist).
More seriously: if you have a large enough site that you are struggling with the size of the database cache backend and have to flush it so frequently, you should really consider using a backend with a fixed size like Redis/Memcache. The database backend is a basic implementation for small to medium sites. Also, this issue is about cache tags only, which should grow far, far less than the actual cache bins.
OK, one of the test failures is Drupal\FunctionalTests\Bootstrap\UncaughtExceptionTest::testLostDatabaseConnection, which is apparently a new intermittent failure that seems to happen a lot, even after re-run.
The other test failure was one I thought I had fixed by calling reset() in purge(), but apparently, because I moved the test to a different class and its own method, the cache gets are succeeding after purge: the checksums are 0, which matches the 0 returned from the table because it's empty.
- 🇦🇺 Australia acbramley
The random failure is being tracked here: [random test failure] Drupal\FunctionalTests\Bootstrap\UncaughtExceptionTest::testLostDatabaseConnection (Active)
- 🇬🇧 United Kingdom catch
> but apparently because I moved the test to a different class and its own method, the cache gets are succeeding after purge because the checksums are 0 which match the 0 returned from the table because it's empty.
Ah yeah this was one of the reasons I tried to re-use the existing test. But if we invalidate tags before the purge it should be OK?
- 🇨🇭 Switzerland berdir Switzerland
I'm not sure if we even need to test cache bins at all there. Possibly for the reset() edge case, but yes, then I'd add an invalidate first.
- 🇨🇭 Switzerland berdir Switzerland
What if we interact directly with the cache tag checksum service only and assert what we want to know with that, instead of indirectly going through cache entries? That feels convoluted and is IMHO tested elsewhere.
Something like this:
// Invalidate the tag through the general invalidator service.
// Assert the current value.
assertEquals(1, $checksum_invalidator->getCurrentChecksum([$tag]));
// Purge through the general invalidator service.
// Assert the current value.
assertEquals(0, $checksum_invalidator->getCurrentChecksum([$tag]));
We can keep the database query, and we can still do the purge through the cache_tags.invalidator service to make sure the loop there is correct.
- 🇨🇭 Switzerland berdir Switzerland
Left a suggestion on how to handle that in a way that I think is easier to understand.
Made the change per the suggestion. Also changed back to invalidating multiple tags. I'm not sure it makes any difference, but it was trivial to do just in case.
- 🇨🇭 Switzerland berdir Switzerland
Looks good to me; needs a change record. I'm not aware of a contrib implementation that would actually need to implement this, but who knows.
Added change record.
Also, in conversation with @catch on Slack, I moved the `cachetags` table truncation to before clearing the cache bins, because having entries in the cachetags table doesn't hurt anything, but there could be an issue if the cache bin tables have entries without corresponding checksums in cachetags. Pushing back to NR for that.
- 🇬🇧 United Kingdom catch
Examples for #59. Let's assume we have one cache item and one cache invalidation for node:1, and that our one cache item is tagged with node:1.
If we clear cache bins before tags, then the following can happen:
1. Cache bins are cleared.
2. Cache miss happens; the cache item is written with node:1 having 1 invalidation, because cache tag invalidations haven't been purged yet.
3. Cache invalidations are purged.
4. If node:1 is invalidated before our cache item is requested, then we're back to 1 invalidation again, and the item could be considered valid.

But if we purge cache tags before bins:
1. Cache tags are purged.
2. Any tagged cache items immediately become invalid if their checksum assumes any invalidations, because the tag invalidation counts were reset.
3. Any new cache items get written as if there are no invalidations, or with one invalidation if one happens during this (short) window.
4. But then all the cache bins are emptied anyway, so any new items, regardless of how many cache tag invalidations they record, will be valid.
Given that, purging tags first feels like it should be 100% correct as soon as the full cache clear is complete.
- 🇨🇭 Switzerland berdir Switzerland
The order change makes sense, back to RTBC.
- 🇬🇧 United Kingdom alexpott 🇪🇺
The issue summary is out of date compared to the final state of the patch and could do with being updated to match; the proposed resolution and other sections are all out of date.
The MR looks good. Once the issue summary has been updated, this can be set back to RTBC and I will prioritise it.
- alexpott committed 3df1b8d2 on 11.2.x
Issue #3097393 by godotislate, catch, berdir, jweowu, ndobromirov, wim...
- alexpott committed 2999d457 on 11.x
Issue #3097393 by godotislate, catch, berdir, jweowu, ndobromirov, wim...
- 🇬🇧 United Kingdom alexpott 🇪🇺
Committed and pushed 2999d4574c4 to 11.x and 3df1b8d2897 to 11.2.x. Thanks!