- 🇺🇸United States smustgrave
Closing due to inactivity. If this is still a valid feature request, please reopen and update the issue summary.
Thanks!
- Status changed to Active
12 months ago 10:29am 5 January 2024 - 🇩🇰Denmark ressa Copenhagen
I just ran into something like what was reported in 🐛 Extremly large cache_views_data.ibd Fixed, where the cache_page table ballooned by several GB in a short time. It would be nice to have monitoring and feedback about this in the GUI, if possible, so I am reopening. For now, I use this hourly cron-triggered script and get an email alert when the limit is reached:
#!/bin/sh
# Purpose: Monitor Linux disk space and send an email alert to $ADMIN.
# From https://www.cyberciti.biz/tips/shell-script-to-watch-the-disk-space.html
ALERT=50                  # alert level (percent used)
ADMIN="info@example.org"  # dev/sysadmin email ID
df -H | grep -vE '^Filesystem|tmpfs|cdrom|snap' | awk '{ print $5 " " $1 }' | while read -r output; do
  echo "$output"
  usep=$(echo "$output" | awk '{ print $1 }' | cut -d'%' -f1)
  partition=$(echo "$output" | awk '{ print $2 }')
  if [ "$usep" -ge "$ALERT" ]; then
    # printf is more portable under /bin/sh than "echo -e", and the mail is
    # sent to $ADMIN directly (the original piped to "sendmail -t", which
    # expects a To: header that the message never set).
    printf 'Subject: Alert: Almost out of disk space %s%%\n\nRunning out of space "%s (%s%%)" on %s as on %s\n' \
      "$usep" "$partition" "$usep" "$(hostname)" "$(date)" | /usr/sbin/sendmail "$ADMIN"
  fi
done
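For completeness, a crontab entry along these lines runs the check hourly; the path /usr/local/bin/disk_alert.sh is just an assumed install location, not anything defined above:

# Hypothetical crontab entry (edit with "crontab -e"):
# run the disk space check at the top of every hour.
0 * * * * /usr/local/bin/disk_alert.sh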
As an extra precaution, I added this in settings.php to purge rows when cron is run:
$settings['database_cache_max_rows']['bins']['page'] = 500;
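A quick way to confirm the trim actually takes effect, assuming Drush is available and the site uses no database table prefix, is to compare row counts before and after a cron run:

# Row count before cron (cache_page is the page bin's backing table).
drush sql:query "SELECT COUNT(*) FROM cache_page;"
# Run cron, which is when the database_cache_max_rows pruning happens.
drush core:cron
# Row count afterwards; it should now be at or near the configured limit.
drush sql:query "SELECT COUNT(*) FROM cache_page;"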
- 🇺🇸United States wolcen
Also ran into this on the weekend, and was curious about this ticket while I was poking around to find those cache limiter settings I'd forgotten already.
In my case, it was as simple as a client update that placed a rather large SVG of their logo into a theme that inlined the SVG directly into their pages. This grew each page to 500% of its original size... that hurt, fast!
But that's my first point: the number of rows does not directly correlate with space. Having 5000 cache rows does not necessarily mean you'll suddenly have a space issue. Unfortunately, this is probably where a fair number of people first learn that the cache can even be tuned. That was certainly the case for me - the cache tables frequently stick out like a sore thumb when you start running into size issues (if it's not the watchdog table, that is).
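To illustrate the rows-versus-space point, a one-off query like this shows how far row count and on-disk size can diverge across the cache tables (a sketch assuming direct MySQL access; the database name "drupal" is a placeholder):

# Compare row counts with approximate on-disk size for all cache tables.
# "drupal" is a placeholder database name; adjust to your site.
mysql -e "SELECT table_name, table_rows,
                 ROUND((data_length + index_length) / 1048576) AS size_mb
          FROM information_schema.tables
          WHERE table_schema = 'drupal' AND table_name LIKE 'cache%'
          ORDER BY size_mb DESC;" drupal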
The extra step of including monitoring such as @ressa shared (I ended up yoinking a good deal of that logic, thank you!) is, I'd say, the better practice with regard to space issues. That monitoring method is certainly what I'll stick with now that it's made it into our Ansible roles.
It's also good to consider things that may affect how quickly these tables can grow, and that's probably the more interesting part to me. For example, I've seen faceted searches explode these tables faster than anything. Regular notices about thousands of records being purged from cache would be a helpful clue to have at hand.
Frankly, someone specifically focused on tuning may well be fumbling around in these woods already, and hopefully knows or learns the skills to run select count(*) queries. Overall, I don't think this is particularly low-hanging fruit.
All that said - given it is of commensurately low effort - I still think it would be nice to see a notice that the cache was trimmed, and specifically by how much.