Add monitoring/reporting of cache table pruning

Created on 21 July 2017
Updated 31 March 2024

Problem/Motivation

#2526150: Database cache bins allow unlimited growth: cache DB tables of gigabytes! added pruning of cache tables. Site admins need to be able to determine the best limit for their system, as the default value may be too low, which would result in cache churning (albeit at cron time).
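
For context, the pruning added there is a per-bin row cap, read from settings.php and enforced when cron runs. A minimal sketch of tuning it (values illustrative; the shipped default is 5000 rows per bin):

$settings['database_cache_max_rows']['default'] = 5000;
// Bins can be overridden individually, e.g. to give the render cache more headroom:
$settings['database_cache_max_rows']['bins']['render'] = 20000;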

Proposed resolution

Remaining tasks

User interface changes

API changes

Data model changes

✨ Feature request
Status

Active

Version

11.0

Component
Cache


Created by

πŸ‡ΊπŸ‡ΈUnited States mpdonadio Philadelphia/PA/USA (UTC-5)


Comments & Activities


  • πŸ‡ΊπŸ‡ΈUnited States smustgrave

    Closing due to inactivity. If this is still a valid feature request, please reopen and update the issue summary.

    Thanks!

  • Status changed to Active 5 months ago
  • πŸ‡©πŸ‡°Denmark ressa Copenhagen

    I just ran into something like what was reported in πŸ› Extremly large cache_views_data.ibd (Fixed), where the cache_page table ballooned by several GB in a short time. It would be nice to have monitoring and feedback about this in the GUI, if possible, so I am reopening.

    For now, I use this hourly cron-triggered script and get an email alert when the limit is reached:

    #!/bin/sh
    # Purpose: Monitor Linux disk space and send an email alert to $ADMIN
    # From https://www.cyberciti.biz/tips/shell-script-to-watch-the-disk-space.html
    ALERT=50 # alert level (percent used)
    ADMIN="info@example.org" # dev/sysadmin email address
    df -H | grep -vE '^Filesystem|tmpfs|cdrom|snap' | awk '{ print $5 " " $1 }' | while read -r output;
    do
      echo "$output"
      usep=$(echo "$output" | awk '{ print $1 }' | cut -d'%' -f1)
      partition=$(echo "$output" | awk '{ print $2 }')
      if [ "$usep" -ge "$ALERT" ]; then
        # printf is more portable than `echo -e` under /bin/sh.
        printf 'Subject: Alert: Almost out of disk space %s%%\n\nRunning out of space "%s (%s%%)" on %s as on %s\n' \
          "$usep" "$partition" "$usep" "$(hostname)" "$(date)" | /usr/sbin/sendmail "$ADMIN"
      fi
    done
    

    As an extra precaution, I added this in settings.php to purge rows when cron runs:
    $settings['database_cache_max_rows']['bins']['page'] = 500;
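
    Since the limit is a row count rather than a byte size, a companion check is to watch the row counts directly. A minimal sketch using the core Database API (bin/table name assumed), e.g. run via drush php:eval:

    // How close is the page cache bin to its database_cache_max_rows cap?
    $count = \Drupal::database()
      ->select('cache_page')
      ->countQuery()
      ->execute()
      ->fetchField();
    print "cache_page rows: $count\n";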

  • πŸ‡ΊπŸ‡ΈUnited States wolcen

    Also ran into this over the weekend, and was curious about this ticket while poking around to find those cache limiter settings I had already forgotten.

    In my case, it was as simple as a client update that placed a rather large SVG of their logo into a theme that inlined the SVG directly into their pages. This grew each page to 500% of its original size... that hurt, fast!

    But that's my first point: the number of rows does not directly correlate with space. Having 5000 cache rows does not necessarily mean you'll suddenly have a space issue. Unfortunately, this is probably where a fair number of people first learn that the cache can even be tuned. That was certainly the case for me: the cache tables frequently stick out like a sore thumb when you start running into size issues (if it's not the watchdog table, that is).

    The extra step of adding monitoring such as @ressa shared (I ended up yoinking a good bit of logic from it [thank you!]) is, I'd say, the better practice with regard to space issues. That monitoring method is certainly what I'll stick with, now that it has made it into our Ansible roles.

    It's also good to consider things that may affect how quickly these tables grow, and that's probably the more interesting part to me. For example, I've seen faceted searches explode these tables faster than anything. Seeing regular notices that thousands of records were purged from the cache could be a helpful clue to have at hand.

    Frankly, someone specifically focused on tuning may well be fumbling around in these woods already, and hopefully knows or learns the skills to run select count(*) queries (a rough sketch of one such check follows). Overall, I don't think this is particularly low-hanging fruit.
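
    A rough sketch of such a check through the Drupal database connection (MySQL/MariaDB only; information_schema row counts are estimates, and this query is my own, not something core ships):

    // List cache tables with estimated row counts and on-disk size.
    $result = \Drupal::database()->query(
      "SELECT table_name AS tbl, table_rows AS row_est,
              ROUND((data_length + index_length) / 1048576, 1) AS size_mb
       FROM information_schema.tables
       WHERE table_schema = DATABASE() AND table_name LIKE 'cache%'
       ORDER BY (data_length + index_length) DESC"
    );
    foreach ($result as $row) {
      printf("%-30s %10s rows  %8s MB\n", $row->tbl, $row->row_est, $row->size_mb);
    }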

    All that said, given it would be commensurately low effort, I still think it would be nice to see a notice that the cache was trimmed, and specifically by how much.
