- Issue created by @bobburns
This is more of a function of the database server setup. Is the transaction isolation level READ COMMITTED set?
- 🇺🇸United States bobburns
Yes, according to the Status Report the Transaction Level is Read-Committed, and as I said a little earlier, the cache_config table has 86,621 rows.
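For anyone checking the same thing, both values can be confirmed from the MySQL/MariaDB client. A quick sketch, assuming no Drupal table prefix (the variable is named transaction_isolation on MySQL 8.0+; older versions call it tx_isolation):
-- Server-wide transaction isolation level
SHOW VARIABLES LIKE 'transaction_isolation';
-- Row count of the config cache bin
SELECT COUNT(*) FROM cache_config;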
Only because you mentioned the upgrade to 9.5.8 as being suspect. Here is the entire difference between 9.5.7 and 9.5.8: https://git.drupalcode.org/project/drupal/-/compare/9.5.7...9.5.8?from_p...
At a glance there is nothing there that would change the behavior of the database layer.
- 🇺🇸United States bobburns
It appears to have solved the problem.
I have added these to my.cnf:
innodb_lock_wait_timeout = 180
innodb_rollback_on_timeout = 1
Since the default timeout is 50, it could only help, and it appears to have solved it.
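For reference, a sketch of how to confirm the new values actually took effect after restarting the server (my.cnf changes only apply on restart, and innodb_rollback_on_timeout is not settable at runtime):
-- Matches both innodb_lock_wait_timeout and innodb_rollback_on_timeout
SHOW VARIABLES LIKE 'innodb%timeout';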
After looking at the cache_config table, I see that the translations of the added modules have helped it grow so big.
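A rough way to see what dominates the bin is to group cache IDs by their first segment. A sketch, again assuming no table prefix; the exact cid format may vary, but grouping on the first dot-separated segment gives a usable picture either way:
-- Top 10 cid prefixes by row count
SELECT SUBSTRING_INDEX(cid, '.', 1) AS prefix, COUNT(*) AS entries
FROM cache_config
GROUP BY prefix
ORDER BY entries DESC
LIMIT 10;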
I have been dealing with getting Redis working so I could use Crawler Rate Limit. Someone changed the version from 5 to 7, which made the redis.conf fail, until I put a version 7 redis.conf in and changed the "dir" to "/etc/redis", where Redis had ownership rights to write. That forced me to move the iptables rules that the previous Redis install program had put in redis.conf into the iptables config itself, and I just got it all working.
Both drush cache:rebuild and flushing all caches from the browser are working again.
Hopefully Redis and Crawler Rate Limit will solve the scrapers issue by slowing them to a rate limit.
- Status changed to Closed: outdated
3:08pm 16 June 2023 - 🇨🇦Canada joseph.olstad
I'm seeing this on a D10.4.x build. I'll try increasing the innodb_lock_wait_timeout from the default 50 to something better.
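If a server restart is inconvenient, innodb_lock_wait_timeout is a dynamic variable, so a value can be tried live before committing it to my.cnf (a sketch; 180 here mirrors the value used above, and innodb_rollback_on_timeout still requires a restart):
-- Dynamic; affects connections opened after the change
SET GLOBAL innodb_lock_wait_timeout = 180;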