🇫🇷 France andypost
Should be fixed via "Use READ COMMITTED by default for MySQL transactions" (status: Fixed).
I have a lot of custom entities queued for processing (deletion).
When a single worker processes the items, everything is OK.
As soon as I enable a second worker, the exception below starts to happen every so often.
[error] Drupal\Core\Database\DatabaseExceptionWrapper: SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction: INSERT INTO {cache_bootstrap} (cid, expire, created, tags, checksum, data, serialized) VALUES (:db_insert_placeholder_0, :db_insert_placeholder_1, :db_insert_placeholder_2, :db_insert_placeholder_3, :db_insert_placeholder_4, :db_insert_placeholder_5, :db_insert_placeholder_6) ON DUPLICATE KEY UPDATE cid = VALUES(cid), expire = VALUES(expire), created = VALUES(created), tags = VALUES(tags), checksum = VALUES(checksum), data = VALUES(data), serialized = VALUES(serialized); Array
(
[:db_insert_placeholder_0] => last_write_timestamp_cache_bootstrap
[:db_insert_placeholder_1] => -1
[:db_insert_placeholder_2] => 1563527591.201
[:db_insert_placeholder_3] =>
[:db_insert_placeholder_4] => 0
[:db_insert_placeholder_5] => d:1563527591.2019999;
[:db_insert_placeholder_6] => 1
)
in Drupal\Core\Cache\ChainedFastBackend->markAsOutdated() (line 306 of /var/www/vdk-multisite/docroot/core/lib/Drupal/Core/Cache/ChainedFastBackend.php).
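For context, the method in question keeps a "last write timestamp" per bin in the consistent (database) backend, so every cache write from every worker races to update the same row. Paraphrased from core at the time (not a verbatim copy), it looks roughly like this:

protected function markAsOutdated() {
  // Clocks on a single server can drift, so the timestamp is only ever
  // advanced, never moved backwards.
  $timestamp = round(microtime(TRUE), 3);
  $cid = self::LAST_WRITE_TIMESTAMP_PREFIX . $this->bin;
  $cache = $this->consistentBackend->get($cid);
  if (!$cache || $cache->data < $timestamp) {
    // With the database backend this set() becomes the INSERT ... ON
    // DUPLICATE KEY UPDATE seen in the log above; two workers doing it
    // concurrently can deadlock on the row lock for this single cid.
    $this->consistentBackend->set($cid, $timestamp);
  }
}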
Note that if I replace the database cache for the bootstrap bin with an alternative cache implementation (Memcache / Redis), the errors go away right away.
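For anyone who needs that workaround, routing just the bootstrap bin to another backend is a settings.php change. A minimal sketch, assuming the Redis contrib module is installed (with the Memcache module, use 'cache.backend.memcache' instead):

// settings.php: serve the bootstrap bin from Redis instead of the database.
$settings['redis.connection']['host'] = '127.0.0.1';
$settings['cache']['bins']['bootstrap'] = 'cache.backend.redis';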
This is reproducible on DrupalVM without any customization of the database configuration.
Under concurrency, the database cache backend quite often fails to handle concurrent updates to the same key.
Why not catch the exception and re-read the entry from the DB, since someone else is actually writing it at that moment? If that timestamp is bigger than the one we have, we can skip the write operation.
Otherwise, retry the write operation and only then fail as it currently does. Hopefully, in high-write scenarios this would relieve the contention on the row lock for that key at the MySQL level.
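A minimal sketch of that idea (hypothetical, not an actual core patch; it assumes markAsOutdated() sees the DatabaseExceptionWrapper thrown by the consistent backend's set()):

use Drupal\Core\Database\DatabaseExceptionWrapper;

protected function markAsOutdated() {
  $timestamp = round(microtime(TRUE), 3);
  $cid = self::LAST_WRITE_TIMESTAMP_PREFIX . $this->bin;
  try {
    $this->consistentBackend->set($cid, $timestamp);
  }
  catch (DatabaseExceptionWrapper $e) {
    // A concurrent worker is writing the same row. Re-read it: if its
    // timestamp is already newer than ours, our write is redundant and
    // can be skipped.
    $cache = $this->consistentBackend->get($cid);
    if ($cache && $cache->data >= $timestamp) {
      return;
    }
    // Otherwise retry once, and only then fail as it currently does.
    $this->consistentBackend->set($cid, $timestamp);
  }
}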
Remaining tasks
Discussions, Patch, Reviews etc.
User interface changes
None.
API changes
None.
Data model changes
None.
Release notes snippet
None (bug-fix).
On a custom entity we are seeing several database deadlock errors like the following. Our module, like others, has no control over how Drupal issues database transactions or rolls them back when deadlocks are encountered.
So the question is: are these deadlocks on cache_bootstrap benign?
Drupal\Core\Database\DatabaseExceptionWrapper: SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction: INSERT INTO {cache_bootstrap} (cid, expire, created, tags, checksum, data, serialized) VALUES (:db_insert_placeholder_0, :db_insert_placeholder_1, :db_insert_placeholder_2, :db_insert_placeholder_3, :db_insert_placeholder_4, :db_insert_placeholder_5, :db_insert_placeholder_6) ON DUPLICATE KEY UPDATE cid = VALUES(cid), expire = VALUES(expire), created = VALUES(created), tags = VALUES(tags), checksum = VALUES(checksum), data = VALUES(data), serialized = VALUES(serialized); Array
(
[:db_insert_placeholder_0] => last_write_timestamp_cache_bootstrap
[:db_insert_placeholder_1] => 1
[:db_insert_placeholder_2] => 1552019948.235
[:db_insert_placeholder_3] =>
[:db_insert_placeholder_4] => 0
[:db_insert_placeholder_5] => d:1552019948.236;
[:db_insert_placeholder_6] => 1
)
in Drupal\Core\Cache\ChainedFastBackend->markAsOutdated() (line 306 of /var/www/html/core/lib/Drupal/Core/Cache/ChainedFastBackend.php).
Closed: duplicate
11.0
cache system