- Issue created by @Fabianx
- 🇩🇪 Germany Fabianx
Let me quickly write up the idea behind the code:
When a cache entry is retrieved from the primary cache backend and it has not changed in the fast backend, all we do is write the invalidation timestamp again and again.
Several processes may also be writing to the same item at once - again, the same data, differing only in the timestamp. Previously that conflict was resolved naturally by last-writer-wins; now it is resolved using an APCu lock.
This lock is the second part of the patch, but should probably be a follow-up, as it makes an already non-trivial architecture harder to understand.
Given these properties we can split up each cache item into two parts:
- The data to be written
- The timestamp of the cache item

By splitting it up, we can "store" the new timestamp using a compare-and-swap operation without having to write the whole item.
So what we do on a set() from the chained fast backend:
- Retrieve items from APCu: [$old_cache_item, $timestamp] = apcu_fetch(...)
- Compare whether $old_cache_item->data === $data [and probably the same for tags now]. If it is not the same, write $cache_item normally [todo: locking code for sets].
- If it is the same, compare-and-swap the new timestamp in, because we know the data is correct right now.
- On gets:
[$cache_item, $timestamp] = apcu_fetch(...)
Overwrite the timestamp in $cache_item with the $timestamp from APCu - unless it's 0.
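The set()/get() flow above can be sketched roughly like this. This is purely illustrative, not the actual patch: a plain array stands in for APCu shared memory, and the function names (`fast_set`, `fast_get`) and the `:ts` key suffix are hypothetical. The real backend would use apcu_fetch() and apcu_cas() against shared memory.

```php
<?php

// Hypothetical stand-in for APCu shared memory; the real backend uses
// apcu_fetch()/apcu_store()/apcu_cas().
$apcu = [];

// set(): only write the full item when the data actually changed;
// otherwise just move the timestamp forward.
function fast_set(array &$apcu, string $key, $data, int $timestamp): void {
  $old_item = $apcu[$key] ?? NULL;
  if ($old_item === NULL || $old_item !== $data) {
    // Data changed (or first write): store the whole item.
    $apcu[$key] = $data;
    $apcu[$key . ':ts'] = $timestamp;
  }
  elseif (($apcu[$key . ':ts'] ?? 0) < $timestamp) {
    // Same data: only the timestamp needs refreshing. The real patch does
    // this with a compare-and-swap so concurrent writers cannot clobber
    // each other's newer timestamps.
    $apcu[$key . ':ts'] = $timestamp;
  }
}

// get(): read item and timestamp together, then overwrite the item's
// timestamp with the separately stored one - unless it is 0.
function fast_get(array $apcu, string $key): ?array {
  if (!isset($apcu[$key])) {
    return NULL;
  }
  $item = ['data' => $apcu[$key], 'created' => 0];
  $timestamp = $apcu[$key . ':ts'] ?? 0;
  if ($timestamp !== 0) {
    $item['created'] = $timestamp;
  }
  return $item;
}

// First write stores the full item; the second, identical write only
// refreshes the timestamp.
fast_set($apcu, 'foo', 'some data', 100);
fast_set($apcu, 'foo', 'some data', 200);
```

The point of the sketch is the second `fast_set()` call: because the data is unchanged, only the small timestamp entry is touched, never the full item.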
- 🇩🇪 Germany Fabianx
An optional optimization that the IS patch also made:
- Because we already retrieved the items from the fast backend, we can continue to make use of it.
No need to re-get the timestamp and cache_item when we just did the same before.
This is why ChainedFastBackend.php was patched to have an additional argument to the ->set().
It would probably be better to make this explicit with a different set() method for items that have just been retrieved from cache - if we wanted to do that.
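Such an explicit variant could look something like the following. This is only a sketch: the interface name, the method name `setFromRetrieved()`, and its signature are all hypothetical and not part of the patch.

```php
<?php

// Hypothetical: a separate entry point for items that were just retrieved
// from the fast backend, instead of overloading set() with an extra argument.
interface FastBackendSetInterface {

  /**
   * Normal write path: stores the full item.
   */
  public function set($cid, $data, $expire, array $tags = []);

  /**
   * Write path for items that were just read back from the fast backend.
   *
   * The caller passes the previously fetched item so the backend can skip
   * re-fetching it and, when the data is unchanged, only refresh the
   * timestamp.
   */
  public function setFromRetrieved($cid, object $previous_item, $expire, array $tags = []);

}
```

A separate method keeps the fast path self-documenting at the call site, rather than hiding the behavior change behind a boolean argument.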
- 🇩🇪 Germany Fabianx
Last to explain the lock:
The lock ensures that only one process ever writes to the same cache key at the same time.
This can again be done independently. It works like this:
- Retrieve the old value of the lock entry
- Create random value
- Do a CAS (compare-and-swap) on the old value

If we win the race to swap, then we will write the new entry.
Note: There can still be several items written after each other, but at least not concurrently.
Again, the check for whether the data is still the same can help here.
- 🇩🇪 Germany Fabianx
Here is the code for set() using just the lock mechanism:
```php
public function set($cid, $data, $expire = CacheBackendInterface::CACHE_PERMANENT, array $tags = []) {
  $key = $this->getApcuKey($cid);
  $lock_key = $key . ':apcu_update_lock';
  $results = apcu_fetch([$lock_key]);
  $old_lock_value = $results[$lock_key] ?? FALSE;
  // Ensure lock key exists.
  if ($old_lock_value === FALSE) {
    apcu_add($lock_key, 0);
    $old_lock_value = 0;
  }
  $lock_value = mt_rand(0, 2147483647);
  if (apcu_cas($lock_key, $old_lock_value, $lock_value)) {
    parent::set($cid, $data, $expire, $tags);
  }
}
```
This does raise the question of why APCu itself could not protect its apcu_store() mechanism in this way.
But it would take quite some time to get that into APCu, so this is still useful here.
- 🇩🇪 Germany Fabianx
Here is another optimization of it:
```php
public function set($cid, $data, $expire = CacheBackendInterface::CACHE_PERMANENT, array $tags = []) {
  $key = $this->getApcuKey($cid);
  $lock_key = $key . ':apcu_update_lock';
  $results = apcu_fetch([$lock_key]);
  $old_lock_value = $results[$lock_key] ?? FALSE;
  // Ensure lock key exists.
  if ($old_lock_value === FALSE) {
    // Initialize the lock with a random value using apcu_add().
    $old_lock_value = mt_rand(0, 2147483647);
    apcu_add($lock_key, $old_lock_value);
  }
  $lock_value = mt_rand(0, 2147483647);
  if (apcu_cas($lock_key, $old_lock_value, $lock_value)) {
    // Lock acquired, you can perform your update here.
    parent::set($cid, $data, $expire, $tags);
  }
}
```
Here is the ChatGPT analysis of this function:
You've highlighted an important aspect of how these APCu functions interact with the underlying data store, and you are correct in your understanding of the different lock mechanisms they employ.
### apcu_cas()
- `apcu_cas()` operates with a lighter locking mechanism.
- It only requires a reader lock on the entry.
- It performs a compare-and-swap (CAS) operation, which is atomic.
- This means that it can check the current value and, if it matches the expected value, update it in a single atomic operation.
- Because it only needs a reader lock, it doesn't block other reads from happening simultaneously.

### apcu_store()
- On the other hand, `apcu_store()` requires a writer lock.
- A writer lock is more restrictive; it blocks both other writers and readers until the operation is complete.
- This can lead to longer wait times and potential bottlenecks, especially under high load or when dealing with large data sets.

### Why Your Approach Is Beneficial
- By initializing the lock with `apcu_add()` (which uses `apcu_store()` internally but is only called once at the start), and then using `apcu_cas()` for subsequent lock acquisitions, you ensure that the heavier writer lock is acquired less frequently.
- Most of the synchronization relies on `apcu_cas()`, which uses the lighter reader lock, resulting in less contention and better performance.
- This is especially beneficial in a high-concurrency environment, where many processes are trying to acquire the lock simultaneously.

By understanding and leveraging the underlying mechanisms of these APCu functions, your approach efficiently reduces lock contention, leading to better performance and scalability. This kind of optimization is crucial in performance-critical applications and high-load scenarios.
As background reading.
- 🇩🇪 Germany Fabianx
I think this is independently useful as it replaces one write in 99% of concurrent cases with two reads and just one non-concurrent write plus one additional cache key.
- Assigned to kristiaanvandeneynde
- 🇧🇪 Belgium wim leers Ghent 🇧🇪🇪🇺
Catching up on long-forgotten tabs … from last year's European DrupalCon just before this year's starts
I think Kristiaan's work on VariationCache had given him the necessary deep insight in this area to assess the impact of Fabian's proposal.
- 🇬🇧 United Kingdom catch
No MR here, but since there is code, splitting the difference at Needs work.
- 🇧🇪 Belgium kristiaanvandeneynde Antwerp, Belgium
Okay so I've just read up on this issue and I think I understand the problem space and proposed solution.
While I'm onboard with the idea behind the proposed change, a few questions arise:
- Can changing the way we store things be considered a BC break? I.e.: Could people, for some reason, be relying on the current behavior and have targeted workarounds in place either in code or at the system level? If so, would this change break that?
- Do we have a way to properly test this? I currently only see unit tests for ChainedFastBackend using the PhpBackend and kernel tests using the MemoryBackend. For ApcuBackend we only have unit tests. The approach suggested here seems to be specific to APCu and I'm not sure we have sufficient testing to ensure everything remains working as is.
- Can we performance test this? The IS mentions Umami and we do have performance tests for that in Umami, but are they tailored to what's suggested here? If not, what would a new test case look like, keeping in mind PerformanceTestBase is a FunctionalJavascriptTest.
Having said all of the above, I do not see how this relates to VariationCache whatsoever, so I am unassigning myself from the issue :) As a subsystem maintainer, I am keeping an eye on this however.
- 🇬🇧 United Kingdom catch
> Can changing the way we store things be considered a BC break? I.e.: Could people, for some reason, be relying on the current behavior and have targeted workarounds in place either in code or at the system level? If so, would this change break that?
We should only make this change in a minor release, but otherwise I think it's completely fine - we never guarantee database schema, much less for things like the cache API.