Want to update the documentation and clean it up a bit, but then I think we're ready for a first version.
Yeah, I was about to suggest the same. It seems there is room for optimizations. I think the Pantheon team would contribute some of those after their initial testing phase.
1. Ok, that makes sense. I'm open to changing the implementation of \Drupal\redis\Cache\RedisCacheTagsChecksum::getTagInvalidationCounts too if there's a way to make it more performant. Note that it's sometimes called with a *lot* of cache tags (dozens), so dropping the `MGET` here might not be the best approach.
If these don't change too often, then switching to `GET` might be good for Relay, since it can do millions of lookups per second.
We could also just switch to `get()` in `getTagInvalidationCounts()` when it's a single key?
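A single-key fast path like that could look something like the sketch below. This is a hedged illustration, not the module's actual code: the function shape and the `$client` object (assumed to expose phpredis/Relay-style `get()` and `mget()`) are hypothetical.

```php
<?php

// Hedged sketch, not the module's actual implementation: use GET for a
// single cache tag so Relay can serve it from its in-memory cache, and
// keep MGET for the many-tag case.
function getTagInvalidationCounts(object $client, array $cacheTags): array {
  if (count($cacheTags) === 1) {
    $tag = reset($cacheTags);
    // GET returns false for a missing key; treat that as zero invalidations.
    return [$tag => (int) $client->get($tag)];
  }
  // MGET returns values in the order of the requested keys (false if missing).
  $values = $client->mget($cacheTags);
  return array_map('intval', array_combine($cacheTags, $values));
}
```

Casting the `false` returned for missing keys to `0` matches the point above that many tag keys simply never exist until something is invalidated.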
2. I see. There aren't that many cache getMultiple() requests with more than one key, at least on my demo site. You can see in Cache\Relay::getMultiple() that we optimize for that case: if it's just one key, we don't do a MULTI..EXEC. The only reason for the MULTI..EXEC is that we assume it's faster. Since multi-key requests happen pretty rarely, and if this can be better optimized in Relay, we can easily drop it and just always do separate hgetall requests.
Yes, in most cases you don't need `multi()` or `pipeline()` calls, especially if you want to leverage in-memory caching.
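Dropping the MULTI..EXEC branch entirely could look roughly like this. Again a hedged sketch under stated assumptions, not the module's actual code: the function name, key prefix `cache:`, and `$client` (assumed to mimic the phpredis/Relay `hGetAll()` API) are hypothetical.

```php
<?php

// Hedged sketch: always issue one hGetAll() per cache ID instead of
// wrapping them in MULTI..EXEC, on the assumption that Relay's in-memory
// cache makes individual round-trips cheap.
function getMultipleFromBin(object $client, array $cids): array {
  $items = [];
  foreach ($cids as $cid) {
    $hash = $client->hGetAll('cache:' . $cid);
    // hGetAll() returns an empty array for a missing key, i.e. a cache miss.
    if (!empty($hash)) {
      $items[$cid] = $hash;
    }
  }
  return $items;
}
```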
#17: No, I think the behavior is fine. Not quite sure why you'd limit it to 75% then? I'm pretty sure the recommendation for the Drupal module is going to be to use `lru`; we already add a warning on the report when using Redis without some sort of eviction policy.
We're gonna overhaul the eviction in 1.0 and probably make `lru` the default.
Regarding compression: using Relay's built-in compression and serialization is a lot faster than `gzcompress()` etc. If it's not a nightmare to adopt, I'd suggest using it.
We use ZSTD compression and igbinary serialization for most data that goes into Relay, because it usually reduces data size by ~75%. Is that something we can configure? Both are always available when running Relay:
```php
Relay::setOption(Relay::OPT_SERIALIZER, Relay::SERIALIZER_IGBINARY);
Relay::setOption(Relay::OPT_COMPRESSION, Relay::COMPRESSION_ZSTD);
Relay::setOption(Relay::OPT_COMPRESSION_LEVEL, -5);
```
Due to how cache tag invalidation works, it is quite common for those keys not to exist, and they will keep not existing until those things actually change. Happy to have a chat about how that works and whether there's a better way to do this.
Yeah, let's discuss this once everything is working.
As expected, I was able to fill up the memory cache quickly with a simple curl script that requested a dynamic number of pages from Drupal. I noticed that at 75% it stopped increasing, which matches the default config. Is it correct that noeviction will keep the initial data and not add anything new, but will keep working? I think the Redis noeviction policy works differently, in that it can block or error when trying to set more data. (Just trying to understand how things work.)
Correct: `noeviction` with Redis actually crashes the service, while Relay will just act as a proxy once its memory is full. We could change that behavior in 1.0 if you'd like Relay to hard-fail when the cache is full.
bendir, I'm brand new to Drupal and that seems above my pay grade. Do you know/recommend anyone with deep Drupal knowledge who we could hire to tackle these more Drupal-specific parts of the Relay integration?
Bendir, that'd be great. The Pantheon folks would love to test this.
I've run this locally and I'm getting no issues when setting the client to Relay.
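For anyone else testing locally, selecting the client happens in settings.php. A minimal sketch, assuming the redis module's `redis.connection` settings keys and that the new interface name ends up being `Relay` (check the module's README for the final values):

```php
<?php

// Hypothetical settings.php fragment for testing the Relay client with
// the Drupal redis module; the interface string may differ in the
// released version.
$settings['redis.connection']['interface'] = 'Relay';
$settings['redis.connection']['host'] = '127.0.0.1';
$settings['redis.connection']['port'] = 6379;
```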
@bradjones1: Do you reckon this will happen somewhat soon-ish, or unlikely?
Where can I see the Travis runs?
How can I run the tests locally?
Correct, no need to duplicate code: Relay is compatible; it just needs a different constructor and symbol name.
I'm trying to add an organization, but I'm unable to.
tillkruss made their first commit to this issue's fork.
tillkruss created an issue.