Explicitly support Relay (drop-in replacement for PhpRedis)

Created on 18 November 2022
Updated 29 May 2023

Problem/Motivation

Relay advertises that it is 2x faster than PhpRedis: https://relay.so/docs/1.x/installation

It is also API-compatible with PhpRedis, so it could work as a drop-in replacement. This might be just a matter of explicitly mentioning it in the README, and perhaps adding some test coverage?
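
As a sketch, enabling Relay could then look like any other client choice in settings.php (hedged: the 'Relay' interface value is exactly what this issue proposes to add; the other keys are the module's existing settings):

    // settings.php: select Relay as the client, mirroring the existing
    // 'PhpRedis'/'Predis' interface setting of the redis module.
    $settings['redis.connection']['interface'] = 'Relay';
    $settings['redis.connection']['host'] = '127.0.0.1';
    $settings['cache']['default'] = 'cache.backend.redis';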

Steps to reproduce

Proposed resolution

Remaining tasks

User interface changes

API changes

Data model changes

📌 Task

Status: Fixed
Version: 1.0
Component: Code
Created by: 🇺🇸United States bradjones1 Digital Nomad Life


Comments & Activities


  • First commit to issue fork.
  • @tillkruss opened merge request.
  • 🇺🇸United States bradjones1 Digital Nomad Life

    Thanks so much for your contribution!

    It looks like tests are only run for PHP 7.1–7.3; however, Relay requires PHP 7.4 or newer, so I omitted adding tests.

    Go ahead and add the tests and the project's maintainers can reconfigure the test runner. And/or, reviewers can run tests locally in the meantime.

  • Status changed to Needs work almost 2 years ago
  • 🇺🇸United States bradjones1 Digital Nomad Life

    I don't have time for a very detailed review right now, but a general observation on the MR is that there are a lot of boilerplate extended classes. Since Relay is API-compatible with PhpRedis (right?), I think subclassing should be limited to the areas where the drivers actually differ. Curious about your thoughts.

  • 🇨🇦Canada tillkruss

    Correct, no need to duplicate code, Relay is compatible, it just needs a different constructor and symbol name.
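
    To illustrate (a hedged sketch, not the MR's code): the only PhpRedis-specific pieces are the class name and the connection setup; the rest of the API is shared.

      use Relay\Relay;

      // Only the symbol and constructor differ from \Redis.
      $client = new Relay();
      $client->connect('127.0.0.1', 6379);

      // From here on, the PhpRedis-compatible API applies unchanged.
      $client->set('drupal:example', 'value');
      echo $client->get('drupal:example');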

  • 🇨🇦Canada tillkruss

    Where can I see the Travis runs?

    How can I run the tests locally?

  • 🇺🇸United States bradjones1 Digital Nomad Life

    So Drupal.org is moving to a GitLab instance for project management and CI; currently tests for this project can't run on the legacy CI ("DrupalCI") because the runner doesn't support installing PHP extensions, which is obviously required here.

    AFAIK the Travis config in this repo was perhaps for a fork hosted elsewhere, but I don't have the info on where.

    The GitLab migration is in beta and I have requested that this project be included in the early access program (see #3261803: Using GitLab CI instead of Drupal CI), so hopefully that will get picked up soon; the team at the Drupal Association is pretty good about including projects with unique requirements like this. I've also successfully migrated another project I maintain to GitLab CI, so I could help get that set up once we have access.

    Sorry this is a bit more convoluted than many open-source projects. Drupal has a unique ecosystem where modules get a lot of support from the "parent" project; however, the legacy of Drupal rolling its own project management system (this site) and CI (DrupalCI) dates to the days well before GitLab or even GitHub. It's really remarkable. The migration will make contributing to Drupal a lot more like contributing to projects you're used to elsewhere. (With some interesting tweaks, e.g. world-writable issue forks.)

    Anyway, there's some background and info on a path forward.

    Regarding running tests locally, I think Berdir (the maintainer of this module) has said he's been able to run them locally; however, I'm not sure how much time it's worth investing in that if we can get the cloud CI working in short order.

  • 🇨🇦Canada tillkruss

    @bradjones1: Do you reckon this will happen somewhat soon-ish, or unlikely?

  • 🇺🇸United States bradjones1 Digital Nomad Life

    Do you reckon this will happen somewhat soon-ish, or unlikely?

    The Drupal Association infra team has been adding new projects every week so I imagine this would be near-term.

  • 🇨🇭Switzerland berdir Switzerland

    The Travis tests were running on https://github.com/md-systems/redis, but that hasn't been working since Travis changed how they support open-source projects.

    Note that besides having GitLab CI enabled, it will also require someone to port the Travis script over to something that runs on GitLab CI to add the extensions, include a Redis container, and run the tests with the different supported integrations. I'd suggest opening an issue for this here once that has happened.

    That said, having that is not a blocker to getting this committed; we have never had tests on drupal.org and still (sometimes ;)) add improvements and new features. I can run tests locally, though I am relying on community members to test and verify the more advanced features, as we only use a pretty basic setup.

  • 🇨🇦Canada tillkruss

    Berdir, that'd be great. The Pantheon folks would love to test this.

    I've run this locally and I get no issues when setting the client to Relay.

  • 🇺🇸United States bradjones1 Digital Nomad Life

    FYI, GitLab CI now enabled for this project. #3261803-47: Using GitLab CI instead of Drupal CI

  • 🇨🇦Canada tillkruss

    Berdir, I'm brand new to Drupal and that seems above my pay grade. Do you know or recommend anyone with deep Drupal knowledge whom we could hire to tackle this more Drupal-specific Relay integration work?

  • 🇨🇭Switzerland berdir Switzerland

    Feel free to reach out to me through the contact form; see also https://opensource.md-systems.ch/en.

    That said, my comment shouldn't be that hard to implement, but it needs to be verified that what I said is actually true; I would need to set up Relay myself to check that.

    See https://api.drupal.org/api/drupal/core!core.api.php/group/cache/10#confi... on how specific cache bins can be set to a specific backend. Then in core.services.yml, cache bins can define that they want to use the ChainedFast backend by default in their service definition, like this:

      cache.bootstrap:
        class: Drupal\Core\Cache\CacheBackendInterface
        tags:
          - { name: cache.bin, default_backend: cache.backend.chainedfast }
    

    The condition for that is relatively few cache entries, many reads, and few writes. ChainedFast is \Drupal\Core\Cache\ChainedFastBackend; the documentation there explains quite well what it does, I think. It's a simpler implementation of what Relay does (based on five minutes of looking at the docs), with a much simpler invalidation mechanism (a single timestamp flag for the whole cache bin).

    If you use Relay, then I think you don't want to use that, as you'd end up caching that same data both in APCu and Relay.

    On the other side, some of the other cache bins on larger sites will happily write gigabytes of cache data (render, page, data, dynamic_page_cache, ...), and that data is often only infrequently used. You don't want to waste your memory (especially with the limited free version) on that stuff. So we want to introduce a setting that controls which bins should use the runtime memory and which shouldn't.

    We have an existing per-bin setting in \Drupal\redis\Cache\CacheBase::setPermTtl(), but we can also use an array like the database setting on the settings page; the string concatenation is a leftover from Drupal 7, when it was all flat strings. What we could do is have a default so that, if none are defined, we enable runtime memory only for the same cache bins that use ChainedFast by default, that is bootstrap, discovery, and config.

    And the final step is then documenting that setting, and that if you use Relay you need to explicitly overrule the default backend for those three bins so they use Redis directly (see the sketch below).
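
    A hedged settings.php sketch of that final step (the bin override keys are standard Drupal settings; the recommendation itself is the one described above):

      // Use Redis for everything by default.
      $settings['cache']['default'] = 'cache.backend.redis';

      // Explicitly overrule the chainedfast default for these three bins so
      // they go directly to Redis; with Relay, its in-memory cache takes the
      // place of the APCu front-end.
      $settings['cache']['bins']['bootstrap'] = 'cache.backend.redis';
      $settings['cache']['bins']['discovery'] = 'cache.backend.redis';
      $settings['cache']['bins']['config'] = 'cache.backend.redis';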

  • 🇨🇭Switzerland berdir Switzerland

    As discussed, I started working on this. Got Relay up and running and tested the current MR a bit. Pushed a small fix for the lock backend (a missing use statement) and also started adding some info to the reports page about Relay memory usage and eviction policy.

    As expected, I was able to fill up the memory cache quickly with a simple curl script that requested a dynamic number of pages from Drupal. I noticed that at 75% it stopped increasing, which matches the default config. Is it correct that noeviction will keep the initial data, not add anything new, but will keep working? Not sure, but I think the Redis noeviction policy works differently, in that it can block or error when trying to set more data. (Just trying to understand how things work.)

    I also verified, per above, that explicitly setting redis as the backend for those three default bins will skip the ChainedFastBackend; I will work on documentation for that later.

    Using MONITOR on the Redis backend, I did notice that a bunch of Redis queries still go through even though I restarted and my current memory usage is far from full (2.92 MB / 32 MB memory usage, eviction policy: noeviction). Most of them are cache tag counts, which are not yet set and therefore don't exist as keys. Is it possible that the Relay cache doesn't cache non-existing keys?

    The MONITOR output looks like this on the Umami demo install frontpage:

    1675288006.634677 [0 172.24.0.2:52032] "MGET" "prefix:cachetags:x-redis-bin:container"
    1675288006.640927 [0 172.24.0.2:52032] "MGET" "prefix:cachetags:x-redis-bin:config"
    1675288006.641924 [0 172.24.0.2:52032] "MULTI"
    1675288006.642371 [0 172.24.0.2:52032] "HGETALL" "prefix:config:language.entity.en"
    1675288006.642388 [0 172.24.0.2:52032] "HGETALL" "prefix:config:language.entity.es"
    1675288006.642394 [0 172.24.0.2:52032] "HGETALL" "prefix:config:language.entity.und"
    1675288006.642411 [0 172.24.0.2:52032] "HGETALL" "prefix:config:language.entity.zxx"
    1675288006.642414 [0 172.24.0.2:52032] "EXEC"
    1675288006.642647 [0 172.24.0.2:52032] "MULTI"
    1675288006.643065 [0 172.24.0.2:52032] "HGETALL" "prefix:config:language.en:language.entity.en"
    1675288006.643077 [0 172.24.0.2:52032] "HGETALL" "prefix:config:language.en:language.entity.es"
    1675288006.643083 [0 172.24.0.2:52032] "HGETALL" "prefix:config:language.en:language.entity.und"
    1675288006.643090 [0 172.24.0.2:52032] "HGETALL" "prefix:config:language.en:language.entity.zxx"
    1675288006.643100 [0 172.24.0.2:52032] "EXEC"
    1675288006.643450 [0 172.24.0.2:52032] "MGET" "prefix:cachetags:x-redis-bin:discovery"
    1675288006.646313 [0 172.24.0.2:52032] "MGET" "prefix:cachetags:route_match" "prefix:cachetags:x-redis-bin:data"
    ...

    The only non-cachetag requests are those configs; not sure what's up with those.

    Due to how cache tag invalidation works, it is quite common for those keys to not exist and they will keep not existing until these things actually change. Happy to have a chat on how that works and if there's a better way to do this.

    I did not yet add the setting for per-bin configuration of the in-memory cache; that's next on my list.

  • 🇨🇦Canada tillkruss

    As expected, I was able to fill up the memory cache quickly with a simple curl script that requested a dynamic number of pages from Drupal. I noticed that at 75% it stopped increasing, which matches the default config. Is it correct that noeviction will keep the initial data, not add anything new, but will keep working? Not sure, but I think the Redis noeviction policy works differently, in that it can block or error when trying to set more data. (Just trying to understand how things work.)

    Correct, `noeviction` with Redis actually crashes the service, while Relay will just act as a proxy once the memory is full. We could change that behavior in 1.0 if you'd like Relay to hard-fail when the cache is full.

  • 🇨🇦Canada tillkruss

    Due to how cache tag invalidation works, it is quite common for those keys to not exist and they will keep not existing until these things actually change. Happy to have a chat on how that works and if there's a better way to do this.

    Yeah, let's discuss this once everything is working.

  • 🇨🇦Canada tillkruss

    As expected, I was able to fill up the memory cache quickly with a simple curl script that requested a dynamic number of pages from Drupal. I noticed that at 75% it stopped increasing, which matches the default config. Is it correct that noeviction will keep the initial data, not add anything new, but will keep working? Not sure, but I think the Redis noeviction policy works differently, in that it can block or error when trying to set more data. (Just trying to understand how things work.)

    We use ZSTD compression and igbinary serialization for most data that goes into Relay, because it usually reduces the data size by ~75%. Is that something we can configure? Both are always available when running Relay.

    // setOption() is an instance method, as in PhpRedis:
    $relay->setOption(Relay::OPT_SERIALIZER, Relay::SERIALIZER_IGBINARY);
    $relay->setOption(Relay::OPT_COMPRESSION, Relay::COMPRESSION_ZSTD);
    $relay->setOption(Relay::OPT_COMPRESSION_LEVEL, -5);
    
  • 🇺🇸United States Michael Grunder

    Hi Berdir, I'm writing Relay with Till.

    Relay will cache non-existent keys, but I think you're running into edge cases.

    1. Relay doesn't cache `nil` replies in `MGET` because technically this could mean the key doesn't exist, or it's a key of a different type. It would be quite easy to add that as an option if it would be useful.
    2. Relay won't read from the in-memory cache inside of `MULTI`..`EXEC` blocks. The reasoning is that we can't provide the same atomicity guarantees that Redis can for transactions. This could also be made into an option if that would be useful.

    Cheers!
    Mike

  • 🇨🇭Switzerland berdir Switzerland

    #17: No, I think the behavior is fine. Not quite sure why you'd limit it to 75% then? I'm pretty sure the recommendation for the Drupal module is going to be to use lru; we already add a warning on the report page when using Redis without some sort of eviction.

    #19: The redis module currently has its own compression and serialization implementation. We do support igbinary through the serializer that can be injected, and I also added a setting to only compress data over a certain length; the logic for that is in \Drupal\redis\Cache\CacheBase::createEntryHash() (see the sketch below). If you think it's more efficient to just use those settings and let Relay handle it, then we could also override that method as well as \Drupal\redis\Cache\CacheBase::expandEntry(). We do set flags for both serialized and compressed entries, so it's a setting that can be changed without breaking your site; how do your options handle fetching data that hasn't been compressed/serialized, or was compressed/serialized differently?
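
    A rough sketch of the length-gated compression mentioned above (variable and flag names are illustrative, not the module's exact code):

      // Inside something like CacheBase::createEntryHash(), roughly:
      // only compress payloads above a configurable threshold, and flag
      // the entry so reads know whether to decompress.
      $serialized = serialize($data);
      if ($compress_length > 0 && strlen($serialized) > $compress_length) {
        $serialized = gzcompress($serialized, $compress_level);
        $entry['gz'] = TRUE;
      }
      $entry['data'] = $serialized;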

    Re #20

    1. OK, that makes sense. I'm open to changing the implementation of \Drupal\redis\Cache\RedisCacheTagsChecksum::getTagInvalidationCounts() too if there's a way to make it more performant. Note that sometimes it's called with a *lot* of cache tags (dozens), so dropping the mget here might not be the best approach.
    2. I see. There aren't that many cache getMultiple() requests, apparently just that one on my demo site. You can see in Cache\Relay::getMultiple() that our goal is to optimize for that, and if it's just one key we don't do a MULTI..EXEC; the only reason for that is that we assume it's faster. Since it happens pretty rarely, if it can be better optimized in Relay we can easily drop that and just always do separate HGETALL requests.

  • 🇨🇦Canada tillkruss

    Regarding compression: using Relay's built-in compression and serialization is a lot faster than `gzcompress()` etc. If it's not a nightmare to adopt, I'd suggest using it.

  • 🇨🇦Canada tillkruss

    #17: No, I think the behavior is fine. Not quite sure why you'd limit it to 75% then? I'm pretty sure the recommendation for the Drupal module is going to be to use lru; we already add a warning on the report page when using Redis without some sort of eviction.

    We're gonna overhaul the eviction in 1.0 and probably make `lru` the default.

  • 🇨🇦Canada tillkruss

    1. OK, that makes sense. I'm open to changing the implementation of \Drupal\redis\Cache\RedisCacheTagsChecksum::getTagInvalidationCounts() too if there's a way to make it more performant. Note that sometimes it's called with a *lot* of cache tags (dozens), so dropping the mget here might not be the best approach.

    If these don't change too often, then switching to `GET` might be good for Relay, since it can do millions of lookups per second.

    We could also just switch to `get()` in `getTagInvalidationCounts()` when it's a single key?

    2. I see. There aren't that many cache getMultiple() requests, apparently just that one on my demo site. You can see in Cache\Relay::getMultiple() that our goal is to optimize for that, and if it's just one key we don't do a MULTI..EXEC; the only reason for that is that we assume it's faster. Since it happens pretty rarely, if it can be better optimized in Relay we can easily drop that and just always do separate HGETALL requests.

    Yes, in most cases you don't need `multi()` or `pipeline()` calls, especially if you want to leverage in-memory caching.

  • Status changed to Needs review almost 2 years ago
  • 🇨🇭Switzerland berdir Switzerland

    Some more work done on this.

    * I tested the built-in serializer. The problem is that it serializes everything, including all the metadata we have on each cache item; I guess it even serializes the cache tags that we use incr() on, which wouldn't work, but I would need to double-check that. The methods to do that manually on the Relay object are underscore-prefixed, so I guess they are considered internal and should not be used directly?
    * I removed the optimization for fetching multiple cache items at once. There's also the option to only do that for bins that are stored in memory; I just thought of that, and it might be a good idea if a pipeline really is faster.
    * I added support to control which bins are stored in memory; it defaults to ['container', 'bootstrap', 'config', 'discovery'].
    * I changed the mget for a single cache tag to a get() for now; that seems to work fine and at least caches those (see the sketch below). There can be many of them: each piece of content has its own cache tag that is checked when a page contains it, so a site with 100k nodes will eventually also attempt to check those 100k cache tags.
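
    A sketch of that single-tag change (modeled on \Drupal\redis\Cache\RedisCacheTagsChecksum::getTagInvalidationCounts(); the helper names are illustrative):

      // In a subclass of \Drupal\redis\Cache\RedisCacheTagsChecksum:
      protected function getTagInvalidationCounts(array $tags): array {
        $keys = array_map([$this, 'getTagKey'], $tags);
        // A single tag: use GET so Relay can cache the (possibly missing)
        // key in memory; nil replies inside MGET are not cached (see #20).
        if (count($keys) === 1) {
          return [(int) $this->client->get(reset($keys))];
        }
        return array_map('intval', $this->client->mget($keys));
      }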

    Want to update the documentation and clean it up a bit but then I think we're ready for a first version.

    There's one more thing that I noticed while testing, and that's the special entry we have for marking a bin as deleted. Even for bins that we don't store in memory, this specific entry would be quite useful to keep, as it is requested once per request per used bin and very rarely changes. It looks like this:

    For dynamic page cache, with many cache tags:

    1675414050.584254 [0 127.0.0.1:50894] "HGETALL" "dc:dynamic_page_cache:response:[request_format]=html:[route]=entity.node.canonical198557dcd26840d5cc8d4f65f4b1e0d08744cee020108f9e37569130adba05cd"
    1675414050.584485 [0 127.0.0.1:50894] "MGET" "dc:cachetags:config:block_list" "dc:cachetags:config:block.block.olivero_help" "dc:cachetags:config:block.block.olivero_page_title" "dc:cachetags:config:block.block.olivero_primary_admin_actions" "dc:cachetags:config:block.block.olivero_primary_local_tasks" "dc:cachetags:config:block.block.olivero_secondary_local_tasks" "dc:cachetags:config:block.block.olivero_messages" "dc:cachetags:config:block.block.olivero_syndicate" "dc:cachetags:config:block.block.olivero_search_form_narrow" "dc:cachetags:config:block.block.olivero_main_menu" "dc:cachetags:config:block.block.olivero_search_form_wide" "dc:cachetags:config:block.block.olivero_account_menu" "dc:cachetags:config:block.block.olivero_breadcrumbs" "dc:cachetags:config:block.block.olivero_content" "dc:cachetags:config:block.block.olivero_powered" "dc:cachetags:config:block.block.olivero_site_branding" "dc:cachetags:block_view" "dc:cachetags:node_view" "dc:cachetags:node:4" "dc:cachetags:user:1" "dc:cachetags:config:shortcut_set_list" "dc:cachetags:local_task" "dc:cachetags:config:system.menu.account" "dc:cachetags:config:search.settings" "dc:cachetags:config:system.menu.main" "dc:cachetags:config:system.site" "dc:cachetags:config:system.theme" "dc:cachetags:config:system.menu.admin" "dc:cachetags:rendered" "dc:cachetags:user:0" "dc:cachetags:x-redis-bin:dynamic_page_cache"
    1675414050.584944 [0 127.0.0.1:50894] "GET" "dc:dynamic_page_cache:_redis_last_delete_all"
    

    For render, with just a few cache tags:

    1675414050.626196 [0 127.0.0.1:50894] "HGETALL" "dc:render:shortcut_set_toolbar_links:[languages:language_interface]=en:[theme]=olivero:[user]=1"
    1675414050.626362 [0 127.0.0.1:50894] "MGET" "dc:cachetags:config:shortcut.set.default" "dc:cachetags:config:user.role.authenticated" "dc:cachetags:config:user.role.administrator" "dc:cachetags:x-redis-bin:render"
    1675414050.626476 [0 127.0.0.1:50894] "GET" "dc:render:_redis_last_delete_all"
    

    One idea I had is that we switch to using OPT_ALLOW_PATTERNS; then we could explicitly include just that key for those bins (see the sketch below). Thoughts?
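
    For reference, a hedged sketch of the pattern mechanics (addIgnorePatterns() is the Relay >= 0.6.0 helper discussed later in this thread; the pattern itself is illustrative):

      // Exclude a high-churn bin from Relay's in-memory cache by glob
      // pattern. An allow-list (OPT_ALLOW_PATTERNS) would be the inverse:
      // cache only selected keys, such as the per-bin delete marker.
      $relay->addIgnorePatterns('dc:dynamic_page_cache:*');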

  • 🇨🇦Canada tillkruss

    Want to update the documentation and clean it up a bit but then I think we're ready for a first version.

    Yeah, I was about to suggest the same. It seems there is room for optimizations. I think the Pantheon team would contribute some of those after their initial testing phase.

  • 🇨🇭Switzerland berdir Switzerland

    Restructured the main README.md completely and also added an example settings.php file/snippet.

    One last topic that's not quite clear to me is persistent connections, in general and specifically also with Relay. https://relay.so/docs/1.x/connections#authentication says that Relay always uses persistent connections, but the Relay.php client implementation checks $persistent and uses pconnect() or connect(). Maybe the connect()/pconnect() methods exist for backwards compatibility, but if Relay really always uses persistent connections, then our configuration should maybe make that clear and document why it ignores the persistent configuration setting?

    (In general, I don't quite understand whether there's a downside to persistent connections and whether there are cases where they should not be used, specifically also with the other clients.)

    > f023e649 - use Relay v0.6.0 `addIgnorePattern()` helper

    Nice, I was wondering if something like that should be provided. We should probably mention in the docs that at least version 0.6 must be used.

    I'll create a follow-up with some of the ideas here and then commit this in the next few days.

  • 🇺🇸United States Michael Grunder

    One last topic that's not quite clear to me is persistent connections, in general and specifically also with Relay

    As you noted, Relay defaults to using persistent connections, since Redis uses the connection itself to manage which keys the given client may have cached. Whenever the socket disconnects (either manually, or as a result of an error), Redis will clean up its database of our cached keys, and Relay will also flush its in-memory cache.

    I don't quite understand whether there's a downside to persistent connections and whether there are cases where they should not be used, specifically also with the other clients.

    With a typical setup there's really no downside to persistent connections, and disabling them would be harmful to the performance of client key caching. That said, they can be disabled by default with the relay.default_pconnect ini setting. This could be useful if Relay's in-memory cache is disabled.

    We still have `pconnect` and `pclose` mostly for compatibility, but also so connections can be actually closed without having to restart fpm.
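
    A small sketch of the difference (Relay's connect()/pconnect() mirror PhpRedis; host and port are placeholders):

      use Relay\Relay;

      $relay = new Relay();
      // Persistent: the socket, and with it Relay's in-memory cache,
      // survives across requests within the same FPM worker.
      $relay->pconnect('127.0.0.1', 6379);

      // A plain connect() would tear the link down at request end; Redis
      // then drops its record of the client's tracked keys, and Relay
      // flushes its in-memory cache.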

  • Status changed to Fixed almost 2 years ago
  • 🇨🇭Switzerland berdir Switzerland

    Created 📌 Set up GitlabCI (Fixed) and 📌 Relay follow-ups and ideas (Active), added a note about the version, and merged.

  • 🇨🇭Switzerland berdir Switzerland
    • Berdir committed f1626880 on 8.x-1.x
      Revert "Issue #3322514 by tillkruss, Berdir, Michael Grunder: Explicitly...
  • Status changed to Active almost 2 years ago
  • 🇨🇭Switzerland berdir Switzerland

    I had to revert this: I realized that I had introduced a major regression in the cache tag handling. I also don't seem to have addIgnorePattern(); I'm getting:

    Error: Call to undefined method Relay\Relay::addIgnorePattern()

    I am on 0.6:

    $ php --ri relay
    
    relay
    
    Relay Support => enabled
    Relay Version => 0.6.0
    
  • Status changed to Needs work almost 2 years ago
  • 🇨🇭Switzerland berdir Switzerland
  • 🇮🇱Israel heyyo Jerusalem

    I asked the wodby maintainer to add PHP Relay to docker4drupal :-)

    https://github.com/wodby/docker4drupal/issues/541

  • 🇨🇭Switzerland berdir Switzerland

    I do suppose that would be the easiest solution, also for testing it on GitLab CI, as the drupalspoon templates use that Docker container too.

  • @berdir opened merge request.
  • 🇸🇮Slovenia KlemenDEV

    Just wanted to say this is amazing work being done here, this will be really neat :)

  • Status changed to Needs review almost 2 years ago
  • 🇨🇭Switzerland berdir Switzerland

    We have kinda-passing tests now on the new MR, although it seems to be troubled a bit by random failures, not just on Relay but on the other clients too. I think it's due to the use of REQUEST_TIME and slow test execution, so that it takes more than 3s to get to that specific test. The test is in core, so there's little we can do about that except duplicate the whole method.

    I also had to revert the usage of addIgnorePattern(); it doesn't seem to work for me on 0.6, and the script I used for the Docker container apparently still installs only 0.5.1. Ah, I just realized why: the method name is addIgnorePatterns(), with an 's'. It's even wrong in the release notes; I pinged Till about it. It's just a convenience thing; we can reintroduce it later once we get the latest version on GitLab CI as well.

    • Berdir committed a5214f81 on 8.x-1.x
      Issue #3322514 by Berdir, tillkruss, Michael Grunder: Explicitly support...
  • Status changed to Fixed almost 2 years ago
  • 🇨🇭Switzerland berdir Switzerland

    OK, let's give this another try; I'm feeling more confident now with automated tests set up. I added a note about addIgnorePatterns() to the follow-up. As written to Till on Slack, the reason the tests use the old version is that the metadata URL for the latest version that the installer script relies on is outdated (https://builds.r2.relay.so/meta/latest).

  • 🇺🇸United States bradjones1 Digital Nomad Life

    Thanks everyone for the hard work on this. I'm glad to have played a small part in helping to connect the various parties early on. Awesome collaboration.

    Re: random failures on GitLab CI related to timing: I can relate to this and it is a legitimate concern. I'm not sure if it's due to GitLab CI runs being "faster" or what, but I had similar issues with Simple OAuth, where sometimes the returned token would be seen as not-yet-valid because it was issued and used within the same "second". I hate putting wait conditions into test cases, yet when it comes to timing in a closed system there are sometimes no great alternatives.

    Looking forward to using this!

  • 🇮🇱Israel heyyo Jerusalem

    Did anyone make any benchmarks to see the performance improvement of Relay compared to ChainedFastBackend?
    Also, I see that Relay is free up to 32 MB.
    If someone were ready to pay for the paid option, are there any recommendations on how to configure it?

  • 🇨🇭Switzerland berdir Switzerland

    No profiling, but it's definitely better in several respects: it doesn't need a Redis lookup once per bin and page to check whether it's up to date, and invalidations should also be much more efficient. You might get a few extra cache tag lookups, but core will do that too with 🐛 ChainedFastBackend invalidates all items when cache tags are invalidated (Fixed).

  • 🇺🇸United States bradjones1 Digital Nomad Life

    Also I see that Relay is free up to 32MB.

    All the components are open-source... are you talking about a particular implementation?

  • 🇮🇱Israel heyyo Jerusalem

    I saw this information on the relay.so website:
    https://relay.so/#download

    But 32 MB may be completely enough if it's only for a few bins; wodby, for example, sets 32 MB for APCu by default.

  • 🇺🇸United States bradjones1 Digital Nomad Life

    Ah OK, thanks for the pointer. That's an interesting business model, for sure. Yeah, your thoughts are the same as mine initially: is 32 MB even much of a restriction?

  • 🇸🇮Slovenia KlemenDEV

    Are there any guidelines regarding the recommended memory?

    We use 1.5 GB for Redis on one of our websites (100k nodes, though, at 95% memory utilization), and now, hearing that 32 MB may be enough, I got a bit confused.

  • 🇨🇭Switzerland berdir Switzerland

    Relay doesn't replace the Redis server, it replaces the client. Like #46 said, it's kind of like APCu.

    I'd expect it should be enough too, but if you have more, you can always use it for more bins.

  • Automatically closed - issue fixed for 2 weeks with no activity.

  • Status changed to Fixed over 1 year ago
  • 🇮🇳India Anul Delhi

    We are already using Redis on our site with the PhpRedis client. Due to a performance issue, we are trying to use Relay. We followed this document (📌 Explicitly support Relay (drop-in replacement for PhpRedis), Fixed) to install Relay, but we are running into the issue below:

    Fatal error: Uncaught ArgumentCountError: Too few arguments to function Relay\RequestHandler::__construct(), 0 passed in /var/www/html/docroot/modules/contrib/redis/src/Client/Relay.php on line 14 and at least 1 expected in /var/www/html/vendor/relay/relay/src/RequestHandler.php:31
    thrown in /var/www/html/vendor/relay/relay/src/RequestHandler.php on line 31

    We are using a ddev setup for our site. Any help with installing Relay would be appreciated.

    Thanks in advance.

  • 🇨🇭Switzerland berdir Switzerland

    It looks like you installed a Composer package called relay/relay; that has nothing to do with this. Relay is a PHP extension. See https://relay.so/docs/1.x/installation for installation instructions.

    For ddev, you need to customize the Dockerfile; see https://ddev.readthedocs.io/en/stable/users/extend/customizing-images/#a.... I can try to share the specific command I put in there later.

  • 🇮🇳India kalpanajaiswal

    I followed this doc https://relay.so/docs/1.x/installation#using-docker for the installation, but I am getting this error:
    "PHP message: Error: Class "Relay\Relay" not found in /var/www/html/docroot/modules/contrib/redis/src/Client/Relay.php on line 14 #0 /var/www/html/docroot/modules/contrib/redis/src/ClientFactory.php(186): Drupal\redis\Client\Relay->getClient('redis', 6379, NULL, NULL, Array, false)
