Introduce Adapters instead of different cache/queue/lock classes per backend

Created on 10 September 2019
Updated 25 July 2025

Problem/Motivation

Maintaining the different implementations and adding both new backends (such as cluster) and new features (such as a key-value store) is complex, and requires providing many classes with only slight variations.

Adding new connection options such as TLS or multiple servers is also challenging because of how the configuration is passed to the factory.

Steps to reproduce

Proposed resolution

Explore an adapter-based approach. Identify the cases that vary between implementations, and also allow any command to be passed through directly.

The adapter and the interface would need to be internal; at the same time, a goal is to allow additional implementations, so extending them must remain an option to support new cases.

Find a better solution for the factory and its configuration, maybe a factory-of-factories pattern with tagged services.
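The factory-of-factories idea could look roughly like the following sketch. All names here (ClientFactoryInterface, ClientFactoryRegistry, FakePhpRedisFactory, the 'interface' settings key) are hypothetical stand-ins, not the module's actual API; in Drupal the registry would presumably be populated from tagged services rather than manual register() calls.

```php
<?php

// Hypothetical sketch: per-backend factories behind one registry.
interface ClientFactoryInterface {
  public function create(array $settings): object;
}

// Stand-in for a phpredis factory; a real one would connect \Redis here,
// honoring host/port/TLS options from the settings array.
final class FakePhpRedisFactory implements ClientFactoryInterface {
  public function create(array $settings): object {
    return (object) [
      'backend' => 'phpredis',
      'host' => $settings['host'] ?? '127.0.0.1',
    ];
  }
}

// The "factory factory": picks the concrete factory by name. In Drupal this
// registry could be populated from tagged services instead of register().
final class ClientFactoryRegistry {
  /** @var array<string, ClientFactoryInterface> */
  private array $factories = [];

  public function register(string $name, ClientFactoryInterface $factory): void {
    $this->factories[$name] = $factory;
  }

  public function create(array $settings): object {
    $name = $settings['interface'] ?? 'phpredis';
    if (!isset($this->factories[$name])) {
      throw new \InvalidArgumentException("Unknown client interface: $name");
    }
    return $this->factories[$name]->create($settings);
  }
}

$registry = new ClientFactoryRegistry();
$registry->register('phpredis', new FakePhpRedisFactory());
$client = $registry->create(['interface' => 'phpredis', 'host' => 'redis.example']);
echo $client->backend, "\n";
```

New connection options (TLS, multiple servers) would then only need to be understood by the factory that cares about them, since each factory receives the whole settings array.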

Remaining tasks

User interface changes

API changes

Data model changes

📌 Task
Status

Active

Version

1.0

Component

Code

Created by

berdir (🇨🇭 Switzerland)


Merge Requests

Comments & Activities

Note: some issue and comment data are missing; this issue likely predates Contrib.social.

  • berdir (🇨🇭 Switzerland)

    berdir changed the visibility of the branch 3080449-deprecate-clientinterface-and to hidden.

  • Merge request !61 "introduce client adapters" (Open), created by berdir
  • berdir (🇨🇭 Switzerland)

    Started a proof of concept: all cache classes are merged into one, and getClient() now actually returns an instance of ClientInterface, which has some new methods for special cases (really just multi/pipeline-related things at the moment).

    Currently I'm using __call() to pass all methods that are not explicitly defined straight through to the actual client. I'm not sure yet whether I want to rely on that (perhaps with @method annotations on the interface) or explicitly define all the methods used in our implementations and keep the pass-through only as a fallback.

    Still need to update everything else and refactor the factory part.
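As a rough illustration of the __call() pass-through described above, here is a minimal sketch. FakeClient and ClientAdapter are hypothetical stand-ins for the real client and adapter classes, so the example runs without a Redis server.

```php
<?php

// Stand-in for \Redis or Predis\Client, so the sketch is self-contained.
final class FakeClient {
  public function get(string $key): string {
    return "value:$key";
  }
}

/**
 * @method string get(string $key)  Documents pass-through methods for IDEs.
 */
final class ClientAdapter {
  public function __construct(private object $client) {}

  // Any method not explicitly defined on the adapter is forwarded
  // to the wrapped client.
  public function __call(string $name, array $arguments): mixed {
    return $this->client->$name(...$arguments);
  }
}

$adapter = new ClientAdapter(new FakeClient());
echo $adapter->get('foo'), "\n"; // value:foo
```

The @method annotation keeps static analysis and autocomplete working even though the method only exists on the wrapped client.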

  • berdir (🇨🇭 Switzerland)

    One thing I'm not sure about yet is pipelines. I see two options:

    * Add specific methods. That worked well for cache: there are just two and they are fairly generic. The reliable queue, however, has some very specific ones.
    * Add our own generic version to the client adapter, with our own class to pass commands through. The cluster version could then, for example, skip the actual pipeline and send the commands along separately.
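The second option could be sketched roughly like this, using a hypothetical CommandBuffer that records commands and lets a cluster-style backend replay them as separate calls instead of a real pipeline. All names are illustrative, and EchoClient is a stub so the sketch runs without a Redis server.

```php
<?php

// Hypothetical generic pipeline object owned by the module: buffers commands,
// and each backend decides how to execute them.
final class CommandBuffer {
  /** @var list<array{string, array}> */
  private array $commands = [];

  public function add(string $command, array $arguments = []): static {
    $this->commands[] = [$command, $arguments];
    return $this;
  }

  // Cluster-style execution: no real pipeline, just sequential calls.
  public function executeSequentially(object $client): array {
    $results = [];
    foreach ($this->commands as [$command, $arguments]) {
      $results[] = $client->$command(...$arguments);
    }
    return $results;
  }
}

// Stub client that just echoes back the command it received.
final class EchoClient {
  public function __call(string $name, array $arguments): string {
    return $name . ':' . implode(',', $arguments);
  }
}

$buffer = (new CommandBuffer())
  ->add('set', ['foo', 'bar'])
  ->add('expire', ['foo', 60]);
print_r($buffer->executeSequentially(new EchoClient()));
```

A non-cluster variant could flush the same buffer through a single phpredis or predis pipeline instead of looping.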

  • berdir (🇨🇭 Switzerland)

    This is now passing tests. I want to improve the docs and so on, but it seems to be working pretty well.

    The multi handling was easier than expected. I just can't use chained methods; non-chained multi() seems to work fine for both phpredis and predis. I might remove some or all of the methods I had to add and use non-chained multi() or pipeline calls instead.
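The non-chained multi() pattern described here might look like the following sketch. FakeTransaction is a stub standing in for the phpredis/predis transaction object, so this runs without a server; the shape is illustrative only.

```php
<?php

// Stub transaction object: each command is held in a variable and called
// separately (non-chained), then exec() runs the queued commands.
final class FakeTransaction {
  private array $queued = [];

  // Queued commands are intercepted here; a real client would buffer them
  // and send them atomically on exec().
  public function __call(string $name, array $arguments): void {
    $this->queued[] = $name;
  }

  public function exec(): array {
    return $this->queued;
  }
}

$tx = new FakeTransaction();
$tx->set('foo', 'bar');
$tx->incr('counter');
print_r($tx->exec()); // queued command names: set, incr
```

Holding the transaction in a variable instead of chaining avoids depending on each client library returning $this from every queued command.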

  • acbramley (🇦🇺 Australia)

    Tested this and put together a cluster version, using docker-compose and https://github.com/Grokzen/docker-redis-cluster

    https://git.drupalcode.org/project/redis/-/merge_requests/62

    Docker container:

      redis:
        image: grokzen/redis-cluster:latest
        network_mode: service:nginx
        environment:
          IP: '127.0.0.1'
    

    Nginx ports:

          - '7000-7005:7000-7005'
          - '5000-5010:5000-5010'
    

    It seems to be working well; the info page shows information similar to 1.9 plus the cluster patch.

  • Pipeline #570643 finished with Success (total: 580s)
  • berdir (🇨🇭 Switzerland)

    The current pipeline approach might be a bit tedious for cluster, but I still think it should be possible. The alternatives would either mean a lot of code for all implementations (have a pipeline class for every implementation and force pipelines through it) or likely just as much complexity for predis (something like an executePipeline(array $commands) function).

    I'm tempted to start with this approach; if it turns out to be too complicated for cluster, I'm open to changing it.
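The executePipeline(array $commands) alternative mentioned above could have roughly this shape. The interface and the sequential cluster-style implementation are hypothetical sketches, with a stub client instead of a real connection.

```php
<?php

// Hypothetical interface: the adapter receives the whole command list at
// once, so each backend decides how to run it (a real pipeline for
// phpredis/predis, sequential commands for cluster).
interface PipelineCapableInterface {
  /** @param list<array{string, array}> $commands */
  public function executePipeline(array $commands): array;
}

// Cluster-style variant: no pipeline, just run commands one after another.
final class SequentialPipeline implements PipelineCapableInterface {
  public function __construct(private object $client) {}

  public function executePipeline(array $commands): array {
    $results = [];
    foreach ($commands as [$command, $arguments]) {
      $results[] = $this->client->$command(...$arguments);
    }
    return $results;
  }
}

// Stub client so the sketch runs without a Redis server.
final class UpperClient {
  public function __call(string $name, array $arguments): string {
    return strtoupper($name);
  }
}

$pipeline = new SequentialPipeline(new UpperClient());
print_r($pipeline->executePipeline([['set', ['k', 'v']], ['del', ['k']]]));
```

Passing the whole command array up front is what makes the cluster case simple: the backend never has to pretend a buffered pipeline object exists.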
