I figured it out: the prebuilt Docker images from https://hub.docker.com/_/drupal have Drupal baked in. If you're doing something like this:
https://github.com/davidbarratt/davidwbarratt/blob/8558d6d1e0be3d797bc3a...
then you may end up with an older version of Drupal baked in, which would normally be fine since you're probably copying everything over anyway:
https://github.com/davidbarratt/davidwbarratt/blob/8558d6d1e0be3d797bc3a...
but Docker's COPY does not remove files that are not in the source.
To get around this, I could either specify the same version in the image (e.g. 10.4) or remove the baked-in copy:
https://github.com/davidbarratt/davidwbarratt/commit/8558d6d1e0be3d797bc...
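For the second option, this is roughly the shape of the Dockerfile change (a sketch, not the exact commit: drupal:11 stands in for whatever tag you use, and /opt/drupal is where the official image keeps its Composer project, with /var/www/html symlinked to /opt/drupal/web):

    FROM drupal:11

    # COPY only overlays files on top of whatever is already in the image; it
    # never deletes files that exist only in the destination. Clear out the
    # baked-in project first so nothing from the image's Drupal version survives.
    RUN rm -rf /opt/drupal/*

    # Now the final image contains only what we copy in.
    COPY . /opt/drupal/

The first option is even simpler: FROM drupal:10.4 (or whatever version your composer.lock pins), so the baked-in files are the same version you copy over them anyway.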
@cilefen hmm, this is odd: it does not exist when I blow out the folder and do a composer install, but it does exist in the Docker container, which does the same thing... this might be a caching problem, I suppose.
I'm having this problem, and I upgraded using Composer...
I can confirm that the attached patch fixes the issue. I am unable to re-open the issue, but this should be RTBC!
I installed this module in a fresh Drupal 11 site and I ran into the same issue.
davidwbarratt → created an issue. See original summary →.
davidwbarratt → changed the visibility of the branch 3094343-queue-confusion-on-replicated-databases to active.
davidwbarratt → changed the visibility of the branch 3094343-queue-confusion-on-replicated-databases to hidden.
The fix for this is in 🐛 Queue confusion on replicated databases (auto_increment_offset) RTBC.
I have fixed it!
[DEBUG] [purge] [2024-08-23T22:32:11] purger_cloudflare_worker_0b7da3a883: Received Purge Request for node_list, node_list:article, node:49 | uid: 1 | request-uri: http://localhost:8888/node/49/edit?destination=%2Fadmin%2Fcontent | refer: http://localhost:8888/node/49/edit?destination=/admin/content | ip: 127.0.0.1 | link:
[DEBUG] [purge] [2024-08-23T22:32:11] purger_cloudflare_worker_0b7da3a883: Executing purge request for node_list, node_list:article, node:49 | uid: 1 | request-uri: http://localhost:8888/node/49/edit?destination=%2Fadmin%2Fcontent | refer: http://localhost:8888/node/49/edit?destination=/admin/content | ip: 127.0.0.1 | link:
The code basically relies on the indexing of the array to always match. Instead of making that assumption, we'll just use whatever is passed in and ensure the indexing matches up.
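Illustratively, the fix is just re-indexing (this is the shape of the change, not the actual purge code; $items and process() are hypothetical stand-ins):

    <?php

    // Before: the code assumed the incoming array's keys were a contiguous
    // 0..n-1 range, so a gap (e.g. left behind by array_filter() or unset())
    // made positional lookups resolve to the wrong item.
    // After: re-index whatever the caller passed in, so positions are
    // guaranteed to line up with keys.
    $items = array_values($items);
    foreach ($items as $index => $item) {
      process($item, $index);
    }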
I did try switching from mod_php to php-fpm. I thought it might help, since the former holds on to the response and perhaps I was hitting a timeout or something, but it didn't make a difference.
I added repro steps to 🐛 Late Runtime Processor purges incorrect tags Active. It happens every time I use the Late Runtime Processor; if I use the drush or cron processor, the issue goes away.
I didn't test it on any other database, but I am using SQLite. I assumed that since the problem happens consistently, that could be a clue, but it might not be.
I figured this out because I was writing a Cloudflare Worker that receives the purge requests and I noticed it was missing a tag. Upon further investigation, the tag isn't being sent.
To clarify, the patch clearly fixes a problem, but I'm not sure if it's the problem I'm having in 🐛 Late Runtime Processor purges incorrect tags Active, or if that's the same problem described in this issue. From the comment thread, it looks like the patch does fix the issue for some folks, so it's possible there is more than one problem and the patch isn't a comprehensive solution.
I tested the patch and it does not fix the issue I had in 🐛 Late Runtime Processor purges incorrect tags Active, so I'm re-opening it.
I accidentally created a duplicate of this issue at 🐛 Late Runtime Processor purges incorrect tags Active. This problem seems exceptionally bad with SQLite.
This appears to be a duplicate of 🐛 Queue confusion on replicated databases (auto_increment_offset) RTBC.
I edited a node and got this debug message:
[DEBUG] [purge] [2024-08-21T23:17:42] queue: claimed 3 items: node_list, node_list:article, node_list | uid: 1 | request-uri: http://localhost:8888/node/49/edit?destination=%2Fadmin%2Fcontent | refer: http://localhost:8888/node/49/edit?destination=/admin/content | ip: 127.0.0.1 | link:
However, if I disable the Late Runtime Processor, I get the correct tags when I inspect the queue.
What might cause the Late Runtime Processor to have incorrect tags?
davidwbarratt → created an issue.
This is a breaking change, so I opened up a 2.x branch.
thejimbirch → credited davidwbarratt →.
davidwbarratt → created an issue.
sourojeetpaul → credited davidwbarratt →.
Or we figure out that we don't actually need to declare these things as completely uncacheable anymore.
For instance, what if we decide to rely on SameSite=Lax rather than using our custom CSRF protection? This is what other projects like Next.js do; maybe that would be fine for us as well?
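For reference, stripped of any Drupal wiring, the property I'm talking about is just the session cookie's SameSite attribute (a minimal plain-PHP sketch, not a proposal for how core would implement it):

    <?php

    // With SameSite=Lax, browsers won't attach the session cookie to
    // cross-site POSTs, which is the main request-forgery vector a CSRF
    // token otherwise guards against. (The array form of this call
    // requires PHP 7.3+.)
    session_set_cookie_params([
      'secure' => TRUE,
      'httponly' => TRUE,
      'samesite' => 'Lax',
    ]);
    session_start();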
I think you won't be able to solve such problems without client-side JavaScript code that updates part of the page outside the context of the whole-page caching.
That seems fine?
In my mind, undercaching is always preferred to overcaching. In an ideal world, everything would be perfectly cached, but if the form (or whatever) declares that the whole page is uncacheable because of its inclusion, then... it is what it is. The only thing you can do is refactor that element to be cacheable (e.g. generate the dynamic bits with JS and/or WebAssembly).
I'm still subscribed to this issue 8 years later, and I re-read the duplicate I created → and I'm still thinking about this:
There are existing issues for this; it's mostly by design and I'm not sure if it can be changed. It would result in many pages no longer being cacheable that currently are, for example as forms declare themselves uncacheable.
And now I'm wondering... so what? If a bunch of things become uncacheable in Drupal 11 and we go and fix those things (individually) in 11.1+, that seems... fine? Am I missing something here?
Yeah, sorry about that. Something on Azure is broken, so I'm migrating back to my Pi, which is cheaper anyway.
Rebased
I created a new merge request that is more incremental and, I think, should solve my problem without having to add the Transport to the DI container.
https://git.drupalcode.org/project/drupal/-/merge_requests/5847
Please take a look and let me know what you think!
Would you accept a middle ground? Perhaps a TransportFactoryManager that just collects tagged transports but doesn't create the Symfony Transport object?
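Something like this is what I have in mind (a sketch; the class and method names are mine, not anything that exists in core, and TransportFactoryInterface is Symfony's):

    <?php

    use Symfony\Component\Mailer\Transport\TransportFactoryInterface;

    // The middle ground: a collector that receives factories via a service
    // tag (e.g. from a compiler pass that calls addFactory() for each tagged
    // service), but leaves building the actual Symfony Transport object to
    // whoever consumes the list.
    final class TransportFactoryManager {

      /** @var \Symfony\Component\Mailer\Transport\TransportFactoryInterface[] */
      private array $factories = [];

      public function addFactory(TransportFactoryInterface $factory): void {
        $this->factories[] = $factory;
      }

      /** @return \Symfony\Component\Mailer\Transport\TransportFactoryInterface[] */
      public function getFactories(): array {
        return $this->factories;
      }

    }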
I don't understand these test failures; if anyone could help explain them, it would be much appreciated!
davidwbarratt → created an issue.
Adding attribution
davidwbarratt → created an issue.
Done! Thanks for your help!