https://github.com/phpredis/phpredis?tab=readme-ov-file#multi-exec-discard
Redis::MULTI or Redis::PIPELINE. Defaults to Redis::MULTI. A Redis::MULTI block of commands runs as a single transaction; a Redis::PIPELINE block is simply transmitted faster to the server, but without any guarantee of atomicity.
I guess we want to keep atomicity. Instead of switching to Redis::PIPELINE, a check should be added on the return value of the multi() call.
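Such a check could look roughly like this (a sketch only, assuming a connected `\Redis` instance in `$redis`; phpredis' `multi()` returns `false` when the transaction could not be started):

```php
// Verify the transaction actually started before queueing commands.
$tx = $redis->multi(\Redis::MULTI);
if ($tx === FALSE) {
  throw new \RuntimeException('Could not start Redis transaction.');
}
$tx->set('foo', 'bar');
$tx->incr('counter');

// exec() returns FALSE if the transaction was aborted.
$results = $tx->exec();
if ($results === FALSE) {
  // Handle the aborted transaction here (retry, log, ...).
}
```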
Nice. This is also useful if you want things like queues and persistent locks to be stored in redis instances with persistence enabled, while caches can be stored in redis instances without persistence.
Note that I don't really like the name $settings['redis_client_bins']. I would go for $settings['redis.connections'] (or redis.servers) and $settings['redis.targets'] (or redis.bindings) and have the redis CacheBackendFactory look for "cache:$bin" and "cache". Maybe the same can be useful for queues.
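To illustrate the suggested structure (these key names are only a proposal — none of them exist in the module yet), settings.php could look something like:

```php
// Hypothetical settings.php sketch of the proposed naming.
$settings['redis.connections'] = [
  'persistent' => ['host' => 'redis-persistent', 'port' => 6379],
  'volatile'   => ['host' => 'redis-cache', 'port' => 6379],
];
$settings['redis.targets'] = [
  // The CacheBackendFactory would look for "cache:$bin" first,
  // then fall back to "cache".
  'cache'           => 'volatile',
  'cache:bootstrap' => 'volatile',
  'queue'           => 'persistent',
];
```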
Also, the patch seems to create a client per cache bin, even if they are configured to use the same server. I guess this can be done more efficiently.
Maybe related
In our monitoring tool we saw a lot of "Allowed memory size of ... bytes exhausted" errors. These errors were caused by an infinite loop via DefaultExceptionHtmlSubscriber (in our case maybe only because we are using the menu_trail_by_path module).
The error still occurred after the MR, because the non-ASCII characters in the request path are urlencoded. I've updated the MR to handle both encoded and non-encoded non-ASCII characters.
The attached patch is a snapshot of the latest state for safe usage with composer patches.
casey → made their first commit to this issue’s fork.
Just like alexpott in #63 I am not too keen on the name ThemeHook.
What about ThemeableTemplate?
We already updated to 6.1.5 :(
Created a new MR containing the change suggested by kim.pepper
Snapshots of latest state of MRs for both 6.1.x and 7.x for safe usage with composer-patches
Sorry for the noise;
First I rebased the existing MR branch on the latest 6.1.x branch but apparently you can't change the destination of MRs. I then reverted that existing MR.
I then applied the patch from #64 to both the 6.1.x and the 7.x branch; apparently the 6.1.x branch has more (recent) changes than the 7.x branch. I've created MRs for both branches.
Snapshot of latest state of MR for usage with composer-patches and D10.3.
Thanks for reporting! I've removed the polyfills.
This issue exists since Drupal core 10.3 #3280359: Make jQuery.form internal →
We are currently using patch #16 in some of our production sites.
While ✨ Views UI should offer a 'Treat NULL values as FALSE' on boolean field filtering to expose the already existing code path for handling this Needs work should indeed fix the same issue, the changes/MRs/patches over there are too radical (e.g. they introduce config changes) to start using on these sites.
Attached patch is a reroll of #16 for usage with D10.3
Snapshot of latest state of MR for usage with composer patches on drupal/core:10.3.*
I have not actually been able to reproduce the problem.
Hmm, that's too bad. Not sure what is missing. To easily reproduce the problem you could alter the scheduler_content_moderation_integration_scheduler_publish_process (or any other publish_process hook implementation) so it always returns "-1". This way you should get a new revision every time you run the scheduler cron.
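A reproduction sketch of such an altered hook implementation ("mymodule" is a placeholder module name):

```php
/**
 * Implements hook_scheduler_publish_process().
 *
 * Always report a failure (-1) so every scheduler cron run takes the
 * error path, which should produce a new revision on each run.
 */
function mymodule_scheduler_publish_process(\Drupal\node\NodeInterface $node) {
  return -1;
}
```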
By "breaking change" I presume you are referring to the failures in the API tests.
I meant that aborting hook invocations once one implementation returns "1" or "-1" is a (breaking) API change; with semantic versioning in mind, this change should introduce a new major version. I argued, however, that in practice this change should not be a problem, and rather fixes issues caused by unexpected behavior (for sites that have multiple publish_process hook implementations).
The failing tests are however also a result of these API changes; the MR alters SchedulerManager::publish() and SchedulerManager::unpublish() so the entity is no longer saved on failure of a publish_process hook implementation (and therefore the (un)publish_on field is no longer forcibly restored).
I do think the (un)publish_process hook implementation should ensure the (un)publish_on value is kept/restored when it fails to publish the entity (if that is the intended behavior). It was actually the (un)publish_process hook implementations of the test module that caused the tests to fail, so I altered those hook implementations so the tests pass.
I've created an MR that aborts (un)publish process hook invocations on a success or failure result and stops saving the entity on failure.
Note that this is a breaking change; I do however think it won't break anything in practice, and rather fixes issues caused by unexpected behavior.
While looking at SchedulerManager::publish(), I also noticed that if multiple hook_scheduler_publish_process() implementations exist and the first returns "-1" because an error occurred, processing is not stopped. This means that any subsequent hook implementation can (and likely will) publish/save the entity.
I am not sure if this is intended/wanted behavior. Maybe this should be altered into some kind of chain-of-responsibility pattern: as soon as a publish_process implementation returns 1 or -1, the processing is considered finished.
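A rough sketch of what that could look like inside the scheduler's processing loop (variable names and the per-module invocation style are assumptions, not the module's actual code):

```php
// Invoke hook_scheduler_publish_process() per implementing module and
// stop as soon as one implementation reports success (1) or failure (-1).
foreach ($modules as $module) {
  $result = $module_handler->invoke($module, 'scheduler_publish_process', [$node]);
  if ($result === 1 || $result === -1) {
    // Processing is considered finished; remaining hooks are skipped.
    break;
  }
}
```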
@smustgrave, that's right. We just don't want the dependency upon the rules module. I suggested the sub-module as I understand others might still want to use it. I don't think a hard dependency is necessary.
Currently I don't have the time to work on converting the dependency into a sub-module.
Attached patch is a reroll for quiz 6.0.x-dev.
We are having the same issue.
For now we simply clean out 'advanced_settings.gtm' manually in our google_tag.container.default.yml:
...
advanced_settings:
  consent_mode: false
  gtm: { }
...
This is possible as TagContainer::getGtmSettings() returns a default config array. We don't need to override these defaults (not up until now at least).
I do hope however a better solution will be found. Maybe a single google_tag container entity should only have a single GTM id?
What about:
1. adding a new entity storage handler that stores all configurable fields in a single JSON database-column
2. adding dirty checking to entity API so we can prevent saving unchanged entities
3. adding paragraphs (and entity_reference_revisions) to core using this new storage handler
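Step 2 could look roughly like this (a sketch only, assuming the check lives in an entity storage handler; the comparison strategy is an assumption):

```php
// Hypothetical dirty check in a storage handler's save(): skip the write
// entirely when nothing on the entity has actually changed.
public function save(EntityInterface $entity) {
  if (!$entity->isNew()) {
    $original = $this->loadUnchanged($entity->id());
    if ($original && $entity->toArray() == $original->toArray()) {
      // Nothing changed; avoid the needless write and cache invalidation.
      return SAVED_UPDATED;
    }
  }
  return parent::save($entity);
}
```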
Patch for CKEditor 5 (only) support
Patch #154 incorrectly calculated request_time - expire instead of expire - request_time to recalculate the max-age
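In other words (variable names assumed for illustration), the remaining lifetime of a cached item is the expire timestamp minus the current request time, not the reverse:

```php
// Remaining max-age of a cached item with an absolute expire timestamp.
$max_age = $cache_item->expire - $request_time;
```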
This patch is a combination of the patch in #131 and 🐛 Stacked caches result in max-age drift in caches Active for usage in sites <=D10.2
Turns out, skipping a max-age of 0 (that is, treating max-age=0 as permanent in the page cache) is not a good solution; if you use max-age to ensure a cached page is rebuilt after a certain timestamp, and a request happens at exactly the expiration time, the response must not be cached (as the max-age is 0). At least in such cases, the page cache must not treat max-age=0 as cache permanently.
And since we (currently) cannot determine why max-age was set to 0, I think the page cache should never cache responses with a max-age>=0 (that is, anything other than Cache::PERMANENT). Even if this means that certain (or all) pages will no longer be cached by the page cache. Note this means that, for example, multilingual sites using the language switcher block ( 📌 Make language switcher block cacheable Postponed ) no longer have their pages cached by the page cache module, but at least the page cache won't incorrectly cache pages permanently that have a max-age>=0. Also, those pages will probably still be cached by the dynamic page cache.
I think we should deprecate using the Expires header as the expire time for the page cache and just use the max-age of the cacheable metadata of the response. There are several places (like access checks) that don't have access to the final response and can only pass cacheable metadata.
I also think this issue is actually about two different issues:
- Page cache incorrectly ignoring the max-age of the cacheable metadata of the response (patch #131)
- The Cache-Control response header not containing/reflecting the max-age of the cacheable metadata of the response (patch #148 or something like the Cache Control Override module)
For now, if your site depends on time-based cache invalidations you at least need the patch from 🐛 Stacked caches result in max-age drift in caches Active , even if your site is not using the page_cache module.
If you are using the page_cache module, you also need the patch from #131 as a complete solution.
If you are not using the page_cache module (but are using another public/managed caching solution, such as a reverse proxy like Varnish), the patch from #148 might work.
The Cache Control Override module can be used additionally if you want to set a minimum and/or maximum on the max-age of the Cache-Control response header.
Quick patch that synchronizes the max-age of cached data with the current time if the cached item has an expire set.
casey → created an issue.
This patch is based upon #131, but ignores the cacheable metadata's max-age if it is 0. This should work around the issues, as suggested by @Berdir in #146
I agree that Timo has the necessary knowledge for this position and support his application.
Reroll of patch #16
I think we should either merge any existing cache metadata on the renderable, like the patch in #5,
or throw an exception if the CacheableMetadata removes any existing cache metadata when applied to a renderable (diffing the new resulting #cache with the old existing #cache)
casey → created an issue.
What about changing the return type/hint of EntityInterface::id() and all other ID parameters/returns to ?string, dropping int altogether as a possible return type?
IDs are used to uniquely identify entities and not to do calculations with or anything.
Dropping int will simplify the interface and make it easier to enforce strict typing ( https://www.drupal.org/project/drupal/issues/3050720#comment-13562649 , 🌱 [Meta] Implement strict typing in existing code Active )
It will still be possible to do things like auto increment (no issue at all when handled by the DB) and finding the next available (numeric) ID.
I think almost all existing code will continue to work; as already stated, the types of ID variables are already inconsistent.
It looks like this issue has been fixed in 🐛 ajax_page_state leaks through request in Views Ajax Needs work
In development environments the API key is typically not provided. When the API key is missing, currently every drush call triggers at least one " [error] OhDear site id is missing! Fix it ..."
Previous patch doesn't work.
casey → created an issue.
Static patch file of MR up until #4 for use with composer-patches
Reroll. Still needs a test
Reroll for 10.2
Wrong patch
To be able to update the changed time during syncing, the EntityChangedConstraintValidator must also be updated
Previous patch used the wrong variable in the constructor
Maybe we should also check visibility rules? Something like this might work.
casey → created an issue. See original summary → .
#24 does not apply to D10
Duplicate of 🐛 Query parameters and anchor of substitution url are being stripped Closed: duplicate
I think I agree with Berdir that the right solution is to trigger an exception.
Something like this would work.
To lock a row, it should not have a draggable class and its weight input should be disabled. Also, this patch only works when weights are rendered as (text) inputs, not selects.
This patch only fixes ImageStyleDownloadController and is only to be used in combination with patch #5 #3309534-6: Accessing private file with .jpg, .jpeg, .png extension returns WEBP image → .
Actually, why is it even allowed to request/generate a webp when a specific file (not an image-style derivative) is being requested?
This patch simply removes the FileDownloadController replacement.
It would be even better to first check whether a webp is being requested, before trying to validate whether the requested URI is an image.
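That ordering could be sketched like this inside the download controller (method and variable names are assumptions, not the module's actual code):

```php
// Bail out to the default download behaviour early unless a .webp
// derivative is actually being requested, before any image validation
// is performed on the requested URI.
if (!preg_match('/\.webp$/i', $request->getPathInfo())) {
  return parent::download($request, $scheme);
}
```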
Reroll of #2
Here's a patch that removes the jquery dependency from better_exposed_filters/general and better_exposed_filters/auto_submit
Just ran into this issue as well. What about only checking the query arguments from data-drupal-link-query, and only skipping if any of those don't match the current query, while ignoring any additional query arguments in the current query?
What about passing just query and fragment. This keeps the plugin interface much cleaner.