I disagree: this suggested change directly addresses a common requirement of sites with large (and changing) user bases.
No matter how good the training is, users make mistakes. This is exacerbated by a changing population of users, e.g. as staff leave and arrive at an organisation. As site builders/admins, if we can easily prevent a mistake from being made, we should, and this change enables us to do so.
The classic example is a user creating a page containing a media item customised to that page. They then later create a separate new page and re-use the media item, editing it inline to be more suitable for this new page, without realising that it's now unsuitable for the original page.
You could argue that this is simply user error, but the suggested change allows site maintainers to easily prevent this common mistake from being made, which directly improves the user experience.
And yes, the permissions around editing/deleting a user's own media items are insufficient, as demonstrated by the example above — the unintended consequences are due to editing the user's own media entity. (And we certainly don't want to prevent editing wholesale, because that's still needed.)
There would be no burden on site builders, since it defaults to the current behaviour, making it purely opt-in.
The burden on maintainers is very likely to be minimal given the essentially static nature of this patch for the last 5+ years.
@bvoynick It's not the cleanest, but we've been using the below to manually invalidate the purge credentials in non-production deployments (mostly local environments).
drush sset fastly.state.valid_purge_credentials FALSE
May be of use in your case ― perhaps before the drush cim?
(...Might be of no use at all!)
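If you'd rather do the same thing from PHP (e.g. in a deploy hook) than shell out to drush, the State API equivalent should be the following. A sketch only, assuming a bootstrapped Drupal and the same state key as the drush command above:

// PHP equivalent of `drush sset fastly.state.valid_purge_credentials FALSE`.
// Requires a fully bootstrapped Drupal container.
\Drupal::state()->set('fastly.state.valid_purge_credentials', FALSE);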
For our site this seems to be directly related to the "Restore Client IP" setting.
Switching that option off (which is fine for our purposes, even desirable) resulted in a working site via the main route, proxied by Cloudflare.
I'm therefore able to move on from this issue for now, but I'll keep an eye out, and am happy to help diagnose if there are any questions.
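(If anyone else wants to pin that setting per environment rather than relying on the UI, a standard settings.php config override should work. I'm assuming the key name from our own config export of cloudflare.settings here, so verify it against yours:)

// settings.php override to keep "Restore Client IP" switched off.
// Key name assumed from our cloudflare.settings export; check your own.
$config['cloudflare.settings']['client_ip_restore_enabled'] = FALSE;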
We hit this as an edge case related to loading in config values via environment variables.
The attached patch just ensures variables are created before they're used, and also skips processing if $zones cannot be populated from the Cloudflare API.
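For anyone reading along without opening the patch, the shape of the fix is roughly the following. This is an illustrative sketch rather than the literal diff; $zoneApi and listZones() are placeholder names:

// Illustrative only: $zoneApi and listZones() stand in for whatever
// the module actually calls.
function process_zones($zoneApi) {
  // Initialise before use so nothing references an undefined variable.
  $zones = [];
  try {
    $zones = $zoneApi->listZones();
  }
  catch (\Exception $e) {
    // API unreachable or credentials invalid; leave $zones empty.
  }
  // Bail out early rather than processing with no zone data.
  if (empty($zones)) {
    return;
  }
  // ...normal processing of $zones continues here...
}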
There are still some deeper issues involving a mismatch where the credentials are actually invalid but $config['cloudflare.settings']['valid_credentials'] = TRUE, which can trigger WSODs. That seems to be contained within the cloudflare/sdk rather than this module directly, and to be fair it's probably just a result of bad config.
Error is:
Cloudflare\API\Adapter\ResponseException: Invalid request headers in Cloudflare\API\Adapter\ResponseException::fromRequestException() (line 38 of /app/vendor/cloudflare/sdk/src/Adapter/ResponseException.php).
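As a stopgap for the mismatch above: if you already override credentials per environment, you can force the flag to agree in the same place. This is just a standard settings.php config override, nothing specific to this patch:

// If this environment has no working Cloudflare credentials, don't let
// exported config claim otherwise; this sidesteps the WSOD path in the sdk.
$config['cloudflare.settings']['valid_credentials'] = FALSE;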
Chipping in because I've just hit what appears to be the same (or very similar) problem after a beta2 to beta3 upgrade. We have Redis in our stack too.
All requests coming via the site's default Cloudflare-cached route return the "fromRequest() must be an instance" 500 error, but a separate route to the same Drupal instance without Cloudflare caching works as expected.
(I imagine it's simply that the non-cached traffic doesn't trigger the module code, and therefore doesn't raise the error, but thought it was worth noting that the site can successfully handle requests with the module active and configured.)
For now I've disabled cloudflare and cloudflarepurger via drush, and the site is working normally via the default route (though obviously I'd prefer to have those modules enabled!).
I'm in the very early stages of diagnosing this, will add more as I learn more.
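(In case it's useful to anyone scripting the same workaround without drush: the module installer service is the PHP equivalent of the uninstall, assuming nothing else in your stack depends on these modules.)

// PHP equivalent of uninstalling both modules via drush, e.g. from an
// update or deploy hook; dependent modules are uninstalled too by default.
\Drupal::service('module_installer')->uninstall(['cloudflare', 'cloudflarepurger']);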
Rerolled to apply against RC15; however, that version seems to introduce a related control (removed_reference), which renders this patch partly redundant.
Needs unpicking, but this should keep anyone currently using this patch running for the time being.
Tweak to the patch in #23, which seemed to miss one instance of ckeditor.link and was causing the library to be found in the filesystem at the new location but still reported at the original location.