So, what's left:
- Force HttpOnly to false and remove it from the admin form
- Update the MR with the minor fixes from 3308456-30.patch #30
- Change the MR target branch to 2.x (or create a new one if this fix should still apply to 1.x)
What are the prerequisites for using transactions? For example, does the binlog need to be enabled?
Yes, I know, but with just two carriage returns the whole module now complies with the Drupal coding standards. If you think it's not needed, I'll revert it.
PHPCS issues fixed; I also added two minor fixes to CronTest.php.
The patch for #3471571 is now live on a production Drupal site. So far it seems OK.
Could someone else test it, mark the issue as RTBC if OK, and close this one at the same time?
The patch is live on a production Drupal site. So far it seems OK.
Could someone else test it and mark this issue as RTBC if OK?
The error is logged at the emergency level (!), which seems a bit overkill for a simple ban call.
And by the way, the emergency level + rsyslog leads to the message being sent to all users' shells:
*.emerg :omusrmsg:*
https://www.rsyslog.com/doc/configuration/modules/omusrmsg.html#write-em...
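For reference, that wall-message behavior usually comes from a default rule like the one above; a sketch of how it typically looks (the exact file and rule vary by distro, so treat this as illustrative):

```
# Default rule shipped on many distros (/etc/rsyslog.conf):
# every *.emerg message is written to all logged-in users'
# terminals via the omusrmsg output module.
*.emerg    :omusrmsg:*

# To stop emergency-level Drupal logs from spamming shells,
# comment the rule out (or fix the log level on the Drupal side):
# *.emerg    :omusrmsg:*
```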
I don't remember why, but I added this file to my "packaging exclusion list" (along with install.php, README, ...).
Can someone explain what's the use of this file on a packaged, deployed website (updates done only with Composer)?
After digging a bit more into purge, I found that "Runtime measurement" is a "duration autodetection" feature:
https://git.drupalcode.org/project/purge/-/blame/8.x-3.x/src/Plugin/Purg...
So it's OK to have errors popping up during the first calls.
IMHO, this info should be added to the help text of the field, and the timeout exception should be handled specifically: don't log it unless it exceeds a high value (30s, or a multiple of the base value), or at least not at the emergency level.
Sorry for the mess, I'll have to dig a bit more; all I recall is that #3293026 doesn't fix a specific case.
I'll try to make a summary.
If your Varnish servers are load balanced, it may be easier to add a specific rule to your load balancer (filtered on the Drupal IP + PURGE/BAN method) so it sends the request to all Varnish instances.
So you just need to set the inbound LB IP.
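Whichever side routes the request, each Varnish instance still has to accept it; a minimal VCL sketch (the IPs and the Cache-Tags ban expression are assumptions, following the usual purge/varnish_purge setup, not a verbatim config):

```
# Hypothetical /etc/varnish/default.vcl fragment (Varnish 4+ syntax).
acl invalidators {
  "10.0.0.2";    # inbound load balancer IP (example)
  "10.0.0.10";   # Drupal server IP (example)
}

sub vcl_recv {
  if (req.method == "BAN") {
    if (client.ip !~ invalidators) {
      return (synth(403, "Forbidden"));
    }
    # Ban every cached object whose Cache-Tags header matches.
    ban("obj.http.Cache-Tags ~ " + req.http.Cache-Tags);
    return (synth(200, "Ban added"));
  }
}
```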
My 2c:
I wonder if this whole cache tag invalidation isn't overkill.
It's a mess to configure: installing and setting up two modules and purgers, managing a specific queue with processors like a frequent cron or (worse) lazy runtime processing, a queue that can explode if you update content massively/frequently, managing a specific Varnish config (good luck if it's load balanced), ...
It adds real complexity to your website architecture.
A simple Varnish configuration that caches your anonymous pages for a few minutes should be enough in most cases. If your website needs to display fresh information in real time, use Ajax or push event APIs.
The MR is already done; I just added the patch so it can be referenced safely in composer.json (additional commits could alter the MR diff; it's a known global issue: https://www.drupal.org/docs/develop/git/using-gitlab-to-contribute-to-dr... →). I'll edit the message for clarity.
Please test the MR or patch
Here are the MR and the patch.
The new token should only be used with the content type application/json, to avoid escape chars. Maybe this behavior should be extended to all purger bodies?
It should also fix the TODO in buildFormTokensHelp about token auto-detection (there's a lack of doc about the core token API).
IMHO none of the tokens should be escaped.
The problem occurs in \Drupal\purge_purger_http\Plugin\Purge\Purger\HttpPurgerBase::getOptions: this function should use \Drupal\Core\Utility\Token::replacePlain instead of \Drupal\Core\Utility\Token::replace.
I stumbled on the same issue while trying to provide a new token (see the related issue); in my patch I propose a "safe way" by switching to replacePlain only if $this->settings->body_content_type === 'application/json'.
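A minimal sketch of the proposed switch (the surrounding code is paraphrased from memory, not the actual HttpPurgerBase source; $token_data stands for whatever token context the purger builds):

```php
// Inside HttpPurgerBase::getOptions() (paraphrased sketch).
$token_service = \Drupal::token();
$body = $this->settings->body_content_type === 'application/json'
  // replacePlain() skips HTML escaping, so the JSON body stays valid.
  ? $token_service->replacePlain($this->settings->body, $token_data)
  // Keep the historical escaping behavior for other content types.
  : $token_service->replace($this->settings->body, $token_data);
```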
o'briat → changed the visibility of the branch 3373411-how-to-purge to hidden.
The "purge everything" call works, but the tag one needs a patch to provide a new purge token that returns a JSON array.
If confirmed, I'll provide a patch for purge (if the module's maintainers deem it a legit format).
I got the same problem with daily massive updates from an external source. Maybe the "Deduplicate Queued Items" issue could help?
See also Cache Max Age warning text →
Have a look to the patch of the "Deduplicate Queued Items" issue
These check values (24h, one month, one year) seem very strange, if not dangerous.
$this->config->get('cache.page.max_age') is the value Drupal uses as max-age in the Cache-Control header of the response sent to the client.
This value will be used by reverse proxies such as Varnish, but also by the client browser, which cannot be purged by Drupal.
Could someone explain these choices?
Do you rely on conditional request headers (If-None-Match, ...)?
There's some info in \Drupal\Core\Logger\LoggerChannelInterface, but I agree that this item's usage (and the other context ones) should be better documented, since it spreads across all logger usages.
This page lacks a lot of info, especially about multi-server setups: the need for a shared filesystem (NFS) for public://, private:// and tmp://, the concepts of load balancing (HAProxy), health checks, sticky sessions, the X-Forwarded-* headers, ...
The uselessness of the slave database settings.
This page should also point to the performance pages and reference other scaling concepts: Varnish, Memcache/Redis, syslog logging, ...
It should also provide basic metrics to monitor that could help determine whether scaling is needed, and if so, what kind.
Could someone review the page and add it to the menu?
Hi,
I'm using the memcached PHP module; this client should handle server failures (https://www.php.net/manual/en/memcached.constants.php#memcached.constant...), so if a server is no longer available, the sharding method is updated and the missing keys (previously allocated to the failed server) should be recreated on the servers still available.
The performance loss should be temporary and smaller than a single drush cr.
But that doesn't seem to be the case: I tested it by shutting down one server (out of two) and all performance dropped. Everything returned to normal once the server was back online.
Am I missing something?
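In case it helps: failover only kicks in if the client is configured for it. A hypothetical settings.php fragment (the option names are real php-memcached constants, but the values and the $settings['memcache']['options'] key should be checked against the memcache module's README; this is a sketch, not a recommendation):

```php
// settings.php sketch: enable consistent hashing and dead-server
// ejection in the php-memcached client (values are illustrative).
$settings['memcache']['options'] = [
  \Memcached::OPT_LIBKETAMA_COMPATIBLE  => TRUE, // consistent hashing
  \Memcached::OPT_REMOVE_FAILED_SERVERS => TRUE, // eject dead servers
  \Memcached::OPT_SERVER_FAILURE_LIMIT  => 2,    // failures before ejection
  \Memcached::OPT_RETRY_TIMEOUT         => 30,   // seconds before retrying
];
```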
Also, I don't get the point of using multiple bins and allocating them to specific memcached servers/clusters. It looks like micro-optimization; the eventual use cases should be better documented.
Add doc about the file_chmod settings (and umask).
Place Windows at the end (is it still used?)
Remove the "Summarizing the permissions" section, which seems an irrelevant copy-paste from the D7 doc.
Remove references to root (which should never be used in our case).
Replace "Apache" with "web server" where it applies.
Add doc about the web document root.
It's a catch 22 situation since it's a good practice to disallow shell access to the webserver user.
@nofue: www-data is a convention; web servers create files with an owner:group matching the user they run as, which should not be the same user that deploys the application (and which needs shell access).
Read the "Linux servers" page sections; user, group and permissions are defined there.
The config folder should also be readable by the web server user, since the back office can be used to import or diff against the configuration set in files (or even export/override it).
There will be problems with an already running Drupal site, since Drupal creates its files/folders with the owner www-data. chown and chmod executed as the deploy user will fail unless executed as root (or via a sudoer's command).
Yes, but this page should also give concrete advice on permissions for the common folders where www-data needs to write: at least tmp, but also private (and config and translations, ...), followed by a link to the "Security in Drupal" page and folder location tips.
Maybe add a paragraph asking to run drush st in order to determine these folders, and then apply the two find 770/600 commands on them.
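A sketch of those two find commands, run here against a throwaway folder (on a real site the paths would be the writable folders reported by drush st):

```shell
# Demo tree standing in for a Drupal writable folder (e.g. private://).
mkdir -p /tmp/perms-demo/private/sub
touch /tmp/perms-demo/private/sub/file.txt

# Directories get rwxrwx--- ; files get rw-------.
find /tmp/perms-demo/private -type d -exec chmod 770 {} +
find /tmp/perms-demo/private -type f -exec chmod 600 {} +

stat -c '%a' /tmp/perms-demo/private/sub           # prints 770
stat -c '%a' /tmp/perms-demo/private/sub/file.txt  # prints 600
```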
Also this page should replace "Greg" with "deploy user".
@xaa, agreed; there should also be info about the other writable folders such as tmp, private, translation and sync. They should be outside the web root, and the www-data group should have write permissions on them (or not, on specific environments).
The first one who successfully tests this solution switches the issue to RTBC?
No doc update needed on this module?
@fgm: Baleen just updated their API: https://baleen.atlassian.net/wiki/spaces/BS/pages/194576385/Invalider+vo...
It seems as easy as using the "Generic HTTP Purger" sub module.
Thanks.
Don't forget to add the info on the module page too ;)
Patch #54 generates a segmentation fault if you try to create a folder with an empty name (infinite recursion).
drush ev '$f = Drupal::service("file_system");$dir="";var_dump($f->mkdir($dir, 1));'
Segmentation fault
This bug was introduced by patch #54 on issue https://www.drupal.org/project/drupal/issues/2799635 (FileSystem::mkdir should handle open_basedir correctly).
This part seems to have been rewritten in 11.x; can someone check whether this is still an issue?
There are other cases that do not trigger the recursion but give strange results:
drush ev '$f = Drupal::service("file_system");$dir="........";var_dump($f->mkdir($dir, NULL, TRUE));'
drush ev '$f = Drupal::service("file_system");$dir=" ";var_dump($f->mkdir($dir, NULL, TRUE));'
ls -la web/
drwxrwxrwx 2 www-data www-data 6 Jun 5 15:50 ........
drwxrwxrwx 2 www-data www-data 6 Jun 5 15:50 ' '
I will provide an MR that avoids the recursion, but feel free to propose other checks.
[edit] FileSystemInterface::prepareDirectory => FileSystem::mkdir
Baleen is based on Varnish; a tag-based header BAN is on their roadmap.
ok, sorry for the confusion (TL; DR 😬).
I think purge should just split the tags into 16 KB chunks and make multiple calls to Varnish/the CDN.
After reviewing my own MR, I recall that I was fixing the JS variable "drupal-settings-json".
So the description doesn't fit the MR :(
Even worse, this MR should have been attached to issue #3293026, which is set as fixed :(. At the moment I don't understand why that previous MR is not on the 8.x-1.x branch.
Sorry for the mess, I'll update the description. There's still work to do.
For the record the above MR (!5) works as expected, currentPath is empty:
MR
view-source:http://127.0.0.1:8888/user/1 action="/system/404?destination=/user/1" "currentPath":""
view-source:http://127.0.0.1:8888/user/2 action="/system/404?destination=/user/2" "currentPath":""
view-source:http://127.0.0.1:8888/user/666 action="/system/404?destination=/" "currentPath":""
8.x-1.x
view-source:http://127.0.0.1:8888/user/1 action="/system/404?destination=/user/1" "currentPath":"user\/1"
view-source:http://127.0.0.1:8888/user/2 action="/system/404?destination=/user/2" "currentPath":"user\/2"
view-source:http://127.0.0.1:8888/user/666 action="/system/404?destination=/" "currentPath":""
Nice command line (noted).
So what's the next move? Do you have a suggestion?
Add details about the cache tags exceeding Varnish’s default header length
Other doc pages also need to be updated: https://www.drupal.org/docs/drupal-apis/cache-api/cache-tags-varnish →, https://www.drupal.org/project/purge →, https://www.drupal.org/project/varnish_purge →
For the record, it also triggers a 503 on Varnish. To detect it, use the option -g session with varnishlog; it should return (after a slight wait) the following error:
...
--- VCL_return fetch
...
--- BogoHeader Header too long: Cache-Tags: config:b
...
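If raising the limit is acceptable, a common mitigation (rather than chunking the tags) is to increase Varnish's header-size runtime parameters; a sketch with example values (defaults are 8k for http_resp_hdr_len and 32k for http_resp_size; tune to your tag volume):

```
# varnishd startup options (systemd unit or /etc/varnish/varnish.params)
varnishd ... -p http_resp_hdr_len=65536 -p http_resp_size=98304
```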
If I recall correctly (and it's the same behavior as Drupal 7), very few core mechanisms make use of slave reads (comments?).
So what's the point of keeping this API if it's only legacy code that nobody uses?
Whoa, 3 years ago I noticed this strange behavior on an old Drupal 7 site; I never checked on 8+ versions.
If nothing has changed since 7, there's room for improvement in DB performance: almost no read queries use the slave DB.
For the record, drush now suggests → adding a condition using drush maint:status to the cron line:
10 * * * * cd [DOCROOT] && /usr/bin/env PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin COLUMNS=72 ../vendor/bin/drush --uri=your.drupalsite.org --quiet maint:status && ../vendor/bin/drush --uri=your.drupalsite.org --quiet cron
I confirm that duplicated invalidations occur when Drupal regularly imports or updates large volumes of content.
A simple solution could be to delete all items with identical "data" when purging an item?
Or just add a global duplicate deletion at the end of every purge run; here's some pseudo code:
SELECT MAX(item_id) AS keep_id, data FROM purge_queue GROUP BY data HAVING COUNT(*) > 1
foreach (keep_id, data):
  DELETE FROM purge_queue WHERE data = $data AND item_id != $keep_id
https://www.drupal.org/project/drupal/issues/3292350 (file_validate_image_resolution does not update file size after resizing) should fix the problem; is there any other use case that prevents closing this issue?
Beware that patch #22 contains debug functions: console.log & dpm
The core patch https://www.drupal.org/project/drupal/issues/3292350 (file_validate_image_resolution does not update file size after resizing) will be included in the next Drupal release; it should fix this issue. Can someone check whether there is still work to do here? If so, maybe update the issue name and description, or open a new issue?
@catch, thanks for the validation! It's my first core contribution.
I'm also a bit puzzled by the location of this action; it's also strange that the image API allows direct file modifications that could lead to inconsistencies between the file and image objects.
@quietone, the MR for 10.2 is ready. This issue is a "non-disruptive bug fix", so could it be merged into the next patch release? Is there a step I missed that blocks the next step, or is all I have to do to wait?
AMP-based / Win32-binaries development environments?
Maybe a "for the record" note about "VM-based" environments that are no longer used?
You already list the main advantages, but to me, the main drawback of PHP on Windows is the binaries, which suck (slow, not working with drush, Windows paths, ...). As for the "polished stack package", DDEV astonished me: the install is slick and fast and works like a charm. But the best part is that the setup and configuration of the whole stack lives in files, so it's committed with the project (revisions, everyone has the same setup) and all developers have access to it.
And a debugging command, plus a reference to Monolog for advanced uses.
There's not so much text.
If possible, replace the "Installing Drupal on Windows for local usage" block with this "Docker-based development environments" paragraph.
Reuse this "AMP-based development environments" paragraph to introduce the 3 other blocks. You can shorten it because some of the info is already in the sub-pages.
Honestly, I don't know; I did my best to provide an MR for 11.x and 10.2.x, and now it's in the hands of the core maintainers team.
Any tips for accelerating the process are welcome.