- Issue created by @tedbow
- 🇺🇸United States effulgentsia
I think it makes a lot of sense to decouple unattended updates from cron.php and hook_cron(). Primarily so that unattended updates can run as a non-web user while cron.php runs as the web user. 📌 Add Drush command to allow running cron updates via console and by a separate user, for defense-in-depth Fixed is starting on that separation, though I'm unclear whether it is providing the Drush command instead of or in addition to hook_cron(). My recommendation would be to do it instead of. In other words, to never run automatic updates as part of cron.php.
Scheduling the Drush command to run automatically would require adding it to the host system's crontab. If it's being done this way (and not via a web URL like cron.php), then I don't think there's a timeout to worry about. So I think we can do all the phases in a single request.
However, to support sites that can't add to their host system's crontab, we'd need to also implement something similar to the Automated Cron module, meaning our own subscriber to KernelEvents::TERMINATE that runs when the site receives traffic. Since this is a web-launched PHP request, it would only be suitable for a web-writable codebase (which I think is fine; site owners who are able to set up a non-web-writable codebase should also be able to add to their crontab), and it would be potentially subject to timeouts depending on how the web server interacts with PHP (e.g., via cgi, mod_php, php-fpm, etc.). Because of these timeouts, I agree that the KernelEvents::TERMINATE subscriber should only process one phase per request.
For a zero-traffic website that isn't running the Drush command in its crontab, we'd also need to handle the case where the first request to hit the website comes many hours after a security update is available, which could also mean that this request is itself an attack trying to exploit that vulnerability. So in addition to KernelEvents::TERMINATE, we'd need an early-in-the-request event that checks whether unattended updates are enabled but too much time has passed since we last checked for an update, in which case we should return a "site under maintenance" message and then proceed to run the KernelEvents::TERMINATE event.
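For readers who want to picture it: a minimal sketch of such a TERMINATE subscriber, in the style of the Automated Cron module. The "mymodule" namespace and the phase-runner collaborator are hypothetical, not existing Automatic Updates APIs; the point is only the "one phase per request" shape.

```php
<?php

namespace Drupal\mymodule\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\TerminateEvent;
use Symfony\Component\HttpKernel\KernelEvents;

/**
 * Runs one unattended-update phase after the response has been sent.
 */
final class UnattendedUpdateSubscriber implements EventSubscriberInterface {

  /**
   * @param object $phaseRunner
   *   A hypothetical service that knows which stage life cycle phase is due
   *   next; not part of the real module.
   */
  public function __construct(private readonly object $phaseRunner) {}

  public static function getSubscribedEvents(): array {
    // A low priority so this runs after other TERMINATE subscribers.
    return [KernelEvents::TERMINATE => ['onTerminate', -100]];
  }

  public function onTerminate(TerminateEvent $event): void {
    if ($this->phaseRunner->hasPendingPhase()) {
      // Only one phase per request, to stay under web server timeouts.
      $this->phaseRunner->runNextPhase();
    }
  }

}
```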
- 🇧🇪Belgium wim leers Ghent 🇧🇪🇪🇺
If it's being done this way (and not via a web URL like cron.php), then I don't think there's a timeout to worry about. So I think we can do all the phases in a single request.
@effulgentsia is correct:
This sets the maximum time in seconds a script is allowed to run before it is terminated by the parser. This helps prevent poorly written scripts from tying up the server. The default setting is 30. When running PHP from the command line the default setting is 0.
— https://www.php.net/manual/en/info.configuration.php#ini.max-execution-time
Because of these timeouts, I agree that the KernelEvents::TERMINATE subscriber should only process one phase per request.
But there's a significantly higher amount of time needed for Automatic Updates than for pretty much every cron hook: to process the typical massive file copying that is necessary in the create and apply phases of the stage life cycle, it can easily take minutes.
See the preliminary performance test results at #3304108-13: Determine if rsync is faster than the PHP file copier for Composer stager → and #3304108-16: Determine if rsync is faster than the PHP file copier for Composer stager → : even for a very barebones site, it takes ~30 seconds for the begin stage to complete and another ~30 for the apply stage. That's a whole minute. That makes it very likely for this (very creative! 😁👏) use of KernelEvents::TERMINATE to exceed the time allowed by php-fpm etc.
[…] in which case we should return a "site under maintenance" message and then proceed to run the KernelEvents::TERMINATE event.
Ah, interesting! So sites without a "proper" crontab in place which would execute the CLI command to apply automatic updates (see 📌 Add Drush command to allow running cron updates via console and by a separate user, for defense-in-depth Fixed ) would then spend effectively an entire request/response cycle on this. That would mean there is a higher likelihood of not running into timeout problems!
We could then even combine that with a <meta http-equiv="Refresh" content="120; URL… to have the browser reload the page automatically after 2 minutes (for example, exact logic/duration TBD). That would largely address the edge case concern I raised in the preceding paragraph?
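A hedged sketch of the "answer early with a maintenance message, keep updating in the same request" idea. Everything named mymodule, MaintenanceResponseSubscriber or updateIsOverdue() is invented for illustration; the Refresh/Retry-After headers stand in for the <meta http-equiv="Refresh"> tag mentioned above.

```php
<?php

namespace Drupal\mymodule\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\HttpKernel\Event\RequestEvent;
use Symfony\Component\HttpKernel\KernelEvents;

/**
 * Short-circuits the request with a maintenance response while updating.
 */
final class MaintenanceResponseSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents(): array {
    // Run early, before routing and access checks.
    return [KernelEvents::REQUEST => ['onRequest', 1000]];
  }

  public function onRequest(RequestEvent $event): void {
    if (!$this->updateIsOverdue()) {
      return;
    }
    $response = new Response(
      'The site is being updated. This page will retry automatically.',
      Response::HTTP_SERVICE_UNAVAILABLE,
      // Ask the browser to retry after 120 seconds (header counterpart of
      // the <meta http-equiv="Refresh"> approach).
      ['Refresh' => '120', 'Retry-After' => '120'],
    );
    // KernelEvents::TERMINATE still fires after this response is sent, so
    // the update work can proceed in the same request.
    $event->setResponse($response);
  }

  private function updateIsOverdue(): bool {
    // Placeholder: check State/config for "unattended updates enabled but
    // no check has happened for too long".
    return FALSE;
  }

}
```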
- 🇧🇪Belgium wim leers Ghent 🇧🇪🇪🇺
I checked how WordPress' auto-updater works.
- they do have problems with timeouts — example: https://core.trac.wordpress.org/ticket/54166
- https://github.com/WordPress/wordpress-develop/blob/trunk/src/wp-admin/i... ← they do prevent others from making requests while the site is being updated, de facto announcing to the world that the site is being updated
- This is the heart of their core update logic: https://github.com/WordPress/wordpress-develop/blob/d62f8581899be84fd59b... — it all happens in a single request.
- The essence of the process is documented in https://core.trac.wordpress.org/browser/trunk/src/wp-admin/includes/upda...
```
 * The steps for the upgrader for after the new release is downloaded and
 * unzipped is:
 * 1. Test unzipped location for select files to ensure that unzipped worked.
 * 2. Create the .maintenance file in current WordPress base.
 * 3. Copy new WordPress directory over old WordPress files.
 * 4. Upgrade WordPress to new version.
 *    4.1. Copy all files/folders other than wp-content
 *    4.2. Copy any language files to WP_LANG_DIR (which may differ from WP_CONTENT_DIR
 *    4.3. Copy any new bundled themes/plugins to their respective locations
 * 5. Delete new WordPress directory path.
 * 6. Delete .maintenance file.
 * 7. Remove old files.
 * 8. Delete 'update_core' option.
 *
 * There are several areas of failure. For instance if PHP times out before step
 * 6, then you will not be able to access any portion of your site. Also, since
 * the upgrade will not continue where it left off, you will not be able to
 * automatically remove old files and remove the 'update_core' option. This
 * isn't that bad.
```
(Yes, that's right, WordPress literally hardcoded lists of files and directories to add/remove per past release of WordPress. 🤯)
It seems they have gotten by just fine for the vast majority of their users? Or perhaps WP users just don't complain in their official issue tracker?
We validate many things already that they don't — such as file system permissions, so we pre-emptively catch many problems that WP does not seem to check.
But … their architecture has some performance/reliability/recoverability advantages that we don't:
- performance: they do not create a full copy of the site's codebase: they first block incoming requests, then modify the live codebase in-place 😱 → far less file I/O!
- reliability/recoverability: in case of problems, a rollback is possible: because they literally have lists of files + directories that are new/removed/changed between versions (at least for core) — if in our case the copying from stage to active fails due to a timeout, we have no way to recover, other than asking the site owner to restore the DB + files from a back-up.
The reliability/recoverability part could be addressed by keeping track of which package versions were installed previously and are supposed to be installed now — independently of the state of the filesystem. That way, even when the composer.json|lock file does not match the actual state on the filesystem, we could redo stage as it was intended and re-copy it over active.
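A minimal sketch of what recording the intended package versions independently of the filesystem could look like, assuming we simply snapshot Composer's runtime data into State. The state key, the helper name and the $target_versions input are assumptions, not existing module code.

```php
<?php

use Composer\InstalledVersions;

/**
 * Records the intended "before" and "after" package versions in State, so an
 * interrupted copy from stage to active could be replayed later.
 *
 * @param array $target_versions
 *   What the update is supposed to produce, e.g. ['drupal/core' => '10.1.2'].
 */
function mymodule_record_intended_versions(array $target_versions): void {
  $installed = InstalledVersions::getInstalledPackages();
  $snapshot = [
    'recorded_at' => \Drupal::time()->getRequestTime(),
    // What is installed right now, read from Composer's runtime API rather
    // than from whatever composer.json|lock happens to be on disk later.
    'previous' => array_combine(
      $installed,
      array_map([InstalledVersions::class, 'getPrettyVersion'], $installed),
    ),
    'target' => $target_versions,
  ];
  \Drupal::state()->set('mymodule.intended_versions', $snapshot);
}
```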
- 🇧🇪Belgium wim leers Ghent 🇧🇪🇪🇺
Details for that last paragraph in #3343802-4: When determining scaffold file mappings, determine them consistently and completely, accounting for overrides → .
- 🇧🇪Belgium wim leers Ghent 🇧🇪🇪🇺
@tedbow posted an interesting observation in Slack:
@wimleers one interesting idea is their spawn_cron https://github.com/WordPress/WordPress/blob/0008d8df0605c9eccf9c4b9015d3...
It seems like their version of automated_cron sends a request to run cron through an HTTP request that doesn't halt page loading.
So this would also solve the problem of cron running at the end of a request with timeout issues.
We could possibly do something like this but for auto updates.
At an early part of the request, make a 2nd request out to auto-update.php that would only be responsible for updating, 1 phase at a time.
Or in KernelEvents::TERMINATE we could make a request to auto-update.php, but then that request would not have to worry about what else is going on in the page. The whole request would be for performing 1 phase (begin, stage, apply, destroy) of the update process.
Very interesting indeed! That reminds me of something: max_execution_time does NOT apply to sleep() and other I/O — only to actual PHP execution. See #3357941-3: Write helpful message to FailureMarker if script execution time was exceeded during stage life cycle → . Run that script locally to observe it.
Any time spent on activity that happens outside the execution of the script such as system calls using system(), stream operations, database queries, etc. is not included when determining the maximum time that the script has been running.
— https://www.php.net/manual/en/function.set-time-limit.php (edited)
So, combining @tedbow's idea with that information:
- if in KernelEvents::TERMINATE we don't do any work but trigger a request to do the ACTUAL work and wait for that, it will NOT count towards PHP's execution time!
- that way it could easily last multiple minutes — which does not overcome the php-fpm etc. timeout of course, but it definitely is one layer of extra hardening
- … because in the KernelEvents::TERMINATE we'll basically do one request for create, one for require, one for apply and one for destroy … even if each of those takes 1 minute, the time the Terminate event handler has consumed will still be less than a second! (sketched below)
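To make that last bullet concrete (the "(sketched below)" reference above), here is a hedged sketch of a TERMINATE handler firing one blocking HTTP request per phase. The /automatic-update/run-phase route and its parameters are invented; only the Guzzle client calls are real APIs.

```php
<?php

// Sketch only: each phase is delegated to a separate web request.
$client = \Drupal::httpClient();

foreach (['create', 'require', 'apply', 'destroy'] as $phase) {
  // The outer request spends this time blocked on network I/O (a system
  // call), so it does not count toward its own max_execution_time.
  $response = $client->request('POST', 'https://example.com/automatic-update/run-phase', [
    'form_params' => ['phase' => $phase],
    // Give each phase plenty of time; Guzzle's timeout is independent of
    // the PHP execution time limit.
    'timeout' => 300,
  ]);
  if ($response->getStatusCode() !== 200) {
    // Stop the chain; a later request or cron run could resume from State.
    break;
  }
}
```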
- 🇺🇸United States phenaproxima Massachusetts
performance: they do not create a full copy of the site's codebase: they first block incoming requests, then modify the live codebase in-place 😱 → far less file I/O!
I could absolutely see a contrib module (or custom code) modifying Package Manager to do exactly this. Really, all you'd have to do would be to override the Beginner and Committer services so that they do nothing at all, and then override the Stager so that it always does its thing in the active directory.
reliability/recoverability: in case of problems, a rollback is possible: because they literally have lists of files + directories that are new/removed/changed between versions (at least for core) — if in our case the copying from stage to active fails due to a timeout, we have no way to recover, other than asking the site owner to restore the DB + files from a back-up.
Another thing I could see a contrib case for -- using the event system to create a git snapshot of the site before the update, so that it can be rolled back if there's a failure. Probably with some (hopefully small) changes to Composer Stager, you could probably diff the active and stage directories.
- 🇺🇸United States effulgentsia
max_execution_time does NOT apply to sleep() and other I/O
Correct, but I also don't think we need to worry about max_execution_time at all, because we can call set_time_limit(0) if we want to. Are there cases where PHP can disallow set_time_limit(0) from being called and overriding max_execution_time?
The timeouts we need to worry about are the web server killing the PHP process. In the case of Apache and mod_php, I don't think there's a timeout. In the case of Apache calling PHP as a CGI script, I think Apache will time out waiting on the CGI script, and return a 504 or similar, but I don't think it actually kills the PHP script when that happens, so KernelEvents::TERMINATE can still take however long it needs to. However, I think there are cases where PHP-FPM can kill a PHP script that's taking too long, and I also don't know if there are shared hosting companies that implement hard timeouts on long running PHP scripts as well.
- 🇧🇪Belgium wim leers Ghent 🇧🇪🇪🇺
Are there cases where PHP can disallow set_time_limit(0) from being called and overriding max_execution_time?
Yes: https://stackoverflow.com/questions/17434013/set-time-limit-has-been-dis... — so this is likely to be enabled on some hosts.
Hosting companies do use this:
- Gandi sets it to 180 seconds and does not allow overriding it.
- Hostgator sets it to 30 seconds and does not allow overriding it.
- … I can probably find more
But Dreamhost does not forbid this though.
This is why Drupal's \Drupal\Component\Utility\Environment::setTimeLimit() checks this explicitly!
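For illustration, the kind of check involved looks roughly like this sketch. It is not the actual \Drupal\Component\Utility\Environment::setTimeLimit() implementation, just the shape of the safeguard; the function name is made up.

```php
<?php

/**
 * Tries to lift the PHP execution time limit, reporting whether that worked.
 */
function mymodule_try_lifting_time_limit(): bool {
  // Hosts commonly block the function via the disable_functions ini setting,
  // in which case function_exists() already reports FALSE.
  if (!function_exists('set_time_limit')) {
    return FALSE;
  }
  // set_time_limit() itself returns FALSE when it fails, so check the return
  // value instead of assuming the limit is gone.
  return set_time_limit(0);
}
```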
I don't think it's common for that to propagate back to the web server and result in the web server killing the PHP script, but maybe there are some web servers / configurations where it does?
IIRC stopping the request triggers the HTTP connection getting closed, which triggers PHP's shutdown functions. See the big comment block inside _batch_page() for that — introduced in #2851111: fastcgi_finish_request and shutdown functions (e.g. batch) do not play nicely together → .
The timeouts we need to worry about are the web server killing the PHP process.
💯
For example, mod_fcgid defaults to a pretty aggressive 40 seconds: https://httpd.apache.org/mod_fcgid/mod/mod_fcgid.html#fcgidiotimeout
- 🇺🇸United States effulgentsia
Hostgator sets it to 30 seconds and does not allow overriding it.
Wow, that's aggressive. Nice find!
mod_fcgid defaults to a pretty aggressive 40 seconds
That's the IO timeout, so I think that's the case where it will return a 504, but the script can keep running, just without sending anything back to the client. However, at that point, I think Apache might consider that process to be idle, so the (default 5 minute) idle timeout might kick in, or maybe Apache will kill and restart that process before then if it needs it to serve another request?
- 🇧🇪Belgium wim leers Ghent 🇧🇪🇪🇺
That's the IO timeout, so I think […]
Right, understanding that on all common platforms is something we should also do. That will require a pretty significant amount of research though; there appears no centralized overview or some authoritative article: https://duckduckgo.com/?q=understanding+php+timeout+fcgi+fpm&ia=web 😓
- @tedbow opened merge request.
- Assigned to tedbow
- 🇺🇸United States tedbow Ithaca, NY, USA
Assigning to myself to set up the boilerplate code.
- 🇧🇪Belgium wim leers Ghent 🇧🇪🇪🇺
Related: see my proposal at #3318587-18: Research alternatives to curl request on post apply event during cron updates → .
- 🇺🇸United States effulgentsia
If it's being done this way (and not via a web URL like cron.php), then I don't think there's a timeout to worry about. So I think we can do all the phases in a single request.
I wonder if we can take advantage of this even in the case where someone can't set up a crontab by having KernelEvents::TERMINATE execute a background process to do the update and exit without waiting for that background process to finish. Then the web server would have no need to kill the original process, and the background process would follow the rules of a console process rather than a CGI/mod_php/php-fpm/etc. process. The answers to https://stackoverflow.com/questions/43235629/symfony-process-background-... seem promising, but I have not tested this myself. Some hosts may disallow exec and similar functions, and I don't know if proc_open to an intermediary shell process that runs a background process would work. If something like this works for a large majority of hosts, then I think I'd be in favor of saying that for the minority of hosts that disable any form of launching background processes, those sites are required to use a crontab entry.
- 🇧🇪Belgium wim leers Ghent 🇧🇪🇪🇺
I'm not too hopeful that many shared hosting companies allow exec(), system(), proc_open() and friends. Gathering data for that seems hard.
BUT!!!!!!!!!!!!!
php-tuf/composer-stager uses symfony/process, which … does this:

```php
public function __construct(array $command, string $cwd = null, array $env = null, mixed $input = null, ?float $timeout = 60)
{
    if (!\function_exists('proc_open')) {
        throw new LogicException('The Process class relies on proc_open, which is not available on your PHP installation.');
    }
```

… meaning that without proc_open, NONE of what Automatic Updates does will work anyway. Which makes sense: we need to be able to execute the installed composer.
Which means that what @effulgentsia proposed in #22 should totally work?! 🤩🤯🥳🥳🥳
We'd just A) specify timeout NULL (meaning "no limit") and B) use the console command (well, for now Drush command) that 📌 Add Drush command to allow running cron updates via console and by a separate user, for defense-in-depth Fixed introduced! Tada, problem solved!
- 🇺🇸United States effulgentsia
I don't know if proc_open to an intermediary shell process that runs a background process would work.
It does on my Mac. If I have the following test.php:

```php
<?php

use Symfony\Component\Process\Process;

$autoloader = require_once 'autoload.php';

$process = new Process(['sh', '-c', 'php test2.php &'], __DIR__, null, null, 0);
$process->start();
sleep(5);
print 'done sleeping, exiting';
```

And the following test2.php:

```php
<?php

sleep(20);
touch("/tmp/file.txt");
```

Then when I visit test.php in the browser, I get the "done sleeping, exiting" response in 5 seconds, and after 20 seconds /tmp/file.txt gets created.
I don't know if some shared hosting companies have ways to prevent this or to periodically kill long-running processes, but short of such measures, I think the above is fairly robust.
- 🇧🇪Belgium wim leers Ghent 🇧🇪🇪🇺
#25: wow, very interesting! I didn't realize that was a shell feature.
In my experiment, I didn't use a shell, and that failed:

```php
$process = new Process(['chromedriver', '--port=4444', '&'], null, null, null, 0);
$process->start();
var_dump($process->isStarted());
print 'yar';
sleep(5);
var_dump($process->getOutput());
sleep(5);
var_dump($process->getOutput());
print 'done sleeping, exiting';
exit;
```

I don't know if some shared hosting companies have ways to prevent this or to periodically kill long-running processes, but short of such measures, I think the above is fairly robust.
I'd be more worried about PHP being executed by a user that is not allowed to execute sh. IOW: that the user of the PHP process cannot access

```
$ which sh
/bin/sh
```

Could that be as simple as calling is_executable() on the output of which sh? 🤔
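One hedged way to answer that question with Symfony's ExecutableFinder (a real class); the fallback behaviour and logging channel are assumptions.

```php
<?php

use Symfony\Component\Process\ExecutableFinder;

// Look up a shell the same way Symfony's tooling locates binaries. find()
// returns an absolute path such as /bin/sh, or null when nothing usable
// (readable and executable for the PHP user) is found on the PATH.
$sh_path = (new ExecutableFinder())->find('sh');

if ($sh_path === NULL || !is_executable($sh_path)) {
  // Assumption: in this case we would require a real crontab entry instead
  // of trying to spawn background processes through a shell.
  \Drupal::logger('mymodule')->warning('No executable sh found; falling back to crontab-only unattended updates.');
}
```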
- 🇺🇸United States tedbow Ithaca, NY, USA
I talked with @Wim Leers yesterday about this issue.
My thoughts:
- Most (all?) of the discussion on this issue has not revolved around my MR or proposed solution. I think the proposed solution is a smaller change that needs less research and will help with the problem, because it will significantly reduce the time we spend on updating in any one request and will make sure apply() is nearly the only thing that happens in 1 request.
- The discussion here about using processes in a way that is not affected by hosting timeouts is good, though it is a larger change that needs more research into actual hosting setups and may have some risks.
- For those reasons I have moved my proposal to newly created 📌 For web cron updates run each stage life cycle phase in a different request Closed: won't fix . It is easier to move an MR with no discussion to a new issue than to move the discussion. I will move the MR there and close it here.
- I have deleted everything in this issue's summary except the Problem/Motivation section. @Wim Leers please update the summary as you see fit. I have also updated the title with a guess at what the wanted solution is.
- I think these ideas are potentially very good. The one big problem I see with them is (caveat: I am not a hosting platform expert, just brainstorming 😉):
We have a long process that, if we could do it all at once without risk of timeouts, would really benefit sites that need to run the updates during the web request, especially low-traffic sites where, if we split the updating into multiple requests, the update could take a long time to complete.
Most of the solutions discussed here look to solve the problem by figuring out "How can we spawn another process during a web request that will not be affected by the timeout restrictions of the web host?"
While if we could solve that question it would be very good for us, it seems like it would be in conflict with the intentions of timeout restrictions on hosting platforms, especially lower-priced ones, if not their actual configuration. It seems like hosting companies put in time restrictions so that one web request cannot take a very long time and a lot of resources.
My guess would be that we will not find a way to spawn another process that is not restricted on some level by timeout restrictions on the host. If we do find a way to spawn another process that is not subject to timeout restrictions, then it would probably be a good method for anyone who wants to work around timeout restrictions on their host and run processes as long as they want from a single request.
For that reason I think even if we find a way that works now to spawn such a process, we should consider the possibility that hosting companies would want to figure out a way to put some time restriction on such a process.
- 🇺🇸United States tedbow Ithaca, NY, USA
@Wim Leers assigning to you to update the summary to reflect the discussion
- Assigned to wim leers
- 🇧🇪Belgium wim leers Ghent 🇧🇪🇪🇺
RE #27.5: regarding violating the intentions of a host when they impose restrictions: agreed with what you wrote (and as we discussed several times several weeks ago). I just realized one more nuance though: the intentions of shared hosting companies are typically to keep resources available for serving requests.
A long-running PHP process in the background does not consume any of the available allocated web server (Apache/nginx/…) workers or PHP workers (PHP-FPM/…), nor does it trigger additional DB queries. In other words: it consumes none of the sold resources, and in particular it consumes almost no CPU.
What this long-running PHP process does consume is quite a bit of file I/O (temporarily some storage, but primarily I/O: moving bytes around) and some CPU time (not much though: primarily for computing how dependencies need to be updated, which since Composer 2 is very fast — in a Composer 1 world this would've been a hard blocker most likely).
Conclusion: it is not strongly in conflict with the intentions of a hosting company — because it's about different concerns … which means it may very well be acceptable.
- Issue was unassigned.
- Assigned to tedbow
- @tedbow opened merge request.
- 🇺🇸United States pwolanin
Talking about this at DrupalCon; going to work on implementing this. I don't think we need to use the extra "sh", but in any case I don't think this is going to work on Windows, so those sites would need to run cron or the dedicated console command via the command line on a regular basis. We validated this restriction as acceptable with xjm in terms of the impact on future core releases.
- 🇺🇸United States effulgentsia
I don't think we need to use the extra "sh"
That would be great if possible. I don't know off-hand how PHP can put a process into the background without going through an intermediary shell process, but if there is a way that would be great. Without putting the process into the background, I think the process would be terminated when the web request finishes (see #26).
- 🇺🇸United States pwolanin
Found this post that seems quite close to what we want to do:
https://stackoverflow.com/questions/50039911/how-to-run-a-background-pro... - 🇺🇸United States pwolanin
Ok, here's a variant on what effulgentsia first tried. I ran this in a local docker setup and checked the process list, and can see that we have an orphan process that has a PPID of 1:

```
UID        PID  PPID  C STIME TTY          TIME CMD
...
www-data   110    84  0 20:21 ?        00:00:00 php-fpm: pool www
root       116     0  0 20:22 pts/0    00:00:00 /bin/bash
www-data   130    86  0 20:24 ?        00:00:00 nginx: worker process
www-data   131    86  0 20:24 ?        00:00:00 nginx: worker process
www-data   213     1  1 20:39 ?        00:00:00 /usr/bin/php test2.php
```

The output from the webserver:

```
PHP path /usr/bin/php
2023-06-07T20:39:03+00:00
done sleeping, exiting
```

The content of the temp file:

```
2023-06-07T20:39:13+00:00
```

web/test.php:

```php
use Symfony\Component\Process\Process;
use Symfony\Component\Process\PhpExecutableFinder;

$autoloader = require_once 'autoload.php';

$phpBinaryFinder = new PhpExecutableFinder();
$phpBinaryPath = $phpBinaryFinder->find();

$process = Process::fromShellCommandline($phpBinaryPath . ' test2.php &');
$process->setWorkingDirectory(__DIR__);
$process->disableOutput();
$process->setTimeout(0);
$process->start();
sleep(1);
print "PHP path $phpBinaryPath\n";
print gmdate('c') . "\n";
print 'done sleeping, exiting';
```

web/test2.php:

```php
sleep(10);
file_put_contents("/tmp/file.txt", gmdate('c') . "\n");
```
- 🇧🇪Belgium wim leers Ghent 🇧🇪🇪🇺
Per @tedbow at #3360656-7: For web cron updates run each stage life cycle phase in a different request → , this is now THE way not only to provide tighter security for those who want it, but also to prevent running into the PHP max execution time during cron/unattended updates 👍
#36: I wasn't around for that, but that's WONDERFUL news!
#39: 🤩🤩🤩👏
- @tedbow opened merge request.
- 🇧🇪Belgium wim leers Ghent 🇧🇪🇪🇺
AFAICT this is the logic to spawn the background process:

```php
$process = Process::fromShellCommandline($phpBinaryFinder->find() . " $drush_path auto-update &");
// $process = new Process([$phpBinaryFinder->find(), $drush_path, 'auto-update', '&']);
$process->setWorkingDirectory($path_locator->getProjectRoot() . DIRECTORY_SEPARATOR . $path_locator->getWebRoot());
$process->disableOutput();
$process->setTimeout(0);
```
We just discussed in a meeting at a high level the need for ensuring this does not run forever, both for testing and production purposes. We need to know whether the process is still doing things or not.
Ideas:
1. At a minimum, the Drush/console command AFAICT could guarantee that it notifies us when it either crashes or exits gracefully, by using register_shutdown_function() and writing a key/value pair to State. (See the sketch below, after the "Thoughts on each" list.)
2. Process relies on proc_open(). We can call $process->getPid(), store that PID in State, and then in any subsequent request check whether that PID is still alive (or even use the classical PID file approach).
3. \Symfony\Component\Process\Process::signal() or pcntl_signal allows sending a signal to the opened process. We could make it at that time write something to State. Example code: https://www.php.net/manual/en/pcntl.example.php
4. Shell: https://stackoverflow.com/questions/117226/how-to-check-if-a-php-script-...
- 👎 Does not indicate current aliveness.
- 👍 No new system requirements, matches long-standing UNIX patterns.
- 👎 Requires the
pcntl
PHP extension. - 👎 Triggers another process, which is … fine but unnecessary since we can do without: see 2. I think this is a good alternative to 2 if that turns out to be impossible.
- at a minimum, the Drush/console command AFAICT could guarantee that it notifies us when it either crashes or exits gracefully, by using
- Status changed to Postponed
over 1 year ago 7:41pm 23 June 2023 - 🇺🇸United States tedbow Ithaca, NY, USA
@Wim Leers thanks for ideas!
Postponing on 🐛 Drush build test symlinks Drush, and does not install any of its dependencies Fixed because the build test will not pass locally at least until then.
- Status changed to Needs work
over 1 year ago 4:16pm 26 June 2023
- 🇺🇸United States effulgentsia
I think there are cases where PHP-FPM can kill a PHP script that's taking too long
In case it helps with manual testing, the PHP-FPM configuration to do this is request-terminate-timeout. It would be great to incorporate that into an automated test, but DrupalCI containers don't use FPM, and I don't know what the equivalent configuration for Apache mod_php is.
- 🇺🇸United States tedbow Ithaca, NY, USA
Postponing on 📌 For web server dependent unattended updates run the entire life cycle in a separate process that will not be affected by hosting time limits Fixed because this will make it easier to fix our last test failures
- Status changed to Postponed
over 1 year ago 6:25pm 14 July 2023 - 🇺🇸United States tedbow Ithaca, NY, USA
Whoops wrong issue 📌 HookCronTest is broken and doesn't test how often status checks will be run Fixed
- 🇺🇸United States tedbow Ithaca, NY, USA
📌 HookCronTest is broken and doesn't test how often status checks will be run Fixed was committed, but we have now discovered 🐛 CronFrequencyValidator should not check if updates will be run via console Fixed, so leaving this postponed.
- Assigned to phenaproxima
- Status changed to Needs review
over 1 year ago 4:14am 20 July 2023 - 🇺🇸United States tedbow Ithaca, NY, USA
🐛 CronFrequencyValidator should not check if updates will be run via console Fixed committed.
StatusCheckFailureEmailTest should pass now. \Drupal\Tests\automatic_updates\Kernel\HookCronTest::testStatusChecksRunOnCron will be the lone failure, hopefully.
I think we need to change how we test this. We no longer have a hook_cron implementation in this module, so I think how we could test it is:
- Test that the terminal command is invoked on cron. We could expand CronUpdateStageTest, though it is somewhat tested here.
- Test that the terminal command does or does not run status checks; we could use the DrushTest trait to test this directly.
@phenaproxima if you could start to look at this that would be great. I am sure it has more to be done.
- Assigned to tedbow
- Status changed to Needs work
over 1 year ago 6:05pm 21 July 2023 - 🇺🇸United States phenaproxima Massachusetts
Okay, I carefully reviewed this entire thing. First of all, major kudos to @tedbow for taking this on! It's been nearly two months.
The one file I basically skipped over was DrushUpdateStageTest. It looks like basically a rename of CronUpdateStageTest, without any actual cron runs.
I think the overall approach makes sense here. DrushUpdateStage is doing 95% of what CronUpdateStage used to do. CronUpdateStage, which will be renamed, is a relatively thin decorator around the cron service that knows to invoke the stage if there's an update waiting in the wings.
Most of my comments are about the details, making sure everything is properly commented. But if it works properly, I can get behind the architectural decisions that have been made here.
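For readers skimming along, the "relatively thin decorator around the cron service" described above could be pictured roughly like this. It is not the actual class from the MR; the class name, service wiring and the runner's method names are assumptions.

```php
<?php

namespace Drupal\mymodule;

use Drupal\Core\CronInterface;

/**
 * Decorates the core cron service to kick off unattended updates.
 */
final class UnattendedCronDecorator implements CronInterface {

  public function __construct(
    private readonly CronInterface $inner,
    // Hypothetical collaborator that knows whether an update is waiting and
    // how to spawn the detached console process.
    private readonly object $updateRunner,
  ) {}

  public function run(): bool {
    // Let normal cron do its work first.
    $result = $this->inner->run();
    // Then, if an update is waiting in the wings, hand off to the detached
    // process that performs the whole stage life cycle.
    if ($this->updateRunner->updateIsAvailable()) {
      $this->updateRunner->spawnDetachedUpdateProcess();
    }
    return $result;
  }

}
```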
- Assigned to phenaproxima
- Status changed to Needs review
over 1 year ago 6:39pm 1 August 2023 - 🇺🇸United States tedbow Ithaca, NY, USA
@phenaproxima I think all the threads are back to you
- Assigned to tedbow
- Status changed to Needs work
over 1 year ago 5:12pm 2 August 2023 - 🇺🇸United States phenaproxima Massachusetts
Kicking over for adjustments now that 📌 Change package_manager_test_event_logger\EventSubscriber to log to a file to simplify it and code which asserts against it Fixed is in.
- Assigned to phenaproxima
- Status changed to Needs review
over 1 year ago 9:59pm 2 August 2023 - 🇺🇸United States tedbow Ithaca, NY, USA
@phenaproxima I merged in 3.0.x now that we finished 📌 Move error checking that should apply to all updates from cron build test to update assert Fixed and 📌 Change package_manager_test_event_logger\EventSubscriber to log to a file to simplify it and code which asserts against it Fixed. Please take another look.
- Assigned to tedbow
- Status changed to Needs work
over 1 year ago 9:29pm 9 August 2023 - 🇺🇸United States phenaproxima Massachusetts
Back to @tedbow for test failures...
- Assigned to phenaproxima
- Status changed to Needs review
over 1 year ago 2:43pm 11 August 2023 - 🇺🇸United States tedbow Ithaca, NY, USA
Fixed the test failure. It was because in DrushUpdateStageTest::testUpdateStageCalled() (which was moved unchanged from CronUpdateStageTest) we were mocking the event_dispatcher service to not fire validation. But now, since we are persisting that service, that doesn't work anymore.
I don't think we need to worry about not firing the validation. Maybe at the time there was some problem with one or more of the validators.
- Assigned to tedbow
- Status changed to Needs work
over 1 year ago 2:45pm 11 August 2023 - 🇺🇸United States tedbow Ithaca, NY, USA
DrushUpdateStageTest is still failing for some reason on drupalci
- Assigned to phenaproxima
- Status changed to Needs review
over 1 year ago 3:05pm 11 August 2023 - 🇺🇸United States tedbow Ithaca, NY, USA
Ok last test should be back to passing
- Issue was unassigned.
- Status changed to RTBC
over 1 year ago 4:46pm 11 August 2023 - 🇺🇸United States phenaproxima Massachusetts
Okay, I think I'm comfortable with where this is.
This is a major, seismic change in Automatic Updates that took months to do. It means we can't support Windows for the time being. Our class naming leaves something to be desired, and we need to get rid of the Drush integration as soon as humanly possible, since it adds a lot of complication that we'd best remove.
But this is still a big improvement -- we're finally compatible with Automated Cron, and this lays the groundwork for us to be even more secure, since updates don't have to be done via the web.
Let's ship it.
- 🇧🇪Belgium wim leers Ghent 🇧🇪🇪🇺
Epic to see this land! 🤩👏
935 additions and 1158 deletions 👈 not surprised at this epic diff stat 😅
Was very curious to see this, so skimmed the MR and had a few questions:
-
```php
// Wait a bit before checking again.
sleep(5);
```
Why 5 seconds? Is it related to
```php
$this->logger->error('Background update failed because the process did not start within 5 seconds.');
```
which I see elsewhere? 🤔
-
```php
* @todo Rename this class to CronUpdateRunner because it is no longer a stage
*   in https://drupal.org/i/3375940.
```
Unpostponed! 👍
-
```php
// @todo Check if on Windows to not allow cron updates in
//   https://drupal.org/i/3377237.
```
Already finished by @phenaproxima! 😄
-
```php
// For some reason 'vendor/bin/drush' does not exist in build tests but this
// method will be removed entirely before beta.
$command_path = $this->pathLocator->getVendorDirectory() . '/drush/drush/drush';
```
In which issue will that happen? 🤓
- Status changed to Fixed
over 1 year ago 2:15pm 14 August 2023 - 🇺🇸United States tedbow Ithaca, NY, USA
- No, this basically waits until the site has completed the update process, by checking to see that all the expected events have been logged via \Drupal\package_manager_test_event_logger\EventSubscriber\EventLogSubscriber::logEventInfo(). Since we may be waiting for the whole update life cycle, in the case of a cron update, this could take a few minutes. Basically so we don't check hundreds of times. In the case of the UI updates, where we fire this at the end of the update after we have seen confirmation in the UI, we should never get to this wait in a passing test because it will have exited the loop right before this.
- 📌 Add Symfony Console command to allow running cron updates via console and by a separate user, for defense-in-depth Fixed . In that issue we will search our codebase for "drush" and make sure all references are gone, because we will remove the dependency. But yes, we should have linked the todo.
Automatically closed - issue fixed for 2 weeks with no activity.