- Issue created by @catch
- 🇬🇧United Kingdom catch
My assumption here is that hosting.com has some kind of monthly/daily/hourly cap on file operations; this would explain why it sometimes works and sometimes gets locked intermittently.
On top of the ~20k files in core, there are 9k in modules/contrib:

find ./modules/contrib -type f | wc -l
9215
And another 500 in /recipes:
find ./recipes -type f | wc -l
522
And another 500 from drupal_cms_olivero/gin/easy_email_theme:

find ./themes/contrib -type f | wc -l
528
So altogether we're looking at 30,000 or more files - this is without installing anything extra.
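
For reference, the same total can be checked with a single command from the web root, assuming the same layout as the counts above (adjust the paths if your install differs):

find ./core ./modules/contrib ./recipes ./themes/contrib -type f | wc -l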
The rsync from the sandbox directory back to the live install should not be doing 30,000 file operations, but copying 30,000 files into an empty directory always will. And deleting 30,000 files is another 30,000 file operations.
So installing multiple recipes or modules one by one with project_browser must look something like this:
- create sandbox directory - rsync 30,000 files
- copy a subset back and delete 30,000 files
- create new sandbox directory, rsync 30,000 files
- copy a subset back and delete 30,000 files
- create another new sandbox directory, rsync 30,000 files
- copy a subset back and delete 30,000 files

So if both copy and delete count towards usage limits, that's 60,000 file operations each time.
If you install ten new modules one by one, that could be 600,000 file operations within a few hours.
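
To make the pattern concrete, here is a rough shell sketch of the sequence above - the paths are hypothetical and this is not the literal Composer Stager invocation, just the shape of the file operations involved:

# create the sandbox: the target is empty, so all ~30,000 files get transferred
rsync -a /path/to/live/ /path/to/sandbox/
# ... composer runs inside the sandbox ...
# copy the changed subset back to the live code base
rsync -a --delete /path/to/sandbox/ /path/to/live/
# throw the sandbox away: another ~30,000 file operations
rm -rf /path/to/sandbox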
If we keep a single sandbox directory running however, it would look like this:
- create sandbox directory - rsync 30,000 files
- copy a subset back
- update sandbox directory (no change unless the live code base has been updated via a different method)
- copy a subset back

etc. etc.
Even if one composer operation is a major core update which changes 15,000 files in core, this would still be 15,000 files changed by composer, 15,000 file changes to copy back to the live code base, and no deletions - so 1/4 of the approximately 60,000 that happens every time at the moment.
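
For comparison, a minimal sketch of refreshing a persistent sandbox (again with hypothetical paths, not the actual Composer Stager flags): against an existing, mostly identical tree, rsync only transfers the files that differ, and a dry run shows how small that set is:

# refresh an existing sandbox: only files that actually differ are transferred
rsync -a --delete /path/to/live/ /path/to/sandbox/
# preview what a refresh would touch, without doing anything
rsync -a --delete --dry-run --itemize-changes /path/to/live/ /path/to/sandbox/ | wc -l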
- 🇦🇺Australia pameeela
Just noting all of my testing was done with Drupal core, because that was the only option at the time with Softaculous.
If memory serves it was still nearly 30k files for core plus the necessary contrib modules.
- 🇺🇸United States phenaproxima Massachusetts
I don't think this will be trivial to implement...but it might not be particularly hard, either.
Why? Because Package Manager -- well, Composer Stager, really -- uses rsync to copy files to and fro. If I remember correctly, rsync only copies things that have changed. I'm not sure if Composer Stager is sending any flags to rsync that might change the behavior, but if it doesn't, then...well...in theory, if we keep a sandbox directory alive for a long time, it might still work out just fine.

But we'd probably want to have some kind of expiration, like if the sandbox directory has been sitting around for more than two weeks, it might well be time to clean it out.
- 🇬🇧United Kingdom catch
Yeah, rsync will only copy changed files and delete deleted ones, but we need to check the flags so it's only treating actually changed files as changed (e.g. are timestamps preserved on copy? We should make sure that happens anyway, because filecache depends on mtime).
For expiration we could store a timestamp of the latest package_manager operation in k/v, and check that on cron. Two months seems about right since that would mean the site has skipped two core updates by that point.
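
As a rough illustration of the expiration idea - this is a plain shell/cron analogue of the k/v-timestamp approach, not the Package Manager implementation - a marker file touched after each operation can stand in for the stored timestamp:

# after each successful package_manager operation (hypothetical path)
touch /path/to/sandbox/.last-operation
# on cron: remove the sandbox if the marker is older than ~60 days
[ -n "$(find /path/to/sandbox/.last-operation -mtime +60 2>/dev/null)" ] && rm -rf /path/to/sandbox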
Even if we're deleting the directory occasionally it should still help as soon as it starts getting used again for >1 operation in 2 months.
- 🇺🇸United States phenaproxima Massachusetts
Here are the flags used by Composer Stager -- it looks like it's using file checksums: https://github.com/php-tuf/composer-stager/blob/develop/src/Internal/Fil...
- 🇬🇧United Kingdom catch
File checksums should be fine, and --archive should be preserving mtime anyway, but the combination of both should mean an absolute minimum of changes once this is implemented.
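
As a concrete illustration of that combination (not the exact Composer Stager command line):

# --archive preserves mtimes among other attributes; --checksum compares file
# contents rather than size+mtime, so files are only re-copied when they actually change
rsync --archive --checksum --delete /path/to/live/ /path/to/sandbox/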