Package Manager actions intermittently blocked on A2 Hosting

Created on 3 July 2025

Problem/Motivation

Not calling this a bug since it's not strictly an issue on our end, but in my testing of Package Manager / Project Browser on A2 Hosting, there are times when it works as expected and other times when adding a module seems to trigger some kind of automated blocker that kills the site.

When this occurs, the site becomes unavailable, returning a 403 response, and the only way to get it back is to contact support, who can then unblock it.

I have had this occur multiple times but was never able to get support to confirm that the block was based on automated detection; it remains our best theory. Considering that tens of thousands of files are being copied around, it is very plausible.

Steps to reproduce

  1. Install Drupal on A2 Hosting
  2. Set up Project Browser per the instructions β†’
  3. Visit the PB modules page and try to install a module that is not already on local disk
  4. See that sometimes the task fails and the site is no longer accessible

Proposed resolution

??

Remaining tasks

TBC

User interface changes

N/A

Introduced terminology

N/A

API changes

TBC?

Data model changes

N/A

Release notes snippet

N/A

📌 Task
Status

Active

Version

11.0 🔥

Component

package_manager.module

Created by

🇦🇺Australia pameeela


Comments & Activities

  • Issue created by @pameeela
  • 🇺🇸United States hestenet Portland, OR 🇺🇸
  • 🇬🇧United Kingdom catch

    While this might be a task, it seems at least major if package_manager can't work on one of its main target environments (assuming similar limits exist on other shared hosts). I also personally feel it needs to hard-block any attempt to charge money for site templates, because people actually need to be able to successfully use those sites on cheap hosting - if they can't, they deserve their money back.

    Did some quick numbers:

    There are about 20k files in Drupal's /core directory

    find ./ -type f | wc -l
    19855
    

    Of these, 4.2k are test classes

    find ./core -type f -name '*Test.php' | wc -l
    4225
    

    There are a further 1.2k files in system's test module directory:

     find ./core/modules/system/tests/modules -type f | wc -l
    1242
    

    Just in Umami there are over 500 files; probably a lot of this is config:

    find web/core/profiles/demo_umami -type f | wc -l
    576
    

    For comparison there are more like 3k files in /vendor from all of our production dependencies put together.

    find vendor -type f | wc -l
    2911
    

    So it looks to me like if we remove test modules and concrete test files from the core subtree split, we could drop at least a quarter of the files for production Drupal sites, and with more investigation we might be able to drop even more. We could also look into dropping tests from packages for contrib modules too.

    There has already been an issue open for that since 2019, so actually trying to do it should happen there: #3067979: Exclude test files from release packages β†’ .
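    If drupal.org packaging honours git archive semantics (an assumption on my part, not necessarily what that issue proposes), export-ignore rules in .gitattributes would be one way to do it; the effect is easy to check locally:

    # Assuming .gitattributes entries such as "core/**/tests/** export-ignore" have
    # been added, compare the packaged file count with the full checkout:
    git archive HEAD | tar -t | grep -c -v '/$'
    git ls-files | wc -l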

    But there might be other things going on.

    I don't know off the top of my head which flags package_manager uses for rsync, but is it setting --no-compress, and is it using the delta-transfer algorithm? (I think delta-transfer defaults to off for local directories, so it would have to be explicitly flagged back on to be used.)

    Also, could we look at permanently leaving a sandbox in place, and then relying on rsync --delete etc. to remove any cruft in the sandbox the next time it's used?
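    For reference, here is a rough sketch of the kind of rsync invocation being asked about - purely illustrative (the paths are made up, and these are not necessarily the flags package_manager / Composer Stager actually passes):

    # Hypothetical invocation, not package_manager's real one.
    # --whole-file disables the delta-transfer algorithm (it is already the default
    # when both source and destination are local paths), and compression is only
    # used if -z is passed explicitly. With a permanently maintained sandbox,
    # --delete would clear out cruft left over from previous runs.
    rsync --archive --whole-file --delete /path/to/active/ /path/to/sandbox/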

  • 🇺🇸United States hestenet Portland, OR 🇺🇸
  • 🇦🇺Australia pameeela

    Evidently A2 has been rebranded as 'Hosting.com', so just updating to reflect that.

  • 🇺🇸United States nicxvan

    We probably also need an issue to standardize the test module location. Most are in modulename/tests/modules, but some, like options, are in options/tests.

    I have a few clients where I just delete core/tests and core/*/tests. I should also add Umami and core/profiles/tests to that list (see the sketch below).
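    A rough sketch of that kind of cleanup (paths illustrative, assuming a standard core layout - verify nothing in them is still needed first, and keep demo_umami on any site installed from it):

    # Hypothetical cleanup of test code on a deployed site - check paths before deleting.
    find core/tests core/*/tests core/modules/*/tests core/profiles/tests -type f | wc -l
    rm -rf core/tests core/*/tests core/modules/*/tests core/profiles/tests
    # Only if the site was not installed from the Umami demo profile:
    rm -rf core/profiles/demo_umami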

  • 🇬🇧United Kingdom catch

    I've opened 📌 Permanently maintain a sandbox directory in package_manager Active, which is going to be non-trivial to implement but which I think might address the overall root cause here, although it needs to be combined with other issues to reduce the total number of files too.

    Did some googling for shared hosting limits, and have a theory for what the actual problem might be.

    Lots of shared hosts have an inode limit per account, e.g. this reddit thread talks about 250k inode limit. https://www.reddit.com/r/webhosting/comments/2y4qa4/a_question_about_ino...

    On 📌 Permanently maintain a sandbox directory in package_manager Active I'm estimating about 30,000 files in Drupal CMS, so when that's copied to the sandbox directory, that's 60,000 - obviously way under 250k and it ought not to trip the limit.

    However, I then found https://support.hostinger.com/en/articles/1583210-what-is-the-inode-limi...

    And more importantly https://support.hostinger.com/en/articles/1583491-why-does-the-number-of...

    So if hosting.com has a similar system, the following could be happening:

    1. Every time something is done via Project Browser, a sandbox directory is created and deleted.

    2. Creating the directory adds 30,000 files to the inode count.

    3. cPanel, or whatever does the inode tracking, is buggy and does not always subtract files from the count when they're deleted.

    4. This keeps happening until the hosting account thinks there are 250,000 files, even though there are really only 30,000 or 60,000 - and the account gets locked.

    5. You open a support ticket, and someone in hosting support clicks the 'recalculate file limit' button, which resets the count to the actual number of files on disk.

    And this also might explain why they wouldn't tell @pameeela what the problem was directly: the actual limit was not reached; instead, the sheer number of file operations triggers a bug on the hosting platform itself, which they probably don't want to advertise.

    hosting.com does say publicly that they have an inode limit of 600,000 files: https://kb.hosting.com/docs/inode-count

    If I'm right, then it only requires ten package_manager operations to potentially hit that limit, assuming deletions aren't registered consistently. It might also be possible to tell whether this is the right diagnosis by following the instructions on that page after the account has broken, and then checking the inode count in cPanel against the number of files actually on disk. If cPanel says 600,000 and the command line says 30,000 or 60,000, then that's it.
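    If someone can reproduce the broken state, a rough way to do that comparison from the shell (assuming shell access is available; note that cPanel counts inodes, i.e. directories as well as files):

    # Count every file and directory under the account - each one consumes an inode.
    find ~ | wc -l
    # GNU coreutils alternative:
    du --inodes -s ~

    If that number is a small fraction of what cPanel reports, the tracking theory above is probably right.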

  • 🇦🇺Australia pameeela

    @catch, the limit on the number of files is a totally separate problem. I did hit that once but the resulting behaviour is not the same (500s instead of 403).

    But I believe that I only hit that because of the failing jobs from being blocked, since the job never finished and the cleanup didn't happen. In my testing, the cleanup happens just fine as long as the job completes; the number of files was not meaningfully growing in normal usage. (The number of files, as well as the limit, is displayed in cPanel, so it's easy to track.) However, it would still be worthwhile to try to reduce the number of files!

  • 🇬🇧United Kingdom catch

    @pameeela I don't think it's actually hitting the file limit, but it could be incrementing the inode count without decrementing it, per the links above - the actual number of files remains OK, but the host thinks there are more.

    But also, 📌 Permanently maintain a sandbox directory in package_manager Active would dramatically reduce the number of file operations package_manager makes in almost all circumstances, so even if it's another limit being hit (disk operations, CPU, etc.), there is a good chance it would help with that anyway.
