Make the public file system an optional configuration

Created on 13 May 2016, over 8 years ago
Updated 24 March 2024, 9 months ago

Problem/Motivation

When Drupal is installed, it requires a valid, writable path for the public file system. Unfortunately, this breaks in environments that have no local shared storage and rely solely on remote storage such as S3 (object storage) or OpenStack Cinder (block storage). The public files requirement is particularly acute when using containers, where traditional solutions like NFS aren't usable due to security concerns. It is also a hazard for developers: disabling the public file system is error-prone, and contrib code often hardcodes public:// instead of reading the site's default scheme.
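
On that last point, core already exposes the configured default scheme. A minimal sketch of reading it instead of hardcoding public:// (the file path is made up for illustration):

  // Hardcoded scheme: breaks as soon as the site's default file system
  // is not the local public:// wrapper.
  $uri = 'public://example/logo.png';

  // Reading the configured default scheme from the system.file config
  // respects whatever stream wrapper the site has actually selected.
  $scheme = \Drupal::config('system.file')->get('default_scheme');
  $uri = $scheme . '://example/logo.png';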

Proposed resolution

  1. Allow alternate file systems to be configured in the installer, if they are available. This is somewhat possible in contrib today, but the public stream wrapper still has to be configured on top of the alternate stream wrapper (see the first sketch after this list).
  2. Allow installation without valid public files, disabling aggregation, Twig template caching, and so on until a valid stream wrapper is configured. This is addressed piecemeal in contrib, e.g. the twig_temp module for Twig template caches. Could this be brought into core?
  3. Add a setting (in settings.php) to use APCu as the write target. Use something like #2513326: Performance: create a PHP storage backend directly backed by cache for phpstorage, and add a very small PHP file that serves CSS and JS from APCu (see the second sketch after this list).
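
For item 1, Drupal 8+ registers stream wrappers as tagged services, so an alternate file system from contrib is wired in roughly as below. The module, class, and scheme names are hypothetical; only the tag format is core's:

  # mymodule.services.yml (module/class/scheme names are hypothetical).
  services:
    mymodule.s3_stream_wrapper:
      class: Drupal\mymodule\StreamWrapper\S3StreamWrapper
      tags:
        # Core's stream wrapper manager collects services tagged this
        # way and registers the wrapper under the given scheme.
        - { name: stream_wrapper, scheme: s3 }

Even with such a wrapper registered, the installer still insists on a working public:// path, which is the gap item 1 wants to close.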
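
For item 3, settings.php can already swap the PhpStorage backend per collection via the php_storage settings that PhpStorageFactory reads; what's missing is a cache- or APCu-backed implementation. A sketch, where the backend class is hypothetical until something like #2513326 lands:

  // settings.php (sketch). The $settings['php_storage'] override is an
  // existing core mechanism read by PhpStorageFactory; the class named
  // below does not exist in core and stands in for the cache-backed
  // backend proposed in #2513326.
  $settings['php_storage']['twig'] = [
    'class' => 'Drupal\Component\PhpStorage\CacheBackedStorage',
    'secret' => $settings['hash_salt'],
  ];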

Remaining tasks

Decide. Implement.

Could a contrib module override Drupal early enough to support installation and runtime according to this model, without needing to bring this into core itself?

User interface changes

API changes

Data model changes

Release notes snippet

Feature request
Status

Active

Version

11.0

Component
File system


Created by

πŸ‡¨πŸ‡¦Canada deviantintegral


Comments & Activities


  • πŸ‡ΊπŸ‡ΈUnited States lhridley

    Adding this contrib module, as it seems relevant to the discussion: https://www.drupal.org/project/filecache

    I've used this module to resolve database caching performance and stability issues. The particulars of the use case were:

    • Two newly built Drupal 9 sites, launched within a day of each other, both receiving heavy traffic and constant content edits from content maintainers
    • Both sites made heavy use of Layout Builder. One was built with a large number of components created in Storybook and leveraged Layout Builder's "dynamic" page configuration (stored as part of content); the other used a smaller set of components and leaned more heavily on Layout Builder page templates (stored in configuration)
    • Both sites were hosted in a containerized environment managed with Kubernetes (which did not scale horizontally, a discussion for another day), using Google Cloud SQL as a managed database option (Kubernetes may or may not be relevant; Google Cloud SQL definitely was)
    • Within 48 hours, both sites were consistently returning WSODs. During troubleshooting we found the backend database cache bins had grown to excessive sizes
    • Both sites sat behind Varnish, but the high volume of page edits meant cached pages were constantly invalidated. One site (the one leveraging Storybook components and Layout Builder dynamic page construction) also had Memcached in place
    • Before we could get additional caching layers in place, the constant thrashing of the backend cache bins brought the entire Google Cloud SQL instance offline, taking all sites on that managed instance down as database writes were locked. The size of the cache tables also made recovery from traditional Cloud SQL backups impossible, because the cache tables could not be imported; as a side consequence the Cloud SQL user table was lost as well, necessitating a total rebuild of the Cloud SQL instance before manual imports of each individual database could restore from backups. (Not a glowing reference for Google Cloud SQL, but the architecture of the overall setup may have been a factor.)

    To reduce the size of the cache tables immediately and allow recovery from backup (by truncating the cache tables in the database), we started looking for alternative backends for the cache bins. We ran across this module and added it to both sites, along with a dedicated volume mount for file cache storage and management.
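
    For anyone wiring this up: assuming the filecache module exposes its file-based backend as a cache backend service (the service name below is an assumption; check the module's documentation), individual bins are pointed at it via core's standard settings.php override:

      // settings.php (sketch). The $settings['cache'] overrides are a
      // core mechanism; 'cache.backend.file_system' is an assumed name
      // for the filecache module's backend service, so verify it
      // against the module's documentation before use.
      $settings['cache']['bins']['render'] = 'cache.backend.file_system';
      $settings['cache']['bins']['page'] = 'cache.backend.file_system';
      // Or route every bin that has no explicit override:
      // $settings['cache']['default'] = 'cache.backend.file_system';

    The dedicated volume mount mentioned above is then whatever directory that backend is configured to write to.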

    This solved the issue at hand and worked beautifully. We continually monitored the disk space usage of the dedicated volume mount; after two months the usage leveled out to the point that we could adjust the volume sizes downward, and we implemented a monitoring alert to notify the developer team when disk usage for that volume exceeded 70%.

    To my knowledge this cache system is still in place for both sites.
