Drupal\s3fs\Exceptions\CrossSchemeAccessException: Cross scheme access attempt blocked in Drupal\s3fs\StreamWrapper\S3fsStream->preventCrossSchemeAccess() (line 81 of modules/contrib/s3fs/src/Traits/S3fsPathsTrait.php).

Created on 28 November 2022
Updated 11 November 2023

Problem/Motivation

Drupal\s3fs\Exceptions\CrossSchemeAccessException: Cross scheme access attempt blocked in Drupal\s3fs\StreamWrapper\S3fsStream->preventCrossSchemeAccess() (line 81 of modules/contrib/s3fs/src/Traits/S3fsPathsTrait.php).
The condition is checked with an OR, but we need to check it with an AND, because any public or private URI satisfies one of the two conditions on its own:
( mb_strpos($uri, 's3://' . $public_folder . '/') === 0 && mb_strpos($uri, 's3://' . $private_folder . '/') === 0 )
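
For reference, a simplified sketch of the check as shipped, using hypothetical $uri, $public_folder and $private_folder variables (see line 81 of S3fsPathsTrait for the actual implementation); the && variant above is the change this report proposes:

// As shipped (per this report), the guard fires when the URI matches
// EITHER reserved folder prefix, hence the ||.
if (mb_strpos($uri, 's3://' . $public_folder . '/') === 0
  || mb_strpos($uri, 's3://' . $private_folder . '/') === 0) {
  throw new CrossSchemeAccessException('Cross scheme access attempt blocked.');
}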

🐛 Bug report
Status

Closed: cannot reproduce

Version

3.0

Component

Code

Created by

🇮🇳India Jeya sundhar Coimbatore

Comments & Activities

  • 🇮🇳India sushma22

    Hi @cmlara,

    The export-to-Excel feature on the listing page stopped working after the preventCrossSchemeAccess() function was introduced.

    Regards,
    Sushma

  • 🇧🇷Brazil varod-br São Paulo

    Hi there!

    What's the real reason to have this condition in this trait?

    I recently upgraded from 8-beta3 to 3.3, as part of my Drupal core upgrade (D10) and I'm facing this issue.

    What happens is that when you set the public folder field to the value "public" or the private folder field to the value "private", the URI matches the condition and the exception is thrown.

    I had to rename the folder from public to something else, like public2 or pub, to solve it without any code change.
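
    For illustration, that rename corresponds to a configuration change along these lines (a sketch using a settings.php override against the module's s3fs.settings config; the folder names here are just examples):

    // settings.php: use bucket folder names that do not collide with the
    // paths that were being accessed directly via s3://.
    $config['s3fs.settings']['public_folder'] = 'public2';
    $config['s3fs.settings']['private_folder'] = 'private2';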

    The site was working fine using the beta version of the module, without this trait. So I'm wondering whether to move on without any patch (if it's working as it should) or whether we must consider patching this.

    Do you need any more information in this regard?

  • 🇺🇸United States cmlara

    What's the real reason to have this condition in this trait?

    It’s a security check.

    Its primary reason is to prevent files stored under private:// from being read without applying security access controls. Secondarily, it also validates public://, as there are still risks of overwriting data cross-scheme with that path.

    Without these checks a user could read (or write) using the public path of s3://s3fs-private/secret.txt which is actually private://secret.txt.

    I strongly recommend NOT removing the trait given the risk of leaking private data or silently overwriting existing data.

    Normally I would consider this an administrator error for placing a private folder under a public path, however for s3fs it’s a fundamental design limitation of how the 3.x code operates so the code needs to be responsible for the security checks.
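
    As a minimal sketch of that overlap (assuming the module's default folder names and a hypothetical bucket_key() helper; the real mapping is internal to s3fs):

    // Hypothetical helper showing why both URIs hit the same bucket object.
    // 's3fs-public' / 's3fs-private' are the module's default folder names.
    function bucket_key(string $uri): string {
      $map = [
        'public://' => 's3fs-public/',
        'private://' => 's3fs-private/',
        's3://' => '',
      ];
      foreach ($map as $scheme => $folder) {
        if (mb_strpos($uri, $scheme) === 0) {
          return $folder . mb_substr($uri, mb_strlen($scheme));
        }
      }
      return $uri;
    }

    // Both return 's3fs-private/secret.txt': the same object, but only the
    // private:// route applies Drupal's private file access controls.
    bucket_key('private://secret.txt');
    bucket_key('s3://s3fs-private/secret.txt');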

    I had to rename the folder from public to whatever, like: public2 ou pub, to solve without any code change

    Correct. By doing so you have changed where new files are going to be uploaded in the bucket and where s3fs is going to look for existing files, which changes which paths need to be protected.

  • 🇧🇷Brazil varod-br São Paulo

    I still think this is a false positive, since my path was s3://public/filename.ext and not public://filename.ext.

    If anyone in the S3FS configuration form uses "public" as the name of their public folder or "private" as the name of their private folder, the check is going to throw the exception.

    If this is not allowed, the form validation should check and avoid this.

    Best regards!

  • 🇺🇸United States cmlara

    I still guess there is a false positive, since my path was s3://public/filename.ext and not public://filename.ext.

    Based on what you have described, this sounds exactly like what I described above: it is what the security check is designed to do.

    The location in your bucket listed for public_folder has always been reserved for use by the public:// takeover feature.

    s3://$config['public_folder'] is only intended to be accessed by the public:// scheme.

    • public://filename.ext == Safe and intended use of public:// takeover.
    • s3://$config['public_folder']/filename.ext == Dangerous, bypasses policy, allows for files to be overwritten, resulting in data loss.

    To my knowledge, trying to access s3://$config['public_folder']/ was never intended behavior; it sometimes only worked because of other unrelated bugs (or operating in a debug mode) and could break any time a metadata cache refresh was performed.
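
    In code terms, the distinction above looks roughly like this (a sketch; $config['public_folder'] stands for the configured folder name):

    // Safe: the intended public:// takeover path, mediated by s3fs.
    $data = file_get_contents('public://filename.ext');

    // Dangerous: reaching into the reserved takeover folder directly via
    // s3://; this is the access preventCrossSchemeAccess() blocks.
    $data = file_get_contents('s3://' . $config['public_folder'] . '/filename.ext');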
