- Issue created by @afi13
- Status changed to Needs review, 1:20pm 5 March 2024
- Issue was unassigned.
- Status changed to Closed: duplicate, 2:36am 6 March 2024
- Comment by cmlara (🇺🇸 United States):
I believe this is a duplicate of #2984268: Configuration option for disabling deletion from S3.
As noted in #3185760: Allow read only use of S3 bucket, I personally prefer that we trust the bucket to enforce file access controls. This is why the read-only control is implemented without actually preventing writes: we depend on the bucket being correctly configured.
Preventing deletes does not prevent truncating the files, which means this would generally have to be paired with read-only credentials and a read-only bucket setting, making it a duplicate feature.
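To illustrate why bucket-level enforcement is the right layer: a deny statement in an S3 bucket policy can block deletes regardless of what the module does. This is a minimal sketch with a hypothetical bucket name; note that, as described above, it does not stop `s3:PutObject` from truncating or overwriting objects, so it would still need to be paired with read-only credentials.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyObjectDeletion",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:DeleteObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```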
Not discussed in either issue is the fact that this breaks the standard filesystem API. Reporting that a file has been deleted when it has not is a contract violation that can and will confuse well-written scripts.
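As a concrete illustration of the contract at stake (a sketch in Python rather than a Drupal stream wrapper): well-written cleanup code assumes that a successful delete means the file is gone. A wrapper that reports success while silently keeping the object would make the final check here fail.

```python
import os
import tempfile

# Create a throwaway file on a real filesystem.
fd, path = tempfile.mkstemp()
os.close(fd)

# Delete it; on a real filesystem, a successful unlink
# guarantees the path no longer exists.
os.unlink(path)

# A well-written script may rely on exactly this invariant.
assert not os.path.exists(path)
```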
Since you mentioned development, I will note it is generally recommended to use a separate development/staging bucket. This bucket can carry all, or a subset, of your files, and AWS provides tools for duplicating buckets (and routinely refreshing them).
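One way to duplicate and periodically refresh such a bucket is the AWS CLI's `aws s3 sync` command. The bucket names below are hypothetical placeholders, and the command assumes credentials with read access to the source and write access to the destination.

```shell
# Copy objects from the production bucket into the staging bucket.
# --delete removes staging objects that no longer exist in production,
# keeping the staging bucket an up-to-date mirror on each run.
aws s3 sync s3://example-prod-bucket s3://example-staging-bucket --delete
```

Running this on a schedule (e.g. from cron) keeps the staging copy routinely refreshed without touching production.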
At this time I still cannot see this as a viable solution to implement without creating larger concerns.
I will note that in 4.x the modules are fully customizable, so anyone with a need to do so will be able to implement this without patching the core s3fs module.