The patch appears to change the "Forced save as" code; however, the reproduction steps do not indicate this is configured.
Can you confirm this only occurs on paths that match "Forced save as"?
The problem is that the URL incorrectly starts with /s3/files, which should not be present.
If the derivative is not present in the bucket, this is expected, correct behavior. This URL format allows the site to generate derivatives on demand.
After a derivative is generated, caches will be cleared and URL generation will point directly to the object in the bucket.
The image does not load in the frontend because of the malformed path.
What error messages occur when visiting the path? Are any errors present in the site logs?
Other styles work correctly.
Does this style use a plugin or feature that other styles do not?
In the past we have encountered image plugins assuming images are on local disk (use of the realpath() method); when this occurs, the plugins are unable to render an image style.
Had a chance to look into this module again.
This issue appears to primarily be caused by site admin error.
It is required that, via admin/reports/fields/gdpr-fields, the field that references the user (such as 'author' for an article) be set to "This content is owned by the referenced user". This populates a relationship lookup for entities to be searched in response to the user request.
If one attempts to set the fields via the content type (manage fields), this field is often not available and can be overlooked (I suspect this is what occurred when I originally opened this issue). In hindsight (with deeper core knowledge) this seems obvious: there is no unified way between entities to know who the owner of a record is, and as such it must be configured in some manner by a site owner.
Still diagnosing other concerns, however they can become new issues in the future.
It has been 2 weeks without a request for further information or indication that #3 did not answer the questions asked.
My understanding now is that $settings['s3fs.upload_as_private'] = TRUE; would mark individual files and folders as private when uploaded, but that it has little to do with whether the module works with a bucket set as private?
Very close; the minor distinction is that this prevents S3FS from setting a "public-read" ACL entry. When a bucket is set to "block public ACLs" (private), setting that ACL would error out on a putObject call. By not placing an ACL entry, a bucket defaults to blocking access to an object.
Note: we never set a public ACL entry for the private:// scheme.
"Is my understanding that a bucket MUST be set to public for this module to work?" In the sense that there's no combination of settings that would work with a bucket configured as private?
Depending upon how your bucket is configured, there are options when ACLs are blocked:
- A bucket Access Policy permitting access (this is considered more secure as the policy is defined in a single policy element and can be updated without touching each object in the bucket)
- Configure the module to utilize a CNAME entry that directs traffic to a CDN/Proxy that has access to the bucket
- Utilize only the private:// scheme takeover (all files are delivered through Drupal)
- Pre-signed URLs (may need to disable Drupal caching, ref 🐛 [PP-1] S3 Access Denied Request has expired - getExternalUrl() cached beyond presign timeout Postponed )
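For the bucket Access Policy option, a minimal sketch of what such a policy could look like (the bucket name EXAMPLE-BUCKET and the s3fs-public/* prefix are hypothetical; scope the Resource to whatever prefix actually holds your publicly served files):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicReadForS3fsObjects",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::EXAMPLE-BUCKET/s3fs-public/*"
    }
  ]
}
```

Because the grant lives in one policy document rather than on per-object ACLs, it can be tightened or revoked later without touching each object in the bucket.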
Our README currently addresses this as "If s3fs will provide storage for s3:// or public:// files the generated links will return 403 errors unless access is granted either with presigned urls or through other external means.", which does carry a connotation of assuming a site owner deploying the s3fs module has a basic understanding of their provider's S3 bucket service.
The exact option you choose will depend upon what features your provider allows and your design decisions.
Looking at a new project created as described, drupal/recommended-project does not appear to require paragonie/constant_time_encoding or block ^2. This appears to be normal Composer operation, with local constraint conflicts introduced by adding an additional conflict in step 2 of the reproduction example.
christian-riesen/otp >= 2.4.0 only supports ^1 || ^2.
^3 would lock back to christian-riesen/otp:2.3.0 (see https://git.drupalcode.org/issue/tfa-3551511/-/jobs/6875247#L88 ).
While we support the older API I'm not sure we want TFA "encouraging" composer to allow this rollback when we would want to prefer the newer versions.
At a minimum we would want the tests to use ^2.7.
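As a sketch of what that minimal pin could look like in a root composer.json (both constraints here are illustrative, not a recommendation for any particular site):

```json
{
  "require": {
    "drupal/tfa": "^1.0",
    "christian-riesen/otp": "^2.7"
  }
}
```

Explicitly requiring the newer otp line prevents Composer from "helpfully" resolving the conflict by rolling back to christian-riesen/otp 2.3.0.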
This does point out another reason that having the plugins not be part of the main TFA library would allow more flexibility (ref: 📌 Remove TOTP/HOTP plugins from TFA module (adding was a regression) Closed: works as designed )
I'm inclined to believe we should won't-fix this unless christian-riesen/otp has plans to support ^3.
I will additionally note our 2.x branch will not depend upon either of these packages directly.
Quickly glancing at \Drupal\system\Form\ThemeSettingsForm::validatePath(), my primary question would be: was the favicon file uploaded to s3:// via Drupal (processed via the s3fs module), or was it uploaded directly to the bucket? If directly to the bucket, have you refreshed the s3fs metadata cache after uploading? Files uploaded outside of s3fs are unknown to us unless ignore metadata is enabled (which is not recommended for production) or a metadata cache refresh occurs.
File management modules could be used to upload the file, as could direct PHP code using copy() from local disk to s3://, to avoid needing to perform a full metadata cache refresh.
Performed a quick test locally in the lab and it appears to work (when the file is known to s3fs):
$ ddev drush php
> copy('/var/www/html/web/core/themes/olivero/favicon.ico', 's3://test.ico');
= true
Generates:
<link rel="icon" href="https://s3fs-test-bucket.s3.localhost.localstack.cloud:4566/test.ico" type="image/vnd.microsoft.icon">
I'm in pretty regular contact with coltrane.
That is great to hear! That does give hope that the owner scenario could be resolved without needing to involve third parties or forking.
Would certainly be wise to make it a routine process.
There are a few options to handle coltrane's access that are less disruptive to the userbase than a fork. Which of those are you considering?
Main item I can see is trying to reach out via any contact point we can find, social media, git log commit emails, contact form, etc, though I'm hesitant to think we would get a response given the long inactive period.
D.O. does have other methods to 'steal' the namespace however they are morally/ethically questionable and it is not a process I can personally follow.
I will note with 2.x being a significant change in architecture the overall impact would be minor if it was the point a fork occurred.
I'm not sure either of these really does what this issue proposed (at least as a spinoff from the original opening Slack discussion).
The warning says it should rarely be used.
Does the Description really cover the case of explaining "This permission is only supposed to allow you to do XYZ, and anything outside of that should be considered a security flaw"?
I will add to this the advice from #19 regarding the parent base class (even though #20 appears to indicate it is no longer relevant for this application, the applicant may find it useful in the future).
Drupal Core API BC Policy:
Where a base class or trait is provided, use those. This way your class should inherit new methods from the class or trait when they're added.
https://www.drupal.org/about/core/policies/core-change-policies/bc-polic... →
One may wish to use the base class (even when not normally calling the parent implementation) in order to be prepared for future core API changes (new methods added). This may not happen frequently; however, the core team notes that base classes are their method for avoiding breaking deployments if they modify interfaces in a minor release (which is allowed in Core's BC policy, compared to Symfony, where it is not).
#4 doesn't seem like a valid argument for this. It should be very easy to have a 'proof' job that runs (when not in the main project) that generates the assets and makes them available to view/download without publishing to Pages.
if this job is triggered on an MR pipeline that was created by a maintainer, then the MR doc changes are written (and therefore publicly published) to the "real" live doc site, before the change is committed.
IIRC protections were put in for that, here is one of my pipelines where I (maintainer) started the job, it ran as a merged result outside the fork yet the Pages job didn't run. https://git.drupalcode.org/project/tfa/-/merge_requests/161/pipelines. Even if this is moved to non-parent project this protection would still be needed to avoid each fork having a published site (IIRC we have seen at least one case of the issue fork site being seen in search engines before protections were put in).
Merge Trains:
Personally I will continue using merge trains. I've been saved by them a number of times where something changed and the train caught it before it turned into an elevated-response need (HEAD should not be failing), allowing it to be worked at leisure. It also allows me to bulk commit a number of issues in a 3 minute window rather than being forced to wait for each commit to pass testing or manually re-run testing.
I can re-enable them on all my projects (except one where I'm just a 'developer') and will likely just write a script to restore them. I will find it annoying to do, however can live with it.
I do believe it's a bad precedent to set (telling maintainers and the public that code failures in HEAD are better than a bit of CI), however I'm not paying the CI bill so I suppose what I say has limited weight.
Some quotes of mine from 2024:
I'll note the 1/3 figure is a theoretical maximum savings, it is actually 1/(total_commit_tests + 2).
That said this is also fundamentally a question of what does D.O. want to say as a community.
Is human or computer time considered more valuable?
Is reliably clean HEAD more valuable to the community than possibly broken HEAD?
https://drupal.slack.com/archives/CGKLP028K/p1712685849266349?thread_ts=...
Merge trains test for that scenario where new code may have been added that interacts with the code being committed, or where you're committing multiple issues at once and do not want to go back and run tests on each issue after the first issue is committed (do I spend 60 seconds to commit 3 issues, or 25 minutes?).
We could have spent time teaching maintainers how to use the "Merge immediately" feature (which I use when I know the MR just tested HEAD and no other commits are going in), however the whole GitLab UI experience has been very much a 'figure it out on your own' process (to be fair: it isn't that hard, and I'm shocked how many maintainers have had trouble learning to use it).
ci_allow_fork_pipelines_to_run_in_parent_project:
If I understand correctly, changing this setting to false might be an argument to keep merge trains.
GitLab does warn:
If the last pipeline ran in the fork project, it may be inaccurate. Before merge, we advise running a pipeline in this project. Learn more
However, I imagine that would not be a significant issue for most maintainers, as I haven't seen much in the way of isolated secrets or very long-running jobs needing cache from the parent project (though if we ever deploy the GitLab Cache API this could change) that would benefit from having different tasks in the parent project.
Again as long as we can still enable it per project (IIRC this is one of those "API only" config options) it shouldn't be a major issue.
There are indeed some security positives around this, in that code never executes with the parent project's secrets until the MR is being merged, and I can agree the average maintainer may not be looking at the code closely enough to see what it impacts.
I found myself questioning the code execution flow when looking at the tests, as such I added a comment in the code under test in order to clarify what is occurring.
https://git.drupalcode.org/issue/tfa-3548543/-/pipelines/608296 is the ^10.5 run of the pipeline.
#5 and #6 bring up a good point.
How much of this is just maintainers having chosen not to have enough permissions (choosing not to pursue full maintainer-level permissions), where a warning will provide no advantage and may instead just confuse maintainers?
I know @nicxvan and @smustave have previously been made aware of these limitations of being added as a co-maintainer; I can only assume it is by choice that they do not request an upgrade to full maintainer on projects.
If we could allow maintainers to change the default that might be nice too.
I imagine D.O. could add this back in as a feature on the D.O. side (IIRC that has been rejected in the past), however I will note that I tend to agree with GitLab on this one: a 'developer' shouldn't have this level of control. This is a roadmap-defining change; if you're making that level of decision you should also be making other decisions around development and have the appropriate level of permissions as a maintainer.
Upgrading ✨ Disallow viewing recovery codes after first display Active to a must-have after the contrib SA earlier this year.
Adding 📌 Visual audit of code execution flows Active as we should give one more visual look over all areas as a final check for any visible flaws.
Upgrading 📌 Public followup for SA-CONTRIB-2025-023 Active to must have list as it involves adding alerting/monitoring of our security sensitive components to reduce future risks of exploits.
I just need to finish the migration test on this; otherwise most of the code is ready.
This will have some impact at upgrade time due to the need to hash all recovery codes.
but even Drupal 10 requires at least PHP 8.1.
Going even further, D9.0.0 required PHP 7.3.
I could say there really are no D8.9 installs out there, however https://www.drupal.org/project/usage/drupal → would show there are some (and it is known to under-report); very, very few, however still some. I won't claim any of those are running TFA (and if they were, the odds they are updating are even lower). (Side note: last year I upgraded a site that was running Drupal 8.0.0; interestingly, they were running the latest versions of the contrib modules they could, even ones released several core minors later.)
On top of that, I don't see us as developing for a release of core; I see us as developing for an API spec originally defined by Core. API specs never really go end of life, they just eventually get replaced. It just so happens that the majority of installs will be Drupal Core releases matching those versions (or at least very minimally patched forks).
Postponing 8.4 fixes on
I would contend these are not fixes; they are feature enhancements, and this creates a subtle yet important contextual difference in the discussions.
the off chance someone is using an ancient version of PHP feels backwards to me,
I can respect that feeling; I find it annoying at times too. It would be very easy for me as a developer to say "let's just drop that so we can adopt this new shiny feature", however I came up the ranks on the system admin and network engineering sides, and I've dealt with crazy deployments in the past where very old software is used in production. SemVer wasn't utilized during those times, however it would have helped.
SemVer is fundamentally about requiring developers to think and plan in advance while retaining all of their past: to move away from the 'wild west', a project must make a conscious, visible (and in some circles painful) choice to drop that history (no matter how old that history is), rather than disregarding it.
As frustrating as it is on the developer side to adhere to the strict principles, I see it as honoring the fundamental promises from developers to the engineers who have to implement our code, making life more predictable.
we're soft-blocked on a new major given the architectural changes in 2.x that is not production ready.
This isn't wrong, and I'm responsible for that, as I haven't set a hard cutoff on the specs for 2.x. It also doesn't help that I took a detour from the major issues recently to look at smaller, easier 'wins' to clear my head after a year and a half of security-only design changes.
Replied to comment #13 in #3496667-2: PHP 7.0 Support → to keep the questions/responses closer linked to the issues.
Back to postponed pending that issue's decision.
Responding to comment #3496146-13: Implicitly marking parameter as nullable is deprecated in PHP8.4 → :
PHP 7.4 has been EOL for almost 3 years
Zend is still providing generally available LTS support until the end of 2026 for, I believe, as far back as 7.2. This doesn't consider the possibility that private contracts may exist for even older versions, nor any other vendor that may have taken on the challenge.
why would we need to support 7.0?
Personally I follow a strict interpretation of SemVer: you don't drop operating environments without a new major. Do I recommend this be used on a PHP 7.0 install? Hell no; however, as seen above, other vendors do support PHP longer than its EOL. This is part of the reason why I as a developer prefer a strict interpretation: we just don't know how our project is being used in other environments, and even the most insane of deployments may still be in operation and be secure.
Do I believe that a sizeable number (or even any) of sites are running TFA on PHP 7.0? No; however, in my mind I don't see that as a reason to make a 'breaking' change outside of a major.
We can certainly argue that the 8.x-1.x branch isn't bound to SemVer given its version scheme, though I generally prefer to avoid that, as I see it as a bad habit to get into. There is also room to argue that Composer's 'operational controls' come into play in blocking upgrades (avoiding creating a WSOD), making it non-disruptive, though again I tend to avoid that as I view 'supported environments' as the standard to key off of.
These warnings become a problem if the module ever needs to support PHP 9; however, I don't see that as a likely occurrence for the 8.x-1.x branch. The 8.x-1.x branch already has known gross deficiencies, and with no PHP 9 release on the roadmap the odds are 2.x will be out well before PHP 9; if for some reason it is not, 2.x could be pushed into service well in advance to provide PHP 9 support.
That said, this isn't entirely up to me; as 'just a co-maintainer', any of the other developers could override me at any time, which is why this issue was opened to allow them to provide their input.
Thanks for the backport. I had to run out right after committing the 2.x branch.
(we can keep a map so we don't do duplicate calls). It's an API call for each individual email.
Considering this is a privileged call and would have to be made by D.O. infra, I don't see a reason there can't be a backend cache that retains these mappings. We do have 'drive by' contributors only ever seen once in their lifetime, however are those the majority of ecosystem commits? This feels like a non-barrier to me.
I have to imagine that when #3300281: Have git.drupalcode.org manage secondary emails, replacing multiple_email module → was implemented, we knew at some point we would have to have D.O. query G.D.O. and accepted those calls as acceptable.
Going back to #8. We'd need to do an API call for every commit listed on every MR for an issue
The Commits API can return multiple commits per call.
If this does take more API calls, then we should look if we need to consider other options.
This can also be hooked into the webhook system so that the majority of the time this work is done when a commit is made or a comment posted, leaving only a rare need to hit the “sync” button (which is not on every page load).
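The "keep a map" idea above can be sketched as follows; this is a hypothetical illustration only, with the GitLab email-to-user lookup stubbed out, showing that each distinct author email triggers at most one API call no matter how many commits across MRs reference it:

```python
from functools import lru_cache

# Counter standing in for real network traffic to the Users API.
api_calls = 0

@lru_cache(maxsize=None)
def resolve_email(email: str) -> str:
    """Stub for one GET /users?search=<email> lookup (hypothetical mapping)."""
    global api_calls
    api_calls += 1
    return email.split("@")[0]  # placeholder: real code would return the d.o. user

# Three commit authors across an issue's MRs, with one repeat.
commit_authors = ["alice@example.com", "bob@example.com", "alice@example.com"]
users = [resolve_email(e) for e in commit_authors]
# The repeated email is served from the cache, so only two lookups occur.
```

The same memoization could live in a persistent backend cache on the D.O. side rather than in-process, which is what makes the per-email API cost a one-time charge.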
Question:
Where would the decision from this issue be documented? My understanding is that 'How site moderators handle requests to be project owner, maintainer, or co-maintainer.' is not a policy document (and that this is a 'policy documenting' issue).
and this will only work for the public_email, not secondary emails
Has this been validated? https://gitlab.com/gitlab-org/gitlab/-/issues/26110 implies secondary and private emails are searchable if done using an account with sufficient privileges.
You can also use ?search= to search for users by name, username, or email. For example, /users?search=John. When you search for a:
https://docs.gitlab.com/api/users/#as-an-administrator
I'm not sure whether those other issues are related or not. But the fixes on those issues is not a good one, if anything it's worse.
You should likely comment on those issues as to why they are not a reasonable fix.
Additionally be sure to validate that there are not multiple bugs present at the same time (which would be fixed in separate issues).
though I'm not thrilled about it being described as a "V sketchy fix" haha.
When a developer says that, it generally is true. My experience is that those types of fixes are a guide for context, however they are usually not themselves the solution.
TFA set up process when having multiple validation plugins available is clearly broken and most end users have not been able to complete the process because they didn't understand the conflicting messaging.
Unsure how many are unable to do so (as it took a very long time for anyone to even notice this bug existed, and it seems to only occur if you don't set up the 'default' type first), however I agree that there have been issues that continue to need to be solved.
Patch files are no longer utilized on Drupal.org.
All work must be submitted in the form of an MR in order to run tests and be evaluated for acceptance.
This is not a new problem, and the commit message format or the recent changes in the credit system storage have no effect on this.
Previous versions did not attempt to claim the individual wrote (part of) the code, as is being done with the proposed change in text to "Co-authored-by".
Setting needs work as I don't see any discussion about the potential liability for core committers claiming an individual who provided review comments disagreeing with the commit is an 'author' of the commit. This was raised in #3540547-8: Add a UI to change "role" values for each user in the footer → . As I am not a lawyer, I would suggest core run this past counsel before using "Co-authored-by:".
Yes, using By: breaks 3rd-party tooling support (which, if I understand correctly, was a major reason to migrate to the new standard message format), however misusing Co-authored-by would do the same. Additionally, we already deviate from the standard in other ways (prefixing the issue ID), which breaks tooling.
I also want to note that #29 echoes a similar view I previously posted on Slack: that if maintainers (originally discussing Contrib, however it equally applies to Core) have a concern about the commit message, it can indeed be made a gate for RTBC (although I believe only the MR opener and maintainers can edit an MR message).
even if GitLab would provide some limited “magic” automatically if we used Co-authored-by.
Don't GitHub and GitLab need full email addresses for this (matching one of the user's emails)? Or are we referring to the fact that GitLab can automatically populate Co-authored-by based on commit authors? If the former, we won't really see any magic from any of these proposed methods.
If we need to test any of these ideas we still have https://www.drupal.org/project/test_commit_message → where we can put some test commits in.
Is there a release notes generator handling the new commit format?
I believe Matt Glaman's Drupal MRN site should support all 3 versions of this (By, Authored-by, and Co-authored-by). Related Slack thread: https://drupal.slack.com/archives/C1BMUQ9U6/p1756151091432389
Additionally, some more flexible generators (I'm looking at git-cliff at the moment) may be able to do the basic release notes portions as well with customization (not sure how they would handle the attribution being malformed; I suspect it would be ignored).
The DA/Core may be abandoning the roles classification.
Infra pushed 📌 Simplify commit message Active, changing it to just By: foo.
Core is discussing policy in ✨ Simplify agreed commit format Active .
Not closing this issue as outdated as it could be considered a followup request to restore the feature.
If you are considering removing the user you may wish to also take into account 📌 Employees of CMS Minds could use some training assistance Active where the vendor the user represents has previously been identified as having deficiencies.
📌 [META|POLICY] Think of a way to make it less easy to become a (co-) maintainer Active (and its related issues) may also be worth reading for consideration of the concerns related to the adding random users with no significant work history to a module, especially a user lacking the security opt in permission when the module they are applying for is enrolled.
Encountered a wave of PHPCS issues being updated by https://www.drupal.org/user/3709758/track →.
A quick glance at the user's history shows the majority of their work over the past 3 years is PHPCS. Included in this wave were the following issues on modules that have no actively supported releases (essentially abandoned), indicating this may be abusive behavior:
- https://www.drupal.org/project/business/issues/3353194 🐛 Drupal Coding Standard As Per 'phpcs --standard=Drupal'. Needs work
- https://www.drupal.org/project/private_taxonomy/issues/3404201 🐛 Issue reported by Drupal coding standards Active
- https://www.drupal.org/project/bealestreet/issues/3285835 💬 Theme is not compatible to Drupal coding standard. Needs work
- https://www.drupal.org/project/public_redaction/issues/3249159 🐛 Minor PHPCS issues Needs work
- https://www.drupal.org/project/omdb_api/issues/3293022 📌 phpcs --standard=DrupalPractice unused variable Needs review
Also worth noting: the user submitted patches instead of MRs and did not provide interdiffs, implying there is still a lack of training inside "TO THE NEW" regarding how to work on issues.
I see no post above regarding the DA having reached out to the vendor, this may have been one they failed to act upon.
I'd suggest that we only complete the first section of the v4 calculator initially.
You may want to consider the “Provider Urgency” value from the supplemental metrics, as it can be used by projects to signify “how much do we think you really need to worry about this”. An extra signifier for “yes, this is technically a vulnerability; the odds of it causing you a problem are almost nil, however we have to disclose it” (Clear) vs. “this is the new Drupalgeddon, patch now” (Red).
I believe this is currently postponed on 📌 FileSystem::tempnam() should use realpath() instead of getDirectoryPath() Active which is postponed on 🐛 FileSystem::tempnam() doesn't respect subdirectories for stream wrappers Needs work
Adjusting IS and Title to capture that the root of this issue is the DST using criteria that differ from the CVE standards resulting in a CVE being issued for an unfixed core vulnerability.
There have been multiple proposals to replace CVE with something better, and as laudable as those efforts are, https://xkcd.com/927/ comes to mind.
@mcdruid - Drupal Security Team Member in comment #7 💬 Publication of CVE-2024-45440 by MITRE Active captures the problem perfectly: what makes the DST qualified to create a competing standard, especially when a portion of the Drupal ecosystem (the Drupal CNA) is contractually obligated to adhere to the CNA standard?
A Drupal Security Team Member recently said in a message:
I have not previously looked at CVE Numbering Authority (CNA) Operational Rules, but I believe this policy applies to the Drupal Security Team.
While in some ways this should not be surprising, since the DST and CNA operate under two different systems, in other ways it is shocking that the DST is not aware of rules the Drupal CNA may need to adhere to, and is likely clearing issues for public disclosure without the CNA rules being applied.
Compelling alignment to the CNA standard will likely place extra responsibility on Core and Contrib to resolve issues privately that would otherwise be ignored (made public).
updated the baseline and IS.
I was more looking for the IS to reflect the change that the queue channel would now be permitted to be re-opened without having to construct a new queue instance, as this significantly changed the scope of the issue from not closing the connection (yet closing the channel) to changing the handling of channel shutdowns and recovery.
On a positive note, it allowed queue channels to rebuild themselves if they need to; on the negative side, it means there is no way to assure the queue instance is not used to inject more messages. Could this be a concern? Why would we call shutdown if we just wanted to allow a re-connect?
I'll note the sample uses the Drupal Queue service, though really that is somewhat of a bad example (we likely need a separate issue to de-scope this from public), as it only constructs a single instance and has no logic to create a new one. Third-party APIs may not have this constraint (e.g. a Symfony non-shared service), so the ability to call createItem() again on the same queue could be considered a non-issue.
I just encountered this as a contrib maintainer, where it was suggesting author status for me when I haven't done any code (and would not be entitled to author status).
are the extra choices for maintainers to add more detail just to a commit message worth their time? Maintainers who do want to add the extra detail can edit the commit message after copying it.
Having the UI would reduce the time maintainers have to spend adding the extra detail: a few hours of work to save countless hours of maintainer time per year, compounded over the next decade.
Core included this in their commit standard, I would assume Infra would want to implement this for them (otherwise they might as well just use the GitLab UI).
The commit trailers are not used in any crediting on Drupal.org, they are only for people who read the commit history.
Anyone listed as an author would generally be presumed to potentially hold copyright to the code and joint liability for any misappropriated code; while reviewers may not have blocked the code from entering the repo, they can reasonably assert that they were not involved in obtaining said code.
This can lead into even more interesting territory when we discuss issues where a user comments that they object to the code; if the code later causes issues (site down, etc.), including them as an 'author' could possibly be seen as libel.
As an infrequent contributor to core: I absolutely do not want my name listed as an author if all I did was endorse the concept (or worse, if I objected to the concept).
A big concern I have with the credit system is maintainers needing to spend time doing crediting paperwork.
Unfortunately our time to resolve that was before this was adopted as a standard; now that it's standard we just have to deal with it.
Duplicate of ✨ Add a UI to change footer "token" values for each user Active (I had thought we had an issue for this, though the title didn't stand out to me when I was looking before creating this issue.)
NW for the PHPStan baseline ignore removal and an IS update to capture the changes to scope.
Adding PHP to the list.
Drupal 11.1+ does not support PHP older than 8.3. Drupal 10 does support 8.1 and 8.2; however, existing sites can remain on the 1.x branch.
I am not sure we should delete them and lose all the history. Can we archive them somewhere?
Related question that comes to mind after recently revisiting an incident (couple years ago) where a site moderator deleted content and re-posted it.
How has the CC-BY-SA 2.0 license been honored in the new coding standards repo for these? Was every revision given a commit, or is D.O. dependent upon retaining these links to retain a record of who changed what and when in order to attribute it? (Though a deeper question: does the edit history itself even comply with the license if multiple contributors were involved in editing the source changes in the issue queue?)
Coming from Slack: this is a 60-second review (more time was spent writing this than actually looking at the issues), based on your recent commits, Slack comments, and the stack trace present in your logs.
https://git.drupalcode.org/issue/entity_share-3540410/-/blob/8.x-3.x/mod...
Without 100% validating the API, I suspect (95% confidence) that you're telling Guzzle to ignore your base_uri by having a leading / on the path (base paths generally only apply to relative references, and a path with a leading '/' is not relative).
That explains the login failure and the inability to download files (if I'm reading the rest of the test logs correctly).
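For anyone verifying this: the leading-slash behavior is plain RFC 3986 reference resolution, which Python's standard library implements with the same rules, so it can be demonstrated without Guzzle (the URLs below are placeholders):

```python
# RFC 3986 resolution: a target path beginning with "/" discards the
# base path entirely, while a relative path resolves under the base.
from urllib.parse import urljoin

base = "https://example.com/api/v1/"

print(urljoin(base, "login"))   # https://example.com/api/v1/login
print(urljoin(base, "/login"))  # https://example.com/login
```

So a request path with a leading slash would drop any subdirectory in the configured base_uri from the resolved URL.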
it doesn't make it optional in the code base you're arguing for the exact opposite here.
Sorry if there is confusion around this point. I have been trying to say that, yes, the Composer package is not optional in the code base; the Drupal module is optional from the standpoint of Drupal core operations. (With the module disabled, Drupal doesn't care that it is on disk; Composer, however, can still access the libraries/interfaces.)
There is no proposal here for how to provide backwards compatibility for that case or otherwise how it would be addressed.
Thank you for providing a technical concern that is rooted in current logic instead of developer design choice. I don't have an answer for that scenario at this moment and would need to take time to consider the impact.
I do wonder, had this issue been acted upon sooner, or at least been on the roadmap for all new designs, whether we might have avoided those concerns as part of the attributes conversion. We can't change the past and just have to push on; I only want to note how we could possibly add coordination during design phases.
We had to workaround a PHP bug with missing traits to support attribute discovery in
Indeed. I believe I was monitoring a number of issues related to annotations conversions including that issue.
interfaces it provides into a separate 'contracts' component
FWIW, that's currently the leading contender for my project, though option #30-2, having been through the design evaluation, would IMO also be significantly valid in this particular case.
Realistically, Module B belongs inside Module A's code base; however, when maintainers don't act, we look at the options.
You're pretending as if this doesn't exist and no-one has thought about this before.
There is significantly more in my design evaluations than can be laid out here in short blurbs, especially since the majority is not directly related to this issue. I have noted previously that, while some ideas you have suggested would work technically, they failed the DX review, which leads to the scenario proposed.
Retitled, as this appears to be focused on D7 ES providers currently.
Much of this would appear to relate to contractual agreements the D7ES providers have with the DST and the Drupal CNA, and as such normal rules may not apply.
The Drupal Security Team operates under the Red Hat Root CNA, which is the preferred Root CNA for most open-source projects.
Considering Red Hat only became a root in 2022, is this actually true? The Drupal CNA predates the Red Hat Root. The CVE website only lists MITRE as their top-level root, so that does leave ambiguity about whether they have a parent root.
MITRE was used as the CNA-LR. It's not clear to me if Red Hat was asked first or if the requester went directly to MITRE.
Red Hat only became a CNA-LR in February of 2025; they would not have been capable of accepting the report in 2024.
No it's not the same, because you're specifically talking about the case of a contrib developer making a deliberate choice to add a hard composer dependency on another contrib module, instead of making that dependency properly optional.
Welcome to the PHP language. Interfaces will/must be loaded by the parser in order to be implemented. This is no different from being dependent upon psr/log for the interfaces it provides.
Loading the namespace into the autoloader makes the module optional; otherwise, one possible solution is that Module C has to make Module B (the module that provides the interface) a hard dependency. Autoloading means that as long as the interface is properly formatted PHP, it loads. (Module B would then have config options to disable its features, but it now sits taking up space and processing time in all Drupal-related module functions, such as the hook dispatchers, update checker, etc.)
it's the site owner's fault if they install a module
To be clear, the primary fault was directed at the developer who failed to vet their dependencies. The site owner example was thrown in as a secondary responsibility.
Having a composer dependency on a module that may or may not be installed can introduce problems if module C isn't compatible with newer Drupal core versions -
For clarity, I believe you mean Module B.
That really is no different from having a dependency on a module that can be installed but isn't compatible (you can't PMU it if it's a Drupal dependency of Module C), or a dependency on any other Composer library that would conflict with Drupal core.
If you choose to implement a dependency, or even install a package on your Drupal site, you need to have vetted the developer to be sure they will continue to maintain the software (or be prepared to drop them should it be required).
should be solved by ...
For brevity I did not list all the scenarios that had been considered and failed an initial architectural review.
Given the above regarding B2, I'm going to close this as outdated.
I had previously noted that R2 had issues (and their documentation still seems to imply no checksum support); however, it appears they resolved this even before I made my previous post. Ref: https://community.cloudflare.com/t/aws-s3-sdk-compatibility-inconsistenc...
We can certainly raise questions about the ethics of Amazon making a change like this in a minor release; however, it is unfortunately a complex discussion, given that the specifications for these headers were known well in advance of their deployment, and this can also be seen as a fault of the vendors for advertising S3 compatibility when they only offer partial support.
If anyone does encounter this, setting environment variables via settings.php is sufficient to prevent these errors.
Also, you can technically set any email via git config --global user.email "EMAIL". That's the one that will be linked to the commit, regardless of whether you have that email in your emails setting: https://git.drupalcode.org/-/profile/emails
That starts to touch on the often-avoided issue of properly acknowledging copyright holders. This may be better as a separate issue; however, it is a point of compliance we eventually need to address.
We cannot assume that every author is a D.O. user when formatting the Authored-by lines; these generally should include everyone who has a legal claim to the code being committed, which can be over-simplified as anyone who authored a change requiring original thought.
This was a bit less of an issue when the commit template did not assert that included names were authors. It is a burden that was already required, and it became more deeply accepted when 🌱 [policy] Decide on format of commit message Active was adopted, especially with the original discussions believing GitLab would populate that data for us as part of its template system, before the need to adhere to Drupalisms pulled this back into the contribution UI.
This starts to go back to my point in #7 regarding authors who work for multiple organizations on the same issue, and that the commit email could be indicative of corporate ownership (work-for-hire copyright law).
For DX: I have encountered this in a couple of scenarios related to implementing interfaces:
Scenario 1:
Decorating a core service that may not always be enabled using autowire:
The decorator has to implement the service interface; however, with the core module namespace not resolvable, given the design of Symfony, PHP will throw a fatal error that the interface was not found. If the interface were resolvable, Symfony's onInvalid: ContainerInterface::IGNORE_ON_INVALID_REFERENCE should allow the service to remain autowired yet not registered in the container.
Scenario 2 (I'm working through this with contrib modules right now):
A plugin implementing interfaces from two modules, where one provides optional additional features:
Module A exists in contrib and provides a plugin system.
Module B wishes to extend the features of Module A; to do this, it provides an additional interface that third-party plugins implement to gain these features.
Module C implements a Module A plugin and also wishes to implement the enhancements of Module B.
In this scenario Module B must be enabled for Module C's plugins to load Module B's interface, forcing the site to use Module B even if they do not desire the extra features. As above, this is due to PHP throwing a fatal error for an unknown interface. If Module B's namespace were resolvable, it would need only be a composer dependency, not an install dependency.
Of course there are ways around both of these, such as the use of additional meta packages, custom ServiceProvider alters, not using autowiring, etc.; however, these push us away from alignment with upstream support and the general PHP ecosystem.
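The fatal-error behavior in scenario 2 is not unique to PHP; any language that resolves a parent type at class-definition time fails the same way. A minimal Python analogy (all module and interface names here are invented) of why the plugin cannot even be defined when the interface's module is absent:

```python
# A class body executes at definition time, so the parent class must be
# resolvable then; if module_b (hypothetical) is absent, loading the
# plugin fails outright rather than degrading gracefully.
def load_plugin():
    from module_b import EnhancedInterface  # raises if module_b is absent

    class Plugin(EnhancedInterface):
        pass

    return Plugin

try:
    plugin_cls = load_plugin()
except ModuleNotFoundError as exc:
    print(f"plugin unavailable, missing module: {exc.name}")
```

With the namespace always resolvable (as proposed here), the definition succeeds, and the "is the feature enabled?" decision moves to runtime configuration instead.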
Always have all modules available in autoloader.
+1 from me as a contrib developer.
Yes, as #6 calls out, this will require contrib to do some work with the (hacky) use of class_exists() that has been used to determine if a module was enabled; however, we now have a robust system with Drupal Rector that would allow automated updates by converting class_exists('\Drupal\some_module\some_class') to \Drupal::service('module_handler')->moduleExists('some_module') && class_exists('\Drupal\some_module\some_class')
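As a rough sketch of why that conversion is needed (in Python for illustration; the registry and all names are invented stand-ins for Drupal's module handler): once every module's code is always loadable, loadability alone no longer proves the module is enabled, so the enabled-module registry must be consulted as well.

```python
# Stand-in for Drupal's enabled-module list; in Drupal this check would
# be the module handler's moduleExists().
import importlib.util

ENABLED_MODULES = {"some_module"}

def feature_available(module_name: str, python_module: str) -> bool:
    # Enabled in the registry AND the code is actually loadable.
    return (module_name in ENABLED_MODULES
            and importlib.util.find_spec(python_module) is not None)

print(feature_available("some_module", "json"))   # True: enabled and loadable
print(feature_available("other_module", "json"))  # False: loadable but not enabled
```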
See #12
There is still a concern for the 8.x-1.x branch that the solution should not allow admins access, due to the possibility that we may grant access to an area that should be prohibited. A strong argument needs to be provided to refute this if a change allowing access to login plugins is to be accepted for 1.x; otherwise it needs to be done only in 2.x, where plugin API rules can change (which necessitates answering the related questions about API changes).
As a specific example, based on visual review, it appears this change may allow an admin to register their browser as trusted on a user's account, creating a method for them to silently bypass TFA.
For now, if we want a quick 1.x fix, we should likely look at always denying access to login plugin setup unless it is for the current user.
SA-CONTRIB-2025-085 is somewhat related, as it involved flaws in the expectations of allowUserSetupAccess() causing security concerns.
Note that MRs could potentially have commits from the same person coming from different emails
I had originally thought take-last (allow users to do an update to correct bad data); however, take-all is likely the better choice.
Committers can clean up the data at commit if need be, and this goes to the fact that authors may be working for different organizations on the same issues (copyright of work for hire is owned by the hiring company), which aligns with the D.O. credit policy of crediting everyone involved.
we won't have a way to remove that PII from the commit history.
This is covered by https://www.drupal.org/docs/develop/git/setting-up-git-for-drupal/drupal... →
The no-reply addresses themselves (since they identify a specific user by username and user ID) are also likely PII. There appears to be no real new concern here.
I never heard back from @josh-waihi regarding a possible ownership transfer and the other maintainers are still not present working issues.
Given the concerns raised above regarding the ethics of committing, and the non-response, I have forked the module into the drupal/bunny namespace, added D11 support, and created a 1.0.0 release that is API- and config-compatible with the drupal/rabbitmq 3.1.0 release.
Site owners are welcome/encouraged to migrate to the new namespace; however, should they desire not to do so, the maintainers and community of this module will, until such time as the two modules drift apart in API, be able to benefit from any improvements made in the fork, and vice versa, by cherry-picking commits.
Converting should be as simple as replacing a drupal/rabbitmq:^3 requirement with drupal/bunny:^1 in a site local composer.json and running a standard Drupal UpdateDB action.
With the questions about the ethics of committing code removed, development should be able to accelerate to resolve the backlog of open bug reports and feature requests.
Is there some setting in GitLab where we could specify that we'd prefer to use a real email address for these, instead of the no-reply ones?
✨ Allow users to set GitLab Commit email Active would possibly be related as a starting point. GitLab allows users to choose what email address should be used for commits; however, Infra has to date not made this UI available to users.
I, for example, have an open request for Infra to manually update my user record if they will not expose a UI (ref 📌 Update cmlara's commiter email address in GitLab Active), as a further example that developers do desire the ability to use real email addresses.
I have also opened 🐛 Authored-by should use email address from commits Active which can be seen as related to this, if we correctly utilize the email address as submitted by the author in their commit we also resolve this concern.
This wouldn't really be a false positive.
myMethod(string $var = null) is indeed missing the nullable type declaration ?string
This is a PHP language change, not a documentation change. Unlike with PHPStan, where a @param could help narrow the data, PHPCS is alerting you that this line will be invalid PHP when PHP 9 is eventually released.
(I suggest this is "works as designed".)
Linking a related, and still open, issue that also discusses retrying known-flaky tests. No decision was made in that issue, and it predates GitLab CI.
@taran2l, apologies for the tag and extra emails this would have generated. I had multiple tabs open and credited you on the incorrect one.
Committed to Dev.
phenaproxima → credited cmlara → .
It has been over 6 years since I have had a production site on Pantheon, so I can't say that I remember their architecture layout.
Quick glancing:
This error should only be present if:
- For some reason the UID is not an integer-like string (should never happen).
- If the session is missing data (such as attempting to visit the entry form again after success; without auditing every line of code, perhaps a login plugin granting access and another module redirecting back to the entry form).
- PrivateTempStore not returning data (if the session somehow became authenticated mid-request, or if the session was not synced between backends, this could occur). I would expect Redis to solve the not-syncing issue.
Revert Accidental change of subject until formal feedback is provided.
On July 30th I asked @avpaderno via Slack (ref: https://drupal.slack.com/archives/C5ABFBZ5Z/p1753922982270609) how he would like this issue scoped.
Awaiting his response before this issue goes any further.
See 🌱 Change how modules and submodule dependencies are handled to have more consistent namepsaces [DRAFT]. Active for the long story on this, how submodules can steal composer namespaces, and the related proposal to prevent this from occurring.
See 💬 Submodule created a package Active where a similar request was most recently rejected.
$ composer show -a drupal/analyze_ai_sentiment
name : drupal/analyze_ai_sentiment
descrip. : A submodule for Analyze that provides AI-powered sentiment analysis capabilities.
keywords :
versions : 1.0.x-dev, dev-1.0.x
type : metapackage
license : GNU General Public License v2.0 or later (GPL-2.0-or-later) (OSI approved) https://spdx.org/licenses/GPL-2.0-or-later.html#licenseText
homepage : https://www.drupal.org/project/analyze
source : []
dist : []
names : drupal/analyze_ai_sentiment
support
source : https://git.drupalcode.org/project/analyze
Infra has the final say, however I would suggest (based on prior actions) you be prepared to consider either being stuck with the duplicate name, or consider abandoning the namespace to choose a new (non-duplicated) name.
No, because the issue is not a known command; it is the fact that we don't have the capacity to thoroughly audit the risk profile of every aspect of the API, especially because they have made undocumented changes in new releases in the past.
The endpoint is open to those using Cookie authentication.
The Infra team already needs to audit this endpoint on every upgrade if there is a risk concern.