The motivation to release the information should be separated from publishing CVEs.
CNA rules
5.1.7 a CVE MUST identify the type of Vulnerability.
5.1.1 SHOULD contain sufficient information to uniquely identify the Vulnerability and distinguish it from similar Vulnerabilities.
5.1.7 is the one that likely puts us at most risk. It requires that we disclose at least a vulnerability type in the CVE. We can maintain compliance by stating only a basic category (a sample, as I have not looked at this particular advisory lately: "Remote Code Execution", "XSS", "Authentication Bypass", or similar); however, even to do that we have to disclose that specific bit of information, which should likely be done with D.O. coordinating the disclosure of further details (why hide on the D.O. SA that it is a specific type when an attacker can go to the CVE to find that out?).
For 5.1.1, we need a very good reason to explain why we do not align ourselves with what we are instructed we should do. Will the CNA lose its contract for failing to comply with a SHOULD? No. Will it lose respect in the industry? Possibly. Will it waste other organizations' resources? Likely.
We published sa-contrib-2024-075 as an unsupported module with CVE-2024-13311. This report does not appear to uniquely identify the vulnerability or indicate vulnerability type.
Looking at the enrichment data for CVE-2024-13311, if I am reading it correctly, it appears that CISA was unable to provide full enrichment due to "not enough information".
Being flagged as "not enough information" should itself be a warning to us that we need to work on our disclosure process.
it's possible to create CVEs without specifying the CWE/CAPEC/risk score, so the title and some of the description could be updated to reflect that. It may be desirable for some to know that information.
Can you clarify what you are trying to say? You want this issue's title updated? Or do you mean that the title for modules previously published as "unsupported module" should be updated?
lostcarpark → credited cmlara → .
SA-CONTRIB-2025-023:
Curious how CAPEC-115, documented in the Security Advisory by the reporter and fixer, became CAPEC-85?
Disables users' ability to set up validation plugins except for the default validation plugin if the default validation isn't set up.
I believe we ruled that out in #5.
Even if we were to implement it, that is purely a TFA issue; the plugins should not be involved, especially not by injecting the @internal TfaPluginManager. While TFA could call that service, adding it to TfaBasePlugin would put the internal service into external plugins.
Last I understood, we viewed putting the default plugin first in the list as reasonable, and we were open to researching whether it would be possible for Recovery Tokens to not meet the "tfa is configured" provision (without breaking edge cases such as a user needing to run on recovery tokens while awaiting new tokens).
Does it mean that we can't merge https://git.drupalcode.org/project/rabbitmq/-/merge_requests/58 ? These changes are quite straightforward and allow the module to work in Drupal 11 as well.
It means that at this time I do not feel comfortable merging any more commits into the repository without a clear approval from maintainers added through a less ethically questionable process.
I've reached out to the current owner (who was seen active a few days ago) and have started looking at the fork process as an option if needed.
Could we start by looking at how we would change \Drupal\Core\File\FileSystem::tempnam() to not use ::getDirectoryPath()?
::realpath() of the scheme would accomplish the same outcome.
Do we need more strict type checks for LocalStream?
If using ::realpath(), no; just validate the return is not FALSE. This allows classes that implement local storage but do not extend LocalStream (such as extending a 3rd-party framework to interface with core) to still function.
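A minimal sketch of the idea, assuming a wrapper that exposes realpath(); the helper name and the inline stand-in wrapper are hypothetical, not the core implementation:

```php
<?php
// Sketch: derive the temp directory from ::realpath() instead of requiring
// the wrapper to extend LocalStream. Anything that cannot resolve to a real
// local path returns FALSE and the operation fails closed.
function tempnamViaRealpath(object $wrapper, string $prefix): string|false {
  $dir = $wrapper->realpath();
  if ($dir === FALSE) {
    return FALSE; // non-local storage; no instanceof check needed
  }
  return tempnam($dir, $prefix);
}

// Stand-in wrapper: local storage without extending LocalStream.
$wrapper = new class {
  public function realpath(): string|false {
    return sys_get_temp_dir();
  }
};
$file = tempnamViaRealpath($wrapper, 's3fs');
```

The point is that any wrapper able to produce a real local path participates, while remote-only wrappers fail the FALSE check rather than a class-hierarchy check.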
This sounds like you've identified a new Feature request for the module. Maybe a test encryption plugin that only gets enabled via a drush command to make testing in general easier?
One already exists as a testing module and we use it as part of our Functional tests; however, Drupal must be configured to allow such modules on 'production' sites (not UI-accessible to my knowledge).
After this is merged, on UX issues it can help people see the change visually or on multiple devices which screenshots can't really do.
We have very little in the way of UX issues directly, and those that do touch UX do not have any real customization to them; rendering would generally be controlled by site themes and the Drupal Core API, meaning we have little need for multi-platform testing on a daily basis. If something is broken, it is (in the current code) most likely going to be transferred to core as a Form API flaw.
When you are working in your "lab" setup, how easy is it for you to preview the changes on your mobile device?
Fairly easily: un-comment a port setting in the lab config and the site is directly exposed. Even easier when it's on one of the internally routed dev machines where it has an internal hostname assigned (although that is rare).
Tugboat previews makes it very easy. It makes it easy for others, without extensive lab set-up to see it too.
That is the part I am wondering about: how useful is it? I'm involved in many of the deeper issues, so perhaps my vision is a bit narrow. I haven't kept any statistical log of how often I reached an issue that would be acceptable to approve on just a visual check without a lab setup.
Doing so means all co-maintainers and any new co-maintainers would have access to the power and benefit of your local lab type setup.
Fair, though these would likely involve a slightly more complex setup, as some of the data is, to my knowledge, not exposed via direct commands. Nothing impossible to do, just another maintenance point when considering whether the burden is worth the return, which is the root question: how often would Tugboat actually help move issues forward? If it is frequent enough, it may be worth the maintenance burden.
I think you are right; usefulness will differ module to module, so it is really up to you and the folks who are invested in this module.
The simple community question would be: "What issues have you worked on in the TFA queue in the past year or two for which Tugboat would have allowed you to avoid a local lab and spend less time/effort testing, and in what way would it have done so?"
Critically, there is no encryption plugin installable through the UI, which renders TFA impossible to set up and test inside of Tugboat. NW for resolution.
As a maintainer it would be great to have MR environments so I can save tons of time evaluating MRs without needing to pull it locally.
As an actual (co)maintainer, I wonder how useful this is:
With previews only existing for 5 days and most issues taking longer, will we see an increase in issue noise? (Although that may have just increased to 30 days in 📌 Increase Tugboat Preview expiration to 30 days Active; better, but still possible noise for longer-lived issues if users are closing/reopening MRs to generate previews.)
Is code updated when new code is pushed? If so are these new environments or continuations of the previous environment?
In this basic form there is a decent amount of setup that still needs to be done by an admin before they can test much/any of TFA's code: add an encryption key, set up an encryption profile (which requires an encryption module) before TFA can be enabled, add a user, add a token, etc.
As a maintainer, in the amount of time that takes I can (if at a terminal) easily check out the MR and use an existing deployed lab to test (for context: in my local lab I have multiple users, multiple tokens, multiple token types, user roles, and requirements setups configured to test most usage scenarios). There may be an advantage to this when I'm on a mobile device and not near a development terminal.
Presumably some of that setup time could be reduced by configuring the environment inside the template.
Beyond that, I'm trying to think how often I would use this. I often review code first and then load it into the IDE to validate API usage, sometimes with a requirement for a breakpoint in Xdebug. To perform those operations I would already have the code in my local lab where I can view the changes.
How often would users use this? How many of those who do will perform an actual test rather than an "it loaded" test, which provides us little to no validation? We are not a theme; our issues are not usually visual, they are technical. How much of our code is testable without edge cases that require server changes?
I don't have a good answer for those questions. Leaving this open for opinions from other maintainers and the community on whether Tugboat adds value for our workflow.
Unable to reproduce and the reporter has not provided more information.
I will also add that had this been traceable to an actual bug, it likely should have been reported as a private security issue.
This project is in a bit of a weird spot right now.
I was added as a maintainer through the accelerated adoption process some time back.
That process (along with the entire D.O. adoption process) is subject to some rather significant ethical questions, as it is essentially a social engineering and supply chain attack.
Factor in that one maintainer returned for a short period of time last year (before disappearing again) raising questions about some of my design choices, and it raises even more ethical questions about my right to commit code to the project.
The project has been heavily on hold since those incidents.
I’ve been pondering my options, at the moment the following two look to be the most likely paths:
- Owner transfers the project of their own free will (not via the Project Ownership Queue) (may not happen due to non-responsive maintainer)
- Fork into a new project.
It’s a situation that needs to be sorted out for this project to be able to continue on.
Side note:
I’ll note no one has actually tested the code above and reported back.
The user rajan kumar@2026 is well known for not testing code and for credit farming, leading to disciplinary action on D.O.
The other options beyond rolling back the API are setting the environment variables or placing the configuration parameters in the shared config file (not the credentials config file).
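If I'm reading the SDK documentation correctly, the relevant knobs are `request_checksum_calculation` / `response_checksum_validation` (or the environment variables `AWS_REQUEST_CHECKSUM_CALCULATION` / `AWS_RESPONSE_CHECKSUM_VALIDATION`). A sketch of the shared config form; verify the key names against the SDK docs for the version in use:

```ini
# ~/.aws/config (the shared config file, not the credentials file)
[default]
request_checksum_calculation = when_required
response_checksum_validation = when_required
```

`when_required` restores the pre-3.337.0 behavior of only sending checksums when the operation demands one, which is what the non-compliant S3 hosts expect.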
I have read that Cloudflare R2 is also impacted.
MinIO and Ceph appear to have support for this header (as does LocalStack, which we use for the GitLab pipelines), which means the impacted hosts should be limited in general.
Merged to 8.x-1.x
For 2.x we do not have the same design concerns; however, we can learn from the 8.x-1.x fix and consider adding hook_requirements() checks for our core security concerns.
TfaUserSetSubscriber::class would be our prime candidate to monitor, to warn if another subscriber is before it so that site owners can validate security. We would likely want to implement this with an 'approve' list so site owners can approve a subscriber to move it from warning to notice.
We may also wish to monitor the logon routes unless 📌 Evaluate restoring using a form alter instead of extending UserLoginForm Active is adopted. However, unlike 8.x-1.x, where that monitoring is security critical, in 2.x it (should) be only advisory, as TfaUserSetSubscriber provides the backend security.
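A sketch of the shape such a check could take; the helper and the class names here are hypothetical placeholders, not existing TFA code:

```php
<?php
// Sketch: given the ordered subscriber list for an event, report any
// subscriber that runs before the security-critical one unless a site
// owner has approved it. The result would surface as a
// hook_requirements() warning (unapproved) or notice (approved).
function unapprovedEarlySubscribers(array $ordered, string $critical, array $approved): array {
  $early = [];
  foreach ($ordered as $class) {
    if ($class === $critical) {
      break; // anything running after the critical subscriber is harmless
    }
    if (!in_array($class, $approved, TRUE)) {
      $early[] = $class;
    }
  }
  return $early;
}

// An unapproved subscriber ahead of TfaUserSetSubscriber gets flagged.
$flagged = unapprovedEarlySubscribers(
  ['SomeEarlySubscriber', 'TfaUserSetSubscriber'],
  'TfaUserSetSubscriber',
  []
);
```

Moving a class onto the approved list would demote it from warning to notice without changing the ordering check itself.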
I was primarily allowing a little more time for anyone who might want to provide feedback, since this is decently significant code.
This is one of the areas that has had security vulnerabilities in the past, including 2 weeks ago (note: the report that triggered two weeks ago would not have been allowed with either the existing 2.x branch or the 2.x branch plus this code).
That said I believe this is ready to go in and anything else can be followups.
Letting the final test run occur after removing unused code, then will merge.
Regarding Simple Password Reset:
Recent data indicates Password Reset Landing Page (PRLP) may be better aligned to the API and provides a similar feature set. Currently PRLP does not work with this code; however, PRLP may be in a better spot than Simple Password Reset to modify their process to work with TFA.
Looks like this was sitting on my machine waiting for the deprecation bug fixes.
Uploading current progress.
I’m not seeing any explicit notes about cached items expiring and being cleaned up; we will want to make sure it's not a runaway cost.
IIRC GitLab currently expects that the storage layer will handle this (add lifecycle rules in S3).
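For example, a lifecycle rule along these lines (the `cache/` prefix is a placeholder; match it to wherever the runner cache actually lands) could be applied with `aws s3api put-bucket-lifecycle-configuration`:

```json
{
  "Rules": [
    {
      "ID": "expire-ci-cache",
      "Filter": { "Prefix": "cache/" },
      "Status": "Enabled",
      "Expiration": { "Days": 30 }
    }
  ]
}
```

That offloads the cleanup to S3 itself, which matches GitLab's expectation that the storage layer handles expiry.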
Considering the proposed config change has been waiting since September of 2023:
What is infra's opinion on this? Is this something they actually plan to enable or not, given its known positives and negatives?
The currently posted policy only strictly talks about ownership.
Should the policy be updated to note that if a Trademark owner raises objections, site moderators may not add maintainers or co-maintainers?
It’s been suggested in #3304661-8: Change how modules and submodule dependencies are handled to have more consistent namepsaces [DRAFT]. → that this should be changed.
Or does this only happen when the submodule has its own composer.json?
It impacts all sub-modules.
I want to reiterate that this essentially just opens the 'next major' split early; we do so every major release, so the process isn't abnormal in that regard. IMHO this tends to align with having a main branch, which is intended eventually to be done for core.
Pros:
New API can be implemented immediately.
No need to postpone API breaking change or major system overhauls to branch split (issues can be worked on when devs have the time rather than crushed into a shortened release window)
Ecosystem can begin testing with the new major, as it will exist, much sooner (possibly reducing the number of critical bugs to fix during alpha/beta). This goes beyond just phpstan
Fewer follow-up tickets (possibly a less cluttered issue queue).
Cons:
An additional branch to maintain.
One way to avoid having two versions of an MR would be to just not require it for the next major commit
Considering we have a fairly standardized deprecation format, it may be possible to work the issues with deprecations included and let Rector strip them to create the mainline MR. This could be tested and possibly become an "on commit" task if it proves reliable.
Additionally, a healthy workflow inside the MR can make this easier: if developers separate deprecation commits from non-deprecation commits, it could be a simple rebase with excluded commits. This would take a bit of training; however, it is reasonable to expect we could accomplish this long term.
We almost have the work separated already: I frequently see issues get through review only to later be tagged "needs deprecation" before the issue can be committed. For those issues it is not much more effort to cherry-pick the current work, based on the mainline branch, on top of the released branch and add the deprecation.
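A rough sketch of the rebase-with-excluded-commits idea in a throwaway repo; the branch and file names are hypothetical:

```shell
#!/bin/sh
# Sketch (assumed workflow): keep the deprecation shim in its own commit so
# the next-major branch is simply the history without that commit.
set -e
work=$(mktemp -d)
cd "$work"
git init -q .
git config user.email dev@example.com
git config user.name dev
printf 'base\n' > module.php
git add . && git commit -qm 'base'
printf 'new api\n' > new_api.php
git add . && git commit -qm 'feature: new API'
printf 'trigger_error(deprecated)\n' > legacy_shim.php
git add . && git commit -qm 'deprecation shim for current major'
# Current-major branch keeps all commits; next-major drops the shim commit.
git branch -q current-major
git checkout -qb next-major HEAD~1
ls   # legacy_shim.php is absent on next-major
```

Here the split falls out of commit hygiene alone; no Rector pass is needed when the deprecation lives in its own commit.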
Bumping back to the top of the queue to remind myself to work on this issue this week.
Sounds like this may be a new default in recent SDK versions
https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/s3-checksums....
Version 3.337.0 is January 15th. Rolling back should work as an initial fix.
This sounds like Backblaze being non-compliant with the full S3 protocol, which they admit to:
https://www.backblaze.com/docs/cloud-storage-s3-compatible-api
This isn’t a new feature; it has been around since at least the 2.x SDK, just now enabled by default. I would not want to try to disable this feature for all systems, and would like to avoid adding a configuration option for something that seems reasonable to expect to be standard.
I wonder if this can be disabled via a few of our hooks and this can go into documentation?
I am unclear on what you are trying to say in #5. All that was posted was an error message indicating an AWS Client configuration error.
teacher.-bucket
I've checked with AWS and they confirm "Bucket name must not contain dash next to period". This appears to be the first error based on the details provided.
for any reason, the region saved in the s3fs config form is eu-south-2.
https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLocation.html
As noted in #2, we use the AWS API to obtain the location. That value comes directly from Amazon Web Services.
Still awaiting answer to the other questions from #2 to attempt to move this forward.
Note: I am assuming you are not using a custom host; if you are, that should be removed first.
the region is blocked and it is showing eu-south-2 after saving the configuration.
We obtain this from AWS. AWS believes the bucket is in eu-south-2; if this is incorrect, then AWS internally has a significant problem.
teacher.-bucket
Is this the actual bucket name? IIRC a hyphen is not permitted at the start of a hostname portion. Was this supposed to be teacher-bucket, or is this an older path-based-only bucket?
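As an illustration of the rule AWS cited, a quick check for a hyphen adjacent to a period; the helper is a hypothetical sketch, not module code:

```php
<?php
// Sketch: flag S3 bucket names that place a hyphen adjacent to a period,
// the naming rule AWS cited for "teacher.-bucket".
function hasDashNextToPeriod(string $bucket): bool {
  return (bool) preg_match('/\.-|-\./', $bucket);
}

var_dump(hasDashNextToPeriod('teacher.-bucket')); // the reported name trips the rule
var_dump(hasDashNextToPeriod('teacher-bucket'));  // a plain hyphen is fine
```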
I don't know why the endpoint is pointing to eu-south-2
I've seen AWS redirect across regions. I suspect we are connecting to us-east-2 and AWS is redirecting us to eu-south-2 (as that is where they believe the bucket is located) where the region field (set in the API locally) now indicates a mismatch and AWS rejects the request.
Other possible thoughts: is this a multi-region access point or an accelerated access point?
Should fix src/S3fsFileService.php at the same time.
And we might as well fix all references to 'the the', as it appears to have slipped into the code in a few places:
tests/src/Functional/S3fsTest.php: $this->assertNotFalse($s3_file2, "Copied the the first test file to $test_uri2.");
tests/src/Functional/S3fsTest.php: $this->assertNotFalse($img_file, "Copied the the test image to $img_uri1.");
tests/src/Functional/S3fsImageStyleControllerLockdownTest.php: $this->assertNotFalse($img_file, "Copied the the test image to $img_uri1.");
In our case we login via Rest and not the normal Drupal login form.
I'll note that per SA-CONTRIB-2023-030, REST should be disabled in 8.x-1.x deployments. The REST login protection in 8.x-1.x is a failsafe to block access, not to allow access.
2.x has the foundations of allowing rest to function (however it is not yet suitable for deployment outside of development labs)
I am not sure why in isTfaDisabled the code only checks for status and plugins in the user_tfa_data.
A number of factors are involved.
IIRC I have seen times where the data did not get set for a plugin, meaning we can't trust the user list and have needed to fall back to the plugins (it creates a mess for the rest of the code).
From a 'fail secure' standpoint (ref: 🌱 [META] Convert fail-open code executions to fail-secure alternatives Active) we should not operate on an assumption that 'no data means not enabled'. Longer term, I've theorized we likely need to have a specific record on all users that explicitly states "no data here" (e.g. for this case it would be something like a tfa_disabled = TRUE record); however, that would need to be done in 2.x as a data model change.
As jq is installed as a dependency, here's the MR.
I generally recommend being explicit.
For all we know, next year yq could change and not use the jq package (highly unlikely; however, if they do, we lose jq and don’t realize it).
I mean adding yq requires adding a Python interpreter
The Python interpreter is already on the images and is used by gitlab_templates to provision the mkdocs stage (in other words: we are not likely to accept removal of the Python interpreter anytime soon).
with few dependencies,
Fair, it may add more dependencies into Python.
If there is a binary XML->JSON->XML and YAML->JSON->YAML converter that could be a reasonable alternative to the yq package.
IMO it's useless for CI image
https://git.drupalcode.org/project/gitlab_templates/-/blob/main/scripts/... is an example of where we are creating a 51-line script for what could likely be done more efficiently and reliably with a one-line yq command. I've had other discussions in Slack where we are making the jobs harder to maintain and more fragile by using external PHP scripts (dependent upon another HTTP GET, and breakable by a mismatch between the version downloaded and the version of the gitlab template; these are incidents that have occurred in the past).
Downloading extra scripts is somewhat of an anti-pattern in GitLab, and the tricks the gitlab_templates team currently uses do not exist for GitLab Components, meaning the ability to inline commands can become even more important long term.
The script alone proves that we need to make changes to XML files as part of the CI process, which renders the 'useless' argument moot and only leaves 'is it worth the cost vs. savings'.
I think jq is the most common and it should be enough
The yq package adds support for XML, which is likely just as important given we have now been modifying the phpunit config to support differences in core versions. As noted above, yq uses jq as the parsing engine and is primarily a converter.
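For example, assuming the Python yq package (which ships the xq wrapper for XML; check the flags against the installed version, as the Go yq has a different syntax), a phpunit.xml attribute tweak could be a one-liner:

```shell
# Sketch: rewrite the bootstrap attribute via the XML->JSON->XML round trip;
# xq feeds the document to jq and -x re-emits XML.
xq -x '.phpunit."@bootstrap" = "core/tests/bootstrap.php"' phpunit.xml > phpunit.new.xml
```

That is the class of edit the 51-line script performs today.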
who can review dependencies and security consequences
A good question; to my knowledge this has not been requested before in this queue and there is no documented review policy. Given the lack of policy, it would likely need to be done by a queue admin (yourself?), as none of us can speak to an undefined standard.
Just a note: this should likely have been done as a v2-only change, as it can still break pipelines.
As noted in 📌 Ignore all git files in artifacts Active, I have used jobs in the past that depend upon git being present in the modules folder (my workflow involves copying all of the module's code to the custom folder and working out of the custom folder for all later steps, as it better aligns to core design and avoids symlink faults).
and the CI artifacts can grow really big so it fails.
The better solution for this would likely be to not depend upon the artifacts being built until needed. We need to deal with the lack of caching on d.o; however, over in Quasar I’ve been designing (in local testing) so that my phpunit and phpstan stages can be built “on demand” without storing the large asset files. Similar can be done as a v2 for gitlab_templates and is somewhat a design expectation if the templates ever move to components.
Quick glance: it appears the tests run with just a single plugin, so they would not hit this condition. Which is reasonable; the tests as written are plugin-specific and should not be testing the rest of the general usage.
Ideally, yes, this would be a dedicated test that performs a multi-step setup to validate the multi-step form, ideally built off the test plugins and not the production plugins.
This is true, but it gives you the option of not continuing with the install in the first place if it was an actual dealbreaker, and a hint that it's possible to uninstall things later.
This seems like it could be the best method, if the owner does not consent throw an error that advises them it is required.
The checkbox could be removed, making it a non-option; however, does that actually fulfill the role of consent? (The user will be warned, however if they were not compelled to agree, did they actually knowingly consent?)
If you're willing to roll a stable release of https://www.drupal.org/project/site_key_mutator → , could we not just include that with Drupal CMS
I can do this if it ends up being the route that is needed/useful.
Re #5 missing library 🐛 ParagonIE\ConstantTime\Encoding dependency not installed from TFA Needs review
The 2.x branch was tagged as alpha way back when I was an active maintainer. Back then, the objective was to tag version 8.x-1.0 and then never release anything anymore in that branch. Then life got in the way...
My recollection is that part of this was that you were unsure how to add D10 support to 8.x-1.x and believed it was not possible due to core API changes. Sites (for better or worse) adopted this early alpha into production, eager to deploy D10 with unfinished code, knowing there were risks to doing so. Once the solution was added to support D10 in 8.x-1.x, the critical need to release a 2.x (and for sites to run 2.x) went down, allowing us to spend the time on 2.x to 'get it right' and find the 'best' code fixes rather than the 'simple quick' fixes we have been putting into 8.x-1.x.
When I tagged alpha3 (and alpha4 for a quick fix after) there were 1500 sites using the alpha2 tagged version that would otherwise be blocked from upgrading to Drupal 11. Not tagging those releases is actually a maintenance problem.
Personally, I wanted to get those users off 2.x; that was why the warnings in the releases and on the project page, and the desire to have the DST flag alpha2 as insecure. At the time alpha3 was tagged there was (to my knowledge) nothing preventing a rollback to 8.x-1.x; see 💬 Can we switch site from 2.0.0-apha2 to 8.x-1.7? Active. Going off my notes in that issue, alpha3/alpha4 may now complicate that scenario (although that is the responsibility of a site owner for using the alpha).
For actively developed projects, periodic releases make sense to me. They encourage testing and help with the QA process, as they allow for a named version to be tested.
There were many commits added to the 2.x branch that were not available in a tagged release and should be.
I disagree with this; tagged releases are for when you want more general public usage, and 2.x has been in a "DO NOT USE" state. The only users who should have it deployed are those who are developing TFA 2.x, and those users should be running off Git HEAD, as that is where the merge targets will be. How code interacted 2 commits prior is not relevant; what matters is how it interacts with the branch HEAD.
Yes, we eventually want more active usage; however, that is once all the security issues are closed out, any major architectural decisions are made, and general use is believed to be working.
I would suggest the 2.x branch is removed from the project page then, as it might be expected that users see a D11 version and they read only that.
I had considered that option in the past and have been on the fence about it. It is generally custom on D.O. to list the dev branches as 'supported', indicating they are being worked on and that they should be the target of new changes for backport to an older release. I would be a bit concerned about the signal it sends regarding the fact that 2.x is indeed intended to be the eventual solution, especially since I consider 8.x-1.x non-viable long term.
If you do decide that the 2.x changes will never happen,
Currently there are no plans for 2.x not to happen. It has just been in a state where it has needed larger architectural changes, some of them quite massive, to get rid of what I see as significant design faults/limitations of 8.x-1.x. That said, I see 8.x-1.x as non-viable long term, given changes found in other issues and the triggering of yet another Security Advisory for 8.x-1.x that 2.x was not vulnerable to. There could be a decent argument for releasing 2.x with the known vulnerabilities, rapidly marking 8.x-1.x as unsupported, and very soon after spinning off a 3.x, as we still have more API-breaking changes on the roadmap. I had avoided that
Should this issue be moved there?
I don't see why it should be.
The code is available in standard drupal/core; however, there is a prompt to reject the tracking, while Drupal CMS removed the prompt and disclosure.
Users download Drupal CMS, and Drupal CMS enables the modules in its default install without user consent. It follows that Drupal CMS (and Drupal.org/the Drupal Association, as the collectors of the tracking data and sponsors of Drupal CMS) holds the responsibility for the feature being enabled.
Linking a few issues related to discussion of undisclosed trackers being considered a breach of trust. While these discussed contrib and were more significant in how they were implemented, I would maintain Starshot is required to be held to the same, or even higher, standard.
Was reviewing this a week ago, the main change I wanted was to make classes final. I have pushed those up today.
This could use additional eyes.
If fixing this will take a while then I guess the project page should be updated and maybe link here.
Indeed. Although the roadmap issue may be better 🌱 Roadmap for 2.0.0 release Active as the needs continue to evolve (looks like I need to update that issue, will give it a review this week).
I am not sure why a generally inactive maintainer felt compelled to push out 2 new Alpha releases on the 2.x branch when the branch is not suitable for use anywhere except a development lab (which should be deployed as 2.x-dev, not a tagged release).
A cursory glance indicates that "VWO" may be a registered wordmark with the USPTO (registration number 4663137).
As the project appears to be currently maintained by VWO representatives, I am adding the following information to aid VWO in locating the tools D.O. provides them to enforce their brand protection, should they object to this change in the future:
As @mandclu was only added as a co-maintainer the existing maintainers (with permission to manage maintainers) may remove them at any time through Manage VWO Maintainers → .
Additionally, a representative authorized to enforce the VWO Trademark may utilize the policy located in How to become project owner, maintainer, or co-maintainer - Transferring a project that used a trademark in its short name → to instruct Site Moderators to not assign privileges to other users in the future.
@mandclu: Given the above, you (or Acquia) may wish to consider (unless you have consent from VWO) forking the project instead of committing code as a co-maintainer. Additionally, I will note any commits made to the repository on D.O. could be considered a derivative work, and the GPL license does not grant derivatives a license to use Trademarks (I have not reviewed the code to evaluate whether there is any Trademark usage in the codebase that would not comply with fair-use exemptions).
@cmlara, have you reviewed what audit logs exist in GitLab's codebase for events like this? That seems like a useful area of research.
The basics are the project activity event log (sample: https://git.drupalcode.org/project/commerce_purchase_order/activity) which is also available via the API.
It is not an indefinite log (entries are removed after 3 years for performance). As mentioned in ✨ Add log of mainatiner level changes to project pages and provide notice when the Project Ownership queue adds/promotes a user Active, there are some limitations in the activity feed as compared to D.O. permissions mapping, where (based on how we use GitLab) not everything on the D.O. side will register a change on GitLab, such as a change of owners.
There are webhooks for pushing data into an external system; however, I have not investigated the specific data they could push in relation to permission changes at the project level.
Note for context: I suspect @fathershawn did have the rights requested; however, as a security engineer concerned about the overall safety of the Drupal ecosystem, and given the lack of logs that can prove otherwise, I do wish to raise the following points in regards to #12:
We can look at commits... and at releases... and see that fathershawn had the access described in the issue summary
Neither of those would confirm that fathershawn had:
- Access to modify maintainers
- Access to edit the project page
again as recently as this month
That was after @avpaderno granted them permission. Those need to be ignored.
We can look at the history of the project page and see the previous edit was 5 Apr 2024; that leaves around 10 months where a maintainer, or another user with enhanced powers on D.O., could have revoked the access to edit the project page. I do not have access to confirm the maintainer has not actually logged in; however, 💬 Request to become project owner Active mentions an admin reaching out, which makes me wonder if their last login was <1 year ago?
Even assuming the ability to edit the project page, that would not prove the user had the ability to modify users.
We can look at https://git.drupalcode.org/project/commerce_purchase_order/-/project_mem... and notice a disconnect: some of the maintainers are still listed as full maintainers on GitLab while their D.O. permission is no commit privileges. However, that does not confirm those changes to the user were not made sometime in the past (2020, when @guillaumev had their last activity?).
@fathershawn being listed as 'maintainer' does imply that at some time they were elevated to have the ability to modify users; however, it does not confirm they still had the D.O. permission at the time of the request (changes on D.O. downgrading permissions are not propagated to GitLab, and maintainers have been known to miss downgrading users in GitLab; if @fathershawn is to be taken at their word, this may itself be demonstrated by the other users still having maintainer-level access).
Without an audit log, I would suggest that this is not a reasonable level of proof, from a supply-chain-integrity standpoint, that @fathershawn had all the permissions listed at the time of the incident.
Note: I'm not saying undo this, as the permissions ended up being granted under a different policy rule allowing changes. I am just indicating that #12 has some flaws in its assertion that the external data proves the user had the claimed permissions on the day of the incident.
Overall looks good.
I provided a small suggested change in the MR to the UI language that may more accurately describe the situation regarding the field, and I suggest we add the field to the config form test to avoid a regression where it is deleted.
Moving this over to a task as that was a Drush beta 'bug'.
This is something we will need to deal with eventually. It might be possible to avoid working on it at all in the 8.x-1.x branches depending upon timelines.
Thank you @avpaderno
I am not seeing the errors on just a drush cr. I am testing on PHP 8.4 and Drupal 11 with Drush 13.3.3.
Looking at the Drush source, the old interface definition still exists for compatibility https://github.com/drush-ops/drush/blob/660d7ec654d3e37472ec479a993ea643... and has been present since Drush 13.0.0-beta4.
What version of Drush is being installed on your deployment?
Crediting reporter.
We appreciate that this issue was submitted through the private tracker for evaluation prior to public disclosure.
The scenario I believe you are alluding to, where an owner agrees to add an unqualified individual, is being discussed in 📌 [META|POLICY] Think of a way to make it less easy to become a (co-) maintainer Active .
We test this as part of the CI runs in S3fsTest::testImageDerivative. I find it unlikely the problem is inside the module if file access is otherwise working.
My initial thought would be: were there any other changes, as part of any other upgrade, to modules the site uses as part of image style generation?
A plugin accidentally using $streamWrapper->realPath() would be a not-so-obvious change that can cause issues with non-local stream wrappers.
+1 on switching as well.
This would also help the CVE process, as it is preferred that CVEs be scored with CVSS (they can be scored with another priority metric; it is just not preferred).
As well as documentation, there is also an entry-level training course: https://learn.first.org/catalog/info/id:126,cms_featured_course:1
I was intending to circle back after the last contrib SA I worked on and post an example score:
SA-CONTRIB-2024-043: TFA: Session Fixation:
Under Drupal’s scoring system this scored Critical 15 ∕ 25 AC:Complex/A:None/CI:Some/II:Some/E:Theoretical/TD:All
Cross scoring to CVSS:4.0 the score would be closer to Low: 2.1 CVSS:4.0/AV:N/AC:H/AT:N/PR:N/UI:A/VC:L/VI:L/VA:N/SC:L/SI:L/SA:N/AU:N/R:U/V:D/RE:L/U:Clear
This is an extreme example, and I’ve seen some SAs score higher under CVSS; however, in all cases CVSS told a more complete story.
Is there an issue to load these CVE IDs into the Security Advisories?
The "Things you do not have to worry about" section is intended to convince D.O. users to adopt a project rather than forking or otherwise creating a new project. Convincing prospective adopters that adopting (rather than forking) is ethical aligns with that section's responsibility.
I cannot even say that being considered unethical by security-focused community members is one of the reasons people do not ask to become maintainer/co-maintainer of an existing project and instead create their own project.
I was added via the shortened adoption process in #3237307: Offering to maintain RabbitMQ → to a module.
At the time I took these policies as "they're in the ecosystem; everyone agrees with them". It was certainly not my first concern at the time. As I have become more involved in the community and seen its apathy toward secure practices, I find myself questioning that logic.
Since the time of the application, a maintainer returned and has since disappeared again. During that time they raised questions about design and future plans. This has led to a stall in development (including lack of D11 support) while the moral and ethical concerns are evaluated, with the potential that a fork will be required, as I question whether I have the ethical authority to continue development.
As noted above the "Things you do not have to worry about" section is responsible to assuage those concerns.
There is enough to write on the subject that four lines could not be enough for a summary.
That is a fair point, and it is why the section may need to refer to reference material in other sections.
Possible phrasings to answer the topic:
I want to be maintainer/co-maintainer of a project, but I do not want to risk my future employability by performing a supply chain attack.
I want to be maintainer/co-maintainer of a project, but I do not want to commit a supply chain attack.
I want to be maintainer/co-maintainer of a project, but isn’t this process a supply chain attack?
I want to be maintainer/co-maintainer of a project, but do not want to steal from other members of the community.
I want to be maintainer/co-maintainer of a project, but I do not want to be considered unethical by security focused community members.
I’m pretty sure you see where this is going; it is possible to fit this into that section.
engineering and supply chain attack to inject oneself as a new owner is a topic relevant for everybody who is a project maintainer, independently from being added as project maintainer
True, though it is equally relevant in that section as well. Perhaps it needs content in two places.
Is GLFM a coding standard or a parser standard?
GitLab Flavored Markdown (GLFM) would be a better choice
Is GLFM fully compatible with Python Markdown (goes back to the mkdocs discussion before)?
I’ve spent more time dealing with Markdown since #10. My experience now is that it may be very hard to specify a coding standard, and it is unlikely that we will find one pre-made.
There is a big difference between a coding standard and a parsing standard.
A coding standard dictates how code must be laid out even though the language supports a different layout, while a parsing standard defines how it must be laid out in order to parse at all; any deviation results in an error state.
To put the above in the form of an example:
PHP defines a parsing standard.
drupal/coder defines a coding standard.
Is it within the Coding Standards charter to define the use of a parser standard?
One item I don't see mentioned is mockability for Unit Testing.
Inside a namespaced class, a global function called with a preceding backslash cannot be mocked, while a global function called without a preceding backslash can be.
How widely this trick is used I am not sure, and ideally we are moving away from global functions; however, eventually they do get called (especially when you start working with more 'low-level' code). If you are going to simulate every code-flow execution, sometimes you just have to mock the response.
IIRC I've tried this once and it worked decently to abstract away having to include 3rd party code into the test case.
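To sketch the fallback resolution being described (a minimal illustration; the namespace and function names here are invented for the example):

```php
<?php

namespace Drupal\mymodule;

// Because calls below are unqualified, PHP resolves them to
// \Drupal\mymodule\time() first and only falls back to the global
// \time() when no namespaced version exists. Defining this stub in a
// test file therefore "mocks" the global function. A call written as
// \time() (with the backslash) always binds to the global function
// and cannot be intercepted this way.
function time(): int {
  return 1000000; // Deterministic stand-in supplied by the "test".
}

function tokenAgeSeconds(int $issuedAt): int {
  // Unqualified call: resolves to the namespaced stub above.
  return time() - $issuedAt;
}

var_dump(tokenAgeSeconds(999990)); // int(10)
```

The same file without the namespaced time() stub would hit the real clock, which is exactly why the backslash-free call style is the mockable one.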
D.O. has migrated away from patch files.
In order for tests to run we require all submissions to be in the form of a Merge Request.
More details may be found at: https://www.drupal.org/docs/develop/git/using-gitlab-to-contribute-to-dr... →
I never linked the related issue that describes the underlying database problem:
#3224245: Open MySQL connection using \PDO::ATTR_STRINGIFY_FETCHES →
can (last I looked) be overridden by supplying a custom connection option.
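For reference, a minimal sketch of what supplying that option in settings.php could look like. Assumption to verify against your core version: the MySQL driver merges a 'pdo' key from the connection array into the options used when opening the \PDO connection.

```php
<?php

// settings.php (sketch): supply a custom PDO connection option to the
// database driver. Assumes core merges the 'pdo' key of the connection
// array into the options passed when the \PDO connection is opened.
$databases['default']['default']['pdo'] = [
  // Re-enable stringified fetches if your code still expects strings.
  \PDO::ATTR_STRINGIFY_FETCHES => TRUE,
];
```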
You should not close this without providing a solution.
The 'won't fix' means the proposal is non-viable and will not be committed. Any alternative suggestion would involve a significantly different solution, so the existing discussions would be of little direct value.
Alternatives have also been provided such as not sanitizing or disabling TFA.
A drush command that allows provisioning tokens has been suggested in other issues; I would support such a feature, and it would allow automatically setting tokens for developers after sanitization. A similar solution was suggested in #6 for adding a command that provisions tokens; however, it too has not received traction due to the non-viable patches being re-rolled.
No admins want to set a TFA per environment and re-set it every time we pull the production database down to the other environments.
It was asked in #13 how the developer avoids the compliance breach of copying a private encryption key between environments. Adding to that: how would the developers deal with the replay-attack risk of a key submitted in dev still being usable on the production environment?
Judging by the comments, it sounds like your development firm may not be dealing with these other security concerns. If the dataset is to be sanitized, that should also include encryption keys (which, when sanitized, will render the TFA user data invalid); if the dataset is not to be sanitized, there would be no need to call the drush sanitize command.
Users may not understand these risks; this is why they contract development firms, and why we (TFA), as a security solution, should not add features that do not have a reasonable use case.
Having this in drush doesn't make sense
Agree, see comment #12.
This sniff is being applied to contributed modules when phpcs is run, but as far as I can tell it's not being used in core. I don't know if this is intentional or not.
I'll note that DrupalPractice sniffs are 'common practices'; however, to my knowledge the majority of them are not actually codified by the Drupal Coding Standards. There are some good sniffs in there, however they may not fit every situation.
The default for gitlab_templates only uses the Drupal profile, not the DrupalPractice profile.
If a maintainer goes through the Project Applications queue to receive the 'git vetted' role (permission to opt projects into security coverage), the DrupalPractice ruleset is often applied during review, and some 'contributors' believe DrupalPractice is a required standard.
So the problem is the differing standards between what is allowed in core and what is allowed in contrib.
To my knowledge, core has never fully complied with the Drupal Coding Standards. It uses them as a guideline for new code; however, they are not fully enforced by testing, as core's code is not up to full coding standards and some standards are deemed 'too disruptive' to implement.
I don't see that this sniff is being explicitly excluded from core either.
Core selectively enables sniffs; it does not use the Drupal or DrupalPractice pre-configured profiles that contrib often uses.
The current sniffs used by core are located in:
https://git.drupalcode.org/project/drupal/-/blob/11.x/core/phpcs.xml.dis...
Can someone point me to documentation or explain to me how this situation happens and how to correct it in core?
https://www.drupal.org/docs/develop/issues/issue-procedures-and-etiquett... → contains the procedure for proposing a new coding standard be used in core.
https://www.drupal.org/project/coding_standards → discusses, in step 9 of the template's procedure, that after the Coding Standards committee adopts a new standard, an issue is created in core to add the rule to phpcs.xml.dist and to clean up any code that may not comply. Obviously, code requirements that are not in the formal coding standards would never trigger this step (also, 2022 was back when the Coding Standards committee was defunct).
I will note that, to my knowledge, D.O. does not log these changes in a way that would prove the details of this request.
✨ Add log of mainatiner level changes to project pages and provide notice when the Project Ownership queue adds/promotes a user Active is open regarding creating an audit log.
If a developer doesn't have permission to redistribute the code as GPL, they are violating D.O.'s Git Terms of Service.
We agree.
the project is already being distributed as GPL-2.0 from Drupal.org
This is a well-known flaw inside D.O. infrastructure (with interesting room for discussion about D.A. liability).
composer licenses, they should be able to trust that the report is accurate including..
This is an interesting one. The composer license field is called out as the 'package license'; what that means in this case is an interesting thought experiment for another day, given its general irrelevance to D.O. operations.
It is also NOT by licensing DSFR itself as MIT.
It's an interesting choice for sure; I would love to know the logic behind the decision.
distribute custom projects directly from your own namespace.
Indeed, and logically it likely makes a lot more sense to do so. I know when I saw the other 'limited intended audience' theme, my thought was very much: "Why is this even being published to D.O.? Why not some internal distribution system shared between the organizations?"
Committed to 3.x
Standard disclaimer: I am not a lawyer; this is my advice based on general knowledge and years of navigating source-code compliance:
As the theme has now been published, it is now automatically released under GPLv2 (see https://opensource.org/license/gpl-2-0) and this licence requires no restrictions on use of the software
The D.O. policy makes it a rules violation to publish non-GPL code on D.O.; it does not change the license of submitted code (especially if the poster does not have the authority to do so), as D.O. does not own the copyright and does not receive an assignment of copyright during the upload process.
The most D.O. can do is ban the user and delete the code from its repository.
Implying that it is too late for the user to have the code removed is misleading to both the code owners and others who may consider using the code as a base for other modules.
For the record, there seem to be a number of other related projects with the same restriction claimed.
I even observed one pass D.O. Project Applications review “recently” (sometime in the past 6 months IIRC). IIRC it was for NY State.
The source code that generates the output may be open source, yet use of the output may not be. The GPL already maintains that the output of a program is not covered under the GPL license.
I am not an art-law expert; however, I wonder whether an overall design could be copyrighted as a protected work of art?
There could also be a consideration of French law (which I have zero experience under): would use of a theme that matches the French Government's be a criminal offense of any kind (impersonating a government agency)?
We also have trademark laws: are any of the French icons included in the code covered under French trademark, and thus unlawful to use without permission? The GPL does not cover trademark usage. This is another way a GPL theme could be posted on D.O. yet not be lawful for use by anyone except an approved group (a derivative theme might be lawful, however at that point you are not using DSFR, you are using a derivative).
Not intending to imply any of the above actually applies in this situation; only pointing out that there are possible scenarios where code may be licensed under the GPL yet not be freely usable.
cmlara → created an issue.
The fact they are allowed to set a 10 minute validation for the TFA token, but the timeout is hardcoded to 5 minutes makes for a very confusing user experience.
One could also contend that this is the fault of the email token for allowing a 10-minute timeout without informing the site owner. That said, I can understand why they would want to: you need to allow enough time for a user to open their email client and submit the code. We (TFA), however, need to defend against the browser being left unattended or tokens being shared.
We allow a long timeout for the TOTP token; however, that is probably more historical. I was working with a TOTP solution the other day that only allowed 1 sync-window difference. In the past we needed to worry about very low-precision crystal oscillators that could drift by over 10 minutes a year (with the battery lasting 3+ years). Now we most often use devices with built-in clock synchronization and server infrastructure with sub-second timekeeping accuracy. There could be a good argument that in a future version we should hard-code a shorter limit in the TOTP plugin.
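As a back-of-envelope check on that drift figure (the 20 ppm error rate here is an assumed, illustrative number, not taken from any datasheet):

```php
<?php

// Rough drift arithmetic for a low-precision crystal oscillator.
$ppm = 20;                          // assumed frequency error (parts per million)
$secondsPerYear = 365 * 24 * 3600;  // 31,536,000 seconds
$driftSeconds = $secondsPerYear * $ppm / 1e6;
printf("~%.1f minutes of drift per year\n", $driftSeconds / 60);
```

So a 20 ppm part accumulates roughly ten and a half minutes of error per year, in line with the "over 10 minutes a year" figure above.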
Wouldn't it make more sense that, on form submit for the TFA module configuration, if the email validity period is greater than 5 minutes, it automatically updates the timeout to match that selected value, and if it's less than 5 minutes, it defaults to 5 minutes?
From a security standpoint, not necessarily. As noted in #2, there is a difference between 'the token is valid' and 'we are validating a 2nd-factor login'. The first part is on the token plugin; the second is TFA's responsibility, and we can't legitimately defer it to a 3rd party; we need to have some level of it on our side. A configurable window is an option inside the TFA settings; however, it would certainly need boundaries (and I would prefer we have some sort of data reference to look at).
Side task: Finding ways to give the Contribution Guidelines more visibility where they might be needed?
I've mentioned in the past that https://www.drupal.org/drupalorg/docs/marketplace/abuse-of-the-contribut... → really needs to be shown as part of the D.O. Terms of Service, that users need to 'sign' that they agree to it, and that a mass mailer should have been sent out when the Terms of Service were changed (the abuse policy is a ToS addendum). 'We didn't know' should never be a valid excuse for violating the ToS; if users are not forced to agree to the terms, the site is making a mistake. The contribution guidelines are incorporated by reference in the Credit Abuse policy. I had also suggested these terms be incorporated by reference directly into the Drupal Certified Partner agreement; I am unsure if that was ever undertaken.
I agree with you, more could be done to present the policy to users.
Re #26 and this possibly being 'new contributors':
Tata has already been warned for this; however, I stumbled upon a module that shows just how absurd Tata's requests are.
💬 Offering to co-maintain subsite Fixed
@kuhikar requested on Dec 2nd 2023 to become a co-maintainer, advising that they have a good team for managing the module.
On Dec 4th 2023, it appears they were added as a full maintainer (or at least as a co-maintainer, updated to a full maintainer at a later time).
💬 Offering to co-maintain subsite Active
27 Dec 2024 at 05:15 PST @dileepkumargembali (a new account; checking right now, it displays as "2 months 3 weeks" old) requested to be added as co-maintainer on behalf of Tata, using the standard comment that Tata had a good team for working on the module.
This issue was closed by site moderators on 7 Feb 2025 at 04:33 PST due to inaction (likely related to this issue)
💬 Offering to co-maintain fieldblock Active
27 Dec 2024 at 05:38 PST (33 minutes after the request by @dileepkumargembali) @manishvaity requested to be added as a co-maintainer. They got the module name wrong, possibly meaning to apply for a different module. In either case, they clearly were not paying enough attention to their actions to close the request, move the request, or rename the module in the request.
10 Jan 2025 at 05:49 PST Granted access by @kuhikar.
These 3 issues give a decent overview of just what is going on inside Tata.
- The added maintainers are not keeping up with future development despite their promises to do so.
- Tata employees are opening issues at such a rapid rate that they are not even reviewing the submissions (mistakes happen: wrong queue, etc.; however, not noticing them in a thread that is a precursor to injecting yourself into the supply chain is significant).
- Tata employees are potentially not aware of their own management of the modules.
- Tata appears to have no solid internal team communication for managing the modules, despite its public assurances of placing a strong team behind them, resulting in public requests to be added as maintainer to modules the organization already manages.
As a security engineer, I agree with @leewillis77's assessment.
Announcing you will not continue developing and allowing users a chance to find an alternative is basic secure maintainer ethics.
Won’t fix is a standard method to signal a patch will not be merged.
Despite how D.O. attempts to downplay it, owners are indeed (ethically) responsible for any maintainers they add to a project.
The XZ backdoor last year is an example of a security incident that nearly compromised a major portion of the Linux community. The cause: an overworked project owner adding a developer they could not properly validate and supervise.
@leewillis77 FYI this is originating out of 💬 Offering to co-maintain Active . The user came onto Slack seeking assistance to bypass your refusal to add them. @bramdriesen likely isn't advocating for that user (we have an open abuse issue for TATA employees); however, they are expecting that some day the module will eventually be D11-compatible.
More details in 💬 TATA Consultancy Services is bulk posting requests to gain maintainer access to modules Active .
While I agree with how you are handling this, I do want to mention there is a good chance D.O. staff will eventually override you, either through the site ownership queue or the Project Update Working Group, and force either a D11 fix or a new maintainer into the project; messages like "offering to become a maintainer" are step one in that forceful takeover process.
@bramdriesen: there is an open issue encouraging D.O. to adopt a fork-positive workflow: ✨ Prohibit the ability to adopt a project Active . Never let a Drupalism stand in the way of better security; just because it is how it has always been done does not mean it is a secure process. As a future full DST member, it is your responsibility to always argue for the more secure software development path, not the easiest one.
Adding https://www.drupal.org/u/kuhikar →.
They came into the Slack #d11readiness channel asking how to deal with a maintainer refusing to add them as a co-maintainer, even though the messages offered them alternatives.
Slack Thread:
https://drupal.slack.com/archives/C03L6441E1W/p1739345746560339
Issue Thread:
https://www.drupal.org/project/login_tracker/issues/3502566
💬 Offering to co-maintain Active
(Note: this issue was opened on January 27th; the warning in this thread was sent on January 16th.)
The module is enrolled in security opt-in coverage; the user does not have the security opt-in permission.
There are a number of requests to be added that were recently closed that track back to 2023. On a quick glance I didn't see any others from 2024/2025; however, this speaks to the history of their account and of Tata.
An interesting one in the quick history is somewhat relevant to a module I co-maintain, https://www.drupal.org/project/ga_login/issues/3405783 💬 Offering to co-maintain ga_login Closed: won't fix : the user tried in 2023 to adopt a module that had been rolled into the parent TFA module, making it no longer needed.
D.O. has migrated away from patch files.
In order for tests to run we require all submissions to be in the form of a Merge Request.
More details may be found at: https://www.drupal.org/docs/develop/git/using-gitlab-to-contribute-to-dr... →
Quick glancing:
I suspect there will be a PHPCS warning on the comment in the patch file.
I haven’t analyzed the logic flow.
We can call this a duplicate of 🐛 Installing contrib modules can lead to TFA accidently being bypassed Fixed ( Not scheduled for backport to 8.x-1.x due to significant infrastructure changes) as it resolves the long term issues of enforcing TFA on all requests.
📌 Warn users that TFA is incompatible with modules overiding specific routes Active and 📌 Evaluate restoring using a form alter instead of extending UserLoginForm Active are also candidates for this to be considered a duplicate of.
Note: if this were not already known and publicly disclosed, it would have been considered a potential security vulnerability inside TFA, and we would have preferred it be reported privately.
See 📌 [policy, no patch] Decide policy related to use of novice tag Active regarding the (lack of) policy regarding Novice tagged issues.
Note: This will cause the system to no longer render HTML logs on Drupal 11 tests, see 🐛 browser_output is empty when testing with Drupal 11 Active .
Since we can run the tests locally, that is a minor hindrance that we can eventually solve with custom PHPUnit configuration in our repo.
The PHPUnit failure is expected; no stream_context deprecation seen.
This is a prerequisite to resolving PHPUnit failing on the deprecation, as it cannot be ignored in the PHPUnit configuration.
We may want to re-open 📌 Contact Material+ (formerly Srijan Technologies) about Contribution Best Practices Active .
Well intentioned or not, Material+ has previously been instructed to both train and audit its employees' contributions.
I believe this will need to be ported to 2.x.
🐛 Original page lost after TOTP authentication Active does not, on cursory glance, appear to be applicable, as this is our 'force redirect' case where we do want to ignore the destination.
My initial thought was: why should we even be touching the global request? It turns out this was needed because of an old core bug: 🐛 Allow form redirects to ignore ?destination query parameter Fixed .
See https://www.drupal.org/node/3375113 → for how this would apply to newer versions of core.
That all aside, it appears this is the way it needs to be for now, and it matches code later in the same class.
Forcing commit to bypass tests due to issues from 🐛 beStrictAboutOutputDuringTests=true or failOnRisky=true causing test failures due to PHP8.4 Implicit Nullable deprecation Active .
Linking related 🐛 Aggregated asset generation causes uncacheable assets Fixed .
A hook_requirements() warning sounds good here.
I would suggest that if there are 2 possibly unconnected issues, it might be time to take this to the next level and upgrade install.php to not allow execution if a settings.php exists and does not have an "allow install" setting contained inside.
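A minimal sketch of the kind of guard I have in mind. This is hypothetical: core's install.php has no such check today, and the $settings['allow_install'] key and function name are invented for illustration.

```php
<?php

// Hypothetical guard for install.php: refuse to run the installer when a
// settings.php already exists, unless it explicitly opts back in. The
// $settings['allow_install'] key is invented for illustration; core has
// no such setting today.
function installIsAllowed(string $settingsFile): bool {
  if (!file_exists($settingsFile)) {
    return TRUE; // No settings.php yet: a fresh install is fine.
  }
  $settings = [];
  include $settingsFile; // settings.php populates $settings.
  return !empty($settings['allow_install']);
}

// install.php would then bail out early:
// if (!installIsAllowed('sites/default/settings.php')) {
//   http_response_code(403);
//   exit("Installation blocked: set \$settings['allow_install'] = TRUE to re-install.");
// }
```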
cmlara → created an issue.
This sounds like it could go into Drupal.org Customization as a new feature for the website or at least perhaps some additions to the Update Status module in Drupal Core.
Re-opening and transferring queue for a proper review.
I can not run a script to scrape the data of the Drupal.org website.
Why not? (This data is available via various APIs.) I'm not saying making everyone do this is the best method; however, I'm sure this question would be asked, so we should at least have an answer ready for the D.O. developers.
This appears to be a duplicate of ✨ Confirmation forms should not require passwords Needs review .
Closing in favor of the existing open issue to centralize discussion.
We need to restrict access to local Drupal accounts for users authenticated via SSO.
The view/policy established in #2855394-8: Support TFA on sites with LDAP authentication → is generally that an external identity provider should provide MFA and related login protections.
Additionally:
D.O. has migrated away from patch files.
In order for tests to run we require all submissions to be in the form of a Merge Request.
More details may be found at: https://www.drupal.org/docs/develop/git/using-gitlab-to-contribute-to-dr... →
possibly add a Service Decorator on the update.fetcher service. The decorator could always provide an empty value to the site_key parameter,
I have not published a release yet; however, there is now https://www.drupal.org/project/site_key_mutator → which will (by default) empty out the site ID, removing the unique-identifier ("phone home") portion of the update checker.
✨ Moving project telemetry reporting from the update module into core Active would have been the better issue; however, it is postponed and unlikely to see resolution anytime soon.
This one kind of fell through the cracks and was lost.
Wording is still the biggest issue. With a fresh set of eyes, it may need to be something along the lines of "Account is known to have configured TFA" or similar, as the isReady() method really checks for a known configured plugin yet doesn't assert that the plugin is properly operational.
The entire 'check if a user needs to log in with TFA' subsystem is seeing patch after patch as we find new bad assumptions (at the time my last comments were made, https://www.drupal.org/sa-contrib-2024-003 → was likely running through my mind).
Eventually we can come back to firm it up to be closer to "known operational".
Needs work, as it requires a rebase at a minimum. We may also want to look at the fact that ECA 2.x is now out and see what impact this has, and whether both versions can be supported.