Note: I am in both linked Slack threads and am one of the individuals who first-hand saw significant flaws within minutes of glancing at the code. I am in full agreement that Advanced File Destination is unsuitable for installation on any site in its current condition. I also agree that other commenters raised reasonable questions about whether Static Node is functional for its described purpose.
Adding a bit of complication to this discussion:
The individual has published a few other AI-written modules in the past. I make no comment at this time regarding the quality of the code in the modules discussed here.
Projects with Stable and Security Covered releases:
https://www.drupal.org/project/auto_translation →
- 290 reported installs.
https://www.drupal.org/project/graph_element →
- 10 reported installs.
https://www.drupal.org/project/jw_player_media_source →
- 6 reported installs
Projects enrolled in security opt in with no stable release:
https://www.drupal.org/project/content_reporting →
- 42 reported installs
I think that revoking git access and temporarily unpublishing the projects might be warranted.
I certainly understand the desire to stop significantly buggy code from being published before site owners install it, and to prevent new code from being published.
I am however a bit concerned that this could lead to a repeat of community disruption that was discussed in #3094854: Plan for dealing with projects that may be abandoned → .
Is there a plan to minimize the impact? Thankfully these modules have smaller install counts than the previous incident, but that does not help the unlucky site owner who is using one of them and finds the maintainers unable to commit code.
Thanks @scott_euser.
I should also add a note to this issue that I have mentioned in a few other issues now.
I see many items on the "Should have" list as essential long term; however, in the past year since the list was originally started, each new vulnerability has given more insight into limitations of the 8.x-1.x branch that cannot be resolved without the 2.x branch.
While the "Should have" list is long and incomplete at this time, it may be time to consider narrowing the 2.x target to the essentials, releasing 2.x (through an alpha/beta/stable progression), and opening a 3.x branch for additional API breaks.
Speaking as a dev who targeted D8+ while other devs were responsible for the D7 releases:
Some contrib issues were never assigned to D8 releases and existed only in D7 branches. After EOL I went through my modules issue by issue and migrated several still-relevant ones into the active release branch.
It seems reasonable to suspect similar situations exist across the ecosystem, which makes closing contrib issues far more questionable.
If contrib issues were to be closed I highly recommend they be given a unique issue tag so that a future developer can easily query them without needing to sort through unrelated issues.
First off: I'm aware of the user and modules referred to in #23, and agree they are poorly written (to the extent that I've added them to an internal list of modules not to use and maintainers to avoid).
These modules should be removed from security coverage and their publisher needs to lose their security coverage privileges IMO perhaps based on a strike system.
I am wary of this as an option. Responsible/ethical/coordinated disclosure is a key aspect of security. It would be one thing if D.O. did not use wording such as "Security issues do not need to be privately reported for the module_name project"; with this wording, D.O. is actively encouraging public, uncoordinated disclosure for projects that are not opted in.
Any action D.O. takes to remove the ability of maintainers to compel private disclosure is an affront to security. D.O. requiring a test, and requiring maintainers to remain in the good graces of the Security Team, conflicts with security industry norms.
To put this in another light: how would the community feel if I demanded that the Core Team pass an arbitrary test I personally created, with a drawn-out approval process lasting months and the ability to revoke based on a condition I decide, before I would cease disclosing vulnerabilities publicly? I presume the community would be very upset, especially in the case of a Drupalgeddon-level vulnerability, and yet that is the standard applied to contrib maintainers today, which this proposal would extend.
Whatever policy emerges regarding AI-generated code, compromising security privileges should not be part of the equation. Consider LLM code spam; consider LLM code unacceptable for commit due to copyright law; consider it unwanted and target the user on those grounds. Whatever is done, do not make a policy that would compromise the ability to request private disclosure.
Could you please let me know if there's anything I'm missing that would enable me to provide a file path located on a different server or outside the Drupal project scope?
Using the S3fs module with a file at s3://test.txt is an example of a key that works when this patch is not applied. Once this patch enforces realpath() I would expect the following to no longer function.
> file_put_contents('s3://test.txt', 'testkey');
= 7
> file_get_contents('s3://test.txt');
= "testkey"
> Drupal::service('key.repository')->getKey('s3key')->getKeyValue()
= "testkey"
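For a standalone illustration of why (plain PHP, no Drupal bootstrap; the hypothetical demo:// wrapper below stands in for the s3:// wrapper that S3fs registers):

```php
<?php
// Minimal sketch: a trivial in-memory stream wrapper, standing in for s3://.
class DemoStream {
  public $context;
  private static $data = [];
  private $path;
  private $pos = 0;

  public function stream_open($path, $mode, $options, &$opened_path) {
    $this->path = $path;
    if ($mode[0] === 'w') {
      self::$data[$path] = '';
    }
    return isset(self::$data[$path]);
  }

  public function stream_write($buffer) {
    self::$data[$this->path] .= $buffer;
    return strlen($buffer);
  }

  public function stream_read($count) {
    $chunk = substr(self::$data[$this->path], $this->pos, $count);
    $this->pos += strlen($chunk);
    return $chunk;
  }

  public function stream_eof() {
    return $this->pos >= strlen(self::$data[$this->path]);
  }

  public function stream_stat() {
    return ['size' => strlen(self::$data[$this->path] ?? '')];
  }

  public function url_stat($path, $flags) {
    return ['size' => strlen(self::$data[$path] ?? '')];
  }
}

stream_wrapper_register('demo', DemoStream::class);

// Reads and writes through the wrapper work fine:
file_put_contents('demo://test.txt', 'testkey');
echo file_get_contents('demo://test.txt'), "\n"; // testkey

// But realpath() cannot resolve a stream-wrapper URI to a local path,
// so a realpath()-based check rejects this otherwise working key.
var_dump(realpath('demo://test.txt')); // bool(false)
```

The same FALSE return would occur for any remote stream wrapper, which is why enforcing realpath() limits the module to local filesystem paths.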
@srtmdev I suggest reading the links in #3
Specifically, in the "what to expect" section:
Unfortunately, the application queue does occasionally experience a large backlog, and applications may sit in the queue up to a year before getting reviewed.
There is no Service Level Agreement (SLA) in this process.
The application will be processed; when is a matter of "wait and see". It could be today, or it could be several months before anyone makes another review. Following the priority steps listed in the links given may reduce the waiting time, although there is no guarantee it will.
Note that the process does not enroll your project in the security advisory opt-in process; it tests a developer (yourself) for the permission to do so. It is not the module being tested here, it is you as an individual, and if/when the process completes, you will opt the project in, not the reviewers in this queue.
I'm assuming based on your issues that you're primarily doing this so that the company module is not subject to D.O. suggesting users publicly disclose security vulnerabilities. If so, you are unfortunately caught in a portion of D.O. that is not very corporate- or security-industry-friendly and will just have to wait it out.
I will suggest that something along the lines of paratest may be a better option for the performance gain, and that we phase out the use of run-tests.sh in a v2 of the templates (or add it as a third, preferred option).
Especially for those of us who use code coverage that is our primary reason for not using the run-tests.sh job.
I've heard of some issues with paratest in the past, though I do not believe I hit any show-stoppers in my testing. IIRC I did run across some PHPUnit options that are not supported, however I believe they all had workarounds. I need to resume working on that project.
Long term in my eyes run-tests.sh is non-viable, as it is harder to push fixes for.
As a site owner, I'm going to miss Parentheses, Ampersand and Space. A little less so Brackets, Pound Sign and Exclamation Point, though I do encounter them.
As a security engineer: This is not a bad hardening, however it is certainly not a fix for wherever the faults actually occur.
As someone who has used this in the past to perform sample RCE's against site setups; I can say yes this would make it much harder to exploit.
I also wish to reiterate that for these characters to cause issues, someone has to have made some very serious mistakes downstream, mistakes that this change likely does not actually remove. At most it makes the vulnerabilities significantly harder to exploit (moving from Basic to Complex, saving 1 point on the Drupal vulnerability scoring) and maybe moves the Impacted Environment from All to Uncommon (2 points on the Drupal scale).
I agree with @smustgrave the 'excluded' ideas would be nice to have listed for historical purposes.
IIRC the reason to keep only latest image is security because distros getting updates
That the image is available doesn't mean a site has to use it. We see this all the time on DockerHub: older releases are kept so they can be referred to in the future.
This is analogous to arguing that all Drupal 9 core releases should be deleted because they are known to be vulnerable.
and PHP has monthly release cycle, so no reason to bloat storage...
First off, retention and semver are different.
Semver would be about ensuring images are generally the same and compatible between builds, PHP majors and minors. This could be done even with the current system of never retaining anything except the latest build.
Additionally, one could set a retention window for images (6 months for the same "major", and at least the last major of a semver for a few years?) to reduce storage bloat while still retaining images.
I will also add that the shared storage of being in the same repository could allow image layer reuse, further reducing size (this would require streamlining the builds a bit).
Pulling note in from Slack:
I recommended evaluating including both versions of the package on the images (since these are PHP extensions this should be fairly straightforward) and possibly adding a helper script to allow switching versions.
DDEV does something similar in its image design: all versions of PHP are present and a script sets the active version.
I will note that this is an example of what #3404082: Adopt (semver) versioning for DrupalCi images → is trying to address.
Major changes to the CI images that are hard for contrib to work with should generally not be done to existing images without some way to differentiate them. Including both versions and allowing a test run to 'opt in' to the newer version would better align with having well designed and versioned images.
We can use the realpath() method of FileSystem.
Given the following documentation of the API Method can you provide more justification?
* The use of this method is discouraged, because it does not work for
* remote URIs. Except in rare cases, URIs should not be manually resolved.
*
* Only use this function if you know that the stream wrapper in the URI uses
* the local file system, and you need to pass an absolute path to a function
* that is incompatible with stream URIs.
*
— FileSystemInterface::realpath()
Does the key module know that a path is a local filesystem? Is this a rare case? If so why? Does the module need to pass an absolute path to a function? If so which function only accepts absolute paths?
Triaging older open Merge Requests:
I'm inclined to believe that this, as currently proposed, should be a third-party module and not included in the base code.
I tend to agree that plugins should be kept outside of TFA, to allow more flexibility and to let TFA focus primarily on the API.
Is there a strong use case that the overwhelming majority of sites would use this feature?
Side note: Also consider IPv6 when this is implemented.
Only sets the status to enabled in the user settings if a default validation plugin is configured and the plugin has been configured
See #5. We do not want to force that a single plugin must always be used.
Only shows the number of times validation skipped text if TFA is enabled for the user's account
See #3. We absolutely do need this to be present when TFA is disabled for a user.
Duplicate of 🐛 Deprecations PHP 8.4 Active
Additionally, the patch file workflow is no longer used on D.O.; submissions should be made using MRs. See https://www.drupal.org/docs/develop/git/using-gitlab-to-contribute-to-dr... → for more details.
While at it we can clean up the PHPCS issues and disable the nullability rule on the 1.x branch, as we may not be able to implement it; see 🌱 PHP 7.0 Support Active .
We can also clean up 2 warnings caused by PHPCS not recognizing a cspell line.
Committed to dev on 2.x and 1.x.
Thank you again for the report on this issue.
Given the data from #3 (I was under the impression Packagist would use releases), it seems reasonable to won't-fix this and instead find (or create) a separate meta to remove the packaging customization.
Please take into account that the comments below are not intended as an "alarmist" reaction; I am well aware that this particular incident is minor at the moment (and overall is one of the least significant I've encountered in my history with software). The goal is to ensure it does not become more significant, and to ensure any future incidents have a well-grounded process in place to keep them from becoming significant.
Should we not seek an opinion from the licensing working group?
I would suggest that, no matter what, the Core Team / the project lead (Dries) will want to run this past legal to ensure it is properly unwound without future legal entanglement.
I don't think so. This is an existing issue that has already been released. If it would be the first release with it in maybe but that's not the case.
The phrase "knowingly, willfully, and repeatedly" could be used to describe publishing additional releases or commits after the core team is made aware of the issue. It is one thing for something like this to happen unknowingly; there is some room for compassion and more lenient mitigation there.
The Linksys GPLv2 case is one of the most notable examples of case law on this subject: the platform went from being closed source to open source GPLv2. An equivalent scenario for the Drupal project in this case could be that the code no longer qualifies as GPLv2 and becomes GPLv3-only. Not the worst outcome that could happen, however I also suspect it is not desired by the Core Team.
Agree, let's just remove the offending code for now, we don't strictly need it here.
Seems reasonable.
I saw a blog post saying the next alpha will be released in the near future.
Should this be classified as an alpha blocker and a release blocker?
I believe the monthly security window is tomorrow; does this have any impact on that window?
CR created https://www.drupal.org/node/3519175 →
Responded to MR comment.
wondering whether this needs to be a deprecation (or unsilenced E_USER_WARNING) changing to an exception in 12.x
My personal opinion is no. I can understand why this would be asked, since the change can be disruptive, and the code being corrected is clear proof that some implementations have misused this.
However, the API is well defined: the proper response is an exception. Code that has properly implemented SerializationInterface::decode() would already have a try/catch block surrounding it.
Today, third parties could replace the decoder using a class alias in the autoloader, meaning the code broken by us fixing this can already be broken by a contrib module following reasonable design standards.
The FALSE return was/is an internal implementation detail of the JSON decode method that leaked and should never have been relied upon.
As noted in the issue summary, this failure to throw an exception as documented on the API can lead to undetected data loss for those applications that properly follow the documented API.
This bug fundamentally breaks the ability to use DI of the serialization services.
I have no proven case to show this however a worst case fear is that silent data corruption could lead to security incidents.
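As a standalone illustration of the contract difference (plain PHP, outside Drupal; the core decoder wraps json_decode()):

```php
<?php
$malformed = '{"key": }';

// The leaked-internal behavior: failure is silent. The caller receives
// NULL and cannot distinguish it from a legitimately decoded null,
// which is the undetected-data-loss path described above.
var_dump(json_decode($malformed, TRUE)); // NULL

// The documented contract callers should rely on: decoding failures
// throw, so a properly written try/catch detects the bad payload.
try {
  json_decode($malformed, TRUE, 512, JSON_THROW_ON_ERROR);
}
catch (\JsonException $e) {
  echo 'decode failed: ', $e->getMessage(), "\n";
}
```

Code already written with the try/catch shown here is unaffected by the fix; only code depending on the leaked NULL/FALSE behavior breaks.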
Note: Some answers have been provided to this request in Slack #contrib-tfa https://drupal.slack.com/archives/C7SR7TWMS/p1744252987515989 and are duplicated below.
My personal recommendation is that 2.x should not be run by anyone except those who are developing for the next release.
My personal opinion is also that we should not be cutting releases for 2.x until it both has all the known security fixes/hardening committed and any major API overhauls have been completed. Some of the other maintainers have expressed differing opinions encouraging more frequent releases, some of this can be seen in #3416791-6: Resolve SA-CONTRIB-2024-003 in 2.x branch → Comment #6 and newer.
I would likely want to see 🐛 Insufficient entropy in loginHash generation Active completed, as the security evaluation for it was done before committing 📌 Use an EventSubscriber to process one time login links Needs work . While I'm not sure there is any more risk now, I haven't actually done a full in-depth review; rather than auditing line by line, simply hardening the hash is preferred.
That said, there is starting to be a good argument that it may be worth cutting a 2.x release before the full API is established, with the intent that a 3.x branch be opened to continue the API-breaking changes on the roadmap 🌱 Roadmap for 2.0.0 release Active , to get away from fundamental implementation flaws that cannot be fixed in 1.x.
Added a few still open issues into the roadmap
No, there have not been 19 approvals in two hours. There have been 19 issues whose status was Fixed which have been edited.
Fair point.
I had performed a random spot audit on the list and all the ones sampled appeared to be NR->RTBC->Fixed in that window.
I have gone through the list, and there may have been a fifth that was already resolved prior to yesterday. Additionally, while reviewing I did notice (though I did not check the timing on every single issue) that one of them may have been reviewed several hours earlier before being marked fixed in that ~2-hour window.
See also comment #17, to which none of the reviewers replied. It is not that an application is hold on because no reviewer has been able to make a correct review.
I am a bit confused as to what is being said here. Can you clarify for me?
I'm not sure if you are saying that it is policy that applications will be automatically approved if they sit for an extended period of time without review, or that applications should be held until they are thoroughly reviewed.
Apologies for the noise.
Adding a tracking flag for auditing when modules have been reviewed by the queue only for it to be announced after approval that reviewed code contained a security vulnerability.
SA-CONTRIB-2022-051
Apologies for the noise.
Adding a tracking flag for auditing when modules have been reviewed by the queue only for it to be announced after approval that reviewed code contained a security vulnerability.
SA-CONTRIB-2022-046
Per SA-CONTRIB-2025-030, versions older than 1.1.0 possess an Access Bypass and a DoS. These versions would have been part of the review process.
Tagging as “Vulnerable when approved” for tracking purposes.
It appears this has self resolved.
I assume it was some form of library that was updated inside of the mkdocs plugins.
I usually do not like to object in this queue, as it is used to provide access to the security opt-in feature, which should be a common feature of all projects on D.O.
I am flagging as a questionable approval for future tracking purposes.
As noted in #17 the module is essentially an exact duplicate of an existing module.
Looking at the number of approvals in the queue today (19 approvals in 2 hours), I am concerned that the queue maintainers may have rushed through the process after the queue was allowed to develop a sizeable backlog.
I am uploading a diff between the two modules for posterity.
The majority of the changes are:
- Twig changes of image sources from /modules/contrib/a11y/... to /modules/contrib/accesstools/... (itself likely wrong, since it presumes the module will be installed at the web root and under modules/contrib)
- PHPCS changes
- Additional lines in hook_help()
- A couple of variants of using the logger to record errors.
Nothing further needed. I had just sent it to the tests last night, intending to follow up after they finished.
No need to port to 2.x, as we do not appear to use the library there.
Committed to Dev.
Thank you for tracking this down.
I have put some comments on Slack; however, I am making them more formal here:
As a Contrib maintainer I do not see the value.
For me, PHPStan exists to identify bugs in code, not future compatibility problems. Not having a return type when the parent method itself has none is not a bug.
PHPStan scans your whole codebase and looks for both obvious & tricky bugs. Even in those rarely executed if statements that certainly aren't covered by tests.
Looking at a few of the projects I maintain, there is a good chance there will be a different branch for D12, and my current branches will never need to implement a hard return type (although in many cases I already added strong typing as part of reaching PHPStan level 9).
A "feature" like this will just be noise to me, and distract from more pressing issues (security hardening, bug fixes, and features).
My opinion is that 🌱 Consider always having the next major branch open Active is the better method to implement, it would allow modules to opt-in to scanning against the next major core branch early to identify where they need to make changes, and reduce creating future technical debt (no need to come back in D12 to remove the type-hint warning).
As an aside. I have suggested in Slack that if this is felt to be useful it might be worth considering putting this in its own third party repository to allow it to be available to others as a way for the Drupal Community to "give back" for all it receives. If this does not make sense to be an independent project I would suggest that should be treated as a warning that indicates other methods should be preferred.
Looking at the MR itself: I note the presence of // @phpstan-ignore phpstanApi.method. Should code that relies on PHPStan's non-public API be allowed, considering the maintenance burden that could create?
Edit: Correction on the provenance, it started as CC BY-SA 3.0 not 2.5 (no change in statements above regarding compatibility concerns)
The motivation to release the information should be separated from publishing CVEs.
CNA rules
5.1.7 a CVE MUST identify the type of Vulnerability.
5.1.1 SHOULD contain sufficient information to uniquely identify the Vulnerability and distinguish it from similar Vulnerabilities.
5.1.7 is the one that likely puts us at most risk: it requires that we disclose at least a vulnerability type in the CVE. We can get away with a basic category (as a sample, I have not looked at this particular advisory lately) such as "Remote Code Execution", "XSS", or "Authentication Bypass" to maintain compliance. However, even that means disclosing that specific bit of information, which should likely be done with D.O. coordinating its disclosure of more details (why hide the specific type on the D.O. SA when an attacker can go to the CVE to find it?).
5.1.1: we need a very good reason to explain why we do not align with what we are instructed we should do. Will the CNA lose its contract for failing to comply with a "should"? No. Will it lose respect in the industry? Possibly. Will it waste other organizations' resources? Likely.
We published SA-CONTRIB-2024-075 as an unsupported-module advisory with CVE-2024-13311. That report does not appear to uniquely identify the vulnerability or indicate the vulnerability type.
Looking at the enrichment data for CVE-2024-13311, if I am reading it correctly, it appears CISA was unable to provide full enrichment due to "not enough information".
Being flagged as "not enough information" should itself be a warning that we need to work on our disclosure process.
it's possible to create CVEs without specifying the CWE/CAPEC/risk score, so the title and some of the description could be updated to reflect that. It may be desirable for some to know that information.
Can you clarify what you are trying to say? Do you want this issue's title updated? Or do you mean that the titles for modules previously published as "unsupported module" should be updated?
lostcarpark → credited cmlara → .
SA-CONTRIB-2025-023:
Curious how what was documented in the Security Advisory by the reporter and fixer as CAPEC-115 became CAPEC-85?
Disables users' ability to set up validation plugins except for the default validation plugin if the default validation isn't set up.
I believe we ruled that out in #5.
Even if we were to implement it, that is purely a TFA issue; the plugins should not be involved, and especially not by injecting the @internal TfaPluginManager. While TFA could call that service, adding it to TfaBasePlugin would expose the internal service to external plugins.
Last I understood, we viewed putting the default plugin first in the list as reasonable, and we were open to researching whether Recovery Tokens could be made not to meet the "TFA is configured" provision (without breaking edge cases such as a user needing to run on recovery tokens while awaiting new tokens).
Does it mean that we can't merge https://git.drupalcode.org/project/rabbitmq/-/merge_requests/58 ? These changes are quite straightforward and allow the module to work in Drupal 11 as well.
It means that, at this time, I do not feel comfortable merging any more commits into the repository without clear approval from maintainers added through a less ethically questionable process.
I've reached out to the current owner (who was seen active a few days ago) and have started looking at the fork process as an option if needed.
Could we start by looking at how we would change \Drupal\Core\File\FileSystem::tempnam() to not use ::getDirectoryPath()? ::realpath() of the scheme would accomplish the same outcome.
Do we need more strict type checks for LocalStream?
If using ::realpath(), no; just validate that the return is not FALSE. This allows classes that implement local storage without extending LocalStream (such as an extension of a third-party framework interfacing with core) to still function.
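A rough sketch of what that could look like (illustrative only; the surrounding method body is an assumption, not core's actual implementation):

```php
/**
 * Sketch: a tempnam() that resolves the scheme via ::realpath() instead
 * of assuming a LocalStream and calling ::getDirectoryPath().
 */
public function tempnam($directory, $prefix) {
  // Resolve whatever was given (scheme URI or plain path) to a real
  // local path. Any wrapper that is genuinely local resolves here,
  // whether or not it extends LocalStream.
  $dir = $this->realpath($directory);
  if ($dir === FALSE) {
    // Not backed by the local filesystem (or unresolvable), so a local
    // temporary file cannot be created next to it. No strict
    // instanceof LocalStream check is needed: FALSE already excludes
    // remote wrappers.
    return FALSE;
  }
  return tempnam($dir, $prefix);
}
```

The FALSE check replaces the type check: the behavior is keyed to what the wrapper can actually do rather than to its class hierarchy.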
This sounds like you've identified a new Feature request for the module. Maybe a test encryption plugin that only gets enabled via a drush command to make testing in general easier?
One already exists as a testing module, and we use it as part of our Functional tests; however, Drupal must be configured to allow such modules on "production" sites (not UI-accessible to my knowledge).
After this is merged, on UX issues it can help people see the change visually or on multiple devices which screenshots can't really do.
We have very little in the way of UX issues directly, and those that do touch UX have no real customization to them: rendering is generally controlled by site themes and the Drupal Core API, meaning we have little need for multi-platform testing on a daily basis. If something is broken it is (in the current code) most likely going to be transferred to core as a FormAPI flaw.
When you are working in your "lab" setup, how easy is it for you to preview the changes on your mobile device?
Fairly easily: un-comment a port setting in the lab config and the site is directly exposed. It is even easier when it's on one of the internally routed dev machines with an internal hostname assigned (although that is rare).
Tugboat previews makes it very easy. It makes it easy for others, without extensive lab set-up to see it too.
That is the part I am wondering about: how useful is it? I'm involved in many of the deeper issues, so perhaps my vision is a bit narrow. I haven't kept statistical logs of how often I reached an issue that could be approved on a visual check alone, without a lab setup.
Doing so means all co-maintainers and any new co-maintainers would have access to the power and benefit of your local lab type setup.
Fair, though these would likely involve a somewhat more complex setup, as some of the data is, to my knowledge, not exposed via direct commands. Nothing impossible, just another maintenance point to weigh against the return. That is the root question: how often would Tugboat actually help move issues forward? If it is frequent enough, it may be worth the maintenance burden.
I think you are right, usefulness will differ module to module so it really up to you and the folks who are invested in this module.
The simple community question would be: "What issues have you worked on in the TFA queue in the past year or two for which Tugboat would have allowed you to avoid a local lab and spend less time and effort testing, and in what way would it have done so?"
Critically, there is no encryption plugin installable through the UI, which renders TFA impossible to set up and test inside of Tugboat. NW for resolution.
As a maintainer it would be great to have MR environments so I can save tons of time evaluating MRs without needing to pull it locally.
As an actual (co)maintainer, I wonder how useful this is:
With previews only existing for 5 days and most issues taking longer, will we see an increase in issue noise? (That may have just increased to 30 days in 📌 Increase Tugboat Preview expiration to 30 days Active ; better, but still possible noise for longer-lived issues if users are closing/reopening MRs to generate previews.)
Is code updated when new code is pushed? If so are these new environments or continuations of the previous environment?
In this basic form there is a decent amount of setup an admin still needs to do before they can test much of TFA's code: add an encryption key, set up an encryption profile (which requires an encryption module) before TFA can be enabled, add a user, add a token, etc.
As a maintainer, in the amount of time that takes I can (if at a terminal) easily check out the MR and use an existing deployed lab to test (for context, my local lab has multiple users, multiple tokens, multiple token types, user roles, and requirements setups configured to cover most usage scenarios). There may be an advantage when I'm on a mobile device away from a development terminal.
Presumably some of that setup time could be reduced by configuring the environment inside the template.
Beyond that I'm trying to think how often would I use this. I often review code first and then load into the IDE to validate API usage, sometimes with a requirement for a breakpoint in XDEBUG. To perform those operations I would already have the code in my local lab where I can view the changes.
How often would users use this? How many of those who do will perform an actual test rather than an "it loaded" check, which provides us little to no validation? We are not a theme; our issues are not usually visual, they are technical. How much of our code is testable without edge cases that require server changes?
I don't have a good answer for those questions. Leaving this open for opinions from other maintainers and community on if tugboat adds value for our workflow.
Unable to reproduce, and the reporter has not provided more information.
I will also add that, had this been traced to an actual bug, it likely should have been reported as a private security issue.
This project is in a bit of a weird spot right now.
I was added as a maintainer through the accelerated adoption process some time back.
That process (along with the entire D.O. adoption process) is subject to some rather significant ethical questions, as it is essentially a social engineering and supply chain attack.
Factor in that one maintainer returned for a short period last year (before disappearing again), raising questions about some of my design choices, and it raises even more ethical questions about my right to commit code to the project.
The project has been heavily on hold since those incidents.
I’ve been pondering my options, at the moment the following two look to be the most likely paths:
- Owner transfers the project of their own free will, not via the Project Ownership Queue (may not happen due to the non-responsive maintainer)
- Fork into a new project.
It’s a situation that needs to be sorted out for this project to be able to continue on.
Side note:
I’ll note no one has actually tested the code above and reported back.
The user rajan kumar@2026 is well known for not testing code and for credit farming, which has led to disciplinary action on D.O.
The other options, beyond rolling back the SDK, are setting the environment variables or placing the configuration parameters in the shared config file (not the credentials config file).
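For illustration, assuming recent SDKs honor the checksum opt-out settings introduced alongside the new default (the exact option names below should be verified against the SDK documentation), the shared config route might look like this:

```ini
# ~/.aws/config (shared config file, not the credentials file).
# Hedged sketch: option names assumed from the SDK checksum documentation.
[default]
request_checksum_calculation = when_required
response_checksum_validation = when_required
```

The equivalent environment variables (e.g. AWS_REQUEST_CHECKSUM_CALCULATION) should behave the same way if supported by the installed SDK version.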
I have read that Cloudflare R2 is also impacted.
MinIO and Ceph appear to have support for this header (as does the LocalStack that we use for the GitLab pipelines), which means the impacted hosts should be limited in general.
Merged to 8.x-1.x
For 2.x we do not have the same design concerns; however, we can learn from the 8.x-1.x fix and consider adding hook_requirements() checks for our core security concerns.
TfaUserSetSubscriber::class would be our prime candidate to monitor, to warn if another subscriber is before it so that site owners can validate security. We would likely want to implement this with an 'approve' list so site owners can approve a subscriber to move it from warning to notice.
We may also wish to monitor the logon routes unless 📌 Evaluate restoring using a form alter instead of extending UserLoginForm Active is adopted; however, unlike 8.x-1.x, where that monitoring is security critical, in 2.x it should be only advisory, as TfaUserSetSubscriber provides the backend security.
Was primarily allowing a little more time for anyone who might want to provide feedback, since this is a decently significant code change.
This is one of the areas that has had security vulnerabilities in the past, including two weeks ago (note: the report that triggered that SA would not have been possible either with the existing 2.x branch or with the 2.x branch plus this code).
That said I believe this is ready to go in and anything else can be followups.
Letting the final test run complete after removing unused code, then I will merge.
Regarding Simple Password Reset:
Recent data indicates Password Reset Landing Page (PRLP) may be better aligned to the API and provides a similar feature set. Currently PRLP does not work with this code; however, PRLP may be in a better spot than Simple Password Reset to modify their process to work with TFA.
Looks like this was sitting on my machine waiting for the deprecation bug fixes.
Uploading current progress.
I'm not seeing any explicit notes about cached items expiring and being cleaned up; we will want to make sure it's not a runaway cost.
IIRC GitLab currently expects that the storage layer will handle this (add lifecycle rules in S3).
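As a sketch of the kind of lifecycle rule meant here (the prefix and retention period are made-up example values, not a recommendation), an S3 lifecycle configuration could look like:

```json
{
  "Rules": [
    {
      "ID": "expire-gitlab-cache",
      "Filter": { "Prefix": "gitlab-cache/" },
      "Status": "Enabled",
      "Expiration": { "Days": 30 }
    }
  ]
}
```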
Considering the proposed config change has been waiting since September of 2023:
What is infra's opinion on this? Is this something they actually plan to enable, given its known positives and negatives?
The currently posted policy only strictly talks about ownership.
Should the policy be updated to note that if a trademark owner raises objections, site moderators may not add maintainers or co-maintainers?
It’s been suggested in #3304661-8: Change how modules and submodule dependencies are handled to have more consistent namepsaces [DRAFT]. → that this should be changed.
Or does this only happen when the submodule has its own composer.json?
It impacts all sub-modules.
I want to reiterate that this essentially just opens the 'next major' split early; we do so every major release, so the process isn't abnormal in that regard. IMHO this tends to align with having a main branch, which is intended to eventually be done for core.
Pros:
New API can be implemented immediately.
No need to postpone API-breaking changes or major system overhauls to the branch split (issues can be worked on when devs have the time rather than crammed into a shortened release window)
The ecosystem can begin testing with the new major, since it will exist much sooner (possibly reducing the number of critical bugs to fix during alpha/beta). This goes beyond just PHPStan.
Less followup tickets (possibly less cluttered issue queue).
Cons:
An additional branch to maintain.
One way to avoid having two versions of an MR would be to just not require it for the next-major commit.
Considering we have a fairly standardized deprecation format, it may be possible to work the issues with deprecations included and let Rector strip them to create the mainline MR. This could be tested and possibly become an "on commit" task if it proves reliable.
Additionally, a healthy workflow inside the MR can make this easier: if developers separate deprecation commits from non-deprecation commits, it could be a simple rebase with excluded commits. This would take a bit of training; however, it is reasonable to expect we could accomplish this long term.
We already somewhat have the work separated: I frequently see issues get through review only to later be tagged "needs deprecation" before the issue can be committed. For those issues it is not much more effort to cherry-pick the current work (based on the mainline branch) onto the released branch and add the deprecation.
Bumping back to the top of the queue to remind myself to work on this issue this week.
Sounds like this may be a new default in recent SDK versions:
https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/s3-checksums....
Version 3.337.0 was released January 15th. Rolling back should work as an initial fix.
This sounds like Backblaze being non-compliant with the full S3 protocol, which they admit to:
https://www.backblaze.com/docs/cloud-storage-s3-compatible-api
This isn't a new feature; it has been around since at least the 2.x SDK and has just now been enabled by default. I would not want to try to disable this feature for all systems, and would like to avoid adding a configuration option for something that seems reasonable to expect to be standard.
I wonder if this can be disabled via a few of our hooks and this can go into documentation?
I am unclear on what you are trying to say in #5. All that was posted was an error message indicating an AWS Client configuration error.
teacher.-bucket
I've checked with AWS and they confirm: Bucket name must not contain dash next to period. This appears to be the first error based on the details provided.
for any reason, the region saved in the sf3fs config form is eu-south-2.
https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLocation.html
As noted in #2, we use the AWS API to obtain the location. That value comes directly from Amazon Web Services.
Still awaiting answers to the other questions from #2 to attempt to move this forward.
Note: I am assuming you are not using a custom host, if so that should be removed first.
the region is blocked and it is showing eu-south-2 after saving the configuration.
We obtain this from AWS. AWS believes the bucket is in eu-south-2; if this is incorrect, then AWS internally has a significant problem.
teacher.-bucket
Is this the actual bucket name? IIRC a hyphen is not permitted at the start of a hostname label; was this supposed to be teacher-bucket
or is this an older path based only bucket?
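To make the naming rule above concrete (a quick illustrative check, not the module's validation code), a test for a dash adjacent to a period could look like:

```python
import re

def dash_next_to_period(bucket: str) -> bool:
    """Return True if the name contains '.-' or '-.', which AWS rejects."""
    return re.search(r"\.-|-\.", bucket) is not None

print(dash_next_to_period("teacher.-bucket"))  # True
print(dash_next_to_period("teacher-bucket"))   # False
```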
I don't know why the enpodint is pointing to eu-south-2
I've seen AWS redirect across regions. I suspect we are connecting to us-east-2 and AWS is redirecting us to eu-south-2 (as that is where they believe the bucket is located) where the region field (set in the API locally) now indicates a mismatch and AWS rejects the request.
Other possible thoughts: is this a multi-region access point or an accelerated access point?
Should fix src/S3fsFileService.php at the same time.
And might as well fix all references to 'the the' as it appears to have slipped into the code in a few places.
tests/src/Functional/S3fsTest.php: $this->assertNotFalse($s3_file2, "Copied the the first test file to $test_uri2.");
tests/src/Functional/S3fsTest.php: $this->assertNotFalse($img_file, "Copied the the test image to $img_uri1.");
tests/src/Functional/S3fsImageStyleControllerLockdownTest.php: $this->assertNotFalse($img_file, "Copied the the test image to $img_uri1.");
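A quick way to sweep for other doubled words like these (a hypothetical helper, not part of the test suite) could be:

```python
import re

# Matches an exact repeated word such as "the the" (case-insensitive).
DOUBLED = re.compile(r"\b(\w+)\s+\1\b", re.IGNORECASE)

line = 'Copied the the first test file to $test_uri2.'
print([m.group(1) for m in DOUBLED.finditer(line)])  # ['the']
```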
In our case we login via Rest and not the normal Drupal login form.
I'll note that per SA-CONTRIB-2023-030, REST should be disabled in 8.x-1.x deployments. The REST login protection in 8.x-1.x is a failsafe to block access, not to allow access.
2.x has the foundations of allowing REST to function (however, it is not yet suitable for deployment outside of development labs).
I am not sure why in isTfaDisabled the code only checks for status and plugins in the user_tfa_data.
A number of factors are involved.
IIRC I have seen times where the data did not get set for a plugin, meaning we can't trust the user data and have needed to fall back to the plugins (it creates a mess for the rest of the code).
From a 'fail secure' standpoint (ref: 🌱 [META] Convert fail-open code executions to fail-secure alternatives Active) we should not operate on an assumption that 'no data means not enabled'. Longer term, I've theorized we likely need a specific record on all users that explicitly states "no data here" (e.g. for this case it would be something like a tfa_disabled = TRUE record); however, that would need to be done in 2.x as a data model change.
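To make the fail-secure idea concrete, here is a language-agnostic sketch in Python (the tfa_disabled field is the hypothetical record described above, not an existing API): missing data is treated as "cannot prove TFA is disabled", never as "disabled".

```python
from typing import Optional

def is_tfa_disabled(user_tfa_data: Optional[dict]) -> bool:
    """Fail secure: only an explicit record may disable TFA."""
    if not user_tfa_data:
        # No data: we cannot prove TFA was intentionally disabled,
        # so treat the account as still requiring TFA.
        return False
    return user_tfa_data.get("tfa_disabled", False)

print(is_tfa_disabled(None))                    # False -> TFA still enforced
print(is_tfa_disabled({"tfa_disabled": True}))  # True  -> explicitly opted out
```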
As jq is installed as a dependency, here's the MR
I generally recommend being explicit.
For all we know, next year yq could change and not use the jq package (highly unlikely; however, if they do, we lose jq and don't realize it).
I mean adding yq requires to add python interpreter
The Python interpreter is already on the images and is used by gitlab_templates to provision the mkdocs stage (in other words: we are not likely to accept the removal of the Python interpreter anytime soon).
with few dependencies,
Fair, it may add more dependencies into Python.
If there is a binary XML->JSON->XML and YAML->JSON->YAML converter, that could be a reasonable alternative to the yq package.
IMO it's useless for CI image
https://git.drupalcode.org/project/gitlab_templates/-/blob/main/scripts/... is an example of where we are creating a 51-line script for what could likely be done more efficiently and reliably with a one-line yq command. I've had other discussions in Slack where we are making the jobs harder to maintain and more fragile (dependent upon another HTTP GET, and breakable by a mismatch between the version downloaded and the version of the GitLab template) by using external PHP scripts (these are incidents that have occurred in the past).
Downloading extra scripts is somewhat of an anti-pattern in GitLab, and the tricks the gitlab_templates team currently uses do not exist for GitLab Components, meaning the ability to inline commands can become even more important long term.
The script alone proves that we need to make changes to XML files as part of the CI process, which renders the 'useless' argument moot and leaves only 'is it worth the cost vs. savings'.
I think jq is the most common and it should be enough
The yq package adds support for XML, which is likely just as important given we have now been modifying the phpunit config to support differences in core versions. As noted above, yq uses jq as the parsing engine and is primarily a converter.
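For a sense of scale, the kind of single-attribute phpunit.xml tweak being discussed (the attribute below is an illustrative example, not the actual change) is a one-liner in yq's XML mode, or a few lines of stdlib code:

```python
import xml.etree.ElementTree as ET

# Illustrative phpunit.xml fragment; the attribute changed is a made-up example.
xml = '<phpunit cacheResult="true"><testsuites/></phpunit>'
root = ET.fromstring(xml)
root.set("cacheResult", "false")
print(ET.tostring(root, encoding="unicode"))
```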
who can review dependencies and security consequences
A good question; to my knowledge this has not been requested before in this queue and there is no documented review policy. Given the lack of policy, it would likely need to be done by a queue admin (yourself?), as none of us can speak to an undefined standard.
Just a note: this should likely have been done as a v2-only change, as it can still break pipelines.
As noted in 📌 Ignore all git files in artifacts Active, I have used jobs in the past that depend upon git being present in the modules folder (my workflow involves copying all of the module's code to the custom folder and working out of the custom folder for all later steps, as it better aligns to core design and avoids symlink faults).
and the CI artifacts can grow really big so it fails.
The better solution for this would likely be to not depend upon the artifacts being built until needed. We need to deal with the lack of caching on D.O.; however, over in Quasar I've been designing (in local testing) so that my PHPUnit and PHPStan stages can be built "on demand" without storing the large asset files. Similar can be done as a v2 for gitlab_templates, and is somewhat a design expectation if the templates ever move to components.
Quick glance: it appears the tests run with just a single plugin, so they would not hit this condition. That is reasonable; the tests as written are plugin-specific and should not be testing the rest of the general usage.
Ideally, yes, this would be a dedicated test that performs a multi-step setup to validate the multi-step form, ideally built off the test plugins and not the production plugins.
This is true, but it gives you the option of not continuing with the install in the first place if it was an actual dealbreaker, and a hint that it's possible to uninstall things later.
This seems like it could be the best method, if the owner does not consent throw an error that advises them it is required.
The checkbox could be removed making it a non-option, however does that actually fill the role of consent? (the user will be warned, however if they were not compelled to agree did they actually knowingly consent?)
If you're willing to roll a stable release of https://www.drupal.org/project/site_key_mutator → , could we not just include that with Drupal CMS
I can do this if it ends up being the route that is needed/useful.
Re #5 missing library 🐛 ParagonIE\ConstantTime\Encoding dependency not installed from TFA Needs review
The 2.x branch was tagged as alpha way back when I was an active maintainer. Back then, the objective was to tag version 8.x-1.0 and then never release anything anymore in that branch. Then life got in the way...
My recollection is that part of this was that you were unsure how to add D10 support to 8.x-1.x and believed it was not possible due to core API changes. Sites (for better or worse) adopted this early alpha into production, eager to deploy D10 with unfinished code, knowing there were risks to doing so. Once the solution was added to support D10 in 8.x-1.x, the critical need to release a 2.x (and for sites to run 2.x) went down, allowing us to spend the time on 2.x to 'get it right' and find the 'best' code fixes rather than the 'simple quick' fixes we have been putting in 8.x-1.x.
When I tagged alpha3 (and alpha4 for a quick fix after) there were 1500 sites using the alpha2 tagged version that would otherwise be blocked from upgrading to Drupal 11. Not tagging those releases is actually a maintenance problem.
Personally, I wanted to get those users off 2.x; that was the reason for the warnings in the releases and on the project page, and the desire to have the DST flag alpha2 as insecure. At the time alpha3 was tagged there was (to my knowledge) nothing preventing a rollback to 8.x-1.x; see 💬 Can we switch site from 2.0.0-apha2 to 8.x-1.7? Active . Going off my notes in that issue, alpha3/alpha4 may now complicate that scenario (although that is the responsibility of the site owner for using an alpha).
For actively developed projects, periodic releases make sense to me. They encourage testing and help with the QA process, as they allow for a named version to be tested.
There were many commits added to the 2.x branch that were not available in a tagged release and should be.
I disagree with this; tagged releases are for when you want more general public usage, and 2.x has been in a "DO NOT USE" state. The only users who should have it deployed are those who are developing TFA 2.x, and those users should be running off Git HEAD, as that is where the merge targets will be; how code interacted two commits prior is not relevant, only how it interacts with the branch HEAD.
Yes, we eventually want more active usage; however, that is once all the security issues are closed out, any major architectural decisions are made, and general use is believed to be working.
I would suggest the 2.x branch is removed from the project page then, as it might be expected that users see a D11 version and they read only that.
I had considered that option in the past and have been on the fence about it. It's generally custom on D.O. to list the dev branches as 'supported', indicating they are being worked on and that they should be the target of new changes for backport to an older release. I would be a bit concerned about the signal it sends regarding the fact that 2.x is indeed intended to be the eventual solution, especially since I consider 8.x-1.x non-viable long term.
If you do decide that the 2.x changes will never happen,
Currently there are no plans for 2.x not to happen. It has just been in a state where it has needed larger architectural changes, some of them quite massive, to get rid of what I see as significant design faults/limitations of 8.x-1.x. That said, I see 8.x-1.x as non-viable long term, given changes found in other issues and the triggering of yet another Security Advisory for 8.x-1.x that 2.x was not vulnerable to. There could be a decent argument for releasing 2.x with the known vulnerabilities, rapidly marking 8.x-1.x as unsupported, and very soon after spinning off a 3.x, as we still have more API-breaking changes on the roadmap. I had avoided that
Should this issue be moved there?
I don't see why it should be.
The code is available in standard drupal/core; however, there is a prompt to reject the tracking, while Drupal CMS removed the prompt and disclosure.
Users download Drupal CMS, and Drupal CMS enables the modules in its default install without user consent. It follows that Drupal CMS (and Drupal.org/the Drupal Association, as the collectors of the tracking data and sponsors of Drupal CMS) holds the responsibility for the feature being enabled.
Linking a few issues related to discussion of undisclosed trackers being considered a breach of trust. While these discussed contrib and were more significant in how they were implemented, I would maintain Starshot is required to be held to the same, or even higher, standard.
Was reviewing this a week ago, the main change I wanted was to make classes final. I have pushed those up today.
This could use additional eyes.
If fixing this will take a while then I guess the project page should be updated and maybe link here.
Indeed. Although the roadmap issue may be better 🌱 Roadmap for 2.0.0 release Active as the needs continue to evolve (looks like I need to update that issue, will give it a review this week).
I am not sure why a generally inactive maintainer felt compelled to push out two new alpha releases on the 2.x branch when the branch is not suitable for use anywhere except a development lab (which should be deployed as 2.x-dev, not a tagged release).
A cursory glance indicates that "VWO" may be a registered wordmark with the USPTO (registration number 4663137).
As it appears the project may currently be maintained by VWO representatives, I am adding the following information to aid VWO in locating the tools D.O. provides them to enforce their brand protection, should they object to this change in the future:
As @mandclu was only added as a co-maintainer the existing maintainers (with permission to manage maintainers) may remove them at any time through Manage VWO Maintainers → .
Additionally, a representative authorized to enforce the VWO trademark may utilize the policy located in How to become project owner, maintainer, or co-maintainer - Transferring a project that used a trademark in its short name → to instruct Site Moderators not to assign privileges to other users in the future.
@mandclu: Given the above, you (or Acquia) may wish to consider forking the project instead of committing code as a co-maintainer (unless you have consent from VWO). Additionally, I will note that any commits made to the repository on D.O. could be considered a derivative work, and the GPL license does not grant derivatives a license to use trademarks (I have not reviewed the code to evaluate whether there is any trademark usage in the codebase that would not comply with fair-use exemptions).
@cmlara have you reviewed what auditing logs exist in GitLab's codebase for events like this? That seems like a useful area of research.
The basics are the project activity event log (sample: https://git.drupalcode.org/project/commerce_purchase_order/activity) which is also available via the API.
It is not an indefinite log (entries are removed after 3 years for performance). It was mentioned in ✨ Add log of mainatiner level changes to project pages and provide notice when the Project Ownership queue adds/promotes a user Active that there are some limitations in the activity feed compared to D.O. permission mapping, where (based on how we use GitLab) not everything on the D.O. side will register a change on GitLab, such as a change of owners.
There are webhooks for pushing data into an external system; however, I have not investigated the specific data they could push in relation to permission changes at the project level.
Note for context: I suspect @fathershawn did have the rights requested; however, as a security engineer concerned about the overall safety of the Drupal ecosystem, and given the lack of logs that can prove otherwise, I do wish to raise the following points in regards to #12.
We can look at commits... and at releases... and see that fathershawn had the access described in the issue summary
Neither of those would confirm that fathershawn had:
- Access to modify maintainers
- Access to edit the project page
again as recently as this month
That was after @avpaderno granted them permission. Those need to be ignored.
We can look at the history of the project page and see the previous edit was 5 Apr 2024; that leaves around 10 months where a maintainer, or another user with enhanced powers on D.O., could have revoked the access to edit the project page. I do not have access to confirm the maintainer has not actually logged in; however, 💬 Request to become project owner Active mentions an admin reaching out, which makes me wonder if their last login was <1 year ago?
Even assuming ability to edit the project page that would not prove the user had the ability to modify users.
We can look at https://git.drupalcode.org/project/commerce_purchase_order/-/project_mem... and notice there is a disconnect: some of the maintainers are still listed as full maintainers while their D.O. permission is no commit privileges. However, that does not confirm those changes to the user were not made sometime in the past (2020, when @guillaumev had their last activity?).
@fathershawn being listed as 'maintainer' does imply that at some time they were elevated to have the ability to modify users; however, it does not confirm they still had the D.O. permission at the time of the request (changes on D.O. downgrading permissions are not propagated to GitLab, and maintainers have been known to miss downgrading users in GitLab, which, if @fathershawn is to be taken at their word, may be proven by the other users still having maintainer-level access).
Without an audit log, I would suggest that is not a reasonable level of proof, when it comes to supply chain integrity, that @fathershawn had all the permissions listed at the time of the incident.
Note: I'm not saying undo this, as the permissions ended up being granted under a different policy rule allowing changes; I am just indicating that #12 has some flaws in its assertions that the external data proves the user had the permissions claimed on the day of the incident.
Overall looks good.
Did provide a small suggested change in the MR on the UI language that may more accurately describe the situation regarding the field, and suggest we add the field to the config form test to avoid a regression of it being deleted.
Moving this over to a task as that was a Drush beta 'bug'.
This is something we will need to deal with eventually. It might be possible to avoid working on it at all in the 8.x-1.x branches depending upon timelines.
Thank you @avpaderno
I am not seeing the errors on just a drush cr. I am testing on PHP 8.4 and Drupal 11, receiving Drush 13.3.3.
Looking at Drush source the old interface definition still exists for compatibility https://github.com/drush-ops/drush/blob/660d7ec654d3e37472ec479a993ea643... and has been present since Drush 13.0.0-beta4.
What version of Drush is being installed on your deployment?
Crediting reporter.
We appreciate that this issue was submitted through the private tracker for evaluation prior to public disclosure.
The scenario I believe you are alluding to where an owner agrees to an unqualified individual is being discussed in 📌 [META|POLICY] Think of a way to make it less easy to become a (co-) maintainer Active .
We test this as part of the CI runs in S3fsTest::testImageDerivative. I find it unlikely to be inside the module if file access is otherwise working.
My initial thoughts would be is there any other changes as part of any other upgrade to modules the site uses as part of image style generation?
A plugin accidentally using $streamWrapper->realPath() would be a not-so-obvious change that can cause issues with non-local stream wrappers.
+1 on switching as well.
This would also help the CVE process, as it is preferred that CVEs be scored with CVSS (they can be scored with a priority metric; it is just not preferred).
As well as documentation there is also an entry level training course https://learn.first.org/catalog/info/id:126,cms_featured_course:1
I was intending to circle back after the last contrib SA I worked on and post an example score:
SA-CONTRIB-2024-043: TFA: Session Fixation:
Under Drupal’s scoring system this scored Critical 15 ∕ 25 AC:Complex/A:None/CI:Some/II:Some/E:Theoretical/TD:All
Cross scoring to CVSS:4.0 the score would be closer to Low: 2.1 CVSS:4.0/AV:N/AC:H/AT:N/PR:N/UI:A/VC:L/VI:L/VA:N/SC:L/SI:L/SA:N/AU:N/R:U/V:D/RE:L/U:Clear
This is an extreme example, and I've seen some SAs score higher under CVSS; however, in all cases CVSS told a more complete story.
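For anyone unfamiliar with the vector notation above, the string is just slash-separated metric:value pairs; a minimal parse (illustrative only, not a scoring implementation):

```python
# Split the CVSS v4.0 vector quoted above into its individual metrics.
vector = ("CVSS:4.0/AV:N/AC:H/AT:N/PR:N/UI:A/VC:L/VI:L/VA:N"
          "/SC:L/SI:L/SA:N/AU:N/R:U/V:D/RE:L/U:Clear")
prefix, *metrics = vector.split("/")
parsed = dict(m.split(":", 1) for m in metrics)
print(prefix)        # CVSS:4.0
print(parsed["AC"])  # H  (Attack Complexity: High)
```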