Recent comments

🇺🇸United States cmlara

I believe I have already merged Vault MR19; however, feel free to create a new one to test with.

(I haven’t looked at any of the proposed code fixes yet.)

🇺🇸United States cmlara

Looking at this, and keeping 📌 Document swapping out the client cache for increased performance Active in mind.

If we allow changing the cache time on top of the already (undocumented) ability to change the cache storage, the ability to manage that cache is also necessary.

Adding cache clearing support as part of the Drupal cache clear, and via the module config page.
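A minimal sketch of the Drupal cache clear side, assuming a hypothetical vault.cache_bin cache backend service (the module's real service name may differ):

/**
 * Implements hook_cache_flush().
 *
 * Sketch: clears cached Vault client responses whenever Drupal's caches
 * are flushed. 'vault.cache_bin' is a hypothetical service name.
 */
function vault_cache_flush() {
  \Drupal::service('vault.cache_bin')->deleteAll();
}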

🇺🇸United States cmlara

I thought all GitLab related things were under Infrastructure

The merge request GitLab message templates for Core can be set by any user with the GitLab Maintainer permission on the Core project (all full core committers); there is no need to involve anyone outside the Core project.

I did forget that there might be an on-commit webhook. Does that enforce a commit message format for core?

Adding the needs IS update tag for clarification on what is being targeted for change.

🇺🇸United States cmlara

I believe this should be D.O. Customization, not infra.

Updated title to reflect the decision was only for Core.

🇺🇸United States cmlara

Visually this looks good to me; with the tests from #15 it appears safe to call this RTBC.

Note: To clarify, this is not intended to stay long term. Ideally the Infra team finds a solution to our timeout issue in the near future; as soon as they do, we should revert this commit.

This is intended as a tradeoff: potentially higher CI runtime on valid failures, in an attempt to reduce the amount of maintainer/contributor labor spent on failures outside their control.

🇺🇸United States cmlara

Looks good to me, merged to dev. Thank you.

🇺🇸United States cmlara

While in theory the concept makes sense, this likely does not provide as much benefit as Facilitate 2FA+MultiFactor compatibility (2FA/two-factor -> MFA/multi-factor) Active would for the majority of the ecosystem.

SSO is the solution that primarily benefits from this proposal: where all forms of authentication are handled outside of Drupal itself, there is no need to keep the existing user/login system. However, if the site needs to keep an admin 'backdoor' password login, it would still likely want the core password module, rendering the solution less than ideal.

WebAuthn (passkeys' root implementation protocol) should in theory be very easy to implement in the existing system: decorate the authentication service, and have the login UI submit the login as

User: webauthn:tokenid
Pass: webauthn:cryptohere

The webauthn validation service knows to intercept this and validate the credentials. The biggest limitation in the current design is that there is no ability to 'plug in' when core/contrib wants to re-validate an already authenticated user (sensitive-area re-authentication).
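A minimal sketch of that decoration approach, assuming a hypothetical module namespace (the cryptographic validation itself is elided):

namespace Drupal\webauthn_example;

use Drupal\user\UserAuthInterface;

/**
 * Decorates the core user.auth service, intercepting 'webauthn:' prefixed
 * credentials and delegating everything else to the password-based service.
 */
class WebAuthnUserAuth implements UserAuthInterface {

  public function __construct(protected UserAuthInterface $inner) {}

  public function authenticate($username, $password) {
    if (is_string($username) && str_starts_with($username, 'webauthn:')) {
      // Verify the signed challenge against the stored public key and
      // return the uid on success, FALSE otherwise (elided).
      return $this->validateAssertion(substr($username, 9), $password);
    }
    // Not a WebAuthn login: fall through to the normal password check.
    return $this->inner->authenticate($username, $password);
  }

  protected function validateAssertion(string $token_id, string $assertion): int|false {
    return FALSE;
  }

}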

To solve the 'prompt' problem we likely need a standardized identity re-validation widget that can be embedded whenever we want to confirm identity. A rip-and-replace would absolutely need to provide this feature as part of its API.

Unless the site is going to go 100% webauthn-only (anything except specialty sites likely will not), you will still want to keep the user login workflow around, leaving the password plugin as a necessary feature for quite some time into the future.

Other solutions, like TFA, benefit from not fully replacing the password login system. TFA would not want to remove the password plugin (while we currently override a number of core classes in TFA, we are working to move to form alters to reduce the maintenance burden). The last time TFA decided to override core classes, it was under the pressure crunch of having tagged a stable release while a known vulnerability was present and publicly disclosed. We overrode core as the fast fix to allow time to resolve the deeper fundamental flaws (which have since been fixed).

The biggest problem I hit as a TFA co-maintainer is that ordering means another module might try to authenticate the user before TFA processes the login, leading to access bypasses. We now have an event subscriber to handle this in TFA; however, it would not be compatible with the webauthn authentication example I have given: in TFA we would not know that webauthn did a second-factor validation, or that it was a 'non-password' login. TFA's architecture can work with webauthn solely as a 'second factor'. Facilitate 2FA+MultiFactor compatibility (2FA/two-factor -> MFA/multi-factor) Active would (if designed well) render our need for a security event controller moot, as Core would always be providing that validation before login.

Webauthn itself is (sometimes) a form of two-factor: it fits into an MFA workflow as a second token validation and can assert to an MFA API that MFA validation has occurred.

TLDR:
We likely want a standardized, extensible API rather than the rip-and-replace system currently proposed in this issue. Perhaps the 'red-carpet' module provides that and 'password' builds off it.

🇺🇸United States cmlara

This issue is a bit of a convoluted problem.

Personally I view this issue as having shared root blame: TFA should have prevented the login from occurring on its own (we now have that fixed in the 2.x branch), Miniorange never should have been able to bypass us, and it isn’t solely a security failing on our side that they could. Equally, Miniorange should not have bypassed TFA’s login checks.

More than anything, I have been leaving this open in our queue as a marker of a known fault in 8.x-1.x that can’t be fixed without the major architecture changes of 2.x.

Miniorange (and other login modules) across the Drupal Ecosystem need to be fixed up to work together, and it ultimately might require core embracing Facilitate 2FA+MultiFactor compatibility (2FA/two-factor -> MFA/multi-factor) Active .

🇺🇸United States cmlara

cmlara created an issue.

🇺🇸United States cmlara

D7 branch as well?

It's also annoying that GitLab uses script_failure for two things:

Thankfully to my knowledge we have not seen these in production. If we get to this level of failure we likely have a very significant infrastructure issue.

Although if the root cause of the composer script failures is a GitHub block, we may eventually see issues with loading 3rd-party images from the GitHub registry.

🇺🇸United States cmlara

As the "current" composer job is a dependency on other jobs ran in the pipeline, I'd say that this works as designed.

I’ll note there is no reason that has to be the case for linting jobs. Static analysis and tests (Nightwatch/PHPUnit) do need a successful composer run; linting stages can be designed to operate standalone.

🇺🇸United States cmlara

I doubt we want to set the script failure retry globally; composer-base would likely be better.

Leaving as NR. The change will technically work; however, it increases the probability of re-running other stages. See this downstream pipeline with PHPUnit failures: https://git.drupalcode.org/project/api/-/pipelines/328919/builds

🇺🇸United States cmlara

Adding a requirement check might be a bit excessive for this change, so I decided against it for the time being. Long term we may want to add such a check and deprecate the cleartext plugin.

While we could technically do so now, as we are in alpha, the plugin has been around since the original 1.x alpha branches, and it would be aggressive to remove it at this juncture while trying to bring the module to a stable release, especially considering that the Lockr module will go unsupported at the end of November when the service shuts down.

Added a new static memory plugin that can be used instead of encrypted storage for sites that do not expect to use Leases frequently (some/many? sites may never use them) to avoid the need for setting up the Encrypt module.

🇺🇸United States cmlara

I agree. Given there is no official standard at the moment and the final standard is still very much undecided, removing the sniff completely appears to be the best choice.

Updating IS/title to reflect the new proposal.

🇺🇸United States cmlara

Given no arguments have been made as to why we should not require updating the site certificate store, I'm going to close this as won't-fix.

Additionally, as we use an injected client, this could be changed outside Vault by overriding the core http_client service (possibly by core itself, as it provides the http_client service).

🇺🇸United States cmlara

#18 makes me realize: isn't the access policy API a reason NOT to implement this? If I understand correctly, if this were committed we would have a system (on the User entity) that is outside the access policy API's ability to control permissions.

What currently makes a role that has all permissions (an admin role) special is the UserRolesAccessPolicy.

while I wanted to pass the 2nd argument to CalculatedPermissionsItem constructor (which is “is admin”), I realized that the user account is missing this simple method.

I am not an expert on the new access policy system that has been deployed; however, the constructor documentation states bool $isAdmin: (optional) Whether the item grants admin privileges. and the linked documentation page says You can add an array of permissions names or, alternatively, flag that the account should have admin access by setting the second parameter to TRUE. (See SuperUserAccessPolicy in core.)

To me this reads as if you are explicitly defining the permission item as an admin permission. I would expect that normally you would leave this as FALSE, and only set it if you yourself are declaring a user an admin based on your own unique factors, similar to SuperUserAccessPolicy. Usually the isAdmin in this case would have been set by UserRolesAccessPolicy.
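A minimal sketch of such a policy, modeled on core's SuperUserAccessPolicy (the module name and the trust check are hypothetical):

namespace Drupal\my_module\Session;

use Drupal\Core\Session\AccessPolicyBase;
use Drupal\Core\Session\AccountInterface;
use Drupal\Core\Session\CalculatedPermissionsItem;
use Drupal\Core\Session\RefinableCalculatedPermissionsInterface;

class TrustedListAccessPolicy extends AccessPolicyBase {

  public function calculatePermissions(AccountInterface $account, string $scope): RefinableCalculatedPermissionsInterface {
    $calculated_permissions = parent::calculatePermissions($account, $scope);
    if (!$this->isOnTrustedList($account)) {
      return $calculated_permissions;
    }
    // No explicit permissions, just the isAdmin flag; mergeItems() combines
    // this with the items produced by the other policies.
    return $calculated_permissions->addItem(new CalculatedPermissionsItem([], TRUE));
  }

  protected function isOnTrustedList(AccountInterface $account): bool {
    // Site-specific determination (elided).
    return FALSE;
  }

}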

RefinableCalculatedPermissions::mergeItems() would appear to back up this assertion: it merges the admin flags from many sources into one, and if any source asserts the user is an admin, that holds true. You would not need to 'mint' them as an admin unless you revoked all the existing permissions.

Yes, previous policies can be completely overridden; however, I'm having trouble imagining a scenario where a site would want to override UserRolesAccessPolicy's isAdmin determinations just to re-implement the same check.

🇺🇸United States cmlara

if ($account->id() == 1) {...} was horrible, and thankfully will now slowly start going away.

However, just because we can add isAdmin(), does that mean we should?

If a user is an admin they will already have all the permissions the module grants. Would it not be better for modules to check in the context of the module's permissions rather than an all-encompassing special exit for isAdmin()? I believe this was somewhat implied in #4.

I am of the mindset that if you're falling back to "check if it is the god role" rather than "is the user authorized to do this", there is likely something wrong with the access control implementation (either in design, assignment, or validation).
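In other words, a plain permission check like the one below already covers admins, because the access policy system grants them every permission (permission name hypothetical):

// An admin role passes this check via UserRolesAccessPolicy; no special
// isAdmin() exit is needed in module code.
if ($account->hasPermission('administer my_module settings')) {
  // Perform the privileged operation.
}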

🇺🇸United States cmlara

Everything except script_failure is already configured as a global default to retry two times.

https://git.drupalcode.org/project/gitlab_templates/-/blob/e8fa5b6816adf...

All I am suggesting is adding script_failure to the existing list for the composer jobs.

Ideally we might have been able to limit it to a specific exit code; however, I am not aware of a specific code for the timeout issues we encounter.
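In template terms the suggestion looks roughly like the following sketch (the exact list of existing when: values lives in the linked template):

composer-base:
  retry:
    max: 2
    when:
      # ...the failure types already retried by the global default...
      - script_failure  # proposed addition, scoped to the composer jobs

Newer GitLab also supports retry:exit_codes to narrow a retry to specific exit codes, which is what we would use if a timeout-specific code existed.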

🇺🇸United States cmlara

We have a passing test stack now. Setting needs review to get eyes on below:

$ grep -R Json::decode *|grep -v Test

The following entries are already converted to handle the exception (based on failing tests):

core/modules/system/src/SecurityAdvisories/SecurityAdvisoriesFetcher.php:        $json_payload = Json::decode($response);
core/modules/jsonapi/tests/src/Traits/GetDocumentFromResponseTrait.php:      $document = Json::decode((string) $response->getBody());
core/modules/media/src/OEmbed/ResourceFetcher.php:        $data = Json::decode($content);
core/modules/media/src/OEmbed/ProviderRepository.php:      $providers = Json::decode((string) $response->getBody());

We may want to convert the following areas of code, which have no apparent failing tests, to catch the exception. If so, do we need to do that here, or is that a follow-up? If we need to do it here, can we skip the test gate? These are well out of scope of the bug being reported here.

core/modules/announcements_feed/src/AnnounceFetcher.php:        $announcements = Json::decode($feed_content);	
core/modules/jsonapi/src/Controller/EntityResource.php:      $document = Json::decode($request->getContent());
core/modules/jsonapi/src/Controller/EntityResource.php:    $body = Json::decode($request->getContent());
core/modules/ckeditor5/src/Plugin/Editor/CKEditor5.php:    $toolbar_items = Json::decode($json);

Unsure whether we want these to be patched, or whether we want the exceptions to rise through the stack:

core/modules/package_manager/src/PathExcluder/UnknownPathExcluder.php:    $repositories = Json::decode($repositories);
core/modules/package_manager/src/PathExcluder/UnknownPathExcluder.php:    $extra = Json::decode($this->composerInspector->getConfig('extra', $packages['drupal/core']->path . '/composer.json'));
core/modules/package_manager/src/Validator/AllowedScaffoldPackagesValidator.php:    $extra = Json::decode($this->composerInspector->getConfig('extra', $path . '/composer.json'));
core/modules/package_manager/src/Validator/ComposerPatchesValidator.php:      $extra = Json::decode($this->composerInspector->getConfig('extra', $working_dir));
core/modules/package_manager/src/Validator/PhpTufValidator.php:    $repositories = Json::decode($repositories);
core/modules/contextual/contextual.module:    $element['#links'] = Json::decode(rawurldecode($encoded_links));

package_manager: We might want those errors to show; in either case, as the module is experimental, can we defer that to the package_manager project to figure out before it makes it to stable?

contextual.module: This appears to be developer-facing; we may want this to rise through the stack.

Suggest we do not consider patching:
core/modules/media/src/OEmbed/ResourceFetcher.php: return Json::decode($data);
This is decoding data that just went through encode(); if this fails, something is significantly wrong with the deployment.
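For reference, the conversion pattern is roughly this sketch, assuming the updated Json::decode() surfaces invalid payloads as a \JsonException instead of returning NULL:

use Drupal\Component\Serialization\Json;

try {
  $data = Json::decode($content);
}
catch (\JsonException $e) {
  // Handle the malformed payload the way the caller previously handled a
  // NULL result, e.g. log and bail out gracefully ($this->logger assumed).
  $this->logger->error('Malformed JSON payload: @message', ['@message' => $e->getMessage()]);
  return NULL;
}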

🇺🇸United States cmlara

I can confirm I'm no longer seeing 404s on the websockets, and ping messages appear to be processing unobstructed, allowing the UI to adjust as needed to changes in an MR pipeline.

Appears reasonable to call this RTBC as is. Any other errors can be raised as a new issue.

🇺🇸United States cmlara

Removing ['allowed_classes' => FALSE] fixed the majority of the failures.

On a cursory glance many of the remaining failures appear to be test design failures in JSON API related tests:

Json::decode((string) $response->getBody()); appears to be called in several places where getBody() may be an empty string (which is NOT a valid JSON string).

ResourceTestBase::assertResourceResponse() (a basic helper method) appears to trigger this, as it will attempt to decode the response body for use in the assertion message without any knowledge of what the string value will be.

Later in assertResourceResponse() it appears the test was (incorrectly) written to depend upon GetDocumentFromResponseTrait::getDocumentFromResponse() returning NULL on invalid JSON, which is off-spec for the API.

In Oembed:
core/modules/media/src/OEmbed/ProviderRepository.php:116 appears to check for failure by looking at whether the value is empty rather than looking for an error (another off-spec use of the API).
core/modules/media/src/OEmbed/ResourceFetcher.php:67 appears to check the internal use of json_decode with json_last_error(). #3186415: Make oEmbed resource fetcher more tolerant of unexpected Content-Type headers called out that the JSON Decoder failed to throw an exception.

Essentially: legitimate test failures caused by off-API usage of methods.

🇺🇸United States cmlara

https://www.cve.org/Resources/General/End-of-Life-EOL-Assignment-Process... was written for v3 of the CNA rules; however, it covers the logic of why a CNA should consider publishing CVEs for EOL releases. It also discusses how, if the Drupal CNA chooses not to include them in its scope, they can still be published through the CNA of Last Resort (MITRE), taking the ability to control the message out of the hands of the Drupal community.

In short: the DST does not have to publish them; however, it may regret not publishing the CVE if it goes through alternative channels.

Linking 💬 Publication of CVE-2024-45440 by MITRE Active as an example of what happens in a worse scenario (not EOL software) when a vulnerability is published outside of DST control.

Suggestion:

- including the Drupal 7 End Of Life EOL code.
+ including End Of Life (EOL) code.

There is really no reason to call out Drupal 7 explicitly, and doing so will just lead to a need to update the scope again.

I have more opinions about the scope that I will post later.

🇺🇸United States cmlara

#15 brings up good points regarding balancing protections vs. user experience and adjusting those protections as needed to meet the current situation.

This fingerprint detection also blocks the Internet Archive 📌 Whitelist the Internet Archive so it can archive Drupal documentation Closed: works as designed .

The first time I saw this service in the request logs, it was flagged under the incident response plan to ban the host over concern for the amount of data it was pulling.

I need to run the URLs through the enterprise lists later to see how they are currently categorized on the enterprise side. I have in the past run into cases similar to this where a service is dual-categorized. In such cases I have seen network security teams decide to keep the block, leaving internal teams to either arrange for the protections to be disabled or utilize a different service. That may not be the case for this service; however, it is at least worth knowing the protections could have impact at multiple levels, some of which users may not have control over.

I once had issues with the blocking making D.O. unusable. I provided D.O. Infra with the request IDs and they were unable to determine why. This raises questions about D.O.'s visibility into the service.

If memory is correct, I believe we also had this service block one reasonably sized Drupal event due to the influx of visitors from a shared connection in the event hall.

🇺🇸United States cmlara

#12 was meant to mean the permissions bceyssens gave ivnish on D.O.

🇺🇸United States cmlara

We have debug code in our plugins as well; adjusting the IS.

🇺🇸United States cmlara

cmlara created an issue.

🇺🇸United States cmlara

Here are some high-level thoughts on #9:

I will note that if there is ever any question on how to handle these fields, https://github.com/CVEProject/cvelistV5/ is an official copy of the CVE database that we can observe to get an idea of how other CNAs handle these values, to help guide us in interpreting the official definitions.

Skip Metrics (CVSS) for now

I would like to encourage us to avoid skipping the CVSS if possible, especially if we publish under the new modern v4. The supplemental metrics are incredibly useful in helping site owners determine how to prioritize their time on the vulnerability.

At a minimum we should include the D.O. score as an "OTHER" type.

Solution: "Upgrade to a version that is not affected. View referenced advisory for more details."

Any reason not to include the upgrade-to versions and other relevant content here by default? We already generate this as part of creating a Drupal security advisory; no original thought is required to include it.

CVE data is a reference set for security tools; whatever we place here may be visually shown to the site owner when using automated tooling to deploy fixes (helping them determine if the automated fix is sufficient or if they need to take additional action).

Vendor or project: Drupal
Product name: either Drupal core or "Drupal X [module|theme|distribution]"
Package collection URL: Link to the project page

Let's get some definitions from Vulnogram out first (as shown in the UI):

Instruction: Enter a vendor and product OR a package and a collection
Vendor or project: Vendor or org or project
Product name: Name of the affected products
Package or Collection url: URL identifying a package collection (determines meaning of packageName). Example given: https://www.wordpress.org/plugins. 
Package name: Name or identifier of the affected software package as used in the software collection.

However, in conflict with the instructions, the CVE schema says product+vendor is mandatory.

Pulling the CVE schema definition, which is more authoritative:

Vendor: Name of the organization, project, community, individual, or user that created or maintains this product or hosted service. Can be 'N/A' if none of those apply. When collectionURL and packageName are used, this field may optionally represent the user or account within the package collection associated with the package.
Product: Name of the affected product.

Contrib modules are generally not created by Drupal as a vendor, and they are not part of the Drupal project (that is Drupal core only); they are independent of Drupal as an organization and are only part of the Drupal ecosystem.

I would suggest that for Vendor:
If a (single) company is primarily responsible for the project (such as Commerce), we list the vendor name.
If a single individual is primarily responsible for the project, we use their name.
If there is no single vendor or user, we consider either using the project owner's account name, or we use N/A.

Package or collection url: We do not have a single URL to search all project types (module/theme/etc.); we could create a single landing page or just use the D.O. homepage. I would suggest we consider the Drupal packaging repository URL. If we use the packages server URL we can be deterministic about versions and avoid overlaps, e.g. 7.x-1.0 is published as 1.0.0 on the 7.x endpoint, which is not the same as 8.x-1.0, which is also published as 1.0.0 on the 8.x endpoint.

Note: the schema examples do list just "https://drupal.org"; however, to my knowledge we are not bound to use that.

For me the deciding factor for the collection URL is whether we want to be able to show a difference between overlapping versions and provide the information for security scanning software to differentiate the two. I would say yes, and that pushes me towards the repo URL. If we don't care about differentiating (see the history on why FriendsOfPhp can't publish Drupal advisories), a fallback to just https://drupal.org/ appears reasonable, at the risk of making this harder for automated tools.

Package name: I suggest we use the machine_name (identifier). Project names can be changed and will not stay relevant in perpetuity, while the machine_name leaves no doubt as to the CVE target's identity. It also pairs well with using the packaging server, as that is where the project metadata will be found.

🇺🇸United States cmlara

/drimage/2720/1530/674/-/ looks like it is a control mechanism of some kind required for the drimage module to function correctly, especially since the info page says "It is meant to take the pain away from configuring and maintaining responsive image styles by simply not needing any configuration."

If you bypass this I suspect you will not actually have a working deployment with that module; content may be delivered based on the drimage_1024_ image style without customization.

Nginx could likely be configured to rewrite to s3fs's path; however, if at that point the module is not functioning, there would be no need to keep it installed, which would remove the need to rewrite the URL in Nginx.

This isn't just a prefix: the drimage module provides its own independent image style controller to modify the response, while s3fs needs to provide its own controller for the s3fs-provided schemes to provide the redirect.

This unfortunately is an area where Drupal core is not well equipped to handle multiple providers at the moment. Long term the solution is likely Split ImageStyle into the config entity and a separate event-based image processing service Needs work . Short term you may need to investigate using core's built-in responsive image styles. DrImage might be able to fix this in an agnostic manner by checking if the image is on a remote stream wrapper and redirecting to the target URL.

For context, this is similar to the following issues that involve competing controllers:
Support webp module Closed: won't fix
Support webp image style generation via imageoptimize_webp module Closed: won't fix

🇺🇸United States cmlara

We have released D11 support and the bot will no longer run; closing out.

🇺🇸United States cmlara

Committed to dev.

Thank you for the report and the fix.

Setting needs port, as the commit does not appear to cleanly cherry-pick to 2.x and we will want to get it there too.

🇺🇸United States cmlara

To my knowledge, hard-coded HTML tags should be avoided in the Form API (FAPI) whenever possible.

My first thought is that it is FAPI's responsibility to handle nesting levels on the page, or perhaps the theme's.

I won't rule out that this is actually our fault, possibly due to either missing some normal heading or nesting without a title somewhere in the FAPI layout (though even then, shouldn't FAPI be responsible for that too?). A second thought is that maybe this is related to the use of '#theme' => 'item_list'.

Sending back to needs work for more investigation, as the suggested fix (modifying multiple plugins and needing to add manually defined H levels) implies there should be a different solution.

🇺🇸United States cmlara

Needs work for really minor changes; otherwise looks good to me.

🇺🇸United States cmlara

The recent events in the WordPress community involving the ACF/SCF module bring quotes that are relevant to this discussion. I will admit there are some differences: in the current WP situation the maintainers were active (prior to being blocked, which made them inactive). We have had the same situation in our past when the CWG has banned users (#3094854: Plan for dealing with projects that may be abandoned), and to my knowledge that is the currently prescribed process were it to happen again.

I will note that before the recent situation I would have said WordPress.org had a better (written) policy that did not allow as much room for takeovers. The recent situation calls into question their commitment to that policy. We do have the following quote from ACF implying that, unlike on D.O. where this is common, it is rare in the WP community.

A plugin under active development has never been unilaterally and forcibly taken away from its creator without consent in the 21 year history of WordPress.

https://x.com/wp_acf/status/1845169499064107049

Automattic’s actions also echo a supply chain attack in some ways. In a plugin directory, it’s considered a supply chain attack if an actor silently takes over a plugin and ships functional changes without disclosure, which is exactly what happened in this case.

https://blog.pragmaticengineer.com/did-automattic-commit-open-source-theft/

This incident is a nightmare scenario for companies serious about supply chain security. Automattic has shown it’s ready to take over plugins as it pleases. It has also shown that it rolls out quiet changes with little to no testing. This would make it irresponsible for enterprises and government organizations to rely on the WordPress.org plugin directory.

https://blog.pragmaticengineer.com/did-automattic-commit-open-source-theft/

Enterprise customers will not trust *any* managed WordPress provider that was part of this supply chain attack silently replacing a major plugin (ACF)

https://x.com/GergelyOrosz/status/1845883272192049156

🇺🇸United States cmlara

Just a note: this was a composer crash.

Normally composer will revert the composer.json back if it is unable to require a new package due to resolution issues.

Likely it is this Composer bug:
https://github.com/composer/composer/issues/12151

Especially since the module has the following in its .gitlab-ci.yml:

COMPOSER_EXTRA: '--ignore-platform-req=ext-sodium'

Arguably this failure may be correct, as the situation implies composer had a serious problem and the system is in an inconsistent state; composer could have written files into vendor/ and then crashed, corrupting the rest of the results (I’m not sure whether the lock file is written before or after the file writing).

Would suggest considering this works as designed.

Side note: the drush job needs to ignore extensions as well for it to ever work in this scenario.

🇺🇸United States cmlara

Adding onto this: those of us developing GitLab components will want this feature to be available on GitLab per https://docs.gitlab.com/ee/ci/components/#publish-a-new-release

This may be another reason to embrace 🌱 [META] Use GitLabCI for packaging Postponed unless we wish to have edge processing to determine if a redirect should occur.

🇺🇸United States cmlara

Basic TFA configuration details are documented in the README. https://git.drupalcode.org/project/tfa/-/blob/8.x-1.x/README.md#tfa-conf...

I will admit they are not significantly in depth; however, the majority of the TFA options are self-documenting on the setup page, while the dependencies (Key/Encrypt) may need reference to their own documentation.

so that this works with Microsoft or Google Authenticator?

If you are asking for the app side this would usually be documented by the app maintainer.

From the TFA side the most important aspect is which plugins you enable. Time-based tokens are the TOTP plugin, while counter-based tokens are the HOTP plugin.

I believe the Microsoft app only supports TOTP, while Google Authenticator supports both types.

TOTP is likely the most commonly used plugin type among deployed sites; however, the choice of plugin is left to admins based on their needs. Counter-based tokens are most useful for systems that do not have a reliable clock, which is rare in a web-based environment.

It may help to indicate a specific question.

🇺🇸United States cmlara

Hiding the repeat patch in #22 to avoid future confusion.

🇺🇸United States cmlara

, our tests now fail because "current" now refers to a different version.

I have a lot of issues with how gitlab_templates is being run; however, in their defense, the default was always supposed to be "the latest release in the latest major". The use of 'current' was a bit of a misnomer that came out of adding the 'previous major' and 'next major' jobs.

The value was also (originally) supposed to be updated the day of the release (in other words, failures should have started months ago unless you chose to change what was originally $_TARGET_CORE and is now, iirc, $DRUPAL_CORE).

It is possible this was not communicated well, although I know of blog announcements about the changes being made. It is possible the base documentation needs to make it clearer that the 'current' job is always supposed to be "the latest in the highest released major".

I suggest that if OPT_IN_TEST_SUPPORTED is set to 1, then it tests on all versions of core that are not EOL and that match core_version_requirement.

I've been pondering this lightly in the back of my brain (especially to implement a dynamic min/max test); however, I have not sat down to seriously consider the implementation details.

The "back of the napkin" design to do this would require a single job that runs, reads the composer.json, and fires off individually triggered jobs for each combination, or a matrix of every available branch with a manual trigger (this would have some bad UI).

There are some complexities I have not looked at yet (such as permissions to trigger the job, how we get the data back into the MR if we dynamically generate the jobs, etc.).

The easier solution for this, as noted above, is to 'lock down' the tested version to what the module supports and not depend upon the existing variables (use DRUPAL_CORE: ^10 to always get the latest 10.* release, combined with prefer-stable in composer.json to avoid dev releases or alphas/betas).
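In the project's .gitlab-ci.yml that amounts to a one-variable sketch:

variables:
  # Pin to the module's supported major instead of the template's moving
  # "current" default; prefer-stable in composer.json avoids dev/alpha/beta.
  DRUPAL_CORE: '^10'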

🇺🇸United States cmlara

With no additional details linking any issues to s3fs, and the fact that we are no longer involved in asset storage, I’m closing this as works as designed.

🇺🇸United States cmlara

Adjusting tags as this needs updates per #14 and #17.

Not treating UID 1 as special becomes even more relevant now that 📌 Add a container parameter that can remove the special behavior of UID#1 Fixed is in core.

Switching the form to use the Core authentication service and offering an OTP when one is configured seems like our best solution.

SSO users we can't really solve via the UI, and falling back to a Drush command seems best.

Provide drush command to reset a user's TFA data Fixed was already added as one example of administrative commands in Drush.

As an aside, D.O. has migrated away from the patch workflow. More details on contributing to issues on D.O. using MR's may be found at https://www.drupal.org/docs/develop/git/using-gitlab-to-contribute-to-dr...

🇺🇸United States cmlara

Can you advise what versions of Drupal core and Drush are involved (as well as confirm the TFA version)? I am trying to determine how high I need to prioritize this issue.

I see we have not updated the Drush commands to use the create() method. There is work that has to be done here; however, even with Drush 13.3 I am not able to reproduce the error.

This could possibly be an update-from-1.6 issue, as the extra service was added in 1.7. Drush sometimes is unable to fully purge its caches, requiring the use of core/rebuild.php, although I'm not sure if that would resolve this with the service in the Drush container.

The only other scenario I can think of for why I cannot reproduce this is that the Drush developers have reported in the past that if the cache is cleared via the UI the drush.services.yml will not be reloaded, which might cause this.

All that is an aside, as to be "future proof" we would want to use create() for the tfa.commands service, since drush.services.yml support was supposed to be removed in 13.0. https://www.drush.org/11.x/dependency-injection/#create-method

For tfa.token_manager I'm on the fence. This service is only useful to Drush, and I would prefer not to pollute the global container, though equally, avoiding doing so may be excessively complex. The primary reason for the service's existence is unit testing.
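A minimal sketch of the create() pattern from the linked Drush documentation, applied here (class shape and the injected service are illustrative, not the committed design):

namespace Drupal\tfa\Commands;

use Drush\Commands\DrushCommands;
use Psr\Container\ContainerInterface;

class TfaCommands extends DrushCommands {

  public function __construct(protected $tokenManager) {
    parent::__construct();
  }

  /**
   * Drush calls this static factory when instantiating the commands,
   * removing the need for a drush.services.yml entry.
   */
  public static function create(ContainerInterface $container): self {
    return new self($container->get('tfa.token_manager'));
  }

}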

As an aside, to help your contributions gain acceptance on TFA and other projects:
Drupal.org has dropped the ability to work with patch files. Contributions need to use MRs going forward to be tested. More details can be found at https://www.drupal.org/docs/develop/git/using-gitlab-to-contribute-to-dr...

🇺🇸United States cmlara

think I just made an assumption that coder only includes things that are part of the coding standard

Coder checks your Drupal code against coding standards and other best practices
https://www.drupal.org/project/coder

We could maybe argue that “best practice” allows this; however, I would expect that to be in the DrupalPractice ruleset, not the Drupal ruleset.

But I guess they can't always happen simultaneously?

The last step in the coding standard process is creating the Coder rule.
https://www.drupal.org/project/coding_standards

Even more interesting, the Coder team tried to use committing the rule as grounds to expedite the standards change (https://www.drupal.org/project/coding_standards/issues/1624564#comment-1... 📌 Coding standards for "use" statements RTBC) rather than removing the rule they had previously incorrectly added.

I’m not sure where I stand on what the order should be (it is a trivial standard); however, I don’t agree with this being released alongside bugfixes given the rigorous debate still ongoing.

Hopefully Coder will in the future be more careful about tightening rules.

As an aside, it would be nice to see semver adopted, with new standards and standards changes as “minor” releases and bugfixes as patch releases, for those who may wish to adopt standards at a set time.

🇺🇸United States cmlara

As a common user I can not see the actual config.

The GitLab UI was showing the “protected” icon only on the latest branches before I posted.

Now all but the 10.1 branch appear to have the flag, and when I look at the branches page it appears to show 10.1 as protected while the drop-down does not.

Possibly a false alarm from a cached response; I also realize I was not logged in when looking.

🇺🇸United States cmlara

Should the older <= D10.0 branches be protected as well?

While they are an older release series, the branches carry similar risk to the currently supported branches.

🇺🇸United States cmlara

@bceyssens:

Just a note: while your D.O. permissions do not grant the ability to manage maintainers, by granting @ivnish the Maintainer role in GitLab you have effectively given them that permission.

GitLab maintainers can add and remove other maintainers, and D.O. will sync back from GitLab.

No action is needed if you find that situation acceptable; I just wanted to make sure you are aware of the permission-system implications. In one line this request turned from “co-maintainer” into “full maintainer” as if it were an insignificant change.

🇺🇸United States cmlara

The UI has a lot packed into collapsed sections. You can click on “type” and “read more”, where you will be provided with a more detailed description.

Type or role of the entity being credited (optional).
finder: identifies the vulnerability.
reporter: notifies the vendor of the vulnerability to a CNA.
analyst: validates the vulnerability to ensure accuracy or severity.
coordinator: facilitates the coordinated response process.
remediation developer: prepares a code change or other remediation plans.
remediation reviewer: reviews vulnerability remediation plans or code changes for effectiveness and completeness.
remediation verifier: tests and verifies the vulnerability or its remediation.
tool: names of tools used in vulnerability discovery or identification.
sponsor: supports the vulnerability identification or remediation activities.

I can see how 'coordinator' could be argued to be the correct classification, and perhaps it is just me, however I perceive the “coordinator” position as being more involved. I envision the role as a person that is actively managing timelines and the developers, checking on them every few days to see how code is progressing, interfacing with the vulnerability reporter as a single point of contact for all internal team members, and cross-coordinating with the internal QA team: essentially a project manager. I cannot speak to this specific SA, only in general; my experience is the DST is generally not that active in issues, which is what raised the question in my mind initially.

If we exclude folks who were "performing their normal duties" then isn't fixing a bug in the code a normal duty of a maintainer? And for security researchers I would also say that reporting a bug to the vendor is also the normal duty. That doesn't seem like the right hurdle to use to judge the situation.

Advisory publication is primarily a low-effort administrative duty compared to the work of a developer who must envision the solution and implement it, or a reporter who expended technical effort in discovering and analyzing the fault, as well as choosing to publicize the fault rather than sell it.

I cannot speak to the Diff issue as I was not a participant in it. My personal experience with the Drupal Security Team is they tend to fill a role analogous to the power company keeping my office on the grid, my ISP keeping the backbone online, and the systems engineer that keeps my email server online to receive messages from the reporter: critical roles that need to be fulfilled for me as a developer to perform my duties, and yet we would not consider crediting them on the security advisory. "Performing their normal duties" is meant simply as shorthand for "it is expected, it is common, it is not exceptionally noteworthy as it relates to this specific vulnerability report". We need the DST members to perform their role to publish the advisory, it is a critical position, and yet at the same time it is the entire reason for the DST to exist, making it not noteworthy.

I maintain that following the Drupalism of "credit everyone for everything no matter how insignificant" would degrade the acknowledgment the 'credits' section gives, which would be disrespectful to the reporter and the fixer.

I see opportunities for mapping and automation, but I also don't think it will be possible to have module maintainers fill it in during the creation of the advisory…
I imagine that we'll get to a point where there are ~10 different CWE entries we end up using and it will be easier to pick among them.

There certainly is a fair amount of variability to it. Yes, we will likely get to a point of common mappings for routine issues that we can pull from.

Long term, maintainers should be able to understand the classifications. These identifiers are as much about classifying what the vulnerability is as they are about educating maintainers to constantly evaluate code for common flaws before those flaws ever make it into the repository.

I’ll note that this is much harder to classify when working off the public report vs. actually being the developer who created the fix. Significant context that would help narrow down the exact classification is not included in the advisory as written, which is part of the reason I gave two options in #5: the correct one of the two depends upon internal detail I do not have.

I will also note that these mappings are not a required field for CVEs. The inclusion is appreciated, and with time we can build maintainers up to have the knowledge to include this context without assistance.

🇺🇸United States cmlara

CWE-200 is discouraged for CVE mapping per https://cwe.mitre.org/data/definitions/200.html and should be avoided if at all possible.

Looking at just the SA without reading the fix code, to me the issue sounds closer to CWE-862 or CWE-863 if we want to stay under the 200 scope (200 -> 285 -> 862 || 863).

CAPEC-113: Feels a bit off; it looks to be aimed more at APIs (either software- or hardware-implemented) and more at exploiting them (I'm thinking SQL injection or similar). I wonder if CAPEC-212 might be a slightly better fit, specifically "by leveraging functionality with design flaws that enables the adversary to gain access to unauthorized, sensitive data", since this uses the UI that already exists and it is a design flaw that the user can select two points and see the differences between them without filtering. I'm not as well-rounded on the CAPEC list and could be mistaken here.

Regarding credit for 'others' that the DST lists as 'Coordinated By' (different from the CVE definition of a response coordinator): while this does appear to be technically valid per the definition given in https://cveproject.github.io/cve-schema/schema/docs/#oneOf_i0_containers..., I wonder if it is well aligned with industry practice to credit those who are performing their normal duties as part of the CVE process.

I looked through the CVEs for 2023 and 2024 YTD and only found the following references to "other":

CVE-2023-0669: Appears to be credit for media publicity.
CVE-2023-24477: Acknowledging an organization that confirmed the bug after report from a customer.
CVE-2023-37895: Unknown reason, both individuals are listed as committers on the Apache Project page.
CVE-2024-29732: The advisory notes discovery by the reporter, with special thanks to the individual listed as 'other'. At a quick glance the individual appears to be a penetration tester; I was not able to link them as staff of the advising organization.

Is this a trend we want to set when no other CNA appears to do so?

🇺🇸United States cmlara

Post #7 confirms that this code is functioning as designed.

As noted in #4, when accessing these files you should use public://path/to/file, not s3://s3fs-public://path/to/file.

You need to check your views_data_export configuration and adjust as needed.

🇺🇸United States cmlara

I suggest you re-read the procedure on how to become a Maintainer of a module marked unsupported for security reasons.

🇺🇸United States cmlara

Committed to dev to allow a few more days of testing usage before tagging a release.

🇺🇸United States cmlara

I tried this earlier and this broke functionality to skip TFA for administrators

8.x-1.1 by default disabled admins' ability to skip TFA; it needs to be enabled manually in settings (reset_pass_skip_enable). This setting does not exist in the 2.x UI or schema. That was SA-CONTRIB-2023-030, which 2.0.0-alpha2 is known to be vulnerable to.

That could be the cause of your experience.

🇺🇸United States cmlara

If an MR is opened by a core committer then no one but a core committer can run the test-only feature

https://drupal.slack.com/archives/CGKLP028K/p1713458778208449?thread_ts=...

TLDR: we tested and confirmed this would be the case even with the permissions added.

interface to allow all maintainers to edit or something

GitLab bases the rule on whether a maintainer has access to commit code.

ci_allow_fork_pipelines_to_run_in_parent_project = false might solve this.
I’m not sure if it would cause pipelines to fail (the GitLab UI tries to create the pipeline in the parent project and rejects it) or if it coerces jobs back to the issue fork (in which case all you need to do is get push access to the fork).

Ref: https://docs.gitlab.com/ee/ci/pipelines/merge_request_pipelines.html#pre...

This all boils down to a simple “pipelines run by maintainers execute in the protected core project”. The project the pipeline runs in is determined at pipeline creation, stays with the pipeline, and is not impacted by trying to run a manual job later. At the point you’re requesting to run the test job, you do not have commit access to the protected branch and thus do not have permission to run the manual job.

This could probably be worked on as a follow-up discussion; at least one minor negative I can think of exists, although it could be worked around.

Either way as noted in the above Slack thread this situation is still progress!
