greggles → credited mcdruid → .
I'm afraid this is definitely a Support Request rather than an obvious problem with the Drupal module.
The issue you're having could be quite specific to your environment.
Making sure StreamMaxLength isn't too low would be one of the first things I'd recommend you check, but it looks like you've done that already.
If you really want to know what's happening, it might help to look at the clamd logs, which might be at /var/log/clamav/clamav.log or may be going to syslog.
If that doesn't help, you can perhaps use tcpdump or similar to capture the network traffic between Drupal and clamd when a scan fails, in order to see if there are any clues there.
It would be good if the module did a better job of "surfacing" errors if that's possible; for example if the daemon responds with an error message.
Not something I can work on right now, but if there's not already an issue for that in the module's queue it'd be good to file one.
Have you made sure the module is in verbose mode to try to capture as much detail as you can about what's going on?
longwave → credited mcdruid → .
larowlan → credited mcdruid → .
Tested this with up-to-date 11.x and 10.3.x
I do see an error message at /core/authorize.php but the full path is not displayed, e.g.:
The website encountered an unexpected error. Try again later.
PDOException: SQLSTATE[HY000] [1044] Access denied for user 'db'@'%' to database 'some_invalid_db_name' in Drupal\Component\DependencyInjection\PhpArrayContainer->createService() (line 77 of core/lib/Drupal/Component/DependencyInjection/PhpArrayContainer.php).
update.php just shows a simple "The website encountered an unexpected error. Try again later." message.
I do not agree that this needs a CVE.
BigPipe explicitly sets the Secure and HttpOnly attributes to false.
What's the justification for that to be changed?
https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies#block_access_t...
The big_pipe_nojs cookie is just a boolean flag, the purpose of which is explained in \Drupal\big_pipe\Render\Placeholder\BigPipeStrategy
The HttpOnly and Secure attributes are intended to protect potentially sensitive information contained in cookies, and to prevent JS/client side code from being able to manipulate sensitive cookies.
None of that applies to this cookie, so it's legitimate for it not to have those attributes set.
Is the only reason to change them to prevent false positives from security scans?
@ram4nd I think you've pushed a couple of unrelated branches to the MR remote? No harm done, but just wanted to make sure you're aware :)
@poker10 thanks for the review, and yes I think the testing looks okay as it is (I've removed the tag).
I don't love the fallback:
else {
  // As a fallback, make sure DRUPAL_ROOT's value is not in the path.
  $error['%file'] = str_replace(DRUPAL_ROOT, 'DRUPAL_ROOT', $error['%file']);
}
...but my thinking was that if the value of DRUPAL_ROOT appears somewhere other than at the very beginning of a file path for some reason, it'd be pretty confusing to just make that part disappear. I'm not sure when this would ever happen, but let's say for some reason the docroot path is reflected in a longer path to a network mount or something like that..
I think it's probably worth leaving it in, but I've made a little tweak to the conditional to avoid the fallback running at all when DRUPAL_ROOT is not present in the string in the first place.
A wise man once told me to avoid assignments inside if conditions, but I'd argue it's worth doing here to avoid doing unnecessary work for every error.
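For the record, the tweak is roughly along these lines (a sketch rather than the exact MR code; the point is that the assignment inside the condition means no replacement work happens unless DRUPAL_ROOT is actually in the path):
if (($pos = strpos($error['%file'], DRUPAL_ROOT)) !== FALSE) {
  if ($pos === 0) {
    // The usual case: the path starts with the docroot.
    $error['%file'] = 'DRUPAL_ROOT' . substr($error['%file'], strlen(DRUPAL_ROOT));
  }
  else {
    // As a fallback, make sure DRUPAL_ROOT's value is not in the path.
    $error['%file'] = str_replace(DRUPAL_ROOT, 'DRUPAL_ROOT', $error['%file']);
  }
}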
If you're happy with this little tweak @poker10 (and assuming tests all pass) I think it's ready to commit.
larowlan → credited mcdruid → .
Looks like the test failure that @catch originally flagged up was under PostgreSQL:
SIMPLETEST_DB = pgsql://drupaltestbot:drupaltestbotpw@database/drupaltestbot?module=pgsql
Not sure how helpful this will be, but debugging MySQL and PostgreSQL side by side, this seems to be where it goes wrong for postgres:
\Drupal\Core\TypedData\DataReferenceDefinition::getTargetDefinition successfully finds a targetDefinition in MySQL but that property doesn't seem to be set in Postgres AFAICS.
I'm not sure why.
The stack is:
DataReferenceDefinition.php:49, Drupal\Core\TypedData\DataReferenceDefinition->getTargetDefinition()
EntityReference.php:52, Drupal\Core\Entity\Plugin\DataType\EntityReference->getTargetDefinition()
EntityReference.php:73, Drupal\Core\Entity\Plugin\DataType\EntityReference->getTarget()
DataReferenceBase.php:37, Drupal\Core\TypedData\DataReferenceBase->getValue()
FieldItemBase.php:154, Drupal\Core\Field\FieldItemBase->__get()
FieldItemList.php:116, Drupal\Core\Field\FieldItemList->__get()
Comment.php:356, Drupal\comment\Entity\Comment->getCommentedEntity()
CommentAccessControlHandler.php:36, Drupal\comment\CommentAccessControlHandler->checkAccess()
EntityAccessControlHandler.php:109, Drupal\Core\Entity\EntityAccessControlHandler->access()
ContentEntityBase.php:738, Drupal\Core\Entity\ContentEntityBase->access()
EntityAccessCheck.php:68, Drupal\Core\Entity\EntityAccessCheck->access()
AccessManager.php:160, call_user_func_array:{/var/www/html/core/lib/Drupal/Core/Access/AccessManager.php:160}()
AccessManager.php:160, Drupal\Core\Access\AccessManager->performCheck()
AccessManager.php:136, Drupal\Core\Access\AccessManager->check()
AccessManager.php:93, Drupal\Core\Access\AccessManager->checkNamedRoute()
Url.php:813, Drupal\Core\Url->access()
StringFormatter.php:132, Drupal\Core\Field\Plugin\Field\FieldFormatter\StringFormatter->viewElements()
FormatterBase.php:91, Drupal\Core\Field\FormatterBase->view()
EntityViewDisplay.php:268, Drupal\Core\Entity\Entity\EntityViewDisplay->buildMultiple()
EntityFieldRenderer.php:257, Drupal\views\Entity\Render\EntityFieldRenderer->buildFields()
EntityFieldRenderer.php:143, Drupal\views\Entity\Render\EntityFieldRenderer->render()
EntityField.php:869, Drupal\views\Plugin\views\field\EntityField->getItems()
FieldPluginBase.php:1195, Drupal\views\Plugin\views\field\FieldPluginBase->advancedRender()
views.theme.inc:231, template_preprocess_views_view_field()
ThemeManager.php:261, call_user_func_array:{/var/www/html/core/lib/Drupal/Core/Theme/ThemeManager.php:261}()
ThemeManager.php:261, Drupal\Core\Theme\ThemeManager->render()
Renderer.php:446, Drupal\Core\Render\Renderer->doRender()
Renderer.php:203, Drupal\Core\Render\Renderer->render()
FieldPluginBase.php:1796, Drupal\views\Plugin\views\field\FieldPluginBase->theme()
StylePluginBase.php:769, Drupal\views\Plugin\views\style\StylePluginBase->elementPreRenderRow()
DoTrustedCallbackTrait.php:107, call_user_func_array:{/var/www/html/core/lib/Drupal/Core/Security/DoTrustedCallbackTrait.php:107}()
DoTrustedCallbackTrait.php:107, Drupal\Core\Render\Renderer->doTrustedCallback()
Renderer.php:825, Drupal\Core\Render\Renderer->doCallback()
Renderer.php:387, Drupal\Core\Render\Renderer->doRender()
Renderer.php:203, Drupal\Core\Render\Renderer->render()
Renderer.php:120, Drupal\Core\Render\Renderer->Drupal\Core\Render\{closure:/var/www/html/core/lib/Drupal/Core/Render/Renderer.php:119-121}()
Renderer.php:593, Drupal\Core\Render\Renderer->executeInRenderContext()
Renderer.php:119, Drupal\Core\Render\Renderer->renderInIsolation()
StylePluginBase.php:711, Drupal\views\Plugin\views\style\StylePluginBase->renderFields()
StylePluginBase.php:574, Drupal\views\Plugin\views\style\StylePluginBase->renderGrouping()
StylePluginBase.php:462, Drupal\views\Plugin\views\style\StylePluginBase->render()
DisplayPluginBase.php:2177, Drupal\views\Plugin\views\display\DisplayPluginBase->render()
ViewExecutable.php:1592, Drupal\views\ViewExecutable->render()
DisplayPluginBase.php:2467, Drupal\views\Plugin\views\display\DisplayPluginBase->preview()
ViewExecutable.php:1721, Drupal\views\ViewExecutable->preview()
CommentUserNameTest.php:158, Drupal\Tests\comment\Kernel\Views\CommentUserNameTest->testUsername()
This one seems to fail consistently on PostgreSQL as far as I can see; I've reproduced the failure manually with postgres:16 (in ddev).
mcdruid@drupal-10:/var/www/html$ git status
Refresh index: 100% (18217/18217), done.
On branch 11.x
Your branch is up to date with 'origin/11.x'.
mcdruid@drupal-10:/var/www/html$ vendor/phpunit/phpunit/phpunit -c core core/modules/comment/tests/src/Kernel/Views/CommentUserNameTest.php
PHPUnit 10.5.35 by Sebastian Bergmann and contributors.
Runtime: PHP 8.3.10
Configuration: /var/www/html/core/phpunit.xml.pgsql
E 1 / 1 (100%)
Time: 00:02.411, Memory: 4.00 MB
There was 1 error:
1) Drupal\Tests\comment\Kernel\Views\CommentUserNameTest::testUsername
Error: Call to a member function access() on null
...snip...
AFAICS doing composer self-update 2.8.0 looks like it should be a good workaround until this is resolved upstream.
...that is assuming composer does revert the change. If not, we may have to make more significant changes.
I think another test is also affected:
---- Drupal\Tests\Composer\Plugin\Scaffold\Functional\ComposerHookTest ----
Status Group Filename Line Function
--------------------------------------------------------------------------------
Exception Other phpunit-1.xml 0 Drupal\Tests\Composer\Plugin\Scaffo
PHPUnit Test failed to complete; Error: PHPUnit 10.5.35 by Sebastian
Bergmann and contributors.
Runtime: PHP 8.3.12
Configuration: /builds/security/drupal-security/core/phpunit.xml.dist
E.. 3 / 3
(100%)
Time: 00:03.088, Memory: 8.00 MB
There was 1 error:
1)
Drupal\Tests\Composer\Plugin\Scaffold\Functional\ComposerHookTest::testComposerHooks
RuntimeException: Exit code: 1
Creating a "fixtures/drupal-drupal" project at "./create-project-test"
Installing fixtures/drupal-drupal (1.0.0)
In PathDownloader.php line 51:
Source path "./drupal-drupal" is not found for package
fixtures/drupal-drupal
mcdruid → created an issue.
Cool, thanks - I've posted in #drupal-infrastructure in Slack as I wasn't sure if it was a coincidence that the composer version was bumped here..
Thanks for filing an issue in the right place.
I think this may have broken some tests?
See e.g. https://git.drupalcode.org/issue/drupal-3479411/-/jobs/2990908
Drupal\BuildTests\Composer\Template\ComposerProjectTemplates 0 passes 17s 1 fails <===
Drupal\BuildTests\Composer\Component\ComponentsTaggedRelease 4 passes 41s
Drupal\BuildTests\Command\GenerateThemeTest 16 passes 88s
Drupal\BuildTests\Composer\Component\ComponentsIsolatedBuild 24 passes 134s
Test run duration: 2 min 13 sec
Detailed test results
---------------------
---- Drupal\BuildTests\Composer\Template\ComposerProjectTemplatesTest ----
Status Group Filename Line Function
--------------------------------------------------------------------------------
Fail Other phpunit-1.xml 0 Drupal\BuildTests\Composer\Template
PHPUnit Test failed to complete; Error: PHPUnit 10.5.35 by Sebastian
Bergmann and contributors.
Runtime: PHP 8.3.12
Configuration: /builds/issue/drupal-3479411/core/phpunit.xml.dist
..FF 4 / 4
(100%)
Time: 00:16.442, Memory: 14.00 MB
There were 2 failures:
1)
Drupal\BuildTests\Composer\Template\ComposerProjectTemplatesTest::testTemplateCreateProject
with data set "recommended-project" ('drupal/recommended-project',
'composer/Template/RecommendedProject', '/web')
COMMAND:
COMPOSER_HOME=/tmp/build_workspace_142b3cf5f42e57cba5237822b73defbcu4lXMx/composer-home
COMPOSER_ROOT_VERSION=11.0 composer create-project --no-ansi
drupal/recommended-project test_project 11.0 -vvv --repository
/tmp/build_workspace_142b3cf5f42e57cba5237822b73defbcu4lXMx/test_repository/packages.json
OUTPUT:
ERROR: Running 2.8.1 (2024-10-04 11:31:01) with PHP 8.3.12 on Linux /
5.10.225-213.878.amzn2.x86_64
Reading ./composer.json
(/tmp/build_workspace_142b3cf5f42e57cba5237822b73defbcu4lXMx/composer.json)
Loading config file
/tmp/build_workspace_142b3cf5f42e57cba5237822b73defbcu4lXMx/composer-home/config.json
Loading config file
/tmp/build_workspace_142b3cf5f42e57cba5237822b73defbcu4lXMx/composer-home/auth.json
Loading config file ./composer.json
(/tmp/build_workspace_142b3cf5f42e57cba5237822b73defbcu4lXMx/composer.json)
...snip...
/tmp/build_workspace_142b3cf5f42e57cba5237822b73defbcu4lXMx/composer-home/cache/repo/file----tmp-build-workspace-142b3cf5f42e57cba5237822b73defbcu4lXMx-test-repository-packages.json/packages.json
into cache
Installing drupal/recommended-project (11.0)
In PathDownloader.php line 51:
[RuntimeException]
Source path "composer/Template/RecommendedProject" is not found for
package
drupal/recommended-project
I can reproduce this by e.g. replacing ddev's current composer (2.7.9) with the phar for 2.8.1 then trying to run just that one test manually:
mcdruid@drupal-10:/var/www/html$ vendor/phpunit/phpunit/phpunit -c core core/tests/Drupal/BuildTests/Composer/Template/ComposerProjectTemplatesTest.php
PHPUnit 10.5.29 by Sebastian Bergmann and contributors.
Runtime: PHP 8.3.10
Configuration: /var/www/html/core/phpunit.xml.mysql
..FF 4 / 4 (100%)
Time: 00:08.780, Memory: 14.00 MB
There was 1 PHPUnit test runner deprecation:
1) Your XML configuration validates against a deprecated schema. Migrate your XML configuration using "--migrate-configuration"!
--
There were 2 failures:
1) Drupal\BuildTests\Composer\Template\ComposerProjectTemplatesTest::testTemplateCreateProject with data set "recommended-project" ('drupal/recommended-project', 'composer/Template/RecommendedProject', '/web')
COMMAND: COMPOSER_HOME=/tmp/build_workspace_558dd1b0a4860999e23c68f3b3390b60dlYMeC/composer-home COMPOSER_ROOT_VERSION=11.0.6 composer create-project --no-ansi drupal/recommended-project test_project 11.0.6 -vvv --repository /tmp/build_workspace_558dd1b0a4860999e23c68f3b3390b60dlYMeC/test_repository/packages.json
OUTPUT:
ERROR: Running 2.8.1 (2024-10-04 11:31:01) with PHP 8.3.10 on Linux / 6.8.0-40-generic
...snip...
Installing drupal/recommended-project (11.0.6)
In PathDownloader.php line 51:
[RuntimeException]
Source path "composer/Template/RecommendedProject" is not found for package drupal/recommended-project
...snip...
Installing drupal/legacy-project (11.0.6)
In PathDownloader.php line 51:
[RuntimeException]
Source path "composer/Template/LegacyProject" is not found for package drupal/legacy-project
(sorry the test output is quite verbose and it's hard to know exactly what might be significant)
It might not be easy to test the exact scenario from the IS, but I'd think that we should be able to add a basic test that verifies this is working as expected when an error is displayed..
Thanks, new branch / MR which adds a helper function to strip DRUPAL_ROOT from errors that will be displayed.
I'm not sure if replacing the actual path with the text DRUPAL_ROOT is less confusing than just removing it completely, but it could be a bit misleading if e.g. instead of:
/var/www/html/sites/default/settings.php
..we output:
/sites/default/settings.php
I don't think we want to use something like /path/to/drupal which would probably also confuse some people as it looks real but is not.
Open to suggestions but as it stands this will show an error like:
No such file or directory in include_once() (line 848 of DRUPAL_ROOT/sites/default/settings.php).
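For illustration, the helper boils down to something like this (the name and exact implementation here are hypothetical):
function _drupal_strip_root_from_error($message) {
  // Swap the docroot path for the literal text DRUPAL_ROOT so that errors
  // read e.g. "DRUPAL_ROOT/sites/default/settings.php".
  return str_replace(DRUPAL_ROOT, 'DRUPAL_ROOT', $message);
}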
I had a bit of time to tidy this up, but I'd still like to see some new tests added to verify that e.g. the checkbox / variable for the shim works as it's supposed to.
So leaving at NW + Needs tests for now.
benjifisher → credited mcdruid → .
There are a few things to address in the comments, plus needs tests.
Great, thanks for suggesting this @mustanggb, and your help getting it done.
Yeah, I hadn't noticed any problems with those booleans.
However, it does seem like we could make it all a bit simpler and reduce the requirement for mental contortion with those ternaries :)
I think we can do:
'migrateMute' => !(bool) variable_get('jquery_update_jquery_migrate_warnings', FALSE),
'migrateTrace' => (bool) variable_get('jquery_update_jquery_migrate_trace', FALSE),
It's good if we're future-proofing this against the conversion of Drupal.settings to JSON, so it was lucky you had the patch in place.
Thanks for the review.
I've tweaked the MR so that we only add a new .js file when we need to apply the settings for Migrate (I had initially thought the file might apply several different settings).
This hopefully means it's less of a significant / global change, and we're able to stick with the very specific weighting etc.. for the file that's being added to $libraries.
However, in doing this I've had to add to the js file so that it's doing this:
var Drupal = Drupal || { "settings": {}, "behaviors": {}, "locale": {} };
...otherwise I was getting errors about Drupal not being defined, and then settings/behaviors/locale not being defined etc..
As far as I can see this is all working okay; I've tested manually with the jQuery Update defaults and checked with the admin and front-end theme with jQuery Migrate enabled.
My testing hasn't gone far beyond just checking for errors in the console and clicking around a bit to try to verify that nothing's obviously broken.
I would appreciate a review and/or some more manual testing before I commit this.
@mustanggb are you able to take a look please?
dan2k3k4 → credited mcdruid → .
The 3 remaining inline JS libraries are all "backups" for when the original CDN functionality is used.
So for example:
switch ($cdn) {
  case 'google':
    $cdn = '//ajax.googleapis.com/ajax/libs/jqueryui/' . $jqueryui_version . '/jquery-ui' . $min . '.js';
    jquery_update_jqueryui_cdn($cdn, $javascript, $path, $min, $names, $jqueryui_version);
    jquery_update_jqueryui_backup($javascript, $path, $min);
    break;
The inline JS is added by jquery_update_jqueryui_backup() and does something like:
'data' => 'window.jQuery.ui || document.write("<script src=\'' . base_path() . $path . $js_path . '\'>\x3C/script>")',
.. so that if the jQuery.ui library isn't successfully loaded from the configured CDN, an extra script tag is written which should pull in the local backup copy.
In other words, if you don't use the older CDN functionality - including if you use the newer "custom paths" settings to bring in libraries from 3rd party CDNs - then these snippets of inline JS will never be added to the page.
On that basis, I'm not so sure we need to change those.
If you want to implement CSP and inline scripts added by jQuery Update are an obstacle to that, one approach would be to use custom paths to load libraries from a CDN instead of the original CDN functionality.
(That is assuming that we move the migrate settings we've already looked at successfully out of inline JS).
I've made a start on this; initially moving the pair of jQuery Migrate settings mentioned in the IS.
Couple of things to note..
The module's tests are not going to be much help here; we could try to add some basic tests to check what's appearing in the Drupal.settings JS (which may break if/when we convert that to JSON).
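For example, something along these lines could work as a first pass (a sketch only, reusing the setting names from the MR):
$this->drupalGet('node');
$this->assertRaw('"migrateMute":', 'jQuery Migrate settings are present in Drupal.settings.');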
One of the original reasons these settings were implemented using inline JS may have been because of the weighting and ordering of libraries and the way they get loaded when jQuery Update is overriding core JS etc..
There are comments near where the Migrate settings were added inline to say:
// Configure the jQuery Migrate plugin.
// Note: This must be done after jQuery has loaded, but before the jQuery
// migrate plugin has loaded.
..and the weights used in the hook_library_alter() are very specific.
I wouldn't be too surprised to find that this looks like it's working but it's actually happening too late / too early to work as properly intended..
..and tests won't help us figure that out. Which is fun.
If we really need to preserve the weights / ordering.. I suppose we could have a separate static file for each setting which we add in the library_alter and have that pick up each setting from Drupal.settings. I'm not entirely sure that makes any sense though.
Setting to NR for early feedback, but there are still 3 more inline settings to move across.
AFAICS there are at least 4 places where jQuery Update uses inline scripts:
$ grep -C2 -n 'type.*inline' jquery_update.module
398- $javascript['jquery']['js'][] = array(
399- 'data' => 'window.jQuery || document.write("<script src=\'' . base_path() . $path . '/replace/jquery/' . $version . '/jquery' . $min . '.js\'>\x3C/script>")',
400: 'type' => 'inline',
401- 'group' => JS_LIBRARY,
402- 'weight' => -19.999999999,
--
460- $libraries['jquery.migrate']['js'][] = array(
461- 'data' => $data,
462: 'type' => 'inline',
463- 'group' => JS_LIBRARY,
464-
--
516- $javascript['jquery.migrate']['js'][] = array(
517- 'data' => 'window.jQuery && window.jQuery.migrateWarnings || document.write("<script src=\'' . base_path() . $path . '/replace/jquery-migrate/' . $migrate_version . '/jquery-migrate' . $min . '.js\'>\x3C/script>")',
518: 'type' => 'inline',
519- 'group' => JS_LIBRARY,
520- 'weight' => -19.7999999999,
--
681- $javascript['ui']['js'][] = array(
682- 'data' => 'window.jQuery.ui || document.write("<script src=\'' . base_path() . $path . $js_path . '\'>\x3C/script>")',
683: 'type' => 'inline',
684- 'group' => JS_LIBRARY,
685- 'weight' => -10.999999999,
I think we should aim to re-implement all of these using Drupal.settings
Then if/when 📌 [D7] Convert drupalSettings from JavaScript to JSON, to allow for CSP in the future RTBC lands, a decent CSP should be more realistic for D7 sites.
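For the migrate settings, the Drupal.settings approach would look roughly like this (a sketch; the 'jquery_update' settings key is just an assumption, and the very specific weights shown in the grep output above still need to be considered):
drupal_add_js(array(
  'jquery_update' => array(
    'migrateMute' => !(bool) variable_get('jquery_update_jquery_migrate_warnings', FALSE),
    'migrateTrace' => (bool) variable_get('jquery_update_jquery_migrate_trace', FALSE),
  ),
), 'setting');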
Could the jQuery Migrate settings be done via Drupal.settings ? (just a thought with zero research behind it)
Thanks!
I'm not sure if this was a perfect solution, but I've tweaked the admin form so that it tries to generate the example relative paths with the actual path to the public files dir.
I've also added a comment to suggest that the examples are just that and may not be correct, especially when Drupal is running in a multisite or a subdirectory.
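Roughly the idea (a sketch; the form element name and example filename here are made up):
$files_dir = variable_get('file_public_path', conf_path() . '/files');
// Build the example shown in the form description from the site's actual files dir.
$form['jquery_update_custom_path_jquery']['#description'] = t('For example, a relative path like %example.', array('%example' => $files_dir . '/jquery/jquery.min.js'));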
This looks great, thanks for fixing the test - turned out to be useful that we'd described what the URLs look like in the test env in the comments.
mcdruid → created an issue. See original summary → .
All that would have to be done would be to invoke the original logic for those with the following permission:
"Administer book outlines"
For everyone else, the ... *new* optimised logic would be invoked making relevant db queries correctly targetted.
I'm not sure about this; it may be the case that only users with certain admin permissions can add nodes of any arbitrary type to book outlines in the first place, but book_node_load() would need to check nodes of all types for all users every time it's invoked just in case that functionality has been used.
It's not just the admins that should see the nodes outside of book_allowed_types in books.
That's slightly different to ✨ Allow the book outline functionality on non-book-enabled node types to be hidden from users with the "Administer book outlines" permission Needs work where the change was to hide some of the admin UI for node types not specifically marked as relevant to books, IIUC.
I think if we did want to re-introduce this optimisation, we'd need to put the functionality whereby it's possible to add nodes not included in book_allowed_types behind a feature flag.
We'd then only be able to apply the optimisation in book_node_load() if that feature is disabled.
You'd need to be careful about adding that feature flag to existing sites as you'd probably want to check if there are already nodes not included in book_allowed_types associated with books before disabling that functionality.
A simpler approach would be to add the optimisation but make it opt-in, so that sites that know they only use book_allowed_types in books could turn it on (and you'd want to remove the ability to add nodes outside of book_allowed_types to books when the optimisation is enabled).
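A rough sketch of that opt-in approach (the flag name here is made up):
function book_node_load($nodes, $types) {
  $allowed_types = variable_get('book_allowed_types', array('book'));
  if (variable_get('book_node_load_only_allowed_types', FALSE) && !array_intersect($types, $allowed_types)) {
    // The site has opted in, and none of the loaded node types can be part
    // of a book outline, so skip querying the {book} table entirely.
    return;
  }
  // ...existing logic that queries {book} for the loaded nodes...
}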
I don't think this is something we're very likely to introduce at this stage in D7's lifecycle.
MRs always welcome though.
Also, happy to be corrected if I've misunderstood any of this.
Interesting; doing the validation before writing to the db broke the test for the other issue mentioned in #8
We can fix that using the opt-out within that test.. which maybe is okay, but goes against the principle of only testing one thing at a time (I think D7 probably breaks that principle all over the place).
I suspect the reason the test is failing is that the validation does this:
function drupal_validate_watchdog_variables($variables) {
  if (!is_array($variables)) {
    return array();
  }
  ...snip...
...which would interfere with the way the test is trying to write 'BAD SERIALIZED DATA' into the variables field when it calls dblog_watchdog directly.
D10/11 don't do exactly that validation step, but the placeholders do have an array typehint in the function signature of parseMessagePlaceholders.
I suppose we could just return $variables instead of the empty array, but would we want to? I can't think of many legitimate use cases OTOH.
Okay the new test fails with that most recent version because the test calls watchdog with non-string data in the placeholders, then queries the db directly to verify that only the stringable variables were stored.
That's copied from dblog's tests in D11:
https://git.drupalcode.org/project/drupal/-/blob/11.x/core/modules/dblog...
So that must mean that dblog validates the placeholders before writing them to the db.
Let's go back to doing that in the MR.
This is still specific to syslog and dblog, so any other module implementing hook_watchdog can do whatever it likes with the variables, including using core's new drupal_validate_watchdog_variables() if that makes sense.
I think if we're validating / filtering before writing to the watchdog table (in dblog) we should keep the opt-out.
Only remaining question from me is whether we need to filter / validate before writing to the db AND also when theming / parsing log messages on output.
I don't think doing so should be particularly costly in most cases, so I'd vote for belt-and-braces (i.e. do it in both places), especially as we're providing an opt-out.
Closing as a duplicate of 🐛 Missing container invalidation update from issue modifying services Active
Ah, sorry I made that change in a bit of a rush.
I've removed the validation from dblog_watchdog as you could argue we should just write whatever's been logged to the db, and only sanitise it when we're trying to use the data (similar to the way that content filtering works). I think that's also the approach that was eventually taken in D10/11 (after quite a bit of discussion in the parent issue about whether to e.g. throw an exception when something weird is included in the original log message).
I think it makes sense to limit the scope of this backport to ensuring that core's two logging modules don't throw errors if they encounter something they can't easily parse in a log message.
If we're doing that, I wonder if we even need the opt-out flag? I think it's less likely to be required than if we were validating data on its way into the db or even before hook_watchdog is invoked.
there is already a check if the variables can be unserialized
This issue is slightly different to the case where the data cannot be unserialized; in this case it's perfectly possible to unserialize non-string data for the placeholders that have been stored.. but there will be errors if you then try to pass e.g. an array or object to a function like strtr() or t() which expects a string it can swap into the message.
Tweaked the test (which is in the system module) so that it enables both syslog and dblog.
Without the validation syslog hits errors when it tries to process non-string variables:
---- WatchdogTest ----
Status Group Filename Line Function
--------------------------------------------------------------------------------
Pass Other system.test 3393 WatchdogTest->setUp()
Enabled modules: dblog, syslog
Exception Notice syslog.module 117 syslog_watchdog()
Array to string conversion
Exception Uncaught e syslog.module 117 strtr()
Error: Object of class stdClass could not be converted to string in strtr()
(line 117 of /var/www/html/modules/syslog/syslog.module).
Seems like a good suggestion to move the validation to core's two implementations of hook_watchdog.
For dblog I've put it both in the hook and the theme function, so that validation happens before a log entry is written to the db, and when an entry that's already in the db is parsed.
You could perhaps argue that just doing it once would be sufficient.
I've also added an opt-out variable; I'd vote not to add a stanza for this to default settings as I would hope that not many sites will need it.
Stringable (https://www.php.net/manual/en/class.stringable.php) was only added in PHP 8, but as far as I can see it doesn't cause a problem to use it like this:
if (is_scalar($variable) || is_null($variable) || $variable instanceof \Stringable) {
  $validated[$key] = $variable;
}
In PHP < 8 the previous two checks should suffice in almost all cases for a D7 site.. I'd assume.
However, do we want to have an opt-out variable or similar for this validation, in case it causes problems for some sites?
The other thing that may cause a change in behaviour is validating that the placeholder keys actually follow the spec and begin with one of the three valid characters.
I'd think that check is much more likely than the "string-like" validation to change the way some modules' / sites' logging behaves.
Perhaps an opt out flag would be prudent.
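Putting those pieces together, the helper could end up looking something like this (a sketch based on the checks described above, not necessarily the exact backport; the opt-out variable isn't shown):
function drupal_validate_watchdog_variables($variables) {
  if (!is_array($variables)) {
    return array();
  }
  $validated = array();
  foreach ($variables as $key => $variable) {
    // Only keep placeholders whose keys start with !, @ or %, and whose
    // values can safely be converted to strings.
    if (!in_array(substr($key, 0, 1), array('!', '@', '%'), TRUE)) {
      continue;
    }
    if (is_scalar($variable) || is_null($variable) || $variable instanceof \Stringable) {
      $validated[$key] = $variable;
    }
  }
  return $validated;
}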
First pass at a backport.
Manually running tests for dblog and syslog seems fine (with PHP 7.4)
I've not looked at older (or newer) PHP yet.
We should also add new tests which try to pass invalid placeholders to watchdog().
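For example (hypothetical values), a new test could log something like this and then check that neither dblog nor syslog throws errors:
watchdog('watchdog_test', 'Invalid placeholders @array and @object', array(
  '@array' => array('not', 'a', 'string'),
  '@object' => new stdClass(),
));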
mcdruid → created an issue.
@dpi thanks for this.
Sounds like perhaps it'd be good to consider a new 2.1.x branch / set of releases?
Also, would you consider volunteering as a co-maintainer?
Hopefully clarification has been provided, so closing as we don't need more open issues in the queue :)
As for which D10/11 core releases include the improvements to log parsing which mean that this issue doesn't affect them...
https://www.drupal.org/project/drupal/issues/2481349#comment-15427053 🐛 Prevent the use of placeholders that cannot be converted into strings when creating logs Fixed
The commit was 4th February 2024 and was...
.. Backported to 10.2.x as a low risk bug fix.
Committed and pushed 367e57e7bc to 11.x and a335588be2 to 10.2.x.
Looking at the list of core releases (not sure if there's a better place to see a listing along with the dates for each, but one way of seeing it is to look at a specific release node e.g. https://www.drupal.org/project/drupal/releases/10.3.4 → ), the relevant ones seem to be:
10.2.7 - 6 June 2024
10.3.0-rc1 - 5 June 2024
10.3.0-beta1 - 17 May 2024
10.2.6 - 1 May 2024
10.4.x-dev - 30 April 2024
10.2.5 - 3 April 2024
10.2.4 - 6 March 2024
10.3.x-dev - 21 February 2024
10.2.3 - 7 February 2024
10.1.8 - 17 January 2024
10.2.2 - 17 January 2024
So we might assume that all stable 10.3.x releases have the change, and it was most likely included in 10.2.3
Checking that: https://git.drupalcode.org/project/drupal/-/blob/10.2.3/core/lib/Drupal/...
Yes, it was included in 10.2.3 - so that release and anything more recent should be unaffected.
At the time the SA was issued, the most recent Security Release of 10.2.x was still 10.2.2, from a few weeks before that change was committed, which is why the 2.x branch of seckit was still included in the SA.
.. a site must have seckit's CSP reporting functionality enabled
...isn't necessarily as simple as one checkbox, unfortunately.
Generally if you don't have seckit's overall CSP functionality enabled at all, the endpoint for receiving violation reports will be switched off.
In the 7.x branch, reports will not be processed (even if CSP is enabled) if:
- The overall "disable seckit" option is enabled.
- The report-uri value is empty.
See: https://git.drupalcode.org/project/seckit/-/blob/7.x-1.x/seckit.module?r...
In the 2.x branch it doesn't look like the reporting endpoint gets disabled regardless of the settings; I have a feeling there's an issue open to change that.
(editing broken img)
I don't think many people would argue with the suggestion that CVE is a far from perfect system.
Drupal Security Advisories are not perfect either.
CVSS scoring has its problems, as does the risk calc. that the Drupal Security Team uses.
There have been multiple proposals to replace CVE with something better, and as laudable as those efforts are, https://xkcd.com/927/ comes to mind.
The swiss cheese metaphor makes sense in many cases; it's illustrative of the ideas of "defense in depth" and "chaining attacks" etc..
I cannot speak for all stakeholders in the Drupal community, but I personally do not believe that it would be an improvement for those that manage Drupal sites to receive a significantly increased volume of security alerts (where the delta would mostly be issues that have been judged as lower priority) which they have to evaluate and triage.
To my mind part of the job of the Drupal Security Team is to evaluate the severity and applicability of submitted vulnerabilities in order to filter out some of the noise and only present Drupal site owners that subscribe to the feeds of Security Advisories with alerts that are worthy of their time and attention.
This is always a balancing act.
When SAs are published, the team is often asked "how bad is this?" / "how urgently do we need to patch?" / "am I affected if I only use X?" etc.. It's another balancing act to include relevant contextual information including details of potential mitigations etc.. in the SA without spelling out the steps to exploit a vulnerability for potential attackers.
Any highly security conscious site owners who wish to subscribe to the full fire-hose of any and all potentially security adjacent information could keep an eye on all d.o issues tagged with "security improvement".
If that's not a sufficiently high-fidelity way to solve that use case, the community could come up with other conventions / workflows.
There are different incentives which don't necessarily align very well when it comes to publishing SAs and/or CVEs.
For a researcher / reporter getting a SA/CVE published can be quite desirable; it's arguably part of the currency of that ecosystem and it's definitely understandable that it can be disappointing if the decision is made not to publish an advisory and to handle an issue in the public queue instead.
On the other hand, https://www.drupal.org/psa-2023-07-12 → mentions some of the benefits of handling certain issues in the public queues. Foremost is that bugs can often be fixed more quickly and with more scrutiny / testing in public than is practical in private.
I am not an expert in MITRE's processes for issuing a CVE "over the head" of a CNA. However, I do know that typically there's very little (if any) validation of the technical aspects of reports made to MITRE, with the unfortunate consequence that there are a lot of CVEs in the system that have little or no technical validity. There are several where the report was rejected as invalid by the upstream project maintainers, and these are of questionable benefit to anyone other than the researchers who got to list the CVE on their resume/CV. MITRE is not in a position to be able to judge the technical merits of specific reports.
If the overall aim of the Drupal Security Team is to keep Drupal and its users secure, and there are limited and finite resources available to do this work, it makes sense to "deflect" issues / reports which do not represent a real security risk to Drupal sites to the public queue, especially if the alternative is for lower priority bugs to languish in a private tracker.
If MITRE decides that it will issue CVEs for Drupal issues when the Drupal Security Team - as the CNA for the project - has decided doing so would not be appropriate, I don't know that there's much that the Drupal Security Team can do about that, other than discuss the matter with MITRE and ask that they not do this in future.
Who would benefit if the Drupal Security Team started to issue SAs/CVEs for reports that had been judged as "can be handled in public"? The researcher gets the reward for their work, but does the overall Drupal community benefit?
(I am definitely not trying to be judgemental about researchers wanting to get the tangible recognition for their work; I have been in that position myself several times and certainly understand the incentives.)
How would we expect individual sites to handle SAs for issues that were deemed as appropriate to handle in public? There's already a maintenance burden with keeping up with security releases for SAs which were judged as severe enough to warrant handling in private. How would sites decide whether they should prioritise updates that were fixed in public but got an SA? Presumably the SA would only be published once the fix was committed?
.. the policies of the Drupal Ecosystem do not absolve us of responsibility to properly report vulnerabilities to the community.
Silent fixes do not help us maintain a secure ecosystem.
There will always be a "grey area" as to whether certain bugs do represent a vulnerability, however I don't think it makes sense to argue that all issues that might be labelled as "security improvements / hardenings" should get an SA/CVE. Doing so would arguably "debase the currency" from the point of view of someone maintaining a Drupal application that has to decide how to prioritise what and when to update.
Letting lower priority fixes sit neglected in a private issue tracker doesn't benefit anyone. Setting deadlines on those so that the researcher resorts to "full disclosure" isn't really much different to the issues being marked as "can be handled in public", depending on your point of view.
If the issue really is important to fix, that can be expedited in the public queue. If the details are made public and there's little activity, is it really a high priority bug? Who would benefit from added fanfare around those?
@TR let me know if you need anything else to get this reviewed and committed (other than an 8th day of the week etc.. :)
Thanks!
https://www.drupal.org/project/seckit/releases/2.0.2 →
Thanks again everybody!
Changes still apply.
Neat that cspell spotted a little typo in the test, which I've fixed.
@TR do we need anything else to get this committed?
Would you like an MR, or is the patch sufficient?
https://www.drupal.org/project/seckit/releases/2.0.2-rc2 →
Only change is additional test coverage.
If I have time, I'll promote this to a full release by the end of this week.
Added some basic tests for both 2.x and 7.x-1.x
One difference in functionality was highlighted along the way:
// Note: The 2.x branch returns a 404 when the submitted content-type is
// invalid, but 7.x-1.x just quietly returns a 200.
We could change D7's _seckit_csp_report() so it would also return a 404.
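For reference, the 2.x assertion boils down to something like this (a sketch; the route name and details here are illustrative rather than copied from the module):
// Post a report with a bogus content-type and check the response code.
$url = \Drupal\Core\Url::fromRoute('seckit.report')->setAbsolute()->toString();
$response = $this->getHttpClient()->post($url, [
  'headers' => ['Content-Type' => 'text/plain'],
  'body' => '{"csp-report":{}}',
  'http_errors' => FALSE,
]);
// The 2.x branch responds with a 404 here; 7.x-1.x quietly returns a 200.
$this->assertEquals(404, $response->getStatusCode());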
I am going to call this Fixed for now though.
If I understand correctly, the main issue being discussed here would be addressed by implementing this:
✨ Allow files on un-saved submissions to be stored in temporary file scheme. Needs work
...in Drupal core.
One significant problem with that is there isn't really (yet) a single shared file upload functionality in core where this could be applied.
The reasons for that are outlined in detail here:
📌 [META] Modernise file upload logic Active
...along with plans and proposals for improving core file upload handling overall.
If we wanted to suggest that "file uploads should be stored in temporary:// until they are ready to be moved to a permanent location" I think the META to modernise the upload logic would probably be the right place to bring that up.
As far as I can see, the changes that the bot is proposing are already in 2.x (the version constraint in the info file is not exactly the same, but D11 has been added).
The changes were committed a few weeks ago: https://git.drupalcode.org/project/seckit/-/commit/39d3d64cf9b64f2c41fec...
So two questions:
1) Is there actually anything left to do before we can create a D11 release for seckit?
2) Is the problem mentioned in #10 that the Project Update Bot is not compatible with a branch called 2.x as opposed to {major}.{minor}.x?
If there are good reasons to create a new branch following the z.x.y naming convention for D11 support, we can do that.. but I don't necessarily want to create a maintenance burden and potential confusion for the project's users if the only reason is that the bot doesn't understand the naming convention.
@VladimirAus could you please explain why a new 2.0.x branch would be needed?
According to: https://www.drupal.org/docs/develop/git/git-for-drupal-project-maintaine... →
...it's not mandatory to use {major}.{minor}.x
Thanks!
The MR still has the "in early testing" wording in the admin UI (see #9) and IMHO putting *** into the admin text for emphasis looks quite messy. Let's try to keep everything as clean and simple as possible.
We also mentioned adding a checkbox for the shim rather than relying on detecting a string in the custom path.
Finally... should the shim still be based on Drupal 9.5? Is it still in D11? If so, can we update the shim and any references, or explain why it's specifically a D9 shim we're including if there's a reason for it to be "locked" etc..
Nice - great to see all the green ticks!
Thanks @ankitv18
Looking good thanks.
At a glance, the stylelint failures look like they should be quite easy to fix?
eslint looks a bit more of a mess, sadly.
Okay, thanks for the explanations and fixes.
Here's the CR about the typedConfigManager / ConfigFormBase deprecation:
https://www.drupal.org/node/3404140 →
At some point we should update the constraints to remove D9, but perhaps that should be 2.1.x
Let's get this merged and then we can look at the other test / static checks fixes in 📌 Fix validate pipeline Fixed (thanks for working on that too!)
Thanks everyone that contributed.
Added a gitlab-ci template.
MR26 is failing most if not all of the tests like this:
29) Drupal\Tests\seckit\Functional\SecKitTestCaseTest::testEnabledReferrerPolicy
Behat\Mink\Exception\ElementNotFoundException: Button with id|name|label|value "Save configuration" not found.
/var/www/html/core/tests/Drupal/Tests/WebAssert.php:158
/var/www/html/core/tests/Drupal/Tests/UiHelperTrait.php:78
/var/www/html/modules/contrib/seckit/tests/src/Functional/SecKitTestCaseTest.php:630
-----
To make this easier to review, can we have some comments / references about where things like the "Handling the BC for typedConfigManager." commit come from?
Is the solution of removing the constructor completely and calling the parent from the create method a novel solution or is it following a pattern documented elsewhere?
Does this ensure that the deprecation is solved for D11 but the module still works / passes tests for D10.2/10.3?
In order to review these commits properly, I'm having to do quite a bit of digging and research.
It'd be a lot easier if you could give me some clues as to what I'm looking at, and why certain changes are being made. Thanks!
The MR was hard to review with the gitlab template + lots of test changes on top.
I've committed the default template directly as a start. Can we please have a separate issue for any other changes?
MR is currently failing tests AFAICS.
longwave → credited mcdruid → .
@itamair I don't want to interfere in your issue queue, but I'd vote for you to get issue credit on this, and other issues where you've done some great work addressing this incident.
Either way, thank you!
Yeah it's a balance, and I don't think there's a clear "right answer" at present.
Putting a static copy of the library in the module mitigates supply chain risk, but on the other hand if/when a vulnerability is found and fixed in the (upstream) library... we then need to find all copies of it everywhere and update them (or backport the fix).
Which problem is worse?
Referencing dependencies with tools like composer (and perhaps npm in the future?) has the advantage that it makes automated auditing easier (dependabot etc..) and potentially sites can update themselves without relying on Drupal projects doing new releases (that's subject to appropriate constraints being in place, and everyone's interpretation of semver agreeing etc.. etc..)
Apologies - I didn't mean to hide the patch in #3 - I was just adding some tags to try to link related issues together.