@smustgrave could you help with the title update, please? I'm not sure what it needs to change to, as the current title explains the problem aptly.
This method is similar to the Json::encode and Json::decode wrappers implemented in Drupal. Those wrappers provide a "preferred way" so that developers don't need to remember all the flags each time. Along the same lines, this method has a "preferred behaviour": it lets administrators increase the time limit without changing the default behaviour.
IMO the current changes can be committed and the issue closed. An additional function and deprecation are not required. The promise of setTimeLimit is to allocate additional time, and the change does exactly that. Since the developer cannot guarantee the amount of time needed when calling this function, the administrator can increase it via php.ini in production to ensure the job gets done. Allocating more time than the developer asked for does not impact the site, since the process ends once the task is complete irrespective of the remaining time.
MR comments resolved.
required in #states only places a * label against the field. It does not do any validation; it's designed as such. See #65 in 🐛 #states not affecting visibility/requirement of managed_file. Please open a new ticket for implementing a mandatory check when required is set in #states.
@alexpott the sarcasm is not appreciated.
No developer sets the exact time they need. Excerpt from core:
// Try to allocate enough time to run all the hook_cron implementations.
Environment::setTimeLimit(240);
They hope that the process will end in the given time. The intent is not to run for another 240 seconds, but to have the task completed. When that does not work, site administrators increase the time in php.ini, which has no effect due to the behaviour of setTimeLimit.
Enhancing the current function will strengthen the execution of Drupal core. I do not agree that a new method is required for this; it would also mean additional education for all developers to use the new method instead of the standard function.
If the time limit is set to 500 by your php.ini... and after 499 seconds the code calls \Drupal\Component\Utility\Environment::setTimeLimit(100)... you'll get 100 more seconds as per the PHP manual
With the current change it does allocate more time: instead of 100 more seconds, it allocates 600 seconds. This solves the problem that if the time limit set in php.ini was 500 and after 2 seconds code calls \Drupal\Component\Utility\Environment::setTimeLimit(100), the time limit drops from 498 remaining seconds to 100. The fix always allocates the maximum possible so that the process can complete. This allows site administrators to increase the time limit in php.ini for timeouts occurring on their site where the reduction in time is caused by setTimeLimit calls in code.
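To make the intent concrete, here is a plain-PHP sketch of the "preferred behaviour" described above. The function name and exact policy are illustrative assumptions, not the MR code: the idea is simply that a code-level request for more time never shrinks the allowance below what the administrator configured in php.ini.

```php
// Hypothetical helper illustrating the idea: never let a code-level
// time-limit request reduce the limit below the php.ini setting.
function preferredTimeLimit(int $requested, int $iniLimit): int {
  // In php.ini, max_execution_time = 0 means unlimited; keep it that way.
  if ($iniLimit === 0) {
    return 0;
  }
  return max($requested, $iniLimit);
}

// A wrapper would then call set_time_limit() with the computed value, e.g.:
// set_time_limit(preferredTimeLimit(100, (int) ini_get('max_execution_time')));
```

With php.ini at 500, a setTimeLimit(100) call would grant 500 rather than cutting the remaining time to 100.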
smustgrave → credited sukr_s → .
The test case testBlockPlacementIndicator in BlockUITest.php has the issue described in 🐛 Domain-based language negotiation strips "destination" URL query argument, causing BigPipe error. This has been fixed; I don't think we need an additional test case for this.
Review comments resolved.
IS updated.
@catch
So I would just directly process the queue items in the terminate event listener without trying to deal with async http requests
With this suggestion, I'm assuming that processing in the event listener will be non-blocking; otherwise an async call is better. Will go with your suggestion for now.
Why not a maximum number of queue items to process,
Do you mean not to process more than a maximum number of items in the queue at a time? I was thinking of setting the time limit to zero so that all the items in the queue are processed. Otherwise, on each terminate event we would have to check the remaining number of items in the queue and trigger again, which would mean additional DB calls in the terminate event.
and if there's one or more queue item added during the request, process them at the end?
Yes, that's the current thought as well: the process-queue call would be done in the terminate event to avoid multiple calls in cases where multiple items are added to the queue.
I don't see what a minimum number of items gets us.
Only if there were a need to process in batches instead of immediately; otherwise it's an unwanted frill.
I'm proposing the following implementation in automated_cron module
- Instant queue processor will have the following configurations
- Min number of items to trigger the queue. Setting to zero will disable instant queue processing
- Max number of concurrent queue processing
- Implement a queue.database manager extending the core queue.database which will track the items being added to the queue. All queuing functions will be delegated to the core implementation.
- Add a new route with no access check that will process the queue
- At the end of a request, if the configured number of queue items is reached, an async queue processing run will be triggered using the mechanism in https://github.com/biznickman/PHP-Async. I've been using this approach and it works quite well.
This will not use any browser based implementation or 1x1 gifs.
Any thoughts, suggestions, edits or objections? I'll try to implement the same shortly.
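To illustrate the last bullet, here is a minimal fire-and-forget sketch modeled on the pattern in the linked PHP-Async repo (the function names, host, and route are placeholders, not actual implementation code): open a socket to the queue-processing route, write the request, and close without reading the response.

```php
// Build a minimal HTTP/1.1 request for the queue-processing route.
function buildAsyncRequest(string $host, string $path): string {
  return "GET {$path} HTTP/1.1\r\n"
    . "Host: {$host}\r\n"
    . "Connection: Close\r\n\r\n";
}

// Fire-and-forget trigger: send the request and close immediately so the
// current request's terminate phase is not blocked by the queue run.
function triggerAsyncQueueRun(string $host, string $path, int $port = 80): void {
  $fp = @fsockopen($host, $port, $errno, $errstr, 1);
  if (!$fp) {
    return; // Fail silently; the queue can be retried on a later request.
  }
  fwrite($fp, buildAsyncRequest($host, $path));
  fclose($fp); // Do not wait for a response; the worker runs independently.
}
```

The no-access-check route proposed above would be the `$path` here; the worker request then runs in its own PHP process.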
In 10.2.5 the Account Edit page provides a link to reset the password if the user has forgotten their current password. Therefore changes to the one-time login link may not really be needed, especially considering #108.
"commit db83ea46 - Better names and no need to pass translations around" broke the fix, which has now been resolved by reverting the relevant changes.
IS updated and review comment resolved.
@larowlan the analysis was provided in #14; copying the same below for convenience.
Modules like big_pipe invoke attachBehaviors in a way that results in behaviors being attached multiple times for a given context, so a semaphore is maintained to ensure that the behavior is attached only once.
If attachBehaviors is called multiple times, then in states multiple states.Dependent objects are created for the same elements. However, initialize is not called in the trigger function for all of the states.Dependent objects, since it checks whether an element was already initialized. Therefore it's best to avoid multiple invocations of attachBehaviors to ensure that the behaviors work as intended.
The batch_set function accepts the batch definition with progressive as a parameter. #41 introduces a new parameter to batch_process. Having a new parameter in batch_process can cause a conflict between the value set when creating the batch with batch_set and the value passed when calling batch_process.
The proposed solution uses the value set in the batch definition so that it's consistently available throughout the processing of the batch.
- Queries answered
- Tests changed to use the API instead of browser calls for setting up data.
Created a related ticket for better authorisation: 🐛 File entity update is allowed only for user who has uploaded the file.
Looks like an outdated issue. I've tried this in a custom module and it works just fine in 10.2.5.
This is a functional test - the link I sent to copy (PathElementTest) is a kernel test.
For future use, can you help me understand how to choose between functional and kernel tests?
I'm leaning towards closed works as designed here
In principle I would also agree. A password field should ideally have a minimum-length restriction, not a maximum-length one. Since multiple people were following the issue, I went ahead and took a shot at it.
- Added functional form-based tests as required in #98
- Confirmed the need for the fix in FormValidator.php as required in #98. See job https://git.drupalcode.org/issue/drupal-1344552/-/pipelines/179107, which was run after reverting the form validator changes.
Not an issue any more. On a crash (wrong function call or DB crash) in an Ajax call, the following error message is displayed:
Oops, something went wrong. Check your browser's developer console for more details.
Tested on 10.2.5
FYI @sukr_s it is not accepted issue-queue etiquette to RTBC your own issue. Please use the needs review status.
@larowlan I changed it to RTBC since it was already reviewed and set to RTBC in #90. Since you asked for changes, I assumed it needed to be set back to RTBC for your attention. It was not a self-endorsement of the solution.
Will check the test that you pointed out.
This is the only place in core where a maxlength check is done.
I've simplified the code to check only the first field's value if there are multiple fields (or values), since confirm_password is the only known use case and both values need to match. Hope that helps.
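As a plain-PHP illustration of that rule (an approximation, not the actual FormValidator change): when the submitted value is an array, as with confirm_password, both values must match for the form to validate anyway, so checking #maxlength against the first value is sufficient.

```php
// Illustration only: guard the #maxlength check against array values,
// which previously caused a TypeError in mb_strlen().
function maxlengthExceeded($value, int $maxlength): bool {
  if (is_array($value)) {
    // confirm_password submits two values; they must match, so the
    // first one is representative for the length check.
    $value = (string) reset($value);
  }
  return mb_strlen((string) $value) > $maxlength;
}
```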
Doesn't that happen automatically?
Do you mean that the UI blocks input after #maxlength? If so, yes.
We could remove the code from FormValidator and depend on the UI to not allow more than #maxlength, though depending on the front end for validation would not be good practice. Someone using the REST API, say from a mobile app, would have to do the validation themselves.
Let me know if you want to go with UI-only validation, and I'll revert the changes in FormValidator.
Replied to MR suggestion.
- Moved the patch to MR
- Updated the test for comment #30
Without correcting the assertion, the test will fail. Do confirm if you want this to be removed. Will the merge happen with the test failing?
Changed to postSave, invoking node_reindex_node_search($this->id()) only if the translation was not deleted.
An alternate solution if multiple forms with different constructor values are rendered on same page #2821852-41: The same form twice on one page with different arguments may process the wrong form when submitted →
With this fix, the progressive value can be set in the batch definition. If progressive is FALSE it completes in a single run; otherwise it uses multiple runs. Usage example:
$operations = ['my operations'];
$batch = [
  'title' => 'My batch runs in single run',
  'operations' => $operations,
  'progressive' => FALSE,
];
batch_set($batch);
batch_process();
sukr_s → changed the visibility of the branch 638712-make-non-progressive-batch to hidden.
I'm not sure I understand. After the introduction of the permission, the tests have to be updated, otherwise they will fail. There is no change in the migration code.
The entity count for action has changed; I'm not sure why that is so. If you check the history of changes, the action count has been changed multiple times without a change in test data. Maybe the subsystem owner can comment on this.
The user count will change since we have created a new user with the appropriate permission.
Since a new user has been created, a conflict will occur in migration and thus an extra step is required.
Let me know if something is not addressed.
- resolved regression issue
- updated IS for proposed solution
Changing the proposed solution to check only for the permission and not for uid 1, since there is an initiative to remove all dependency on / hardcoding of uid 1.
The current MR is based only on the permission, so setting back to NR.
IS updated
Thanks for updating the screenshots in IS. Noted for future.
Test updated and locally tested, resulting in the following error without the changes:
PHPUnit 9.6.19 by Sebastian Bergmann and contributors.
Runtime: PHP 8.3.6
Configuration: /var/www/html/web/core/phpunit.xml.dist
Testing Drupal\Tests\Core\Form\FormValidatorTest
...................E 20 / 20 (100%)
Time: 00:00.116, Memory: 6.00 MB
There was 1 error:
1) Drupal\Tests\Core\Form\FormValidatorTest::testPerformRequiredValidation with data set #5 (array('confirm_password', 12, array('oo06xzt742t4mcm', 'cv4vdy3f6xnxk6g')), 'Test cannot be longer than <e... long.', false)
TypeError: mb_strlen(): Argument #1 ($string) must be of type string, array given
/var/www/html/web/core/lib/Drupal/Core/Form/FormValidator.php:332
/var/www/html/web/core/lib/Drupal/Core/Form/FormValidator.php:246
/var/www/html/web/core/lib/Drupal/Core/Form/FormValidator.php:238
/var/www/html/web/core/lib/Drupal/Core/Form/FormValidator.php:118
/var/www/html/web/core/tests/Drupal/Tests/Core/Form/FormValidatorTest.php:397
/var/www/html/web/vendor/phpunit/phpunit/src/Framework/TestResult.php:729
/var/www/html/web/vendor/phpunit/phpunit/src/Framework/TestSuite.php:685
/var/www/html/web/vendor/phpunit/phpunit/src/Framework/TestSuite.php:685
/var/www/html/web/vendor/phpunit/phpunit/src/TextUI/TestRunner.php:651
/var/www/html/vendor/phpunit/phpunit/src/TextUI/Command.php:146
/var/www/html/vendor/phpunit/phpunit/src/TextUI/Command.php:99
ERRORS!
Tests: 20, Assertions: 35, Errors: 1.
- Not sure if the test-only changes are working; found 📌 Add support for 'test only' changes to gitlab CI
- Uploading screenshot for before fix "before-fix-max-length-15.png"
- Screenshot uploaded in #85 "confirm_password maxlength 15.png" is after the fix
- fork has been updated.
sukr_s → made their first commit to this issue’s fork.
Taking from comment #3: in ./vendor/doctrine/annotations/lib/Doctrine/Common/Annotations/ImplicitlyIgnoredAnnotationNames.php change runTestsInSeparateProcesses to false and debugging works.
Duplicated by 🐛 #states not working correctly when built from a logical combination of multliple fields
Removing the "needs manual testing" tag; done in #28.
With respect to the alternate approach to solving this: it would be far more expensive to find what was deleted in a cron run, e.g. running through the list of languages configured for the site, then the list of languages currently available for the node, and then clearing the index for missing languages. So I believe it's best done at the time of deletion.
I also think that this should be in the search module. However, this would result in hard-coding the type node_search (used in the clear method) in the search module, which would not be right since the search module does not own the 'node_search' plugin. I could not find a method to dynamically determine the search plugin type based on entity type, and therefore chose to place it in the node module.
Is it possible to find the search plugin type based on entity type, or is it OK to hard-code the type in the search module?
suggestions applied
When a node is deleted, it's removed from the search index in the Node::preDelete event; it's not handled by NodeSearch or content_moderation. Along the same lines, when a translation is deleted, it must be removed from the search index using the translation_delete hook.
Unable to change the draft status of the MR, so I reverted the changes and created a new MR. I'm not seeing an option to remove the draft status from an MR.
- Reverted performance tests; they work now. Looks like it was a random test error the first time
- Removed one extra assert
- Keeping one assertion. If this assert is removed and the permission 'administer account settings' is also removed, the test passes, but it would not be correct since the page that opens is access denied, which doesn't have local tasks.
Out of scope changes reverted
Tests are failing since the batch fails to load due to a sudden change in the CSRF token. Not sure, though, why the token changes during the batch process.
comments responded / resolved
This issue is resolved if solution in #3424701-30: Cannot delete file when using language negotiation domains → is accepted.
- IS updated
- Fix updated
- Also fixes 🐛 Domain-based language negotiation strips "destination" URL query argument, causing BigPipe error
After further analysis, I've found that the URL generated for delete without the language module, or with language using URL detection, is
https://d10n.ddev.site/file/1/delete?destination=/admin/content/files&_wrapper_format=drupal_modal
and with the language module using domain detection is
https://d10n.ddev.site/file/1/delete?destination=https%3A//d10n.ddev.site/admin/content/files&_wrapper_format=drupal_modal
The issue is that in the second case the destination is a fully qualified URL, while in the first case it's a relative path. Since the destination is a fully qualified URL, the request sanitizer removes it, so the code ends up in getCancelUrl and subsequently throws the error.
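A minimal check capturing why the sanitizer treats the two URLs differently (this is an approximation of an external-URL test like Drupal's UrlHelper::isExternal(), not the actual sanitizer code):

```php
// A destination that starts with a scheme ("https:...") or is
// protocol-relative ("//host/...") is treated as external and stripped;
// a plain path like "/admin/content/files" is kept.
function isExternalDestination(string $destination): bool {
  return (bool) preg_match('#^(?:[a-z][a-z0-9+.-]*:)?//#i', $destination);
}
```

Under this check, the URL-detection destination (/admin/content/files) survives sanitization while the domain-detection destination (https://d10n.ddev.site/admin/content/files) is dropped.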
Results of manual testing
10.0.8 - View does not provide the option to delete.
10.1.0
- Without the language module enabled, the delete confirmation is shown and clicking Cancel closes the dialog
- With the language module enabled, an error occurs as per screenshot error-on-delete.png
A proper fix for this will not be possible, since the browser does not send a value when the user deselects a checkbox; therefore it is not possible to determine whether the default value or the user's input should be used.
In cases like this, where the application sets a default_value in the Ajax call, you can in D10 unset the user input value explicitly, e.g. $form_state->unsetValue(['checkboxindex']); the form builder is then forced to use the default_value.
In D7, use
unset($form_state['input']['checkboxindex']);
This issue is a duplicate of 🐛 #states not working correctly when built from a logical combination of multliple fields
@smustgrave the correct solution needs to be confirmed by @catch; once done, I can update the same.
Putting it back to NR for @catch's attention.
- Updated the IS
- Updated the proposed changes
If the views module does not exist, then this screen is not accessible and therefore it's not possible to reach the error.
I tried to add the collection route conditionally but the REST tests fail.
1. If the exception is commented out, it tries to create a URL with entity.file.collection and ends up throwing an exception. If that is commented out too and the URL is created, then the dialog pops up; click Cancel and it remains on the same page.
2. The problem occurs only when the language module is enabled and multiple languages are configured. On a single-language site the functionality works fine: Cancel stays on the same page, which is the expected behavior. If multiple languages are configured, the delete confirmation popup doesn't open; it ends up with an exception.
If the original fix of adding 'delete-form' is enabled, then the delete confirmation pops up and clicking Cancel remains on the same page.
- IS updated
- fix as per #20 done
The issue, it seems, is that the block loads the configuration for the first block found, and therefore the arguments of the first form are used. However, if you check the user input in $form_state, the values sent from the browser are correct. I'm assuming the arguments are stored in hidden fields. If so, you can check the user input for these values and use them if found, otherwise fall back to the arguments.
e.g.
public function buildForm(array $form, FormStateInterface $form_state, $id) {
  $input_values = $form_state->getUserInput();
  $form['id_value'] = [
    '#type' => 'hidden',
    '#value' => $input_values['id_value'] ?? $id,
  ];
  $form['txt'] = [
    '#type' => 'textfield',
    '#title' => 'Textfield',
    '#default_value' => $id,
    '#required' => TRUE,
  ];
  $form['submit'] = [
    '#type' => 'submit',
    '#value' => 'Submit',
  ];
  return $form;
}
With this approach, the submit and validate handlers have the correct values for the argument fields.
- Issue summary has been updated.
- Root cause of #1118016: conditional visibility of a managed_file using #states attribute does not work → has been identified and fixed. The fix differs from the patch.
- Test case reused from #2847425-65: #states not affecting visibility/requirement of managed_file → and included.