Account created on 18 July 2009, over 15 years ago
Recent comments

🇮🇳India sukr_s

Functionality depends on "Add an 'instant' queue runner" (currently Needs work), as mentioned in #102. Once that is fixed and the tests pass, I will remove the draft status on the MR.

🇮🇳India sukr_s

Resolved the last two review comments. The signature is now consistent with the new Cron class signature.

🇮🇳India sukr_s

Checked in 10.3.2 as per #22; the issue did not occur. Setting the status to closed. If the issue still occurs, please reopen it with the necessary steps to reproduce.

🇮🇳India sukr_s

Related issue #3197264: Node searches fail if index is not complete, and users do not know why. If node indexing is being moved entirely to queues, then this scenario also needs to be handled.

🇮🇳India sukr_s

performUpdateIndex indexes the given nids. The alternative is to use the updateIndex function in the queue; however, that function does a LEFT JOIN on node to determine all the new nodes, which is a very expensive operation on large sites. To avoid this expensive query, updateIndex was refactored and performUpdateIndex was created. The queue runner invokes performUpdateIndex because the nid is already known, saving an expensive DB call.

I don't see any risk from a BC perspective, since performUpdateIndex accepts the nids from the updateIndex function and performs the same steps as before.
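A runnable sketch of the idea, with the indexer stubbed out (the class and method bodies below are illustrative, not Drupal's actual API; only the performUpdateIndex name comes from the discussion): the queue item already carries the nid, so the worker hands it straight to the indexer instead of running a discovery query.

```php
<?php
// Sketch only: SketchIndexer stands in for the search plugin. The point
// is that performUpdateIndex() indexes exactly the nids it is given,
// with no LEFT JOIN on {node} to discover what still needs indexing.
class SketchIndexer {

  public array $indexed = [];

  public function performUpdateIndex(array $nids): void {
    foreach ($nids as $nid) {
      // Index the node directly; no discovery query needed.
      $this->indexed[] = $nid;
    }
  }

}

$indexer = new SketchIndexer();
// The queue runner already knows the nid from the queue item.
$queueItem = ['nid' => 42];
$indexer->performUpdateIndex([$queueItem['nid']]);
print implode(',', $indexer->indexed) . "\n"; // prints "42"
```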

🇮🇳India sukr_s

Removed the invalidated test case and added a new test case covering uploads with a different MIME type.

🇮🇳India sukr_s

I'm unable to reproduce the issue on 10.3.2, so I'm closing it as outdated. If the problem persists, please share detailed steps to reproduce the issue.

🇮🇳India sukr_s

@alexander tallqvist I'm unable to reproduce the problem. I tried with an entity reference to media and with a media reference. With the entity reference it worked fine, though I had to make changes in the alter function: the field selector needed to be field_main_media[0][target_id] instead of field_main_media[target_id].

With the media reference, it did not work; the field selector generated with target_id doesn't exist.

Can you share some more details on how to reproduce the problem?

🇮🇳India sukr_s

#504012: Index content when created uses a queue to index content upon creation or update. This works only when automated_cron is used; otherwise the indexing is still done in cron.

Once the changes in #504012: Index content when created are accepted, the cron function in node can be removed, along with the check for the automated_cron module in Node::postSave. That will move node indexing entirely to queues.

🇮🇳India sukr_s

Changes committed. Test cases will pass only after "Add an 'instant' queue runner" (Needs work) is accepted, merged to 11.x, and the fork is updated.

Setting to NR for review of the code changes.

🇮🇳India sukr_s

sukr_s made their first commit to this issue’s fork.

🇮🇳India sukr_s

@smustgrave: I thought about this and did not add a test case, as doing so would add at least 6-8 seconds of test time to each run, increasing the GitLab cost without a significant benefit. There is no logic change per se; it's a typical database admin task and not a major issue in my view. So is test coverage really needed?

🇮🇳India sukr_s

@smustgrave
1. Umami issue: for the files created by the install, the UID column in file_managed is NULL. That, coupled with 🐛 File entity update is allowed only for user who has uploaded the file (Active), results in the Edit button not being shown in the Umami profile.
2. Not all files get the Edit button: this is due to the current permission implementation. See 🐛 File entity update is allowed only for user who has uploaded the file (Active).
3. Regarding needing the same file name: it's a bit of a learning curve, so let's await the usability review.

@rkoller
Yes, it's a caching issue. That's fixed too. If you pull the latest code, kindly clear the cache before testing.

🇮🇳India sukr_s

I'm not sure why the length 191 was chosen. The core database maintainers would be better placed to assess the impact of the change and comment on whether it should be fixed, hence a new issue.

🇮🇳India sukr_s

@smustgrave: the Problem/Motivation states:

There seems to be no way in Drupal to overwrite a file, and keeping the same filename.

🇮🇳India sukr_s

Further experimentation: creating an index

alter table file_managed add index groupidx (fid, filename, filemime, filesize, status, created, changed);

reduces the initial load to 3 seconds and subsequent pagination to 1.5 seconds.

🇮🇳India sukr_s

The issue persists in 11.x.
1. Updated the steps to reproduce.
2. Created file_managed with 524288 records and file_usage with 482306 records.
3. The initial page load takes up to 6 seconds, and every pagination click takes about the same time.

🇮🇳India sukr_s

Addressed the MR comments. Moved common code to a trait, as discussed with catch on Slack.

🇮🇳India sukr_s

Checked on 11.x; this issue is fixed. If an update fails, subsequent updates don't run.

🇮🇳India sukr_s

@smustgrave there are two issues here:
1. Visibility, i.e. hide/show, does not work like other fields. This is a bug and has been fixed in the MR.
2. required does not perform the mandatory check. This is noted as "works as designed"; therefore I suggested opening a new ticket for performing the required validation, in #88 of 🐛 #states not affecting visibility/requirement of managed_file (Needs work).

If it's agreed to open a new ticket for the required validation, the title and problem statement need to be updated and a new ticket opened.

🇮🇳India sukr_s

I've updated the comment. Splitting it into two methods will not help, since $args should not contain expression placeholders; the first loop would have to be repeated, or additional checks done to exclude the expression placeholders.

🇮🇳India sukr_s

- Changed the title.
- Couldn't locate the MR comment; nevertheless, I added a comment at the place of the change.

🇮🇳India sukr_s

Noted. I've closed the other ticket.

🇮🇳India sukr_s

@alexpott I understand that. As per the current design, progressiveness is part of the batch_set / batch definition. I'm suggesting that we make the current design work as intended by fixing the current bug.

The improvement of moving the progressive flag from the batch definition to batch_process can be taken up in a new ticket.

🇮🇳India sukr_s

My suggestion is to commit the current fix so it's immediately usable, and in addition open a new ticket for implementing the approach suggested in #12 of 🐛 Make non-progressive batch operations possible (RTBC). So setting back to RTBC.

🇮🇳India sukr_s

When any translation is removed, the original content is also queued for indexing; this behaviour has not changed. Previous to this fix, after the cron run the original article would get reindexed, but the removed translation would be left hanging and still be reported as "1 item requires indexing".
After this fix, the translation is removed from the index, so before the cron run it shows one item (the original article), and after the cron run, zero items.

🇮🇳India sukr_s

An issue summary update removed a large block of testing and explanation that I found useful to help understand this issue. Maybe that should be restored?

Restored the original proposed solution.

🇮🇳India sukr_s

@smustgrave could you help with the title update, please? I'm not sure what it needs to change to, as the current title explains the problem aptly.

🇮🇳India sukr_s

This method is similar to the Json::encode and Json::decode wrappers implemented in Drupal. Those wrappers provide a "preferred way" so that developers don't need to remember all the flags each time. This method, along similar lines, has a "preferred behaviour": allowing administrators to increase the time limit without changing the default behaviour.

🇮🇳India sukr_s

IMO the current changes can be committed and the issue closed; an additional function and deprecation are not required. The promise of setTimeLimit is to allocate additional time, and the change does exactly that. Since the developer cannot guarantee the amount of time needed when calling this function, the administrator can increase it via php.ini in production to ensure the job gets done. Allocating more time than the developer asked for does not impact the site, since the process ends once the task is complete, irrespective of the remaining time.

🇮🇳India sukr_s

required in #states only places a * label against the field; it does not do any validation. It's designed as such, see #65 of 🐛 #states not affecting visibility/requirement of managed_file (Needs work). Please open a new ticket for implementing the mandatory check when required is set in #states.

🇮🇳India sukr_s

@alexpott the sarcasm is not appreciated.

No developer sets the exact time they need. Excerpt from core:

  // Try to allocate enough time to run all the hook_cron implementations.
  Environment::setTimeLimit(240);

They hope that the process will end in the given time. The intent is not to run for another 240 seconds, but to have the task completed. When that does not work, site administrators increase the time in php.ini, which has no effect due to the behaviour of setTimeLimit.

Enhancing the current function will strengthen the execution of Drupal core. I do not agree that a new method is required for this; it would also mean additional education for all developers to use the new method instead of the standard function.

🇮🇳India sukr_s

If the time limit is set to 500 by your php.ini... and after 499 seconds the code calls \Drupal\Component\Utility\Environment::setTimeLimit(100)... you'll get 100 more seconds as per the PHP manual

With the current change, it does allocate more time: instead of 100 more seconds, it allocates 600 more seconds. This also solves the problem that, if the time limit set in php.ini was 500 and after 2 seconds the code calls \Drupal\Component\Utility\Environment::setTimeLimit(100), the time limit drops from 498 to 100 seconds. The fix always allocates the maximum possible so that the process can complete. This allows site administrators to increase the time limit in php.ini for timeouts occurring on their site where the reduction happens due to setTimeLimit in code.
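A minimal sketch of the arithmetic described above, assuming the new limit is computed as the configured max_execution_time plus the requested seconds (the function name is illustrative; the actual code in the MR may differ):

```php
<?php
// Sketch only: models the time-limit arithmetic, not the MR's code.
// In PHP, set_time_limit($n) restarts the clock with $n seconds from
// the point of the call, discarding any remaining budget.
function extended_time_limit(int $configured, int $requested): int {
  // 0 means "unlimited" both in php.ini and in set_time_limit().
  if ($configured === 0 || $requested === 0) {
    return 0;
  }
  // Allocate the configured budget plus the requested extra time, so
  // the limit never shrinks below what php.ini allows.
  return $configured + $requested;
}

// php.ini limit 500, code asks for 100 more: 600 seconds from now,
// instead of shrinking the remaining budget to 100.
print extended_time_limit(500, 100) . "\n"; // prints "600"
print extended_time_limit(0, 100) . "\n";   // prints "0"
```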

🇮🇳India sukr_s

The test case testBlockPlacementIndicator in BlockUITest.php had the issue described in 🐛 Domain-based language negotiation strips "destination" URL query argument, causing BigPipe error (Closed: duplicate). This has been fixed; I don't think we need an additional test case for it.

🇮🇳India sukr_s

@catch

So I would just directly process the queue items in the terminate event listener without trying to deal with async http requests

With this suggestion, I'm assuming that processing in the event listener will be non-blocking; otherwise an async call is better. I'll go with your suggestion for now.

Why not a maximum number of queue items to process,

Do you mean not to process more than the maximum number of items in the queue at a time? I was thinking of setting the time limit to zero so that all the items in the queue get processed. Otherwise, in each terminate event we would have to check the remaining number of items in the queue and trigger again, which would mean additional DB calls in the terminate event.

and if there's one or more queue item added during the request, process them at the end?

Yes, that's the current thinking: the process-queue call would be done in the terminate event to avoid multiple calls when multiple items are added to the queue.

I don't see what a minimum number of items gets us.

If there were a need to process in batches instead of immediately, perhaps; otherwise it's an unwanted frill.

🇮🇳India sukr_s

I'm proposing the following implementation in the automated_cron module:
- The instant queue processor will have the following configuration:
  - minimum number of items to trigger the queue (setting it to zero disables instant queue processing);
  - maximum number of concurrent queue processing runs.
- Implement a queue.database manager extending the core queue.database that tracks the items being added to the queue. All queuing functions will be delegated to the core implementation.
- Add a new route with no access check that will process the queue.
- At the end of a request, if the configured number of queue items has been reached, async queue processing will be triggered using the mechanism in https://github.com/biznickman/PHP-Async. I've been using this approach and it works quite well.

This will not use any browser-based implementation or 1x1 GIFs.

Any thoughts, suggestions, edits or objections? I'll try to implement the same shortly.
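For reference, the fire-and-forget mechanism in PHP-Async style libraries boils down to opening a socket, writing the request headers, and closing without reading the response. A hedged sketch, where the host, path, and timeout are placeholders rather than the actual route from this proposal:

```php
<?php
// Sketch of the fire-and-forget HTTP trigger: send the request and
// close the socket without waiting for a response, so the current
// request is not blocked by the queue processing it kicks off.
function trigger_queue_processing(string $host, string $path, int $port = 80): bool {
  $fp = @fsockopen($host, $port, $errno, $errstr, 1);
  if ($fp === false) {
    // Fail silently; cron will pick the queue up later anyway.
    return false;
  }
  $request = "GET {$path} HTTP/1.1\r\n"
    . "Host: {$host}\r\n"
    . "Connection: Close\r\n\r\n";
  fwrite($fp, $request);
  fclose($fp); // Deliberately do not read the response.
  return true;
}
```

In the proposal above, the path would be the new access-check-free route, and the call would be made from the terminate event once the configured item threshold is reached.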

🇮🇳India sukr_s

In 10.2.5, the account edit form provides a link to reset the password if the user has forgotten their current password. Therefore, changes to the one-time login link may not really be needed, especially considering #108.

🇮🇳India sukr_s

"commit db83ea46 - Better names and no need to pass translations around" broke the fix, which has now been resolved by reverting the relevant changes.

🇮🇳India sukr_s

@larowlan the analysis was provided in #14. Copying it below for convenience.

Modules like big_pipe invoke attachBehaviors in a way that results in behaviors being attached multiple times for a given context, so a semaphore is maintained to ensure the behavior is attached only once.

If attachBehaviors is called multiple times, then multiple states.Dependent objects are created in states for the same elements. However, initialize is not called in the trigger function for every states.Dependent object, since it checks whether an element was already initialized. Therefore it's best to avoid multiple invocations of attachBehaviors to ensure the behaviors work as intended.

🇮🇳India sukr_s

The batch_set function accepts the batch definition with progressive as a parameter. #41 introduces a new parameter to batch_process. Having a new parameter in batch_process could cause a conflict between the value set when creating the batch with batch_set and the value passed when calling batch_process.

The proposed solution uses the value set in the batch definition so that it's consistently available throughout the processing of the batch.

🇮🇳India sukr_s

- Queries answered.
- Tests changed to use the API instead of browser calls for setting up data.

🇮🇳India sukr_s

sukr_s made their first commit to this issue’s fork.

🇮🇳India sukr_s

Looks like an outdated issue. I've tried this in a custom module and it works just fine in 10.2.5.

🇮🇳India sukr_s

This is a functional test - the link I sent to copy (PathElementTest) is a kernel test.

For future reference, can you help me understand how to choose between a functional and a kernel test?

I'm leaning towards closed works as designed here

In principle I would also agree: a password field should ideally have a minimum length and no maximum length restriction. Since multiple people were following the issue, I went ahead and took a shot at it.

🇮🇳India sukr_s

- Added functional form-based tests as required in #98.
- Confirmed the need for the fix in FormValidator.php as required in #98; see job https://git.drupalcode.org/issue/drupal-1344552/-/pipelines/179107, which was run after reverting the form validator changes.

🇮🇳India sukr_s

This is not an issue any more. On a crash (a wrong function call or a DB crash) in an Ajax call, the following error message is displayed:

Oops, something went wrong. Check your browser's developer console for more details.

Tested on 10.2.5

🇮🇳India sukr_s

FYI @sukr_s it is not accepted issue-queue etiquette to RTBC your own issue. Please use the needs review status.

@larowlan I changed it to RTBC since it was already reviewed and set to RTBC in #90. Since you asked for changes, I assumed it needed to be set back to RTBC for your attention; it was not a self-endorsement of the solution.

Will check the test that you pointed out.

🇮🇳India sukr_s

This is the only place in core where the maxlength check is done.

I've simplified the code to check only the first field's value when there are multiple fields (or values), since confirm_password is the only known use case and both values need to match. Hope that helps.

🇮🇳India sukr_s

Doesn't that happen automatically?

Do you mean that the UI blocks input beyond #maxlength? If so, yes.

We could remove the code from FormValidator and depend on the UI to not allow more than #maxlength, though depending on the front end for validation would not be good practice; someone using the REST API, say from a mobile app, would have to do the validation themselves.

Let me know if you want to go with UI-only validation, and I'll revert the changes in FormValidator.

🇮🇳India sukr_s

- Moved the patch to an MR.
- Updated the test for comment #30.

🇮🇳India sukr_s

sukr_s made their first commit to this issue’s fork.

🇮🇳India sukr_s

Without correcting the assertion, the test will fail. Please confirm whether you want this removed: will the merge happen with the test failing?

🇮🇳India sukr_s

Changed to postSave, invoking node_reindex_node_search($this->id()); only if the translation was not deleted.

🇮🇳India sukr_s

With this fix, the progressive value can be set in the batch definition. If progressive is FALSE, the batch runs in a single run; otherwise it uses multiple runs. Usage, e.g.:

$operations = ['my operations'];
$batch = [
  'title' => 'My batch runs in single run',
  'operations' => $operations,
  'progressive' => FALSE,
];
batch_set($batch);
batch_process();