Out of PHP memory when running aggregator_post_update_add_item_uuids()

Created on 12 October 2023, about 1 year ago
Updated 10 November 2023, about 1 year ago

Problem/Motivation

I'm running a website with over 60k aggregator item entities, and while testing the update from 1.0.2 to 1.1.0, aggregator_post_update_add_item_uuids() runs into a PHP out-of-memory error.

I'm using a hook_ENTITY_TYPE_presave() implementation for the aggregator_item entity type, which uses additional memory during the update process, but in general this update should be fail-safe even with a high number of aggregator_item entities.
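For illustration, a presave hook of this shape is involved (the module name and function body below are hypothetical placeholders, not my actual code):

  use Drupal\aggregator\ItemInterface;

  /**
   * Implements hook_ENTITY_TYPE_presave() for aggregator_item entities.
   */
  function mymodule_aggregator_item_presave(ItemInterface $item) {
    // Hypothetical body: any per-item work done here runs for every item
    // the update saves, adding to the memory used by the update process.
    $item->setTitle(trim((string) $item->getTitle()));
  }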

Steps to reproduce

Proposed resolution

Use $sandbox batch processing in aggregator_post_update_add_item_uuids() to create UUIDs for all existing aggregator item entities.
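A minimal sketch of such a batched hook (illustrative only: the query, UUID assignment, and progress bookkeeping are assumptions modeled on common Drupal batching patterns, not the committed patch):

  use Drupal\Core\Site\Settings;

  /**
   * Creates UUIDs for existing aggregator items, batched via $sandbox.
   */
  function aggregator_post_update_add_item_uuids(&$sandbox = NULL) {
    $entityStorage = \Drupal::entityTypeManager()->getStorage('aggregator_item');
    if (!isset($sandbox['ids'])) {
      // First pass: collect all item IDs to process.
      $sandbox['ids'] = $entityStorage->getQuery()->accessCheck(FALSE)->execute();
      $sandbox['count'] = count($sandbox['ids']);
    }
    // Take the next slice; array_splice() shrinks $sandbox['ids'] in place.
    $ids = array_splice($sandbox['ids'], 0, (int) Settings::get('entity_update_batch_size', 50));
    foreach ($entityStorage->loadMultiple($ids) as $item) {
      $item->set('uuid', \Drupal::service('uuid')->generate());
      $item->save();
    }
    // Report progress as a fraction; 1 means the update is finished.
    $sandbox['#finished'] = $sandbox['count'] ? ($sandbox['count'] - count($sandbox['ids'])) / $sandbox['count'] : 1;
  }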

Remaining tasks

User interface changes

API changes

Data model changes

πŸ› Bug report
Status

Fixed

Version

2.0

Component

Code

Created by

πŸ‡©πŸ‡ͺGermany IT-Cru Munich


Comments & Activities

  • Issue created by @IT-Cru
  • Status changed to Needs review about 1 year ago
  • Core: 9.5.x + Environment: PHP 8.1 & MySQL 8
    last update about 1 year ago
    172 pass
  • πŸ‡¦πŸ‡ΊAustralia larowlan πŸ‡¦πŸ‡ΊπŸ.au GMT+10

Give this a crack; untested, but modeled on similar code.

  • πŸ‡©πŸ‡ͺGermany IT-Cru Munich

@larowlan This works better, but it's not perfect yet. When the batch process starts a new thread, the PHP memory limit error pops up again.

    >  [notice] Processed 11200 items of 57843.
    >  [notice] Processed 11250 items of 57843.
    >  [notice] Batch process has consumed in excess of 60% of available memory. Starting new thread
    > PHP Fatal error:  Allowed memory size of 201326592 bytes exhausted (tried to allocate 1908736 bytes) in /var/www/html/docroot/core/lib/Drupal/Core/Database/StatementWrapper.php on line 145
    >  [success] Finished performing updates.
    

    Drush version: 11.6.0

  • πŸ‡¦πŸ‡ΊAustralia larowlan πŸ‡¦πŸ‡ΊπŸ.au GMT+10

πŸ€” Hmm, that is odd: it says it's starting a new thread, but then dies.

    Does that happen if you run it from update.php?

  • πŸ‡ΊπŸ‡ΈUnited States dcam

I've been reviewing #2. If there's something wrong with it, I haven't noticed it either.

I did some research to try to find out if there's an existing issue involving memory leaks when running post_update hooks. I didn't find much. There's one issue from a couple of years ago, https://www.drupal.org/project/drupal/issues/3254403 (πŸ› system_post_update_sort_all_config can exhaust PHP memory in 9.3.0 because processing is not batched, since fixed). Even after core batched that post_update hook, people still reported problems, usually involving various contrib module configurations.

I know it would be really strange to have a batch of 50 Aggregator Items whose total loaded size is 200MB, but it's not impossible. @IT-Cru, can you try setting entity_update_batch_size to a lower limit in your settings.php file? Unless it's just one Item that's really, really big, that might avoid the issue.
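    For example, in settings.php (the value here is arbitrary; the default is 50):

    // Shrink the entity update batch size from the default of 50.
    $settings['entity_update_batch_size'] = 10;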

If I were experiencing this problem on one of my sites, I'd query the database to see whether the data is causing the problem.

    SELECT * FROM aggregator_item ORDER BY iid LIMIT 50 OFFSET 11250;
    

    As for solutions to fix this with the patch...

    +++ b/aggregator.post_update.php
    @@ -16,10 +20,28 @@ function aggregator_post_update_delete_queue_items(&$sandbox = NULL) {
    +  foreach (\Drupal\aggregator\Entity\Item::loadMultiple($ids) as $item) {
    

I know it would be less efficient, but we could try not loading the full batch in the foreach statement. Instead, we could iterate over the IDs and load the items individually in the loop body. If one or more large Items is causing the problem, then I think that will avoid it in most cases.

    foreach ($ids as $id) {
      $item = \Drupal\aggregator\Entity\Item::load($id);
      ...
    }
    
  • πŸ‡ΊπŸ‡ΈUnited States nmangold

    Patch #2 resolved this issue for me.

  • πŸ‡©πŸ‡ͺGermany IT-Cru Munich

@larowlan: Could it be that we always load only the first 50 items with this code and never unset previously loaded item IDs from the $sandbox variable, because the offset always starts at 0?

    $ids = array_splice($sandbox['ids'], 0, (int) Settings::get('entity_update_batch_size', 50));

  • πŸ‡¦πŸ‡ΊAustralia larowlan πŸ‡¦πŸ‡ΊπŸ.au GMT+10

array_splice() operates by reference, so it's basically like running array_shift() 50 times.
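
    A quick illustration of that behavior:

    $ids = [1, 2, 3, 4, 5, 6];
    $batch = array_splice($ids, 0, 3);
    // $batch is [1, 2, 3]; $ids has shrunk to [4, 5, 6],
    // so the next batch run picks up where this one left off.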

  • Status changed to RTBC about 1 year ago
  • πŸ‡©πŸ‡ͺGermany IT-Cru Munich

@larowlan: Yeah, you are totally right.

I've debugged a lot and have now found the issue in my project that also runs into PHP memory limits during batch processing. It was related to some custom purge queue handling, which I now disable during maintenance mode.

    So +1 RTBC from me.

    Optional suggestion

The only thing we should maybe improve in the patch is to reuse the $entityStorage variable for the loadMultiple($ids) call.

    Instead of:
    foreach (\Drupal\aggregator\Entity\Item::loadMultiple($ids) as $item) {

    Use:
    foreach ($entityStorage->loadMultiple($ids) as $item) {

  • Core: 10.1.x + Environment: PHP 8.1 & MySQL 8
    last update about 1 year ago
    172 pass
  • Status changed to Needs review about 1 year ago
  • Core: 10.1.x + Environment: PHP 8.1 & MariaDB 10.3.22
    Queued
  • πŸ‡ΊπŸ‡ΈUnited States dcam

Agreed about reusing the $entityStorage variable; here's a new patch with that change. Otherwise this still looks good to me. Once it passes, I'll commit the changes.

  • Core: 9.5.x + Environment: PHP 8.1 & MariaDB 10.3.22
    Queueing
    • dcam β†’ committed 62a98822 on 2.x
      Issue #3393690 by larowlan, dcam, IT-Cru: Out of PHP memory when running...
    • dcam β†’ committed 6320bf73 on 1.x
      Issue #3393690 by larowlan, dcam, IT-Cru: Out of PHP memory when running...
  • Status changed to Fixed about 1 year ago
  • πŸ‡ΊπŸ‡ΈUnited States dcam

    Thank you for reporting back with your results, @IT-Cru.

    I already added tags for new releases. I'll go make them now.

  • Automatically closed - issue fixed for 2 weeks with no activity.
