Performance problem saving nodes in multilingual site

Created on 16 April 2025

Problem/Motivation

We have a multilingual site with 37 languages and nodes that use the Paragraphs module. Saving a node in Drupal 11 takes over 8 seconds.

Steps to reproduce

Edit a node in English, make some changes (it does not matter which field is edited), and save the node.

What we already tried to narrow down the problem

  • Since the database was migrated from Drupal 10, we tested the same node with the same changes in Drupal 10, and it performs as expected
  • Disabled all custom modules and used Olivero as the theme and Gin as the administration theme: still poor performance
  • Deleted all languages except English and German: fast performance

What to do?

Is this problem known?
What can we do to narrow down the cause of the problem?
Any help or hint is welcome.

๐Ÿ› Bug report
Status

Active

Version

11.1 🔥

Component

language system

Created by

🇩🇪 Germany walterp


Comments & Activities

  • Issue created by @walterp
  • What can we do to narrow down the cause of the problem?

    Do you have experience with profiling Drupal →, or with profiling PHP generally? xhprof, for example, is easy to use with DDEV. Any profiler will reveal which functions are slow or called excessively.

  • 🇳🇿 New Zealand quietone

    I am making this a support request pending specific steps to reproduce as a bug.

  • 🇩🇪 Germany walterp

    I have no experience with profiling, but I installed xhprof in my DDEV environment, and I can open the xhgui site and inspect GET and POST requests and so on.

    For example, I see "Profile data for POST https://drupal11-ddev.ddev.site/node/1380/edit" with Drupal\Core\Database\StatementWrapperIterator::execute = 50,475,930 µs (a very high bar in the chart compared with the other values).

    If you can give me some hints/instructions about which values you are interested in, I can post them. Or can I export something?

    P.S.: My Docker environment is assigned 8 CPUs and 16 GB of memory on a MacBook Pro M1.

  • Even a screenshot of the top time-wasting functions will do. At a glance, based on that limited information, it seems like the database server could be overwhelmed. Just giving a MySQL instance a lot of resources will do little unless its configuration is tuned for the use case, and there could be full table scans happening. I really have no idea. Additionally, does MySQL's slow query log contain anything?
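
    If the slow query log is not already enabled on that database, it can be switched on at runtime. A minimal sketch, assuming a MariaDB/MySQL account that may set global variables (the 0.5 s threshold is only illustrative):

    -- Enable the slow query log without restarting the server.
    SET GLOBAL slow_query_log = 1;
    -- Log every query that runs longer than 0.5 seconds.
    SET GLOBAL long_query_time = 0.5;
    -- Also log queries that use no index, as a rough test for missing indexes.
    SET GLOBAL log_queries_not_using_indexes = 1;
    -- Confirm the settings and the log file location.
    SHOW VARIABLES LIKE 'slow_query_log%';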

  • 🇩🇪 Germany walterp

    I added two screenshots: one from GET and one from POST when saving a node.
    Here are the latest slow query log entries from the database:

    # Time: 250418 10:02:16
    # User@Host: db[db] @  [172.19.0.2]
    # Thread_id: 75  Schema: db  QC_hit: No
    # Query_time: 0.002493  Lock_time: 0.000038  Rows_sent: 55  Rows_examined: 2924
    # Rows_affected: 0  Bytes_sent: 6603
    # Tmp_tables: 1  Tmp_disk_tables: 1  Tmp_table_sizes: 24576
    # Full_scan: Yes  Full_join: No  Tmp_table: Yes  Tmp_table_on_disk: Yes
    # Filesort: Yes  Filesort_on_disk: No  Merge_passes: 0  Priority_queue: No
    #
    # explain: id   select_type     table   type    possible_keys   key     key_len ref     rows    r_rows  filtered        r_filtered      Extra
    # explain: 1    SIMPLE  p       index   PRIMARY parent_target_id        4       NULL    404     404.00  100.00  100.00  Using index; Using temporary; Using filesort
    # explain: 1    SIMPLE  t       ref     PRIMARY,taxonomy_term__id__default_langcode__langcode,taxonomy_term__tree,taxonomy_term__vid_name       PRIMARY 4       db.p.entity_id  2       5.97    84.44   2.28    Using where
    #
    SET timestamp=1744963336;
    SELECT "t".*, "parent_target_id" AS "parent"
    FROM
    "taxonomy_term_field_data" "t"
    INNER JOIN "taxonomy_term__parent" "p" ON "t"."tid" = "p"."entity_id"
    WHERE ("t"."vid" = 'seo_groups') AND ("t"."default_langcode" = '1')
    ORDER BY "t"."weight" ASC, "t"."name" ASC;
    # Time: 250418 10:02:45
    # User@Host: db[db] @  [172.19.0.2]
    # Thread_id: 75  Schema: db  QC_hit: No
    # Query_time: 0.000696  Lock_time: 0.000026  Rows_sent: 1  Rows_examined: 5001
    # Rows_affected: 0  Bytes_sent: 108
    # Full_scan: Yes  Full_join: No  Tmp_table: No  Tmp_table_on_disk: No
    # Filesort: No  Filesort_on_disk: No  Merge_passes: 0  Priority_queue: No
    #
    # explain: id   select_type     table   type    possible_keys   key     key_len ref     rows    r_rows  filtered        r_filtered      Extra
    # explain: 1    SIMPLE  cache_config    index   NULL    created 7       NULL    10529   5001.00 100.00  100.00  Using index
    #
    SET timestamp=1744963365;
    SELECT "cache_config"."created" AS "created"
    FROM
    "cache_config" "cache_config"
    ORDER BY "cache_config"."created" DESC
    LIMIT 1 OFFSET 5000;
    # User@Host: db[db] @  [172.19.0.2]
    # Thread_id: 75  Schema: db  QC_hit: No
    # Query_time: 0.018071  Lock_time: 0.000026  Rows_sent: 0  Rows_examined: 8986
    # Rows_affected: 4004  Bytes_sent: 13
    #
    # explain: id   select_type     table   type    possible_keys   key     key_len ref     rows    r_rows  filtered        r_filtered      Extra
    # explain: 1    SIMPLE  cache_config    ALL     created NULL    NULL    NULL    10529   8986.00 100.00  44.56   Using where
    #
    SET timestamp=1744963365;
    DELETE FROM "cache_config"
    WHERE "created" <= '1744963221.205';
    # User@Host: db[db] @  [172.19.0.2]
    # Thread_id: 75  Schema: db  QC_hit: No
    # Query_time: 0.000584  Lock_time: 0.000057  Rows_sent: 0  Rows_examined: 2000
    # Rows_affected: 0  Bytes_sent: 120
    # Full_scan: Yes  Full_join: No  Tmp_table: No  Tmp_table_on_disk: No
    # Filesort: Yes  Filesort_on_disk: No  Merge_passes: 0  Priority_queue: No
    #
    # explain: id   select_type     table   type    possible_keys   key     key_len ref     rows    r_rows  filtered        r_filtered      Extra
    # explain: 1    SIMPLE  r404    ALL     NULL    NULL    NULL    NULL    1000    1000.00 100.00  100.00  Using filesort
    #
    SET timestamp=1744963365;
    SELECT "r404"."timestamp" AS "timestamp", floor(log(10, count)) AS "count_log"
    FROM
    "redirect_404" "r404"
    ORDER BY "count_log" DESC, "timestamp" DESC
    LIMIT 1 OFFSET 1000;
    # Time: 250418 10:07:46
    # User@Host: db[db] @  [172.19.0.2]
    # Thread_id: 103  Schema: db  QC_hit: No
    # Query_time: 0.001097  Lock_time: 0.000020  Rows_sent: 1  Rows_examined: 5001
    # Rows_affected: 0  Bytes_sent: 108
    # Full_scan: Yes  Full_join: No  Tmp_table: No  Tmp_table_on_disk: No
    # Filesort: No  Filesort_on_disk: No  Merge_passes: 0  Priority_queue: No
    #
    # explain: id   select_type     table   type    possible_keys   key     key_len ref     rows    r_rows  filtered        r_filtered      Extra
    # explain: 1    SIMPLE  cache_config    index   NULL    created 7       NULL    5990    5001.00 100.00  100.00  Using index
    #
    SET timestamp=1744963666;
    SELECT "cache_config"."created" AS "created"
    FROM
    "cache_config" "cache_config"
    ORDER BY "cache_config"."created" DESC
    LIMIT 1 OFFSET 5000;
    

    I hope you can see something that gives a hint about the problem. If you need any additional information, describe how to get it and I will post it.

  • When you see Full_scan: Yes, that is not ideal: it indicates there could be missing indexes (see the EXPLAIN sketch at the end of this comment). Also, plainly, Drupal\Core\Database\StatementWrapperIterator::execute is the code unit taking all the time. It was called 2846 times, and because contacting the database server involves a network hop, each call does not have to be terribly slow to have a large impact. Does that profiler UI allow clicking on that function for more information? I'd like to see which functions called it.

    • Is there any custom code or pre-release contributed code on that site?
    • Have you made any database server optimizations closer to what you would do in production, or is it at the DDEV default? Have you considered a higher-performance configuration?
    • What caching backend is the site using?

    I am far from an expert on multilingual Drupal, but my impression is that 37 languages is a great many, and you may need to make some optimizations so it all works.
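
    As a starting point on the index question, each suspicious query from the slow log can be re-run with EXPLAIN, and the existing indexes on the involved tables can be listed. A minimal sketch, reusing the taxonomy query from the log above:

    -- "type: ALL" in the resulting plan means a full table scan.
    EXPLAIN SELECT t.*, parent_target_id AS parent
    FROM taxonomy_term_field_data t
    INNER JOIN taxonomy_term__parent p ON t.tid = p.entity_id
    WHERE t.vid = 'seo_groups' AND t.default_langcode = '1'
    ORDER BY t.weight ASC, t.name ASC;

    -- List the indexes that actually exist on the table.
    SHOW INDEX FROM taxonomy_term_field_data;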

  • 🇳🇿 New Zealand quietone

    Changes are made on 11.x (our main development branch) first, and are then backported as needed according to the Core change policies →.

  • 🇩🇪 Germany walterp

    Thank you for answering.

    • I added a screenshot of what appears after clicking on Drupal\Core\Database\StatementWrapperIterator::execute.
    • The network hop, I think, comes from the database running in a separate DDEV container.
    • The installation under test contains a custom theme and custom code (no pre-release code). But, as I mentioned, I also ran tests with the Olivero theme and with the custom modules uninstalled, and got the same slow performance. The other thing is that we use this many languages in Drupal 10 as well, and there it is much faster.
    • The installation is a standard DDEV installation without any database customizations.
    • I also have a staging environment on high-performance hardware, where Drupal, nginx, and the database are installed on the same machine. It is faster than the DDEV environment, but still very slow.

    What (database) optimizations can I try?
    Is there any test to check for missing indexes?
    Any advice is welcome.

  • Thanks for that additional information. It would be really helpful if you could take a moment to answer all of my questions before we proceed further.

    I am tagging this issue as "needs steps to reproduce" because if you can explain exactly how to set up a site from a new Drupal install that you have proven exhibits the problem on DDEV, we could move more directly to answering.

  • 🇳🇿 New Zealand quietone

    @walterp, if this is happening on 11.1.x, then include that information in the issue summary. The version field reflects where the change will be made, which is 11.x.

  • 🇩🇪 Germany walterp

    Yesterday I investigated further, and I think I found the cause of the problem. By always following the most time-consuming calls in DDEV's xhgui, I finally identified the contrib module entity_usage as the culprit.
    I do not use this module actively; it was installed as a dependency of paragraphs_library.

    Uninstalling this module reduces the time to save my test node to 2 seconds.

    Since I have not run any tests with the entity_usage settings, I do not know whether this is a general problem of the entity_usage module or whether changing the module's default settings would improve the behaviour.

    Many thanks to @cilefen. Your hint about xhprof and xhgui showed me the way to find the cause of the problem.

    I think you can close this issue.
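
    P.S. for anyone hitting the same problem: before uninstalling, it may be worth checking how large the module's tracking table has grown, since a very large table could make the usage recalculation on every save expensive. A minimal sketch; the table name entity_usage and its source_type column are assumptions based on the module's default schema, as far as I can tell:

    -- Count the tracked usage records (table name assumed from the module's schema).
    SELECT COUNT(*) FROM entity_usage;

    -- Break the records down by the type of the referencing entity.
    SELECT source_type, COUNT(*) AS records
    FROM entity_usage
    GROUP BY source_type
    ORDER BY records DESC;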
