Account created on 7 June 2013, almost 12 years ago

Recent comments

🇩🇪Germany walterp

Yesterday I investigated further and I think I found the cause of the problem. By always following the most time-consuming actions in ddev's xhgui, I identified the contrib module entity_usage as the culprit.
I don't use this module actively; it was installed as a dependency of paragraphs_library.

Uninstalling this module decreases the saving time of my test node to 2 seconds.
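For anyone wanting to verify this quickly, the check can be sketched like this (a sketch, not verbatim from the issue; assumes Drush is available inside the ddev web container):

```shell
# Uninstall entity_usage and clear caches, then time a node save again.
# Assumes a ddev project with Drush installed.
ddev drush pm:uninstall entity_usage -y
ddev drush cache:rebuild
```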

As I have not run any tests with the entity_usage settings, I don't know whether this is a general problem of the entity_usage module or whether changing the module's default settings would improve the behaviour.

Many thanks to @cilefen. Your hint about xhprof and xhgui showed me the way to find the cause of the problem.

I think you can close this issue.

🇩🇪Germany walterp

Thank you for answering.

  • I added a screenshot of clicking on Drupal\Core\Database\StatementWrapperIterator::execute.
  • The network hop is, I think, due to the database running in a separate ddev container.
  • The checked installation contains a custom theme and custom code (no pre-release). But, as mentioned, I also ran tests with the Olivero theme and with the custom modules uninstalled and got the same slow performance. The other thing is that we use these many languages in Drupal 10 as well, and there it is much faster.
  • The installation is a standard ddev installation without any database customizations.
  • I also have a staging environment on high-performance hardware, where Drupal, nginx and the database are installed on the same machine. It is faster than the ddev environment, but still very slow.

What (database) optimizations can I try?
Is there any test to check for missing indexes?
Any advice is welcome.

🇩🇪Germany walterp

I added two screenshots: one from a GET and one from a POST when saving a node.
Here are the last slow query logs from the database:

# Time: 250418 10:02:16
# User@Host: db[db] @  [172.19.0.2]
# Thread_id: 75  Schema: db  QC_hit: No
# Query_time: 0.002493  Lock_time: 0.000038  Rows_sent: 55  Rows_examined: 2924
# Rows_affected: 0  Bytes_sent: 6603
# Tmp_tables: 1  Tmp_disk_tables: 1  Tmp_table_sizes: 24576
# Full_scan: Yes  Full_join: No  Tmp_table: Yes  Tmp_table_on_disk: Yes
# Filesort: Yes  Filesort_on_disk: No  Merge_passes: 0  Priority_queue: No
#
# explain: id   select_type     table   type    possible_keys   key     key_len ref     rows    r_rows  filtered        r_filtered      Extra
# explain: 1    SIMPLE  p       index   PRIMARY parent_target_id        4       NULL    404     404.00  100.00  100.00  Using index; Using temporary; Using filesort
# explain: 1    SIMPLE  t       ref     PRIMARY,taxonomy_term__id__default_langcode__langcode,taxonomy_term__tree,taxonomy_term__vid_name       PRIMARY 4       db.p.entity_id  2       5.97    84.44   2.28    Using where
#
SET timestamp=1744963336;
SELECT "t".*, "parent_target_id" AS "parent"
FROM
"taxonomy_term_field_data" "t"
INNER JOIN "taxonomy_term__parent" "p" ON "t"."tid" = "p"."entity_id"
WHERE ("t"."vid" = 'seo_groups') AND ("t"."default_langcode" = '1')
ORDER BY "t"."weight" ASC, "t"."name" ASC;
# Time: 250418 10:02:45
# User@Host: db[db] @  [172.19.0.2]
# Thread_id: 75  Schema: db  QC_hit: No
# Query_time: 0.000696  Lock_time: 0.000026  Rows_sent: 1  Rows_examined: 5001
# Rows_affected: 0  Bytes_sent: 108
# Full_scan: Yes  Full_join: No  Tmp_table: No  Tmp_table_on_disk: No
# Filesort: No  Filesort_on_disk: No  Merge_passes: 0  Priority_queue: No
#
# explain: id   select_type     table   type    possible_keys   key     key_len ref     rows    r_rows  filtered        r_filtered      Extra
# explain: 1    SIMPLE  cache_config    index   NULL    created 7       NULL    10529   5001.00 100.00  100.00  Using index
#
SET timestamp=1744963365;
SELECT "cache_config"."created" AS "created"
FROM
"cache_config" "cache_config"
ORDER BY "cache_config"."created" DESC
LIMIT 1 OFFSET 5000;
# User@Host: db[db] @  [172.19.0.2]
# Thread_id: 75  Schema: db  QC_hit: No
# Query_time: 0.018071  Lock_time: 0.000026  Rows_sent: 0  Rows_examined: 8986
# Rows_affected: 4004  Bytes_sent: 13
#
# explain: id   select_type     table   type    possible_keys   key     key_len ref     rows    r_rows  filtered        r_filtered      Extra
# explain: 1    SIMPLE  cache_config    ALL     created NULL    NULL    NULL    10529   8986.00 100.00  44.56   Using where
#
SET timestamp=1744963365;
DELETE FROM "cache_config"
WHERE "created" <= '1744963221.205';
# User@Host: db[db] @  [172.19.0.2]
# Thread_id: 75  Schema: db  QC_hit: No
# Query_time: 0.000584  Lock_time: 0.000057  Rows_sent: 0  Rows_examined: 2000
# Rows_affected: 0  Bytes_sent: 120
# Full_scan: Yes  Full_join: No  Tmp_table: No  Tmp_table_on_disk: No
# Filesort: Yes  Filesort_on_disk: No  Merge_passes: 0  Priority_queue: No
#
# explain: id   select_type     table   type    possible_keys   key     key_len ref     rows    r_rows  filtered        r_filtered      Extra
# explain: 1    SIMPLE  r404    ALL     NULL    NULL    NULL    NULL    1000    1000.00 100.00  100.00  Using filesort
#
SET timestamp=1744963365;
SELECT "r404"."timestamp" AS "timestamp", floor(log(10, count)) AS "count_log"
FROM
"redirect_404" "r404"
ORDER BY "count_log" DESC, "timestamp" DESC
LIMIT 1 OFFSET 1000;
# Time: 250418 10:07:46
# User@Host: db[db] @  [172.19.0.2]
# Thread_id: 103  Schema: db  QC_hit: No
# Query_time: 0.001097  Lock_time: 0.000020  Rows_sent: 1  Rows_examined: 5001
# Rows_affected: 0  Bytes_sent: 108
# Full_scan: Yes  Full_join: No  Tmp_table: No  Tmp_table_on_disk: No
# Filesort: No  Filesort_on_disk: No  Merge_passes: 0  Priority_queue: No
#
# explain: id   select_type     table   type    possible_keys   key     key_len ref     rows    r_rows  filtered        r_filtered      Extra
# explain: 1    SIMPLE  cache_config    index   NULL    created 7       NULL    5990    5001.00 100.00  100.00  Using index
#
SET timestamp=1744963666;
SELECT "cache_config"."created" AS "created"
FROM
"cache_config" "cache_config"
ORDER BY "cache_config"."created" DESC
LIMIT 1 OFFSET 5000;

I hope you can see something that gives a hint to the problem. If you need any additional information, describe how to get it and I will post it.

🇩🇪Germany walterp

I have no experience with profiling, but I installed xhprof in my ddev environment, got the xhgui site working, and can inspect GET, POST and so on.

For example, I see "Profile data for POST https://drupal11-ddev.ddev.site/node/1380/edit" and get Drupal\Core\Database\StatementWrapperIterator::execute = 50,475,930 µs (a very high bar in the chart compared with the other values).

If you can give me some hints/instructions on which values you are interested in, I can post them. Or can I export something?

P.S.: My Docker environment is assigned 8 CPUs and 16 GB of memory on a MacBook Pro M1.
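For reference, enabling the profiler in ddev can be sketched like this (command names assume a recent ddev release with built-in xhprof support):

```shell
# Enable the xhprof PHP extension for the project (recent ddev releases).
ddev xhprof on
# Load the slow page (e.g. the node edit form) in the browser, then find the
# profile viewer URL in the project overview:
ddev describe
```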

🇩🇪Germany walterp

Sorry for the late reply, but I had to work on another project and we live with this behaviour, which comes up from time to time.
Now we are migrating to Drupal 11 and want to integrate 2.3.x-dev@dev with the official DeepL PHP library.
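For reference, pulling in that dev release with Composer might look like this (the package name drupal/tmgmt_deepl is an assumption based on the module being discussed):

```shell
# Require the 2.3.x dev branch of the module; Composer resolves its
# dependency on the DeepL PHP library automatically.
composer require 'drupal/tmgmt_deepl:2.3.x-dev@dev'
```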
Please close the issue. If the problem persists in Drupal 11, I'll create a new one.

🇩🇪Germany walterp

It seems the kint main.js is missing. If I include this JavaScript manually, the dpm output works as expected and is expandable.

🇩🇪Germany walterp

Maybe it's a devel problem.

- I removed devel_kint_extras -> same behaviour
- I installed a fresh Drupal 11 with devel and devel_kint_extras, using Olivero -> same behaviour
- I installed a fresh Drupal 11 with devel, using Olivero -> same behaviour
- I installed a fresh Drupal 10 with devel and devel_kint_extras, using Olivero -> same behaviour
- I installed a fresh Drupal 10 with devel, using Olivero -> same behaviour

🇩🇪Germany walterp

At the moment I haven't found concrete steps to reproduce the failure. We always translate into over 20 languages.

Sometimes the error appears when translating a single node with little text, and at other times no error appears when translating many (10) nodes with much text.
But either way, if the translation aborts, you have to delete or submit many jobs manually.

The connection should be fast enough: the Drupal server is hosted with Gigabit connections and the client that starts the tmgmt job has a 200 Mbit connection.

At this point I thought, as DeepL's hint suggests, that implementing the retries would be an approach.
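The retry idea can be sketched generically; this is an illustrative shell wrapper, not the actual tmgmt_deepl implementation:

```shell
# Illustrative retry wrapper: run a command up to $1 times until it succeeds.
# A real implementation inside the translator plugin would also add a backoff
# delay between attempts.
retry() {
  local max=$1; shift
  local attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      return 1   # give up after max attempts
    fi
    attempt=$((attempt + 1))
  done
}

# Example (URL illustrative): retry a DeepL API call up to 3 times.
# retry 3 curl -sf https://api.deepl.com/v2/translate ...
```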

🇩🇪Germany walterp

OK, thank you. I'll try this.

Just another question: We translate our site into 36 languages and use DeepL for the languages it supports and Microsoft Azure for the rest. Is there any possibility to request a translation for all languages in the tmgmt cart and then, when submitting to the provider, have the jobs automatically sent to the right translation provider?

🇩🇪Germany walterp

DeepL support confirmed that <p translate="no">§§§</p> OLED <p translate="no">§§§</p> is not sent to DeepL in the query params.

I tried to change the text to be translated with several tmgmt hooks:

  • hook_tmgmt_job_before_request_translation
  • hook_tmgmt_job_after_request_translation
  • hook_tmgmt_data_item_text_output_alter
  • hook_tmgmt_data_item_text_input_alter

but none of them changes the query string sent to DeepL.

Is there any tmgmt or tmgmt_deepl hook that I can use?

🇩🇪Germany walterp

Thank you.

Yes, it is set to HTML.

Then I'll go to DeepL and contact their support.

🇩🇪Germany walterp

Sorry for the inconvenience. The problem was caused by a custom BodyFieldProcessor, which excluded the summary if the value was empty.

The module works as expected.

🇩🇪Germany walterp

Updating to 10.3.1 with Composer works well for me. But the menus are still gone.

🇩🇪Germany walterp

I have the same problem.

Rebuilding permissions also has no effect.

Any other ideas to solve the problem?

🇩🇪Germany walterp

The following patch solves the problem for us:

diff --git a/sources/content/src/Plugin/tmgmt/Source/ContentEntitySource.php b/sources/content/src/Plugin/tmgmt/Source/ContentEntitySource.php
index b101d7b2..099d1b78 100644
--- a/sources/content/src/Plugin/tmgmt/Source/ContentEntitySource.php
+++ b/sources/content/src/Plugin/tmgmt/Source/ContentEntitySource.php
@@ -498,7 +498,7 @@ class ContentEntitySource extends SourcePluginBase implements SourcePreviewInter
     $translation = $entity->getTranslation($target_langcode);
     $manager = \Drupal::service('content_translation.manager');
     if ($manager->isEnabled($translation->getEntityTypeId(), $translation->bundle())) {
-      $manager->getTranslationMetadata($translation)->setSource($entity->language()->getId());
+      $manager->getTranslationMetadata($translation)->setSource($item->getJob()->getSourceLangcode());
     }
 
     foreach (Element::children($data) as $field_name) {

🇩🇪Germany walterp

Thank you for answering.

I solved the problem with a patch in the AugmentorBaseWidget.

I added a case for the entity_reference_revisions field type and pass the entity ID to the service:

foreach ($source_fields as $field_name) {
  $field_type = $entity->get($field_name)->getFieldDefinition()->getType();
  $nid = $entity->id();

  if (!$entity->get($field_name)->isEmpty()) {
    $values = $entity->get($field_name)->getValue();

    foreach ($values as $value) {
      switch ($field_type) {
        case 'image':
        case 'entity_reference_revisions':
          // Pass the entity ID instead of the raw field value.
          $value = strval($nid);
          break;
      }
    }
  }
}
In the service I then load the entity, get the paragraphs field values and process them.

🇩🇪Germany walterp

WalterP → created an issue.
