Uploading patch from the MR. Currently the only way to retry the jobs is to write an update hook to reset the job state.
achap → created an issue.
I realized it's actually possible to do this without the event by using hook_tmgmt_job_after_request_translation(); however, it's not as clean and results in saving the job item and entity twice. But I wanted to have a backup in case the MR isn't committed.
You can do it like this:
/**
 * Implements hook_tmgmt_job_after_request_translation().
 */
function my_module_tmgmt_job_after_request_translation(array $job_items) {
  foreach ($job_items as $job_item) {
    // Only start working if the previous job was accepted.
    if ($job_item->isAccepted()) {
      /** @var \Drupal\tmgmt\Data $data_service */
      $data_service = \Drupal::service('tmgmt.data');
      $unfiltered_data = $job_item->getData();
      $data_items = $data_service->filterTranslatable($unfiltered_data);
      $changes = FALSE;
      foreach ($data_items as $data_item_key => $data_item_value) {
        // Issue with data-entity-embed-display-settings encoding
        // post-translation.
        // @see https://www.drupal.org/project/tmgmt_google_v3/issues/3508188
        $original = $data_items[$data_item_key]['#translation']['#text'];
        $result = preg_replace_callback(
          '/data-entity-embed-display-settings="(\{[^<&]*?})"/',
          function ($matches) {
            $json = $matches[1];
            // Decode JSON.
            $decodedJson = json_decode($json, TRUE);
            if ($decodedJson === NULL) {
              // Return the original if JSON decoding fails.
              return $matches[0];
            }
            // Re-encode and escape JSON for HTML attributes.
            $encodedJson = htmlspecialchars(json_encode($decodedJson), ENT_QUOTES, 'UTF-8');
            return 'data-entity-embed-display-settings="' . $encodedJson . '"';
          },
          $original,
        );
        if ($result !== $original) {
          $changes = TRUE;
          $data_items[$data_item_key]['#text'] = $result;
        }
      }
      // The job item must be re-marked as active for the data to be
      // propagated to the node; otherwise it will only be saved on the job
      // item (not the corresponding entity), as the translator already marks
      // it active. Only mark it as active if something actually changed, to
      // prevent unnecessary saves.
      if ($changes) {
        $job_item->active();
        $job_item->addTranslatedData($data_service->unflatten($data_items));
      }
    }
  }
}
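The re-encoding at the heart of that hook can be exercised outside Drupal. A minimal standalone sketch (the sample markup is made up for illustration):

```php
<?php

// A decoded attribute value as it comes back from translation: the inner
// quotes are no longer HTML entities, so the attribute is broken.
$broken = '<drupal-entity data-entity-embed-display-settings="{"view_mode":"gallery"}"></drupal-entity>';

$fixed = preg_replace_callback(
  '/data-entity-embed-display-settings="(\{[^<&]*?})"/',
  function ($matches) {
    $decoded = json_decode($matches[1], TRUE);
    if ($decoded === NULL) {
      // Leave the attribute untouched if the JSON cannot be parsed.
      return $matches[0];
    }
    // Re-encode so the inner quotes become &quot; again.
    return 'data-entity-embed-display-settings="'
      . htmlspecialchars(json_encode($decoded), ENT_QUOTES, 'UTF-8') . '"';
  },
  $broken
);

// $fixed now carries the JSON with the inner quotes escaped as &quot;.
echo $fixed, PHP_EOL;
```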
The event would be nicer but up to you if you want to include it!
Also, I should add that the reason my workaround of using a filter doesn't work is that when you go to save the translation in the UI, the filter doesn't get applied. It's not applied in the translation review section either, so that solution wasn't working properly.
As I couldn't think of a way to solve this without hardcoding a dependency on entity_embed what about dispatching a post translation event directly after the translation is received and the response is decoded. This fixes my problem. Uploaded patch based on merge request.
This issue can also occur when you programmatically create taxonomy terms and then translate them programmatically too, because the content_translation_uid field is NULL in the database by default. When you go to re-save the previously created translation in the UI you get the above error message because it defaults to the anonymous user (0) when assigning a value via the form.
Patch from #128 works for me and the content_translation_uid is still correctly assigned.
OK, so I was able to dig a bit deeper into this, and the error I posted above in #2 is unrelated, so it can be ignored. I stepped through with my debugger after clicking "Execute Pending Tasks". In updateIndex it calls clear because there are differences between the OpenSearch mappings and the Drupal mappings. What appears to be happening is that deleting the index causes Search API to dispatch the updateIndex event again before the index is actually deleted; because the OpenSearch index hasn't actually been cleared, the mappings still differ, so deleting just continually calls updateIndex in a loop. I've posted some screenshots from my debugger.
Stack trace showing the redirect loop:
For some reason search api is executing all those tasks:
For more context, this error is in the logs: OpenSearch\Exception\ForbiddenHttpException: [] in OpenSearch\Exception\HttpExceptionFactory::create() (line 25 of /app/vendor/opensearch-project/opensearch-php/src/OpenSearch/Exception/HttpExceptionFactory.php).
However, once truncating the table I can index fine so I don't think it is an authentication issue.
I think Html::decodeEntities is already working. The problem is that when I look at the original node that has the embed on it (the one that hasn't been translated), it actually stores the embed with the HTML entities intact, like:
<drupal-entity data-entity-type="media" data-entity-uuid="bdf34a77-8019-41f6-8234-9ed11370e3cf" data-embed-button="image_bundle" data-entity-embed-display="entity_reference:entity_reference_entity_view" data-entity-embed-display-settings="{&quot;view_mode&quot;:&quot;gallery_full_width_no_thumbnail&quot;}"></drupal-entity>
That doesn't get stripped by Xss::filter on output but the decoded one does:
<drupal-entity data-entity-type="media" data-entity-uuid="bdf34a77-8019-41f6-8234-9ed11370e3cf" data-embed-button="image_bundle" data-entity-embed-display="entity_reference:entity_reference_entity_view" data-entity-embed-display-settings="{"view_mode":"gallery_full_width_no_thumbnail"}"></drupal-entity>
@jksloan2974 do you mean the patch from #131? Do you have any steps to reproduce? I think if you applied that patch to a site that was using the previous method from an earlier patch (where the language of the entities was changed rather than translated) I could see that happening, but I'm not sure how it would occur on a fresh site.
Looking at PathTranslationTest if I understand the logic correctly:
* When the node is not translatable the path alias is und by default.
* When it is translatable and the path is translatable the default alias gets en and the translation alias gets fr.
* When it is translatable but the path is not, both aliases get und.
Does it make sense to create a second alias with language und in the third case for the fr translation? It seems redundant, since it should just fall back to the en node's alias. You now have two und aliases pointing to the same node. In that case the test shouldn't populate the path field for the translation, since it should be null.
achap → created an issue.
I re-opened an old issue that was closed in favour of this one; the problem is still happening in 10.3: interface translations override config translations whenever interface translations are updated. Maybe it's best to concentrate efforts there if you are experiencing the same?
🐛 Interface translations override config translations on update (Active)
I think this should be re-opened. A number of people in the referenced ticket are still experiencing the issue described, even after the fix. It is happening to me in the following scenario on Drupal 10.3:
* I have some country names translated in locale E.g. United States = アメリカ in Japanese.
* We display country names in a language selector. That information is stored in simple config.
* Feedback from client was that country names should be displayed in their own language in the language selector. Translations should still exist in other contexts though.
* Put United States for the translation of United States in the Japanese config for the language selector. You could even delete the whole config file.
* Re-import locale like so: drush locale:import --type customized --override customized ja /app/translations/ja.po
* Export your config. You will notice that the config has been overridden with アメリカ.
Just resolved the merge conflict by merging in the latest stuff from 2.x so I could test it out as the patch wouldn't apply.
Added a note that using eval sourcemaps and Drupal.t with webpack does not work properly!
One more update on #129: I have a use case where, for languages that are English variants (e.g. en-GB) translating from a default en node, the inline entity shouldn't be translated to en-GB by default. Rather than translations, they are localizations. Currently a new translation would be copied from the en inline node and the languages would split, so it's not possible to share content across languages.
In this case, en-GB should still be able to reference en nodes without translating them. There could be other use cases, so the most flexible way is to introduce a new ShouldTranslateEvent that gets dispatched just before translation, while keeping the behavior described in #129.
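A minimal sketch of what such an event could look like (the class name comes from the proposal above, but the API shown here is just a suggestion, not the module's actual code):

```php
<?php

// Hypothetical event: subscribers can veto translating an inline entity,
// e.g. when the target language is just a localization of the source.
class ShouldTranslateEvent {
  private bool $shouldTranslate = TRUE;

  public function __construct(
    public readonly string $sourceLangcode,
    public readonly string $targetLangcode,
  ) {}

  public function setShouldTranslate(bool $value): void {
    $this->shouldTranslate = $value;
  }

  public function shouldTranslate(): bool {
    return $this->shouldTranslate;
  }
}

// A subscriber for the en-GB use case: keep referencing the en node as-is.
$event = new ShouldTranslateEvent('en', 'en-GB');
if (str_starts_with($event->targetLangcode, $event->sourceLangcode . '-')) {
  $event->setShouldTranslate(FALSE);
}
```

The default stays TRUE, so existing behavior is unchanged unless a subscriber opts out.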
Thanks, this works to stop the deprecation warnings in CI, but it still needs work: there are unused use statements that need to be removed, test code needs to be updated to not rely on the exceptions, and there are inconsistencies in the casing of NULL, Null, etc.
Also, I guess as long as MigrateSkipProcessException is not removed from core this can't yet be removed from the module.
This just occurred for me after upgrading to Drupal core 10.3 but I am on config_split 1.9.0 and config_filter 1.12.0 unlike the others in the issue. All my pipelines started failing. Re-running cim fixed it but I've never had to do that before. So something must have changed in 10.3 that no longer works with the versions I'm running.
In terms of not saving the link in the first place from the node form, I did a quick debug on my own site and noticed that MenuTreeStorage::save is triggered by three modules. The first is menu_ui, the second is the token module, and the third is our own custom submit handler. So even if we prevented saving in menu_ui, a lot of people use token, so it would still occur for them.
Is something like what @andy_w posted feasible to do in core for sites that don't have complex access control?
I added a test case to show that this is happening without changing anything else, and addressed some feedback on MR. My use case is pretty simple because we don't have complex access controls like those mentioned in the slack thread. I'm not sure I fully understand the implications described there but I guess it would be good to solve this for anonymous users, while not breaking it for those more complex access scenarios if that's possible.
In my case it happens because of the "Menu Settings" tab on node edit forms. If you click "Provide a menu link" for example and make that node part of a menu, on subsequent saves of the node the menu cache tag gets invalidated even if nothing about the node changed. Seems to happen when directly editing the menu too. That might be unique to my setup but I had a colleague confirm on a different site.
I guess the best thing to do would be write a test proving that that is indeed the case. I'll try and find some time this week to do that.
Updated my fork to handle duplicate aliases caused by altering path alias languages.
Also, I feel like this is actually a bug report, because it's impossible to create a new alias in a different language without overriding the fallback alias. The title could probably be updated too but I'll leave that alone for now.
Hmmm, both my MR and patch #10 will result in duplicate aliases being created every time you save a node if you do something like this in hook_pathauto_alias_alter:
if ($context['language'] === \Drupal::languageManager()
  ->getDefaultLanguage()
  ->getId()) {
  $context['language'] = LanguageInterface::LANGCODE_NOT_SPECIFIED;
}
I ran into a small edge case where I have url.prefixes defined in language.negotiation.yml but I haven't yet enabled url detection and selection, so I added a check to make sure the prefix actually exists before the comparison takes place.
I'm happy to write tests, however before I do maybe someone familiar with the menu system can review the approach and see if this is a sensible change?
Thanks for the patch @kasperg. I tested locally running both CLI and web requests and it works as expected.
I'm getting "MySQL server has gone away" errors after long-running drush operations like Search API indexing and running migrations with the lateruntime processor enabled. Those operations can put thousands of cache tags in the queue, and it seems to try to process many of them at the end of the request, or at the end of batches, resulting in a packet size greater than max_allowed_packet. My max_allowed_packet setting is already quite high and I don't have much room to increase it, so I agree with the other comments that it would be better to at least have the option to disable this for CLI. Thanks for the MR @kasperg.
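For anyone checking where they stand, the server limit can be inspected (and raised, where the hosting allows it) directly in MySQL; the value below is illustrative only:

```sql
-- Check the current limit (in bytes).
SHOW VARIABLES LIKE 'max_allowed_packet';

-- Raise it for the running server (needs SUPER/SYSTEM_VARIABLES_ADMIN;
-- 268435456 = 256 MB, an illustrative value, not a recommendation).
SET GLOBAL max_allowed_packet = 268435456;
```

Note the GLOBAL setting only affects new connections and is lost on restart unless it is also set in the server config.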
Thanks for creating the module! I added glossary support!
All the tests now seem to pass again except for this failure on the phpunit (next major) pipeline:
1) Drupal\Tests\redirect\FunctionalJavascript\RedirectJavascriptTest::testRedirectUI
Behat\Mink\Exception\ExpectationException: The string "Are you sure you want to delete the URL redirect from /web/non-existing?key=value to /web/node?" was not found anywhere in the HTML response of the current page.
However, looking at the pipelines from other commits, it looks like it's unrelated to my code.
Ran the test-only changes to confirm the test catches the bug. It looks like fixing this has caused knock-on effects in other tests, though. I'll look into it.
Closing this, as it can be prevented by altering the langcode in hook_pathauto_alias_alter rather than hook_path_alias_presave. Presave occurs too late, while the alter hook fires before we save the alias, so it prevents the duplicate issue.
Also, I guess that core's path_alias should be the one preventing duplicates from being saved.
Looks like this patch was on the right path in #95 and then veered away from it. I strongly agree with #95 that we should be translating entities rather than changing their langcode, as that is destructive and would confuse editors. I believe the fix in #52 might have been trying to address that, but it may be redundant now. Also, it only prevents deleting entities used by the parent node, so they could still be deleted if referenced by other entities. This MR is heavily based on #95.
Changes in MR 120:
1) Inline entities are translated rather than having their language changed.
2) Re-attached the submit handler from #95 that was removed and handled the case where the main entity form is submitted but the inline entity form hasn't been opened yet.
3) Removed the deletion code that was trying to prevent entities being removed if referenced by the parent entity (I think that is probably better handled by something like entity_reference_integrity module). See above for reasons.
4) Translations get their own created/updated timestamps when they are created rather than inheriting from the default translation. This helps them show up in the /admin/content view when the parent page is translated.
5) Renamed the testTranslation test to testSymmetricTranslation and added cases for translations and timestamps. Also created a new testAsymmetricTranslation test to cover asymmetric behaviors, which is more thorough than the previous test.
Feedback welcome.
Regarding the comment in #127: the line number you indicated the error on doesn't match up with my MR, so I don't think it's related. Anyway, I'm working on MR 120 now.
I thought this was an issue with the module more generally, but it turns out it only happens when using the asymmetric translations patch, so I'll close this and look into it over there.
Just to add some further steps to reproduce: you need to uncheck "Include the current page as a segment in the breadcrumb" and "Use the real page title when available", which are turned on by default. Managed to reproduce it on a fresh simplytest.me install.
The latest commit of https://git.drupalcode.org/issue/inline_entity_form-2822764/-/tree/2822764-support-adding-new-entities-when-translating applies cleanly to 3.0.0-rc20 as a patch. I made a small change to remove @label from the translatable markup, since it was never actually getting replaced.
The failing TranslationTest needs to be fixed, and it seems to me that this patch has unintentionally changed the behavior even when "allow asymmetric translations" hasn't been enabled, as the test fails on line 108, which is well before we enable it.
// Both inline nodes should now be in English.
$first_inline_node = $this->drupalGetNodeByTitle('Kann ein Känguru höher als ein Haus springen?');
$second_inline_node = $this->drupalGetNodeByTitle('Can a kangaroo jump higher than a house?');
$this->assertSame('en', $first_inline_node->get('langcode')->value, 'The first inline entity has the correct langcode.');
$this->assertEquals('en', $second_inline_node->get('langcode')->value, 'The second inline entity has the correct langcode.');
Seeing as this is still occurring, I re-opened the issue and opened an MR. Added the code in the AliasStorageHelper class rather than PathautoGenerator.
I had a look at the linked issues and tried some patches, but they didn't stop this issue from occurring. I also downloaded the latest version of this module and it's still occurring. Where was this fixed? (Using core 10.2.)
The layout issue mentioned in #10 is fixed in https://www.drupal.org/project/drupal/issues/3341669 🐛 Dropbuttons and regular buttons are misaligned Fixed
Even though workbench_moderation does cause this with its long button labels, I feel it should be fixed in core, as I don't see a reason why the list items in the dropbutton shouldn't be at least as wide as the parent item. I also found that on mobile even the main button gets cut off when the text is too long (see screenshot). Shouldn't it wrap?
Hi @mikelutz
For my personal use case I actually don't need to catch an exception when a row is skipped; I'm more interested in when the migration fails. The only reason I included it is that the original issue had it in scope. I'd happily reduce the scope and remove the changes to row skipping if that would help simplify things.
Still not sure why it failed in CI and not locally, but I changed the test to just check the exception name rather than the whole object, since the serialization error was only happening in the test state recording. Tests are passing now.
Setting back to "needs review", as I found a small edge case after the type-hinting changes when the OS mappings are not set.
I've updated the code based on the feedback. It looks like that error was unrelated to my code, but I was able to trace it to the search_api_item table not being installed in SpellCheckTest, so I added it there.
I took a stab at this with MR #52.
It retrieves the mappings from the OpenSearch server and compares them with the local Drupal mappings after event subscribers have fired.
I don't think comparing the original index fields to the new ones will work because the modified mappings after event subscribers have fired aren't stored anywhere and so it's not really possible to compare.
The other thing to be aware of is dynamic mappings in OpenSearch. If you don't explicitly provide a type, it will guess based on the data that is indexed. That means that every time you click save, even if nothing has changed, it will clear the index (because there are no local mappings). This is mostly a problem if you create fields in processors etc., and the fix is just to use the IndexParamsEvent to explicitly give your custom fields the correct types. I have found that if you incorrectly give a Drupal field that is, for example, a float a string type in Search API, then OS will override it with the float type in the mappings once data has been indexed. That will also cause clearing rather than re-indexing, so you need to be careful to assign the correct type.
I did not tackle settings (analysers etc.), only field mappings, and this doesn't negate the need for a blue/green deployment, but it should definitely help reduce the number of times you need to switch indexes.
Same as #13: when using the patch at #15, which uses getUserInput, the blockForm method is called twice on load. The first time, the defaultConfiguration is correct; the second time, all the defaultConfiguration values are null. When I switch getUserInput to getValues like #6, it is only called once and the defaultConfiguration is correct.
This is possible using rank_feature queries in opensearch and it is implemented in search_api_opensearch. See https://git.drupalcode.org/project/search_api_opensearch/-/commit/fa23b3e5f6d24d39d126ab0ad40b974997e16ae4
The search_api_boost value is created for each document and you can use a rank_feature query on that field to boost results.
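For reference, a boost on that field ends up as a rank_feature clause along these lines (index layout and values simplified for illustration; search_api_boost is the field from the commit above):

```json
{
  "query": {
    "bool": {
      "must": [
        { "match": { "title": "kangaroo" } }
      ],
      "should": [
        { "rank_feature": { "field": "search_api_boost", "boost": 2 } }
      ]
    }
  }
}
```

Documents with a higher search_api_boost value score higher without it being a hard filter, since rank_feature only contributes to the score.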
I saw this as well.
Steps to reproduce:
1. Enable PHP8.2 on your environment.
2. Switch to a branch running core 10.2 (where enforceDefaultTranslation is added to ContentEntityBase).
3. Clear cache (no warnings).
4. Switch to a branch running core 10.1 (no enforceDefaultTranslation).
5. Clear cache (warnings).
But I don't think this is an issue with this module.
I've converted those annotations to PHP attributes. Unfortunately, after merging in the latest code from 11.x there is a test failure in testRowSkippedEvent; however, the error message is obscured by the annoying "Exception: Serialization of 'Closure' is not allowed" message, which doesn't actually show what's wrong. When I run MigrateEventsTest.php locally I don't get any errors, so this one is hard to debug. Apparently this is quite a common issue in migrate tests (see related issue).
I do have a stack trace:
/builds/issue/drupal-2756269/core/lib/Drupal/Core/Cache/MemoryBackend.php:119
/builds/issue/drupal-2756269/core/lib/Drupal/Core/Cache/CacheCollector.php:275
/builds/issue/drupal-2756269/core/lib/Drupal/Core/State/State.php:102
/builds/issue/drupal-2756269/core/modules/migrate/tests/src/Kernel/MigrateEventsTest.php:481
/builds/issue/drupal-2756269/vendor/symfony/event-dispatcher/EventDispatcher.php:206
/builds/issue/drupal-2756269/vendor/symfony/event-dispatcher/EventDispatcher.php:56
/builds/issue/drupal-2756269/core/modules/migrate/src/MigrateExecutable.php:247
/builds/issue/drupal-2756269/core/modules/migrate/tests/src/Kernel/MigrateEventsTest.php:354
That shows it is trying to serialize the cached data (which would contain an exception). My guess is that the exception data has a closure in it somewhere, but as I said, it's very hard to debug when I can't reproduce it locally.
Hopefully someone can help pinpoint the issue!
achap → created an issue.
Just to remove any ambiguity, I updated the title to reflect that this is an issue with the 8302 update hook, not 8301. I think that was entered in error.
From looking at the committed code in that other issue, I don't see how it can be related to or fix this issue, as the modified code only affects the password_policy_history table, while this primarily affects user__field_pending_expire_sent. Unless the conclusion is that the error occurs because DB vendors don't follow ANSI SQL, as suggested in #16? My website that was affected by this is hosted on Acquia, if that helps.
Kristen Pol → credited achap → .
Just ran into another error code from our remote service (429) so thought I would attempt to make this a little bit more flexible rather than just logging specific error codes. Unfortunately I couldn't find much documentation on OpenSearch error responses so it's a little hard to test.
This error response from OpenSearch:
Array
(
    [_index] => index_a
    [_id] => entity:node/152691:en-AU
    [status] => 429
    [error] => Array
        (
            [type] => status_exception
            [reason] => Error from remote service: {
                "error": {
                    "message": "You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.",
                    "type": "insufficient_quota",
                    "param": null,
                    "code": "insufficient_quota"
                }
            }
        )
)
Ends up looking like this in the logs:
Error from remote service: { "error": { "message": "Incorrect API key provided: foobar. You can find your API key at https://platform.openai.com/account/api-keys.", "type": "invalid_request_error", "param": null, "code": "invalid_api_key" } } for id: entity:node/189871:en-AU. Status code: 401
I updated the log message to include the status code and also changed "for index" to "for id", since it logs the id of the item, not the index it's in.
I'm getting some composer errors in my pipeline but I think it's unrelated to this change:
$ composer install $COMPOSER_EXTRA
No composer.lock file present. Updating dependencies to latest instead of installing from lock file. See https://getcomposer.org/install for more information.
Loading composer repositories with package information
Updating dependencies
Your requirements could not be resolved to an installable set of packages.
  Problem 1
    - drupal/core 10.2.x-dev conflicts with drush/drush <12.4.3.
    - drupal/core-recommended 10.2.x-dev requires drupal/core 10.2.x-dev -> satisfiable by drupal/core[10.2.x-dev].
    - Root composer.json requires drupal/core-recommended 10.2.x-dev -> satisfiable by drupal/core-recommended[10.2.x-dev].
    - Root composer.json requires drush/drush ^11.0 -> satisfiable by drush/drush[11.0.0-rc1, ..., 11.x-dev].
@jweowu this occurred for us on 9.5. I have run into this issue before, though, and when it occurred I was installing field config programmatically in update hooks, which, from glancing at the update hook, is what happens inside 8302:
if (!isset($sandbox['progress'])) {
  /** @var \Drupal\Core\Config\ConfigInstallerInterface $configInstaller */
  $configInstaller = \Drupal::service('config.installer');
  $configInstaller->installDefaultConfig('module', 'password_policy');
  $sandbox['progress'] = 0;
  $sandbox['max'] = $db_connection->query('SELECT COUNT(uid) FROM {users}')->fetchField();
  $sandbox['current_user'] = -1;
}
So I think you're right that this might be a core issue... but I can't help with much more than that unfortunately.
However, what has fixed it for me in the past is just re-saving the field storage definition without changing it. That makes the error go away:
$update_manager = \Drupal::entityDefinitionUpdateManager();
$field_storage_definition = $update_manager->getFieldStorageDefinition(
  'field_pending_expire_sent',
  'user'
);
$update_manager->updateFieldStorageDefinition($field_storage_definition);
I've added a change record, made some adjustments based on the comments, and also made the getRow method consistent with the rest of the code regarding type hinting/annotations. Will wait and see regarding the annotations issue.
Just ran into this as well with queues and the static cache in the State API. A workaround for those who need it is to use the keyvalue factory service directly, which is what the State API is built on top of and which doesn't use a static cache. I agree with #5 that it would be good to have a way of busting the cache for a particular key using State.
6 years later and a need for this came up again.
I think I have addressed most of the issues from #19. I added additional tests for everywhere the migration fails, ensuring an event is dispatched. In addition, I passed the original exception to the event, since in a few cases MigrateExecutable catches the generic \Exception, which means it will catch all exceptions, giving classes further up the stack no chance to react. In my case, a source plugin can throw a SuspendQueueException, but this was being caught by MigrateExecutable and never reached the queue runners, so I needed to re-throw it in my event subscriber.
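To illustrate why passing the original exception matters, a subscriber can re-throw the exception types the outer caller needs to see. The class names below are simplified stand-ins for illustration, not the core or Drush API:

```php
<?php

// Simplified stand-in for Drush's queue-suspension exception.
class SuspendQueueException extends \RuntimeException {}

// Simplified stand-in for the proposed event carrying the original exception.
class MigrationFailedEvent {
  public function __construct(public readonly \Throwable $exception) {}
}

// Subscriber: re-throw so the queue runner further up the stack can suspend
// the queue instead of the exception dying inside MigrateExecutable.
$onMigrationFailed = function (MigrationFailedEvent $event): void {
  if ($event->exception instanceof SuspendQueueException) {
    throw $event->exception;
  }
};

$caught = NULL;
try {
  $onMigrationFailed(new MigrationFailedEvent(new SuspendQueueException('remote down')));
}
catch (SuspendQueueException $e) {
  $caught = $e->getMessage();
}
```

Without the original exception on the event, the subscriber could only react generically and the queue runner would never see the suspension request.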
Removed the additional ideas from the issue summary as they are outside the scope of the core issue. I guess they can be opened in new issues if they're still wanted.
Thanks for the MR. Checking whether fields have changed does seem like a good idea, but I think a clear rather than a re-index will be necessary, as alluded to in previous comments. From the OpenSearch docs:
If you want to create or add mappings and fields to an index, you can use the put mapping API operation. For an existing mapping, this operation updates the mapping.
You can’t use this operation to update mappings that already map to existing data in the index. You must first create a new index with your desired mappings, and then use the reindex API operation to map all the documents from your old index to the new index. If you don’t want any downtime while you re-index your indexes, you can use aliases.
See docs: https://opensearch.org/docs/latest/api-reference/index-apis/put-mapping/
So if we could detect if a field doesn't yet exist then we could use the re-index flag, and if it does already exist then we would need to clear. However that could get complicated if one field is added and another changed etc.
So just comparing which fields exist in the new index vs the original, and only clearing then, already seems like an improvement. However, I think the clear would need to come before the calls to updateSettings and updateFieldMapping.
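For the additive case described in the docs quote above, the put mapping operation is a PUT to the index's _mapping endpoint (index and field names here are hypothetical) with a body like:

```json
{
  "properties": {
    "my_new_field": { "type": "keyword" }
  }
}
```

Adding a brand-new field this way succeeds without downtime; changing the type of a field that already maps to indexed data is what triggers the "create a new index and reindex" requirement.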
This also only occurred for me once I installed webprofiler; hook_event_dispatcher was working fine before then. Given that this is the case for others as well, maybe this issue should be closed in favour of the related one in tracer?
Also, I'm not entirely sure whether the test failures are due to my new code, because the tests were already failing on the upstream branch.
Based on my understanding, that hook is called after all the batches have been queued for processing, so it's too late for my use case. I want to prevent them from being included in the queue in the first place, as in my case removing them gets rid of about 20,000 nodes that would otherwise need to be processed.
Although dispatching the event in EntityUpdateManager is probably not necessary: in that case the event would already have been fired if we are coming from a batch process, or we are just saving an individual node, and in the latter case I could just use the hook.
I think the module will still have issues with subdomain languages, as it just removes the www during comparison, but this should fix the language prefix issue.
Just want to clarify my comments from #7: it should work if you pass the --uri parameter when running via drush. Cron I'm less certain about, but I imagine you can set the correct value somewhere.