```diff
+++ b/publication_date.module
@@ -52,17 +52,21 @@ function publication_date_entity_base_field_info(EntityTypeInterface $entity_typ
+function publication_date_form_alter(&$form, FormStateInterface $form_state, $form_id) {
+  $type = isset($form_state->getBuildInfo()['base_form_id']) ? preg_split("/_form$/", $form_state->getBuildInfo()['base_form_id'])[0] : '';
+  if (!\Drupal::service('publication_date.enabled_entity_types')->isEnabled($type)) {
+    return;
+  }
   $account = \Drupal::currentUser();
-  $node = $form_state->getFormObject()->getEntity();
+  $entity = $form_state->getFormObject()->getEntity();
   ...
 }
```
First, I would check whether the form object is a ContentEntityFormInterface. If so, then we can be sure that FormStateInterface::getFormObject()->getEntity() returns a content entity, so we can get the entity type ID without any assumption made on the base form ID:

```php
function publication_date_form_alter(&$form, FormStateInterface $form_state): void {
  $form_object = $form_state->getFormObject();
  if (!$form_object instanceof EntityFormInterface) {
    return;
  }
  $entity = $form_object->getEntity();
  if (!$entity instanceof FieldableEntityInterface) {
    return;
  }
  $entity_type_id = $entity->getEntityTypeId();
  if (!\Drupal::service('publication_date.enabled_entity_types')->isEnabled($entity_type_id)) {
    return;
  }
  ...
}
```
```diff
+++ b/publication_date.module
@@ -52,17 +52,21 @@ function publication_date_entity_base_field_info(EntityTypeInterface $entity_typ
-  $form['published_at']['#access'] = $account->hasPermission('set any published on date') || $account->hasPermission('set ' . $node->bundle() . ' published on date');
+  $form['published_at']['#access'] = $account->hasPermission('set any published on date') || $account->hasPermission('set ' . $entity->bundle() . ' published on date');
```
These permissions need to be adjusted: e.g. both node and taxonomy_term entities might have a "foo" bundle.
```diff
+++ b/publication_date.services.yml
@@ -1,5 +1,10 @@
+    class: Drupal\publication_date\PublicationDateEnabledTypes
+    arguments:
+      - '@module_handler'
   publication_date.event_subscriber:
     class: Drupal\publication_date\EventSubscriber\PublicationDateSubscriber
+    arguments: ['@publication_date.enabled_entity_types']
     tags:
```
Can we standardize the format of the arguments? Personally, I prefer the format currently used on "publication_date.enabled_entity_types" since it is git-friendly.
```diff
+++ b/publication_date.tokens.inc
@@ -13,11 +13,15 @@ use Drupal\Core\Render\BubbleableMetadata;
+  sleep(0);
```
I guess this is only a debug leftover.
```diff
+++ b/publication_date.tokens.inc
@@ -35,22 +39,22 @@ function publication_date_tokens($type, $tokens, array $data, array $options, Bu
   /** @var \Drupal\node\NodeInterface $node */
-  $node = $data['node'];
+  $entity = $data[$type];
```
This variable comment does not make any sense anymore... Maybe it can be replaced by a condition:

```php
$entity = $data[$type];
if (!$entity instanceof FieldableEntityInterface) {
  return;
}
```
```diff
+++ b/src/PublicationDateEnabledTypes.php
@@ -0,0 +1,62 @@
+
+
+namespace Drupal\publication_date;
+
+
+use Drupal\Core\Extension\ModuleHandlerInterface;
```
Nit: Unnecessary empty lines.
```diff
+++ b/src/PublicationDateEnabledTypes.php
@@ -0,0 +1,62 @@
+ * Class PublicationDateEnabledTypes
+ * @package Drupal\publication_date
```
Nit: Leftover from IDE defaults; needs a proper class description.
```diff
+++ b/src/PublicationDateEnabledTypes.php
@@ -0,0 +1,62 @@
+  }
+}
```
Nit: missing empty line.
Obviously, we still need some magic to install / uninstall the field whenever it is needed, which (if possible) could be resolved by a ConfigEvent subscriber (save, delete).
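A minimal sketch of what such a subscriber could look like (the config name and the _publication_date_sync_fields() helper are hypothetical, only the event wiring is real Drupal API):

```php
namespace Drupal\publication_date\EventSubscriber;

use Drupal\Core\Config\ConfigCrudEvent;
use Drupal\Core\Config\ConfigEvents;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;

/**
 * Installs / uninstalls the published_at field on config changes (sketch).
 */
class PublicationDateConfigSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents(): array {
    return [
      ConfigEvents::SAVE => 'onConfigChange',
      ConfigEvents::DELETE => 'onConfigChange',
    ];
  }

  public function onConfigChange(ConfigCrudEvent $event): void {
    // Only react to this module's own settings changes.
    if ($event->getConfig()->getName() === 'publication_date.settings') {
      // Hypothetical helper that installs or uninstalls the base field
      // definitions for the currently enabled entity types.
      _publication_date_sync_fields();
    }
  }

}
```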
Job https://git.drupalcode.org/project/field_fallback_formatter/-/jobs/1780323 revealed that Renderer::renderPlain() is deprecated.
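If I read the change record correctly, the replacement is Renderer::renderInIsolation(); a sketch of the mechanical swap ($build is a placeholder render array):

```php
// Before (deprecated as of Drupal 10.3):
$markup = \Drupal::service('renderer')->renderPlain($build);

// After:
$markup = \Drupal::service('renderer')->renderInIsolation($build);
```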
Adding screenshot about what I see on my test environment.
I am not able to reproduce the issue with Drupal 10.3.x-dev (d4883273125fe) and File URL 2.x HEAD (commit https://git.drupalcode.org/project/file_url/-/commit/2598cbf53c8df4424b8...).
After typing in 'foo' in the remote file url input, I cannot simply switch to the "Upload file" radio and get a file upload input unless I click the "Remove" button. After I do that, I can upload a file without any issue, and the test entity is saved as expected.
Could you please extend FileUrlWidgetTest and reproduce the error?
Adding the related Webform issue.
I dug into the problem and I don't think there is any issue with this module. The docblock of the Link render element declares that #url must be a \Drupal\Core\Url object, so Menu Link Destination can assume that the value at this key is a Url object.
- Based on #10, the reported issue is caused by Admin Toolbar Extra Tools (a link renderable array without #url).
- @alorenc had another case where the error was slightly different (calling getOption() on bool), and that is caused by a link renderable having its #url key set to FALSE. This link is created by Webform; see 🐛 Webform might create invalid link renderable arrays on submission list. Needs review

I propose to close this issue, referring to the corresponding bugs of the contrib modules. Maybe the module could check whether the link renderable array it is processing is valid and log an error if it isn't... but wouldn't that be overkill? (Imho it would be.)
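If we did want such a defensive check, it could be as small as this (sketch only; the logger channel name is made up):

```php
use Drupal\Core\Url;

// Skip (and log) link renderables whose #url is missing or invalid.
if (!isset($element['#url']) || !$element['#url'] instanceof Url) {
  \Drupal::logger('menu_link_destination')
    ->error('Invalid link renderable: #url is missing or not a Url object.');
  return $element;
}
```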
Adding old-school patch files.
BramDriesen → credited huzooka → .
Addressed review feedback, asking for a next review.
Adding the usual traditional patch too.
Just noticed that I had an unfinished sentence in the test.
Asking for review.
I also add the old-school patch files.
huzooka → created an issue.
- Restored the original hook for Drupal versions lower than 10.2.0
- Added helper which determines whether we need the new hook or not
- Renamed the original hook to _ui_patterns_field_form_alter, and aligned it a bit so it can work both on the old storage form and also on the new, combined form
Adding a traditional patch.
huzooka → made their first commit to this issue’s fork.
Adding the same patch as in #4, but with suppressed deprecation warnings in MetaEntityAccessTest.
I have a PoC patch for this which also includes the code I have in 🐛 Target entity tags should be invalidated even if meta entity is new Needs work .
Leaving in NW.
Maybe we can merge this issue and #3081067: Do not invalidate the target entity cache when the meta entity is updated →
Uploading patch.
Adding related issues
Adding patch on top of 💬 Slick filter schema is incomplete Fixed (fixed, but not yet released).
Makes sense totally, RTBC
@drunken monkey, Yeah, you're totally right! I think I got so lost in the details that I lost focus too!
Yes, we had this issue with the (SAPI) field data fetched from the SAPI index. And your conclusion in regard to adding the cache tag to Search API field plugins, instead of adding it as the row's cache tag, makes more sense.
Moving to NW because of #11.
I think there is a good chance that 🐛 View recalculated with wrong data from cache Needs review is related to this issue (not because the possibility of the exceptions thrown during indexing, but because of the cache metadata).
Feel free to remove it from the relations if you disagree.
My team also ran into an issue very similar to this; but our case was a bit more specific. Our indexes are configured to re-index items immediately, so we use post request indexing.
We have an SAPI index view using VBO. We had to build a Views Bulk Operation (VBO) action which changes SAPI-indexed entities. Our (content manager) users are using this action on a SAPI index-based Views display. So, these users visit the views page (which is a form), pick some items from the views list, choose the action and submit the VBO form.
What we've seen is that the Views display wasn't ever updated with the right data. Every other SAPI Views list using the same index seemed to be updated accordingly, but this admin view was stuck. The only solution which worked was to force a render cache invalidation.
I've spent days trying to catch the moment when things go wrong. Now, I'm quite confident to say that the bug we ran into is indeed about wrong cache metadata.
Steps to reproduce:
- SAPI index based view + VBO
- Index being configured to index items immediately.
- User performs a VBO action which changes indexed entities.
- After the form is submitted, Drupal displays the same view, changes are not visible on the current VBO Views page
- ...While at every other SAPI index view (using the same index), changes are visible;
- After dropping the render cache, the VBO Views page displays the expected (changed) data.
This is what happened in the PHP threads in our case, after submitting the form, without any patches applied:
If we refresh the SAPI VBO views page after these above:
- Request arrives
- HtmlRenderer starts rendering.
- SearchApiQuery is executed, because the index cache tag was invalidated. But: no views rows are re-rendered: render cache thinks they're all up-to-date!
- Rendered response is sent back to the client.
In my opinion:
- If A.6 and A.7 happen before B.3 (so: indexing + SAPI cache tag invalidation happens before the SAPI views rows are re-rendered), then there will be no cache "corruption"; everything will be up-to-date.
- Otherwise, our users see outdated views results, which are cached indefinitely.
Right now, users see the same (outdated) views rows on the VBO views form page until the render caches are rebuilt.
I agree that the problem is about corrupted / wrong cache metadata; but in my honest opinion, we should solve the problem the other way around: every SAPI index-based Views row must also include the cache tag of the index the displayed data belongs to – because that is the source of the data we display in SAPI index based views!
SearchApiQuery seems to be fine: in our case, we put our entities into different indexes, depending on the state of the entity (we have separate indexes for "published" and "unpublished" entities). If the action moves an item into the other index, the corresponding row gets removed from the view in real time.
What we did: We simply added the corresponding search_api_list:<index-id> cache tag to the cache tags of the SAPI views row. After doing this, things started to work logically (for me): after performing the VBO action, the first response (still) returned outdated data; but after a page reload, I saw the right results.
Of course, in our case, we were able to (and we still have to) work around things, since we want to display the right results to our users even after the form submission; but I'm sure that the solution to this problem (or at least one part of the solution) is to add the index cache tag to the Views rows too.
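For reference, a hedged sketch of the idea (in our real fix the tag was attached at row level; hook_views_pre_render() is just the simplest place to illustrate it, and mymodule is a placeholder):

```php
use Drupal\search_api\Plugin\views\query\SearchApiQuery;
use Drupal\views\ViewExecutable;

/**
 * Implements hook_views_pre_render().
 */
function mymodule_views_pre_render(ViewExecutable $view) {
  $query = $view->getQuery();
  if ($query instanceof SearchApiQuery) {
    // Attach the index list cache tag, so invalidating
    // search_api_list:<index-id> also invalidates the rendered output.
    $view->element['#cache']['tags'][] = 'search_api_list:' . $query->getIndex()->id();
  }
}
```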
With the additional cache tag, the "backtrace" after form submission does not change:
But after reloading the VBO page, we see a bit more happening: the corresponding Views row gets re-rendered too!
- Request arrives
- HtmlRenderer starts
- SearchApiQuery::execute starts
- SearchApiQuery::execute finished
- Changed SAPI views field gets re-rendered <- this is what did not happen before!
- HtmlRenderer finishes
- Response sent back to client
What do you think about this fix?
Attaching an old-school test-only, fix-only and complete patch, hoping that they will be tested by CI.
TBH, at Index::indexSpecificItems(), I would simply wrap every line, from the first one potentially throwing a SearchApiException, into a try / finally statement, and move the cache tag invalidation into the finally block. (So, trusting your judgement, from the line where you add the extra cache tag invalidation now.):
```php
/**
 * {@inheritdoc}
 */
public function indexSpecificItems(array $search_objects) {
  if (!$search_objects || $this->read_only) {
    return [];
  }
  ...
  // Remove all items still in $items from $rejected_ids. Thus, only the
  // rejected items' IDs are still contained in $ret, to later be returned
  // along with the successfully indexed ones.
  foreach ($items as $item_id => $item) {
    unset($rejected_ids[$item_id]);
  }
  try {
    ...
  }
  finally {
    // Clear search api list caches.
    Cache::invalidateTags($tags_to_invalidate);
  }
  // Clear the static entity cache, to avoid running out of memory when
  // indexing lots of items in one process (especially via Drush).
  \Drupal::getContainer()->get('entity.memory_cache')->deleteAll();
  return $processed_ids;
}
```
Checked the PR and it seems to be fine.
I'm not a big fan of unrelated CS violation fixes (as they usually cause merge conflicts), but I would leave this up to the maintainer's decision. Maybe a test assertion could also be added which verifies that the back button is present in the corresponding case.
Re #10
- I spent some time trying to mock a timeout but unfortunately it didn't work (with my current knowledge I don't think it will as long as the test and tested Drupal instance are running in the same PHP thread).
-
the config object should always contain all the valid keys, even if they're empty - so that config validation can occur.
If the configuration key(s) are nullable, then they do not always have to be included in the config object. You can verify this if you add this to the settings form test (we cannot trigger schema violations if we use the config form through the test browser):
```php
// These do not cause a config schema violation as long as the
// "http_client_config" and "http_client_config.timeout" keys are defined
// as being nullable.
$config_factory->getEditable('imagecache_external.settings')
  ->set('http_client_config', [])
  ->save();
$config_factory->getEditable('imagecache_external.settings')
  ->set('http_client_config', ['timeout' => 10])
  ->save();
$config_factory->getEditable('imagecache_external.settings')
  ->set('http_client_config', ['timeout' => '20'])
  ->save();
$config_factory->getEditable('imagecache_external.settings')
  ->clear('http_client_config')
  ->save();
// ...But any of these do!
$config_factory->getEditable('imagecache_external.settings')
  ->set('http_client_config', ['missing_key' => 'value'])
  ->save();
$config_factory->getEditable('imagecache_external.settings')
  ->set('http_client_config', ['timeout' => 'string'])
  ->save();
$config_factory->getEditable('imagecache_external.settings')
  ->set('http_client_config', 'string')
  ->save();
```
- Thank you very much for the honor, but I'll pass. (I am terribly behind on the maintenance of my other modules, unfortunately 🫣.)
Reverting my previous commit, status set back to NW.
I think I addressed the concerns raised regarding the test coverage; back to NR
Attaching patch on top of the MR of ✨ SVG support RTBC
To be honest, on the one hand, I didn't want to think about what the value should be in the case when we want to inherit the default configuration. On the other hand, because of DX: as the maintainer of a Drupal site, when I update a module, I prefer not to see any changes in a module's configuration if it works like it did before (mainly because it reduces the noise I see in the commits I make when I upgrade a module).
huzooka → created an issue.
Adding an old-school patch on top of ✨ Add option to override the inherited HTTP client configuration Needs review .
The new configuration option is nullable (similar to #3396502) because of BC.
Added a new test to ImagecacheExternalTest.
Test came back green, unassigning.
@Claudiu Cristea has a wonderful module https://www.drupal.org/project/http_request_mock → that makes it easy to create HTTP service mocks, but unfortunately I wasn't able to figure out how to test / mock timeouts... I basically spent yesterday afternoon trying to do that, but nothing I tried led to success, so I'm just testing the configuration here.
To be fully BC, the new configuration is declared as nullable, so no preexisting ImagecacheExternal installation will have any change in their module settings if this FR is merged and released.
Sorry, I forgot to set the right issue status; keeping the assignee.
I think this MR is fine as it is. The situation is actually that the module protects the backend from entities having the invalid IRIs described in the IS. The edge case is properly handled, i.e. it cannot happen via SparqlEntityStorage (neither on load nor on save).
Maybe the thrown exception could be added to the docblock of the corresponding SparqlEntityStorageInterface methods, but that's only a nit.
The Kernel test additions are fine too.
My personal preference would be that the method you test in ParamConverterTest (SparqlEntityStorageConverter::convert()) is set up in a way that it is able to return even an entity. So if I fix the entity definition param to be an array with the required key 'type', and then add a valid entity to the test provider, the test run must fail, because convert() returns a non-null value. This might help to catch regressions in the future (or catch changes in upstream). But I repeat, this is only my preference. Attaching an old-school diff as reference.
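Roughly what I mean, as a sketch (the provider shape, IRIs and expectations are made up, not the actual test code):

```php
/**
 * Data provider sketch for ::testConvert().
 *
 * The last case covers a valid, existing entity, so ::convert() is
 * expected to return a non-null value for it.
 */
public function convertProvider(): array {
  return [
    'missing entity' => ['http://example.com/not-persisted', NULL],
    'existing entity' => ['http://example.com/apple', 'http://example.com/apple'],
  ];
}
```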
NW because of https://www.drupal.org/project/search_api/issues/3394367#mr95-note218856 🐛 Not all fields that have a target_type setting are entity references Fixed
claudiu.cristea → credited huzooka → .
Attaching fix-only patch.
Seems that Gitlab CI is out of office hours...
Added example command to IS.
This was at most a task, not a bug... and let's face it, none of you used this tool, because then you would not have been dealing with the coding standard, but with the actual bugs.
Let's close this
We just ran into this issue with 1.3.0 (the issue still seems to be present in 1.x too).
We have a custom module named contact_form. If we try to send an email on behalf of this module, Drupal Symfony Mailer won't find a matching email builder (nor a dynamically created instance using the LegacyEmailBuilder class).
How we worked around this temporarily
We prefixed the mail module param with an extra string, and we added an email builder plugin with the matching ID.
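Roughly like this (the prefixed plugin ID and class are made up for illustration; the base class name is recalled from memory, so treat it as an assumption):

```php
namespace Drupal\contact_form\Plugin\EmailBuilder;

use Drupal\symfony_mailer\Processor\EmailBuilderBase;

/**
 * Email builder for our module, registered under a prefixed ID so it
 * does not collide with Drupal Symfony Mailer's own "contact_form"
 * plugin definition.
 *
 * @EmailBuilder(
 *   id = "mysite_contact_form",
 *   label = @Translation("Contact form (custom)"),
 * )
 */
class ContactFormEmailBuilder extends EmailBuilderBase {
}
```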
Analysis
This happens because we won't have any builder plugin definition for this module. Why? Let's see what happens during the discovery phase under the hood:
- The discovery specified in DefaultPluginManager discovers all the available email builder plugins.
- EmailBuilderManager::processDefinition() processes the definitions. In the case of the ContactPageEmailBuilder plugin, the plugin ID contact_form must also contain the ID of the config entity type associated with the email builder plugin (its has_entity annotation is set to TRUE), so ::processDefinition() checks whether an entity type with ID contact_form is available or not. It isn't (we don't have the contact module enabled), so the provider in the definition is set to _.
- symfony_mailer_mailer_builder_info_alter() starts to alter these definitions. Our module's hook_mail() implementation gets discovered, but since Drupal Symfony Mailer already defines a plugin with ID contact_form, the logic inside the foreach loop won't generate a builder plugin instance for our module, because there is already a builder plugin with that ID!
- Since the contact_form builder plugin's provider is '_', it gets removed in DefaultPluginManager::findDefinitions().
- So when we try to send an email through that module with MailManagerReplacement::mail(), EmailBuilderManagerInterface::createInstanceFromMessage() won't find any matching email builder plugin, so it doesn't return anything (which is actually equal to returning NULL in this case) ➡️ we get the same error message as in the IS (but for a slightly different reason).
Suggestion for a BC-safe fix
Separate the module name, the builder plugin ID, and the ID of the related entity type:

- Instead of assuming that plugin ID === module name, utilize the provider annotation property: the provider could be added to the plugin annotations: provider => "contact". Any module can define a plugin on behalf of other modules! This could replace the problematic parts in point 2. If this turns out to be trickier than ideal, introduce a 'provider_module' or 'module' annotation instead, but I think that using provider is enough.
- Instead of assuming that the related entity type is the same as the plugin ID (as I see, this happens if has_entity is set to TRUE), add a 'content_entity_type_id' (or something similar) annotation property (and maybe deprecate 'has_entity'). BTW, does this module use this info for anything else than calculating the provider? (Didn't check yet, just thinking out loud.)
- Change the logic behind EmailBuilderManager::createInstanceFromMessage():
  - Instead of assuming that plugin IDs match the $module or $module.$key pattern, find the corresponding plugin based on the provider annotation of the builder plugin (also checking the sub_types, if any). But! The current approach could be kept as a fallback (so we can still use plugin IDs like mymodule.my_email_key, but it won't be a must.) AND
  - Make EmailBuilderManager use FallbackPluginManagerInterface, and convert LegacyMailBuilder into a real (annotated) plugin, which then can be used as the fallback builder plugin (e.g. this is what the Block(Plugin)Manager does in Drupal core). This can eliminate the requirement for the logic implemented in symfony_mailer_mailer_builder_info_alter().
@AdamPS, what do you think?
First attempt. Only attaching patch files, no MRs this time.
huzooka → created an issue.