I needed to adapt the patch a little to apply to 10.2.11
Attaching it.
I found an additional solution in my notes.
--- web/modules/contrib/glossify/templates/glossify-link.html.twig 2023-05-08 12:03:53.000000000 +0000
+++ web/modules/contrib/glossify/templates/glossify-link.html.twig 2024-05-08 16:54:06.872025951 +0000
@@ -11,4 +11,4 @@
* @ingroup themeable
*/
#}
-<a href="{{ tipurl }}" {% if tip %}title="{{ tip }}"{% endif %} class="glossify-tooltip-link">{{ word }}</a>
+<a href="{{ tipurl }}" {% if tip %}title="{{ tip }}"{% endif %} class="glossify-tooltip-link">{{ word }}</a>{# this or any comment here removes the erroneous newline that causes spaces when enabling twig debugging, noticeable when terms are right before non-blank characters like punctuation - see https://www.drupal.org/project/glossify/issues/3363487 #}
However, I cannot tell you how or why this works.
I definitely had a bright moment when implementing this :-)
This is needed in addition to the fix in core mentioned above for this edge case (the very bug the OP describes; sorry, we drifted off to more general cases).
The fact that debug mode does not lead to problems when terms are followed by space characters is down to the browser collapsing multiple space characters into one.
As we have an official release that already works with 10.x, I think this can be closed?
tgoeg → created an issue.
Chiming in, as an accessibility check of one site showed that glossified links should have an "aria-label attribute set that shows the purpose of the link", so something like "Glossary definition for " I guess.
This string should be customizable and translatable, as implementations might differ. Some might call it glossary, some dictionary, other lexicon, etc.
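A sketch of how that could look in glossify-link.html.twig; note that the hardcoded label string and its placement here are my own invention, not existing module code, and in a real implementation the string should come from a (translatable) module setting:

```twig
{# Sketch only: the label text is hardcoded for illustration; a real
   implementation would read it from a configurable, translatable setting. #}
<a href="{{ tipurl }}" {% if tip %}title="{{ tip }}"{% endif %} aria-label="{{ 'Glossary definition for @word'|t({'@word': word}) }}" class="glossify-tooltip-link">{{ word }}</a>
```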
You can also use tokens, e.g. [webform_submission:webform_score]
on the confirmation page.
See https://git.drupalcode.org/project/webform_score/-/commit/765d39f5c7c66e... for details.
I don't know if you addressed me, but IMHO the problem lies within core's way of inserting debugging comments itself.
There is an ongoing debate of whether enabling debug mode can be expected to give the same (rendered) results as when disabling it. And some core devs seem to be reluctant to accept any changes regarding the current markup as a site is expected to possibly fail when debugging is enabled.
I'd change core's debugging functionality as I showed above, as it doesn't hurt but fixes a few issues (on my side at least).
Depending on whether this is the way to go, there is either nothing to do here, or some additional workaround has to be implemented to get rid of those extraneous newlines.
Although I don't see 🐛 Twig debug mode causes parse error in appendXML (Needs work) in my installation, it seems very much related, and both ways of fixing it outlined above may be possible solutions for it as well.
I can't reproduce this anymore; it seems to work with the stock .htaccess config in the composer package.
It also has a different rewrite rule than the one mentioned above.
See 🐛 Apache shows 403 forbidden when "destination=" contains url encoded question marks (%3F) (CVE-2024-38474) (Closed: works as designed) for details.
If others can confirm that a current 10.2 or 10.3 also fixes this, we can close this issue.
tgoeg → created an issue.
Be aware that in the case of Choices 9.0.1 (contrary to the current master), which gets included by webform, the URL is *not* cdn.polyfill.io but
<script src="https://polyfill.io/v3/polyfill.min.js?features=es5%2Ces6%2CArray.prototype.includes%2Cfetch%2CCustomEvent%2CElement.prototype.closest"></script>
as mentioned by the OP.
If you use some kind of search & replace mechanism, it should rather be something along these lines:
"scripts": {
"drupal-scaffold": "DrupalComposer\\DrupalScaffold\\Plugin::scaffold",
"post-install-cmd": [
"sed -i 's#https://polyfill\.io/#https://polyfill-fastly\.io/#g' web/libraries/choices/public/index.html"
]
},
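To sanity-check the substitution outside of composer, the same sed expression can be tried on a sample line (the sample script tag here is made up for demonstration):

```shell
# Feed a sample script tag through the same sed expression and check
# that the host gets rewritten to polyfill-fastly.io.
echo '<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>' \
  | sed 's#https://polyfill\.io/#https://polyfill-fastly.io/#g'
```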
As already mentioned in 🐛 polyfill.io Library is no longer considered safe to use (Fixed), I'd very much favor composer.libraries.json including this or some other form of patching the file without user interaction for the time being. Users should get a secure setup just by running composer update for their Drupal installation; at least that's my way of thinking.
We can update to a fixed version later on (or switch to a better maintained lib as mentioned above) anyway.
I understand all of that, and yes, there are transitional mitigations available, but why not change the root cause and stop composer.libraries.json
from downloading libraries that pull in the malware-ridden version in the first place?
I don't want code in my codebase that is known to cause trouble (even if it gets an override).
This issue is closed/fixed, so I just want to raise awareness that the root cause is not fixed in my opinion.
Choices seems to be unmaintained anyway. Pull requests for fixing the issue (and others) are there, unanswered.
What about adding https://github.com/alanhamlett/Choices/commit/0dae0283740f4f619c7589e7b1... as a (local) patch in composer.libraries.json?
This would be the kind of fix I'd think of.
Is it expected after an upgrade to webform-6.2.3 to still have these JS libraries source polyfill.io?
$ grep -lr "polyfill\.io" web/libraries/
web/libraries/algolia.places/src/navigatorLanguage.js
web/libraries/algolia.places/dist/cdn/places.js
web/libraries/algolia.places/dist/cdn/placesAutocompleteDataset.js
web/libraries/algolia.places/dist/cdn/placesInstantsearchWidget.js
web/libraries/algolia.places/dist/cdn/places.js.map
web/libraries/algolia.places/dist/cdn/placesAutocompleteDataset.js.map
web/libraries/algolia.places/dist/cdn/placesInstantsearchWidget.js.map
web/libraries/choices/README.md
web/libraries/choices/public/index.html
The current composer.libraries.json
at https://git.drupalcode.org/project/webform/-/blob/6.2.x/composer.librari... still sources an old https://github.com/Choices-js/Choices/archive/refs/tags/v9.0.1.zip and https://registry.npmjs.org/places.js/-/places.js-1.19.0.tgz
I guess this issue should be re-opened and these dependencies removed as well!
I guess this can be closed, then.
I am not sure whether this solves all the problems people face here, but I have a completely different approach that solves it for me with views at least and does not introduce any additional logic.
When I check for empty fields, the problem is not the added XML-commented debug output per se, but the added whitespace/line breaks that are not inside the comments and therefore lead Twig to believe there is content indeed.
I first thought that removing the line breaks would mess up the generated markup (but I wanted to live with that, as long as it solved the initial problem of seeing different rendered results with and without debug mode enabled).
But as it turns out, both Firefox and Chromium add line breaks themselves when displaying markup in the browser's developer tools, so the markup looks just the same as with the patch. And it makes Twig checks for empty fields truly true :-)
I am not a developer so please improve as you see fit. Just posting here as it fixes it for me and seems less intrusive.
Applies against 10.1.x and 10.2.x.
Hi and thanks for your quick patch!
It will take some time to check this, as I am currently buried in work with no end in sight. Could @hexabinaer perhaps jump in for the time being?
I think the algorithm is legit and most likely a better approach, as it keeps the numeric fields in their correct form and allows sorting, etc., and only converts to strings for the purpose of faceting, which I think should be OK for values with a differing number of digits.
This should definitely be tested. I also had to work around this problem recently: I had a numeric field indexed as a string for faceting to work properly, but I also wanted to use it to sort results. That didn't work out, as the values were in the range 0 < value < 100, so single-digit values were sorted the wrong way, i.e. alphanumerically (9 > 80), which is not what visitors expect.
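The difference is easy to demonstrate on the command line (plain sort compares character by character, sort -n compares numerically):

```shell
# Lexicographic sort puts 100 and 80 before 9, since "1" < "8" < "9".
printf '9\n80\n100\n' | sort
# Numeric sort orders the values as visitors would expect.
printf '9\n80\n100\n' | sort -n
```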
As you only add a numeric field for faceting, this should be cleaner.
When looking at the patch, this line jumped out:
- $facetResults = $this->getFacetData($query->getIndex()->id(), array_keys($facets), [$filters], $keys);
+ $facetResults = $this->getFacetData($index, array_keys($facets), [$filters], $keys);
In all the other locations, $index = $query->getIndex() is used, not $index = $query->getIndex()->id().
You know I am no real dev so this might be totally OK, just wanted to mention it.
Meilisearch itself and the PHP SDK/API allow for providing effective relevancy. It's just that this very module does not yet implement this and I guess a relevancy of 1 is the default, then.
I tend to say we should transform this into a feature request?
I am also trying to track down relevancy problems as my boosts don't seem to cause any difference (but I think that the API key I use - intentionally not the master key - has some permission problems).
Relevancy is also always 1, in my case (in the view display for anonymous users). However, I do not get these errors in the log.
Do you have any custom twig involved that's possible typecasting to int here?
Or maybe this is a bug and my relevance is always 1 indeed (which won't cause any typecasting errors :) ). Which makes me wonder, I'd definitely like to see different relevancy. Your output seems to imply there is some relevancy that's not 1, which puzzles me.
I don't think this module reports any relevancy at all, yet, however.
https://php-sdk.meilisearch.com/classes/Meilisearch-Contracts-SearchQuer... says it would be $showRankingScore. I don't see any mention of this in the code; it should probably go into some $meiliOptions, I guess.
Chiming in here and adding another case where this causes problems: 🐛 newline on glossify-link.html.twig causes extra space (Active).
I also had this in a custom theme when checking whether a field is empty (which it was, but not when the debug code inserts newlines).
This does not only cause problems in non-HTML output, as the RSS example might imply.
All of my problems got fixed by the following patch, and I think it does more good than harm, so I think it could be included in core.
Current Firefox and Chromium add newlines to comments in developer tools anyway, so this does not really cause any disadvantage for developers, I think, and it really fixes problems in a lot of places (in my case).
Applies against Drupal 10.1
--- /core/themes/engines/twig/twig.engine.org 2024-05-08 16:37:50.990994524 +0000
+++ /core/themes/engines/twig/twig.engine 2024-05-08 16:38:47.900186459 +0000
@@ -63,8 +63,8 @@
throw $e;
}
if ($twig_service->isDebug()) {
- $output['debug_prefix'] .= "\n\n<!-- THEME DEBUG -->";
- $output['debug_prefix'] .= "\n<!-- THEME HOOK: '" . Html::escape($variables['theme_hook_original']) . "' -->";
+ $output['debug_prefix'] .= "<!-- THEME DEBUG -->";
+ $output['debug_prefix'] .= "<!-- THEME HOOK: '" . Html::escape($variables['theme_hook_original']) . "' -->";
// If there are theme suggestions, reverse the array so more specific
// suggestions are shown first.
if (!empty($variables['theme_hook_suggestions'])) {
@@ -106,17 +106,17 @@
$prefix = ($template == $current_template) ? 'x' : '*';
$suggestion = $prefix . ' ' . $template;
}
- $output['debug_info'] .= "\n<!-- FILE NAME SUGGESTIONS:\n " . Html::escape(implode("\n ", $suggestions)) . "\n-->";
+ $output['debug_info'] .= "<!-- FILE NAME SUGGESTIONS:\n " . Html::escape(implode("\n ", $suggestions)) . "-->";
if (!empty($invalid_suggestions)) {
- $output['debug_info'] .= "\n<!-- INVALID FILE NAME SUGGESTIONS:";
- $output['debug_info'] .= "\n See https://api.drupal.org/api/drupal/core!lib!Drupal!Core!Render!theme.api.php/function/hook_theme_suggestions_alter";
- $output['debug_info'] .= "\n " . Html::escape(implode("\n ", $invalid_suggestions));
- $output['debug_info'] .= "\n-->";
+ $output['debug_info'] .= "<!-- INVALID FILE NAME SUGGESTIONS:";
+ $output['debug_info'] .= " See https://api.drupal.org/api/drupal/core!lib!Drupal!Core!Render!theme.api.php/function/hook_theme_suggestions_alter";
+ $output['debug_info'] .= " " . Html::escape(implode("\n ", $invalid_suggestions));
+ $output['debug_info'] .= "-->";
}
}
- $output['debug_info'] .= "\n<!-- BEGIN OUTPUT from '" . Html::escape($template_file) . "' -->\n";
- $output['debug_suffix'] .= "\n<!-- END OUTPUT from '" . Html::escape($template_file) . "' -->\n\n";
+ $output['debug_info'] .= "<!-- BEGIN OUTPUT from '" . Html::escape($template_file) . "' -->";
+ $output['debug_suffix'] .= "<!-- END OUTPUT from '" . Html::escape($template_file) . "' -->";
}
// This output has already been rendered and is therefore considered safe.
return Markup::create(implode('', $output));
I think the whitespace is introduced by Html::serialize($html_dom) on line 199 of GlossifyBase.php, a process which by default inserts space between the strings being recombined.
I don't think that's correct. $html_dom (or even $word before it) already has that extra space, if you enable Twig debugging, that is.
At least that's the problem in my case. Disabling debug mode makes the problem go away.
It is caused by the closing comment
<!-- END OUTPUT from 'modules/contrib/glossify/templates/glossify-link.html.twig' -->
.
And I explicitly added a newline here, before the comment, as that is what Drupal core does as well.
This very newline causes the whitespace to be inserted.
The following patch for Drupal core fixes it for me (as it does in several other cases where debug mode manipulates the DOM in such a way that it causes differences in rendering).
--- /core/themes/engines/twig/twig.engine.org 2024-05-08 16:37:50.990994524 +0000
+++ /core/themes/engines/twig/twig.engine 2024-05-08 16:38:47.900186459 +0000
@@ -63,8 +63,8 @@
throw $e;
}
if ($twig_service->isDebug()) {
- $output['debug_prefix'] .= "\n\n<!-- THEME DEBUG -->";
- $output['debug_prefix'] .= "\n<!-- THEME HOOK: '" . Html::escape($variables['theme_hook_original']) . "' -->";
+ $output['debug_prefix'] .= "<!-- THEME DEBUG -->";
+ $output['debug_prefix'] .= "<!-- THEME HOOK: '" . Html::escape($variables['theme_hook_original']) . "' -->";
// If there are theme suggestions, reverse the array so more specific
// suggestions are shown first.
if (!empty($variables['theme_hook_suggestions'])) {
@@ -106,17 +106,17 @@
$prefix = ($template == $current_template) ? 'x' : '*';
$suggestion = $prefix . ' ' . $template;
}
- $output['debug_info'] .= "\n<!-- FILE NAME SUGGESTIONS:\n " . Html::escape(implode("\n ", $suggestions)) . "\n-->";
+ $output['debug_info'] .= "<!-- FILE NAME SUGGESTIONS:\n " . Html::escape(implode("\n ", $suggestions)) . "-->";
if (!empty($invalid_suggestions)) {
- $output['debug_info'] .= "\n<!-- INVALID FILE NAME SUGGESTIONS:";
- $output['debug_info'] .= "\n See https://api.drupal.org/api/drupal/core!lib!Drupal!Core!Render!theme.api.php/function/hook_theme_suggestions_alter";
- $output['debug_info'] .= "\n " . Html::escape(implode("\n ", $invalid_suggestions));
- $output['debug_info'] .= "\n-->";
+ $output['debug_info'] .= "<!-- INVALID FILE NAME SUGGESTIONS:";
+ $output['debug_info'] .= " See https://api.drupal.org/api/drupal/core!lib!Drupal!Core!Render!theme.api.php/function/hook_theme_suggestions_alter";
+ $output['debug_info'] .= " " . Html::escape(implode("\n ", $invalid_suggestions));
+ $output['debug_info'] .= "-->";
}
}
- $output['debug_info'] .= "\n<!-- BEGIN OUTPUT from '" . Html::escape($template_file) . "' -->\n";
- $output['debug_suffix'] .= "\n<!-- END OUTPUT from '" . Html::escape($template_file) . "' -->\n\n";
+ $output['debug_info'] .= "<!-- BEGIN OUTPUT from '" . Html::escape($template_file) . "' -->";
+ $output['debug_suffix'] .= "<!-- END OUTPUT from '" . Html::escape($template_file) . "' -->";
}
// This output has already been rendered and is therefore considered safe.
return Markup::create(implode('', $output));
Adding this to 🌱 [META] twig debug can break parts of site (Active) as well.
Created a page in the cookbook for the time being, currently in review.
I still feel encoding options as raw values (my solution #3) should be the default behavior OOTB, as it would render the manual workarounds obsolete.
Still seeing this on a 10.1 site on a page where a subnavigation is put into one block as well as a Webform into another. The additional ID is only added when loading the second page of the form (which is not configured to use AJAX).
Changing CSS to target the class and not the ID (as advised above) solves it for me, but still this seems like a lurking bug.
This seems to be https://github.com/meilisearch/meilisearch/issues/4060 as we have a numerical field in this index.
Switching it to string for this field fixes it for us.
I see two issues:
- This very issue. We know there are only a handful of users using v1.x. They also have to have a numerical field and facets. If they upgrade, they will see this issue (i.e., have a working setup, upgrade, have a broken setup). Given the low probability of this actually happening, I think this issue here is enough for documentation purposes for the time being.
- The other part, however, is the general problem this poses. As it does not stem from upgrading at all, I will open another issue: 🐛 Using facets on numerical fields makes them disappear (Active)
If maintainers feel this is good enough then this issue can be closed.
Not saying that your patch might not be beneficial; the way I solved this is the following:
- Drupal multisite setup (so yes, multiple sites on prod and dev)
- A dev and a prod Meilisearch instance (on separate servers, but this could be on the same server with different ports as well)
- Config split switches between the two instances based upon where the site lives
I would, however, never advise running dev and prod on one server, however small the website may be, considering you get a VPS instance for 3 EUR/USD a month.
No need to justify yourself, and don't feel patronized, your patch might very well be useful.
I just wanted to note this down for anyone looking to set this up similarly.
Works on 10.1 for me; I'll see whether it does so on 10.2, as the needed core patch 📌 Add a 'machine_name' widget for string field types with a UniqueField constraint (Needs work) does not seem to be in good shape.
Yes, as stated in my original report, the GUI setting you show works indeed as it should.
However, drush does not produce anything useful when using --exporter=json or --exporter=yaml!
Tested with current v6.2.2.
Do you get any meaningful output when you issue the following?
$ vendor/bin/drush wfx myform --exporter=json > /tmp/exporttest
> [notice] Exported 441 of 441 submissions…
$ file /tmp/exporttest
/tmp/exporttest: gzip compressed data, extra field, has comment, last modified: Thu Feb 7 16:27:08 2019, max compression, from Unix, original size modulo 2^32 173806772
$ gzip -dc /tmp/exporttest | less
gzip: /tmp/exporttest has flags 0x5c -- not supported
If I substitute the 0x5c in the file with 0x00, I can tar xzf /tmp/exporttest and get a file submission-3.json, which is half broken, but it still shows the output is JSON inside a (broken) .tar.gz.
1) The output is unusable
2) The output is compressed when the option/usage info does not say so. Either do output JSON/YAML as one would expect or state you get a .tar.gz (which should not be corrupt :-) )
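For reference, the flags (FLG) byte sits at offset 3 of the gzip header, so the by-hand fix mentioned above can be sketched like this (demonstrated on a four-byte sample header rather than the real export file; use a copy of the export in practice):

```shell
# gzip header layout: 0x1f 0x8b (magic), 0x08 (deflate), then the FLG byte.
# Build a sample header with the bogus 0x5c flags byte
# (octal escapes for portability: \037\213\010\134 = 1f 8b 08 5c):
printf '\037\213\010\134' > /tmp/gzip-flg-demo
# Overwrite the FLG byte at offset 3 with 0x00, as described above.
printf '\000' | dd of=/tmp/gzip-flg-demo bs=1 seek=3 count=1 conv=notrunc 2>/dev/null
od -An -tx1 /tmp/gzip-flg-demo   # → 1f 8b 08 00
```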
I'm OK with a separate module, though I feel this should be the standard (secure) way of setting things up (and installed by default, then?). Maybe adding a note on the module's front page that using the submodule providing key management is strongly recommended for those not seasoned with manual key/index setups can help.
You mentioned API and Search key.
Haven't thought of search keys at all until now. The concept sounds good - a search client (Drupal frontend) only gets the least privilege, i.e. only permissions for searching. However, as the backend lives on the same installation, it is kind of limited here. When ingesting/indexing content is done on one machine and searching only on another, that absolutely makes sense. But yeah, why not, it doesn't hurt. And in case there's some vulnerability in the search code only, the index's contents are not at risk at once.
It should not be too hard to solve the backward compatibility I think: Let's take this over to ✨ Change master key to API and search key for improved security Active
If this issue at hand is available as a submodule, I think I can offer testing. It's just that I am currently stuck on 1.x, as the upgrade to 2.x breaks my faceted searches, even when I add the new faceting submodule. No errors whatsoever, I just don't get any results, but that's a different story. Just so you know this might be delayed quite a bit.
Just tested with current stable and I cannot reproduce it anymore - at least not the problem I mentioned above.
I think I am of no big help, then.
If that's the case, the option should be added to the help output, together with a note that it is only available if the module is installed.
As it is now, no one knows of it unless they read this ticket.
I cannot reproduce my initial issue in 10.1.x anymore, nor do I see the AJAX errors mentioned in
#3033849-8: Rewrite / Replacement pattern not working for (date?) fields in revisions →
Everything works as expected.
Therefore, I'll close this issue.
Thanks for the update!
$ drush pm:enable webform_submission_export_import
> [notice] Updated default configuration to de
> [warning] No configuration objects have been updated.
> [notice] Message: Es wurden keine Konfigurationsobjekte aktualisiert.
>
[success] Successfully enabled: webform_submission_export_import
$ drush wfx --help | head -n7
Exports webform submissions to a file.
Arguments:
[webform] The webform ID you want to export (required unless --entity-type and --entity-id are specified)
Options:
--exporter[=EXPORTER] The type of export. (delimited, table, yaml, or json)
$ drush pm:list | grep "Submission Export/Import"
Webform Webform Submission Export/Import Enabled 6.2.2
Is this a hidden parameter option or is there something wrong with my installation?
I don't see --exporter=webform_submission_export_import as a possible option here.
I also get "attribute search_api_datasource is not filterable" when I
- create an index
- also use "node grants" in the filters (in hope of not indexing e.g. unpublished content)
- search as an anonymous user
It's hard for me to tell whether this is related. Or shall I raise a separate issue?
I could not find a solution, but I managed to resolve all the other conflicts, upgraded to CKEditor 5, and it is working again.
For anyone who wants a quick but more or less proper fix without touching code for the time being:
Export your site's config, grep for "offset: null" in the view config .ymls and edit it to say "offset: 0" (or whatever valid value you want).
Reimport config. Site running again.
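A sketch of this workaround as shell commands, demonstrated on a sample file (the file name and directory are made up); the same grep/sed would be run over your real config export directory between drush cex and drush cim:

```shell
# Rewrite the invalid "offset: null" in exported view configs to a valid 0.
mkdir -p /tmp/config-demo
printf 'display:\n  default:\n    offset: null\n' > /tmp/config-demo/views.view.search.yml
grep -rl 'offset: null' /tmp/config-demo \
  | xargs -r sed -i 's/offset: null/offset: 0/'
cat /tmp/config-demo/views.view.search.yml
```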
I lost track of the original cause/error for this, as it is so long ago, but I think this also happens/triggers a similar error when setting up a Search API view/search with facets placed above the search in a views_block_area and enabling AJAX.
In this situation, I get
{"message":"No facet link or facet blocks found."}', name: "AjaxError"
I found a note stating:
The problem is that the "block-facets-ajax" class is not being added to the rendered block.
Sorry for being so vague, but maybe that helps in tracking this down somewhat.
tgoeg → created an issue.
So this is most likely also something that Meilisearch itself should provide better messages and error codes for.
For the situations where we do get code 0 back, this is definitely a better error message.
I wonder whether there might be explicit cases of an unreachable server which the old message should still be shown for.
But on the other hand, looking into the logs is never a bad idea, so setting this RTBC.
OK, RTFM'ing sometimes really helps :-)
That key derivation first irritated me, but after reading the reasoning behind it, it totally makes sense.
This also means it's of no use to file a feature request, as API keys simply cannot be changed.
It also rules out situations where people might probably change API keys on the Meilisearch server but forget to keep it in sync with Drupal, which could be considered an advantage.
All of this might be too theoretical anyway. I think the way you did it right now works for 99% of the setups. And that's what this issue was all about at first: to remove the master key from the config and have a convenient way of managing the keys from Drupal itself, so one can have a secure setup.
This seems fulfilled to me.
Let's wait for other use cases if there are any.
I could think of one, btw :-)
This is an actual project that might see the light of day next year. I'd want one central Drupal instance to feed an index, and multiple websites able to search through subsets of this index (based on given filter criteria).
It would mean one default setup with the routines you laid out to initially create the index on the central instance. Nothing fancy yet.
On the other instances, I would manually enter prefixes and keys, as I would very likely want them to not be able to delete the index, add or delete documents, etc. One could probably prepare for such a scenario by adding an additional checkbox "read-only access", which would create the API key with a reduced set of permissions.
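Such read-only access maps onto Meilisearch's key API: a POST /keys request whose body grants only the search action. A rough sketch (the description and the index prefix are made up for illustration):

```json
{
  "description": "Read-only search key for the satellite sites",
  "actions": ["search"],
  "indexes": ["central_*"],
  "expiresAt": null
}
```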
I am still not sure how to get the actual content from the central Drupal site, then, whether search_api can handle non-local content for indexed data at all, and whether this is a good idea. Or whether this module needs to be the one that indexed the data itself in order to use it again (maybe not anymore since the fix in 📌 Remove entity id mapping config (Fixed)?).
But this would probably be one application for another kind of key generation.
All of this sounds reasonable to me!
I think a warning should be added that keys will get deleted (haven't looked at the actual code yet, maybe it is like this already). Most of the time this will be desired, but there may be cases where an index+key may be used from multiple Drupal instances or even different frontends.
Taking care of every hypothetical setup should not be the aim here, however. I'd expect admins of complex setups to be able to handle them manually, for themselves. A warning/note however should protect from unexpected accidents.
> Sadly how key management works, every time the prefix gets changed we'd have to generate a new key.
Yes, that's something that irritated me as well. Should we perhaps file a feature request for an UPDATE feature of the /keys endpoint, to allow changing the index and/or the key?
Comments overlapped.
Yes, a confirmation and deletion of the generated key totally makes sense. I do already have a lot of unused keys from development tries. This would definitely make setups cleaner and mean less manual admin work on the Meilisearch backend.
Yeah! That looks good to me as well! Probably re-use some of my description texts to make it clearer.
Would I be able to deliberately delete the key and re-activate "Generate API key"?
There might be situations where I want to have a new key when one is configured already. Maybe when a new/empty Meilisearch server gets used and I want to start anew.
Or probably do not deactivate the checkbox and only show a red warning when it is selected and a key is already configured?
No!
And sorry, I seem to not have been clear.
I DO think it's a good idea for Drupal to setup keys. Something along the workflow depicted in the mockups.
It's just that the usage of prefixes (which is a good design decision IMHO) will not always be backwards compatible with existing setups. Only in the case of existing indexes that do NOT conform to a prefixed namespace will an admin have to manually manage keys outside of Drupal.
And that will be a very small percentage of users, if they exist at all.
Even if people have existing indexes, I - for one - would advise to drop them, generate a key like in the mockups and create the indexes anew, now conforming to the prefixed setup.
I think what needs additional attention is the creation of new indexes.
When a prefix (and accompanying key) is setup, only indexes starting with the prefix can be created. So misconfiguration can technically be ruled out as wrong indexes will not be able to be created anyway.
However, it might be helpful if it were possible to restrict new index creation to names starting with the prefix.
If that's too complicated, the error message when trying to create an index with a different prefix should state that the index does not start with the prefix, not just that index creation failed because of missing permissions.
Seems I did not press save the last time I wrote an answer to this, sorry.
I didn't know about the wildcard option. But I like the idea very much. Do you know why? My indexes already have a prefix, the domain they are being used on :-)
So yes, this feels pretty natural to me and I guess it will to anyone having a multisite setup.
I am unsure about the consequences when deleting an index. If a wildcard API key exists, it will most likely continue to exist; I think I remember that API keys vanish when their index gets deleted (but I may be wrong). Creating an index should also be fine: as long as the prefix is correct, no additional key is needed (which would otherwise be the case; a per-index key needs to exist to be able to use the index, and therefore creating a new index would not make too much sense).
I currently get an "Index already exists" error if I drush cim a config with an existing index. Would this be "fixed" then? I do not think an index should be deleted in that case, even if that would be possible now.
But that's a different issue actually, which I think needs a little modification as well.
There might be applications where protecting an index from getting deleted might make sense. My current (manually-created-key-)setup does just that. Drupal may in fact only really add and remove documents, and search through them, which feels adequate for a production instance. No way to really destroy anything.
I think people who want just this can however still lock down keys manually. This module should probably not break if people do it, however.
This way, I think this could be added to the core functionality right away. It does not really seem to have the potential to bring disadvantages to anyone, and it would prevent people from using a master-key based setup. I think that's something that should never be possible. It feels like saving your root password into some configuration in plain text.
If this shall be a separate submodule then there should be a big fat warning that running without it is not at all recommended.
I might be a little biased, but I think prepopulating the (editable) prefix with the transliterated domain of the page (e.g. my-website.co.at would become my_website_co_at_ as a prefix) could help beginners a lot to get a clean setup.
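A sketch of such a transliteration on the command line (lower-casing the domain, replacing everything outside [a-z0-9] with underscores, and appending a trailing underscore):

```shell
# Turn a domain into an index prefix suitable for Meilisearch index names.
echo 'my-website.co.at' | tr 'A-Z' 'a-z' | sed 's/[^a-z0-9]/_/g; s/$/_/'
# → my_website_co_at_
```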
Regarding your questions:
> The first one is if there is already an existing active server with indexes. For this, I think we should just set the prefix to an empty string and leave the master_key as was set.
Yes and no. It will keep legacy setups running, yes. But I'd like to see an upgrade path for this. It would be great if one could provide a prefix (that's already in use) and a key, basically just like the new algorithm laid out above would do, except that the key would not be created, but an existing one could be entered.
If the algorithm would just fill in the prefix and the (generated) key upon clicking some "generate" button (which also needs the master API key supplied), then those fields could be editable as well, with a warning stating that the key and prefix either need to exist already (for upgrades) or one should better use the generator if a new key/prefix (and probably index) is preferred.
> The second is removing of submodule. So if the API key stays there searching, modifying, and deleting existing indexes should not be a problem, but if someone tries to add a new index to the server it will only work if the admin adds the prefix that works with the API key when naming the index.
Yes. And a note for this would need to be added. Which makes part of this functionality appear in the main module again. I think this should go into the main module (did I already mention that?). But, as I stated above, using a submodule is a possibility as well. The big fat warning™ from above would need an additional info that exactly those problems will arise if anyone decides to remove the submodule. I can't think of any meaningful setup that would need this, though.
Here are two mockups and the HTML that generates them.
I am unsure whether this should probably better be put into a separate tab. The double "generate key" buttons are no good solution yet. Probably just add a link to the second tab and do the key generation there.
Alternatively, it could be fit right into the basic configuration page already as well, should not be too long.
As you can see my screenshots are partially German. I am from Austria, so I can provide translations as well (later on).
Is there anything open that prevents releasing this as a stable/supported version? Drupal 9 will soon reach EOL, so a stable release for D10 would be important.
I'd say we should restrict the scope here to whether error handling makes sense, regardless of how/where errors originate, to potentially give a better clue for admins/devs if they occur.
Having differing data sources sounds like a useful addition that will have its application but should probably be created as a separate feature.
It does have a way!
If you request the index's details with
xidel -s -e '$json' -H 'Content-Type: application/json' -H "Authorization: Bearer mypass" 'http://127.0.0.1:7700/indexes/mydomain'
(You might as well use curl; I use xidel because it understands and formats JSON and XML, can use XQuery/XPath, etc.)
you get some fields back, one of them being the primary key:
{
"uid": "mydomain",
"createdAt": "2023-07-12T13:48:59.640002336Z",
"updatedAt": "2023-08-24T12:03:38.907236139Z",
"primaryKey": "id"
}
I am however not sure whether meilisearch_php exposes that, but gathering from https://php-sdk.meilisearch.com/classes/Meilisearch-Endpoints-Indexes.html it seems so.
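For illustration only (a sketch, not the SDK's API): pulling the primary key out of such a response is plain JSON parsing. The response body below is the example from above:

```python
import json

# Example response body as returned by GET /indexes/<uid>;
# timestamps are just the sample values from above.
response = """{
  "uid": "mydomain",
  "createdAt": "2023-07-12T13:48:59.640002336Z",
  "updatedAt": "2023-08-24T12:03:38.907236139Z",
  "primaryKey": "id"
}"""

print(json.loads(response)["primaryKey"])  # id
```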
Also not a dev, but I think I can at least clear up things a little.
First, this is not really a decimal problem (as strange as it may sound). Mathematically, your number is a fraction, and SQL stores it as the exact data type DECIMAL with a given precision and scale. There is also an explicit FLOAT data type in SQL, but that one is not guaranteed to be exact.
The basic underlying problem is that computers "think" in binary, not in decimal. Representing an arbitrary number with many places after the decimal point will inherently produce errors (depending on the number: some ("machine numbers") map cleanly to the binary base and some do not, which is why people here see this only with select numbers). See https://en.wikipedia.org/wiki/Round-off_error for details.
The internal representation of a number like 0.123456 might be 0.12345600000000001 because of such rounding errors. That's why rounding to the exact configured scale does make sense: it is no actual rounding of the given number in the human sense, it just makes sure the number is exactly the one entered and has exactly as many decimal places as configured.
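A quick Python illustration of the effect (the same holds for PHP floats and any other IEEE 754 implementation):

```python
from decimal import Decimal

# 0.1 and 0.2 are not machine numbers in binary, so their sum picks up
# a tiny representation error:
x = 0.1 + 0.2
print(x)         # 0.30000000000000004
print(x == 0.3)  # False

# Rounding to the configured scale (here: 1 decimal place) recovers
# exactly the number the user entered:
print(round(x, 1) == 0.3)  # True

# An exact decimal type, like SQL's DECIMAL, avoids the issue entirely:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```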
Does the patch in #2230909-292: Simple decimals fail to pass validation → not fix it for you? That would be important information.
Didn't I fix this in 🐛 Primary key inference fails for Meilisearch >=1.x and fields ending in "id" (includes patch) (Fixed) already? At least it is part of the current 1.1.0 release (and I guess of 2.0 as well; I did not have the time to check yet). It's hard-coded to id.
Directly handling the problem at hand does not really make sense, yes.
I just think the error handling is not ideal, this might be improved.
I could imagine situations (that was part of the discussion in some other issue as well) where an existing index might get attached to a setup running this module. This could pretty easily hit the error described here, if the primary key differs. And I think it should get caught cleanly.
Our posts overlapped. Yes, your diagnosis is absolutely correct!
Did a full DB and site files clone to a dev instance.
Restored to 1.1.0-rc1.
The second site works flawlessly.
The first one breaks again with the error in the title, Document doesn't have a `search_api_document_id` attribute.
If I index with the older SearchApiMeiliSearchBackend.php
and then switch to the current version, at least I have a version that somehow works for both sites. Indexing new (special?) nodes however will not work in this setup I guess.
Something's off.
The error
Drupal\search_api\SearchApiException while adding Views handlers for field on index mydomain Node Index: Could not retrieve data definition for field '' on index 'mydomain Node Index'. in Drupal\search_api\Item\Field->getDataDefinition() (line 482 of /var/www/drupal/web/modules/contrib/search_api/src/Item/Field.php).
is also back again.
I'm pretty convinced at this point that this is indeed a bug and not a configuration/environment problem on my end.
Finally found it!
Due to some previous version, I had a primary key of search_api_document_id in the first page's index.
Dropped all items from the index and changed the primary key with
curl -X PATCH 'http://localhost:7700/indexes/myindex' -H 'Content-Type: application/json' --data-binary '{ "primaryKey": "id" }' -H "Authorization: Bearer mypass"
directly in Meilisearch.
I am unsure whether this warrants some error handling somewhere around createField() in src/Plugin/search_api/backend/SearchApiMeiliSearchBackend.php. This seems to be the cause of the successive error stating field '', i.e. a field that never got created.
protected function getSpecialFields(IndexInterface $index, ItemInterface $item = NULL): array {
  $fields = parent::getSpecialFields($index, $item);
  $fields['id'] = $this->getFieldsHelper()->createField($index, 'id', [
    'type' => 'string',
    'original type' => 'string',
  ]);

  if ($item) {
    $fields['id']->setValues([MeilisearchUtils::formatAsDocumentId($item->getId())]);
  }

  return $fields;
}
createField() comes from Search API's src/Utility/FieldsHelper.php, which has no error handling either. It in turn calls new Field(), which I was unable to track down further.
Why exactly I got the error only for two nodes is still a mystery to me.
Unsure what to do with this ticket now. This will most likely never hit anyone.
Please decide whether some of the errors presented should get some better handling and/or tests.
tgoeg → created an issue.
tgoeg → created an issue.
Regarding #3311063-34: Add support for Ckeditor 5 →: I created a patch over at #3371353-5: VideoEmbedWysiwyg::getValidMatches() does not detect two videos next to each other →.
@loopy1492: Can you please try to apply both patches and report whether that fixes it for you? It's not really related to this issue at hand, but I'd like to get rid of this negative feedback here so we can push this (and the other ticket) to RTBC.
Here is a patch for the above MR. It would be great if we could set ✨ Add support for Ckeditor 5 (Needs review) to RTBC, but that means that #3311063-34: Add support for Ckeditor 5 → (though not related to that ticket but to this one) should probably be fixed as well, which this patch here seems to do.
No worries, thanks for the quick fix.
Drupal 9 is on its way out anyway, we're just not quite there, yet.
Sorry I did not mention the Drupal version used.
I'll close this as it is fixed for me and already pushed to 2.0.1.
> Removing the default export options will likely cause unexpected regressions for other people.
No, it won't, as they are not removed.
As already laid out, src/Commands/WebformSubmissionCommands.php calls (on line 172)
$submission_exporter->setExporter($export_options);
in addition to (on line 171)
$export_options += $submission_exporter->getDefaultExportOptions();
setExporter() in src/WebformSubmissionExporter.php in turn calls
$export_options += $this->getDefaultExportOptions();
So you end up calling getDefaultExportOptions() twice (and merging it into $export_options twice).
As also laid out and tested, the default options do get set with only one call (which should be no surprise).
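To illustrate why the second merge is redundant (Python stand-in for PHP's array union operator `+=`; the option names below are made up):

```python
# PHP's `$options += $defaults` keeps existing keys in $options and only
# adds keys that are missing. Dict unpacking with the defaults first
# behaves the same way in Python.
defaults = {"delimiter": ",", "excluded": ["uuid"]}  # made-up option names
options = {"delimiter": ";"}

merged = {**defaults, **options}        # first merge (drush command)
merged_again = {**defaults, **merged}   # second merge (inside setExporter())

print(merged == merged_again)  # True: the second merge changes nothing
print(merged["delimiter"])     # ";" - user-supplied values win
```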
Yours to decide.
I can open up a second issue as well, if you like.
For anyone applying this as a patch in the meantime: Note that this creates new files and because of this (https://github.com/cweagans/composer-patches/issues/514), setting a fixed patch level is mandatory, like so:
"extra": {
    "composer-patches": {
        "package-depths": {
            "drupal/core": 2
        },
        "comment": "The above is already in preparation for composer-patches v2"
    },
    "patchLevel": {
        "drupal/core": "-p2",
        "comment": "Or else we get files created everywhere, see https://github.com/cweagans/composer-patches/issues/514"
    }
}
Yeah, thanks, found out in the meantime.
This is https://github.com/cweagans/composer-patches/issues/514
It has the potential to considerably mess up an installation or even create files in locations where they get read/parsed/interpreted.
Using patches is safer than using git repos for fixing up stuff, but this is a pretty hefty disadvantage. I don't remember seeing a warning for this in any doc regarding patching (but I may be wrong).
Sorry - the patch itself is correct. It's just composer-patches that messes things up in case of newly created files.
Because it tries all of patch's values from -p0 to -p4, it creates new files everywhere. Currently investigating how to prevent that.
The patch in #2907414-31: File module's FilemimeFormatter icon display renders empty image → produces 4 additional files
core/b/core/modules/file/templates/file-mime-icon.html.twig
core/b/core/themes/classy/templates/field/file-mime-icon.html.twig
core/b/core/themes/stable/templates/field/file-mime-icon.html.twig
core/b/core/themes/stable9/templates/field/file-mime-icon.html.twig
that do not differ from the ones at their correct location.
Attached patch should fix it.
This patch should be added to the new separate modules classy, stable and stable9.
I'll open issues there, too.
The patch https://www.drupal.org/files/issues/2023-05-17/drupal-9-5x-machine-name-... → pulled in by composer.json is broken.
It creates unnecessary files. See #2685749-127: Add a 'machine_name' widget for string field types with a UniqueField constraint → for a fix.
The patch in
#2685749-116: Add a 'machine_name' widget for string field types with a UniqueField constraint →
for Drupal 9.5 is broken.
It adds two unnecessary files
core/b/core/tests/Drupal/FunctionalJavascriptTests/Core/Field/Widget/MachineNameWidgetTest.php
core/b/core/lib/Drupal/Core/Field/Plugin/Field/FieldWidget/MachineNameWidget.php
They do not differ from the ones at the correct path, so I just removed them.
It should look like the one attached.
tgoeg → created an issue.
I think I have found the problem.
A careful look at my initial report above shows
sh: 1: exec: /usr/local/bin/drush -y -l my.domain.org: not found
That is to say, sh tries to exec() a binary called '/usr/local/bin/drush -y -l my.domain.org', which does not exist; the actual binary is only the first part of this string.
That's what escapeshellarg() seems to have masked (not really fixed).
So to reproduce, one has to call drush <some drush parameter> tome:static; without an additional parameter it will succeed.
I have to use -l my.domain.org as this is a multisite setup.
The attached patch fixes it for me (although I don't get any actual output on Drupal 10 + Drush 12 despite the patches in the issue queue, but I think this is another issue; at least it finishes now.).
What it basically does is it conserves any provided parameters/switches in a separate array and does not merge them into the executable string.
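The failure mode can be reproduced outside of Drupal/Tome. A Python stand-in (the principle is identical whenever a process is spawned without going through a shell):

```python
import subprocess

# Wrong: the whole command line ends up as argv[0], so the OS searches
# for a binary literally named "/bin/echo -n hello" - exactly the
# "not found" error from the report above.
try:
    subprocess.run(["/bin/echo -n hello"])
except FileNotFoundError:
    print("not found")

# Right: keep the executable and its switches as separate argv entries.
result = subprocess.run(["/bin/echo", "-n", "hello"],
                        capture_output=True, text=True)
print(result.stdout)  # hello
```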
I guess the test modules/tome_base/tests/src/Unit/ExecutableFinderTraitTest.php needs to be adapted as well, as I changed the return type in ExecutableFinderTrait.php; it also seems not to have covered this case.
I am however just a sysadmin, so I cannot provide any tests (no clue how this works, sorry).
Please also have a good review regarding the code presented - it might be a crude solution.
However, feedback would be welcome if it basically tackles the root problem for others as well!
Probably try the patch in 🐛 Export and access checks (Needs review) or build on that. It seems to improve the handling of exports that produce fewer results than expected, at least for the webGUI-based export. I don't know whether the cron job (drush wfx?) you use follows this code path as well.
If you don't use drush wfx yet, it might be another option to circumvent timeouts. Note that --excluded-columns is currently broken, but there's a fix over at 🐛 drush wfx does not respect --excluded-columns (Needs review).
JSON export with drush is however also broken at the moment, see 🐛 drush wfx exporters yaml and json produce unusable, binary output Active .
So maybe use drush to export a CSV and transform this to JSON for the time being.
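Converting such a CSV export to JSON is a few lines in most languages; a sketch in Python with made-up column names:

```python
import csv
import io
import json

# Stand-in for a CSV produced by `drush wfx` (column names made up):
csv_text = "sid,name,score\n1,Alice,10\n2,Bob,7\n"

# Each CSV row becomes one JSON object keyed by the header line.
rows = list(csv.DictReader(io.StringIO(csv_text)))
print(json.dumps(rows, indent=2))
# rows[0] == {"sid": "1", "name": "Alice", "score": "10"}
```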
Reopening, as 🐛 Webform 6.1.4 breaks drush data export (Fixed) only fixed the export (it mentioned this problem here, but this ticket is more precise, i.e. it only describes the import issue), is already closed, and the most current 6.2.x still shows this error.
Right, with more knowledge of the code this is the correct place to fix it, i.e. only for the drush command and not globally as my attempt did.
I built on top of that and removed the extraneous adding of default options, as setExporter() does it anyway and that function is used by the webGUI-based export as well, so it should stay there (though it might be better to fetch the default options only once, before calling setExporter(), in both the drush and the webGUI-based export).
The measured overhead however is minimal, caching and PHP optimizations seem to take care of it.
Tested it with and without my removed line.
In both cases, the uuid column is not exported, as it is part of the default excluded columns.
Works for me!
Tests passing, so setting to needs review.
I can confirm my exports now work on 6.2.0-beta6 plus the patch in #3345191-13: Export and access checks → , thanks!
However, I cannot test/verify whether the actual issue at hand here is (still) fixed, as I only partly understand the problem and do not have an existing problem caused by it that would be fixed (and neither do I use the domain module).
The patch in #3345191-4: Export and access checks → applied to the current 6.2.0-beta6 breaks drush wfx for me.
This seems related to the already mentioned 🐛 Webform 6.1.4 breaks drush data export (Fixed).
I get infinite loops of
> [notice] Exported 0 of 7 submissions…
> [notice] Exported 0 of 7 submissions…
> [notice] Exported 0 of 7 submissions…
with this patch applied. No further info when using --debug.
For those who prefer a patch for the time being:
3311063-35.patch →
Applies against current stable version 2.x.
Might enable easier test feedback.
Works for me.