These two functions are part of the API. And it seems that this issue might affect any backend, not just Solr. I think the check has to happen in Search API itself.
Do you agree with moving the issue there?
Thanks for the patch.
But I have several concerns.
In theory, Drupal can run on a non-SQL database. So the code needs to check for the DB driver. In the case of MySQL, the current value of wait_timeout could be read and factored into the calculation.
In general, wouldn't it be better to simply re-open the connection instead of keeping it alive?
Maybe that code could be written independently of a specific database driver so that we don't need SQL at all.
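To illustrate both options, here's a minimal sketch using Drupal's database API (the surrounding keep-alive logic is assumed, this is not actual module code):

```php
use Drupal\Core\Database\Database;

$connection = Database::getConnection();

// Option 1 (MySQL only): read the server's wait_timeout so it can be
// factored into the keep-alive calculation.
if ($connection->databaseType() === 'mysql') {
  $wait_timeout = (int) $connection->query('SELECT @@SESSION.wait_timeout')->fetchField();
}

// Option 2 (driver-agnostic, no SQL at all): instead of keeping the
// connection alive, close it and re-open it when the long-running job
// resumes.
Database::closeConnection();
$connection = Database::getConnection();
```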
In general, you should create a PR against our GitHub repository to get the patch tested.
The default of the module should not target a specific development environment (even though I use ddev myself).
I will keep the default as it is.
But I will accept a patch that changes it dynamically if a ddev environment is detected.
mkalkbrenner → created an issue.
@berdir Thanks for the hint. You're right. This was introduced by 🐛 "If you don't want to translate your URL alias, the original URL alias won't work with your translations" (Needs work).
mkalkbrenner → created an issue.
I resolved the Drupal 11.2 conflict.
And here's a patch for 11.1.8.
I thought about this again. I agree that it might ease contributing, especially in cases where a third-party backend like Solr or Elasticsearch is required.
But instead of putting a complete ddev config inside each module, a single config file should be sufficient, comparable to GitLab CI, PHPUnit, GitHub Actions, etc.
So: a single YAML file that contains the essential ddev settings and dependencies. In that case, another component or module is of course required that creates the complete ddev environment from it.
Thanks for your patch. But a similar command already exists in the search_api_solr_admin submodule:
https://git.drupalcode.org/project/search_api_solr/-/blob/4.x/modules/se...
search_api_solr_log never supported facets 2. The dependency on facets_exposed_filters was part of the initial commit:
https://git.drupalcode.org/project/search_api_solr/-/blame/4.x/modules/s...
So I don't know how you installed it without facets 3 before.
I totally agree that we should do more major or minor releases instead of patch level changes.
But we had that discussion multiple times in the past. Some Solr hosting providers and/or Drupal service providers don't understand semantic versioning correctly (I don't want to publish names here). They refuse to apply important changes or bugfixes quickly if doing so requires a major or minor update, which must run through additional "quality assurance processes".
In the past, that put a lot of additional load on the project to maintain multiple versions. We can't cover that in our free time.
So we had to react by keeping Search API and Search API Solr at the same major.minor version for quite some time now to bypass their stupid rules.
Yes, a version constraint should be added to composer.json and info.yml.
Maybe it's better to make the change like in branch 3530298-too-few-arguments-2.
Please review.
I think the right pattern in Drupal is to set the values of member variables in the constructor, not in the create() method.
OK, the parent didn't change the constructor; it added one:
https://git.drupalcode.org/project/views_data_export/-/commit/dabf06138c...
I don't understand what you are discussing here. The base class you derived your plugin from changed its constructor with the latest release. You MUST adjust your constructor accordingly!
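For reference, a minimal sketch of that pattern (class and service names are hypothetical; the base class stands in for the one whose constructor changed):

```php
use Drupal\Core\Entity\EntityTypeManagerInterface;
use Drupal\Core\Plugin\ContainerFactoryPluginInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;

class ExampleExportPlugin extends BasePluginWithChangedConstructor implements ContainerFactoryPluginInterface {

  protected EntityTypeManagerInterface $entityTypeManager;

  public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition) {
    // create() only pulls the services from the container ...
    return new static(
      $configuration,
      $plugin_id,
      $plugin_definition,
      $container->get('entity_type.manager')
    );
  }

  public function __construct(array $configuration, $plugin_id, $plugin_definition, EntityTypeManagerInterface $entity_type_manager) {
    // ... and the constructor, adjusted to the parent's new signature,
    // sets the member variables.
    parent::__construct($configuration, $plugin_id, $plugin_definition);
    $this->entityTypeManager = $entity_type_manager;
  }

}
```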
That is intended. I won't backport that to facets 2.
I use ddev every day. But I must admit that I don’t like that approach. Way too much stuff to store inside a module.
mkalkbrenner → made their first commit to this issue’s fork.
mkalkbrenner → made their first commit to this issue’s fork.
mkalkbrenner → made their first commit to this issue’s fork.
58,662 sites report using this module.
I assume that mimemail blocks a lot of upgrades to Drupal 11.
+1 for adding a co-maintainer.
mkalkbrenner → created an issue.
If you do an export in an empty directory, there should be no issue.
Bur "updating" an export folder did never deal with deletions and force-export is the only workaround.
If you want to "update" existing exports or export contineously, you should enable and leverage the search_api_default_content_deploy submodule!
That submodule leverages the Search API tracker infrastructure to track entity changes and update your exports. It can also handle deletions.
And in 2.2.x, the importer can deal with deletions as well, and it supports incremental imports, etc.
BTW, I will talk about that at DrupalCon Vienna.
Thanks for the patch. But as you mentioned, the log entry is generated in the Server class which then doesn't call the method on the backend anymore. The function is not meant to be called directly.
If we want to harden the code against direct calls, we need to do it for all methods, for example deleteItems().
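Such hardening would boil down to a guard at the top of every public backend method; a rough sketch (assuming the Search API backend API, not actual module code):

```php
use Drupal\search_api\IndexInterface;
use Drupal\search_api\SearchApiException;

// Inside the backend class:
public function deleteItems(IndexInterface $index, array $item_ids) {
  // Guard against direct calls that bypass the Server class.
  if (!$this->isAvailable()) {
    throw new SearchApiException('The Solr server is not available.');
  }
  // ... the actual deletion logic ...
}
```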
No further feedback
Tests are still failing.
4.2 is not supported anymore and there will be no further release. But thanks for the patch, which people can apply locally!
It isn't a breaking change in this module. It is a bug fix to get things working with supported Solr versions. Older Solr 9 versions are EOL.
But Solr made a breaking change and mentioned that in their release notes. I'm sure you read it ;-)
Anyway, I would appreciate a contribution to the README.
Your Solr 9 seems to run in cloud mode.
OK, the test fails ;-)
mkalkbrenner → created an issue.
If you require this functionality or a generic sequences service, you can use the Sequences module:
https://www.drupal.org/project/sequences →
I already thought about exclude lists.
But I suggest committing this one first, as it contains bug fixes and performance improvements without breaking any existing functionality. And maybe doing another beta release.
Then we could think about the next steps.
I did some more tests and found a minor issue: counting revisions per language needs to be done on the data table.
I noticed two critical issues during testing:
- The keep parameter had no effect.
- The performance is really bad on big databases.
The first one is a bug introduced in the dev branch.
The second is caused by the fact that all entities of a requested type or bundle are queued regardless of how many revisions exist in the database. With one million nodes, it takes a significant amount of time, CPU, and memory to load all of them just to count their revisions.
It is way better to only queue those entities that have more revisions than the number to keep.
I fixed both issues within the MR instead of opening new ones because the code changed too much.
BTW, I implemented getting the entity IDs via plain SQL. Loading the entities would quickly run out of memory.
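Roughly, the idea looks like this (a sketch using the node tables as an example; $keep is the number of revisions to keep):

```php
// Select only the IDs of entities that have more revisions than the
// number to keep, without loading a single entity.
$query = \Drupal::database()->select('node_revision', 'r');
$query->addField('r', 'nid');
$query->groupBy('r.nid');
$query->having('COUNT(r.vid) > :keep', [':keep' => $keep]);
$ids = $query->execute()->fetchCol();
```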
The issue was with profile entities. That had to be handled in ✨ "Support all entity types" (Active).
So I decided to combine both merge requests here.
I get an error during queue:run. I'll check that ...
mkalkbrenner → changed the visibility of the branch 1.0.x to hidden.
mkalkbrenner → created an issue.
mkalkbrenner → created an issue.
I added this to the product page:
Search API Solr supports any Solr version from 3.6 to 9.x. Solr 7.x to 9.x are directly supported by the module itself; earlier versions from 3.6 to 6.x require enabling the included search_api_solr_legacy sub-module.
Solr 10.x support will require some work and will be added sooner or later (sponsors are welcome).
In general, maintaining detailed information and testing all versions requires a lot of time. But since there are no major sponsors anymore, it isn't easy to do all these small tasks.
Since people are using default_content_deploy 2.2.0-beta now, they run into this issue and lose data.
I put a big warning on
https://www.drupal.org/project/default_content_deploy →
Since the issue is obvious, can't we commit the fix here?
I think that this is rather a solarium issue.
Redis used the request time.
That explains why the tests are still failing.
So I leave it to you to improve the test.
For us, the patch is good enough to fix the critical issue in our production environment.
Writing to Redis sometimes takes more than 2s in the tests :-(
Expire is time() + 2, but:
'created' => 1746536865.132
'expire' => '1746536864'
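In other words (hypothetical code, using the values above):

```php
// The expiry is computed from the request time ...
$expire = \Drupal::time()->getRequestTime() + 2;  // 1746536864
// ... but the slow write only happens more than 2s into the request:
$created = microtime(TRUE);                       // 1746536865.132
// $created > $expire: the entry is already expired when it is written.
```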
I agree that "core compatibility" should be configurable.
But if I understand the issue correctly, "core compatibility" currently only exists because of a bug in the calculation?
Never edit composer.lock!
Edit the top level composer.json.
I believe I have not installed solarium/solarium explicitly, by itself, or in a specific version.
You must have done that. A contrib module can't do it.
For whatever reason you must have installed solarium 6.3.6 explicitly.
Simply remove solarium from composer.json.
I don't think that this is an issue with search_api_solr, but with your installation. Run
`composer why-not drupal/search_api_solr:4.3.8`
I was able to reproduce the issue if a module is installed that reacts to media entity updates. In our case, image_replace isn't fault tolerant. So default_content_deploy can't import the entity or fix the file ID later.
A quick fix is to handle files at the beginning of the import.
BTW, that issue was hidden in previous versions. Meaning, it existed there as well but didn't hurt because of the more verbose export format.
The format changed. Most of the _links at the top level are not needed for our use case.
So from the JSON code above, it's missing this outside of _embedded:
No, this is correct. Entity Reference fields are only in _embedded.
It seems to be an import related issue.
Sorry, but these patches aren't readable because they reformat entire files.
I ended up using this:
`sudo su - solr -c "/opt/solr/bin/solr create -c IAB -n data_driven_schema_configs"`
This is totally wrong. It doesn't use the Drupal schema; that's why the file is missing.
Also of note, I was not able to use this command from the README.md file to create a core:
`$ sudo -u solr $SOLR/bin/solr create_core -c $CORE -d $CONF -n $CORE`
Did you replace $CORE and $CONF with the required values?
BTW why don't you start Solr in cloud mode and let Drupal create the collection for you?
In 2.2.x we still use JSON, but the format has become much more readable because information that isn't required gets removed now.
Nevertheless, YAML would be interesting as an alternative.
We import users, webform submissions, and newsletter subscriptions from different sites into one backend using default_content_deploy. So we need a UUID field to avoid ID collisions.
This works well for everything except the SubscriberHistory. With that patch applied, SubscriberHistory works as well.
But I agree that the update hook requires a batch.
mkalkbrenner → created an issue.
mkalkbrenner → created an issue.
There's already an incremental import based on metadata this module adds to the JSON file.
mkalkbrenner → created an issue.
mkalkbrenner → created an issue.
I don't use Paragraphs at all. Feel free to provide a patch for that issue.
mkalkbrenner → created an issue.
For sure you can commit that patch in the Drupal 7 branch. But the 7.x module is already marked as unsupported, just like Drupal 7 itself.
BTW it is a bit strange that people want to update to newer Solr versions but not to newer Drupal versions ;-)
I went the "events way" in all the contrib modules I'm involved in. And a s far as I know, Core still does events as well.
And other big contrib modules like commerce are using events as well.
I don't think that one thing is right and the other is not. And there's not the one drupal way.
My understanding is that hooks get an OOP replacement. But it is totally valid to use Events.
BTW, I would have preferred to replace hooks in core with events, following a PSR standard more closely, instead of introducing something new that is Drupal-specific.
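For comparison, the "events way" in a contrib module is just a standard Symfony subscriber (module name, event name, and class are hypothetical):

```php
namespace Drupal\mymodule\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;

/**
 * Reacts to a hypothetical event dispatched by another module.
 */
class MyModuleSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents(): array {
    return ['mymodule.some_event' => 'onSomeEvent'];
  }

  public function onSomeEvent(object $event): void {
    // React to the event here.
  }

}
```

The class just needs to be registered in the module's services.yml with the event_subscriber tag.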
mkalkbrenner → created an issue.
mkalkbrenner → changed the visibility of the branch 1.0.x to hidden.