- Issue created by @erin.rasmussen
- 🇩🇪 Germany mkalkbrenner 🇩🇪
When a problem is encountered, they continue to reindex in batches and re-index single items.
That's the intended behaviour. If there's an erroneous item somewhere in the batch, indexing will never succeed, but you also won't find the reason. So the indexer reduces the batch size to 1 and steps forward until the erroneous item is reached.
Now you can get useful error messages and fix the erroneous item or your custom code.
- Our team has traced the issue down to this post, which suggests avoiding deleteByQuery(): https://www.zisistach.org/posts/solr-performance/#avoid-using-deletebyquery
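The batch-reduction behaviour described above can be sketched as follows. This is a minimal illustration, not the actual search_api_solr implementation (which is PHP); `index_batch` is a hypothetical callable that raises if any item in the batch cannot be indexed.

```python
def index_with_fallback(items, index_batch, batch_size=50):
    """Index `items` in batches; when a batch fails, retry its items
    one by one so the erroneous item and its error can be reported.

    Returns a list of (item, exception) pairs for the failing items.
    """
    failed = []
    for start in range(0, len(items), batch_size):
        batch = items[start:start + batch_size]
        try:
            index_batch(batch)
        except Exception:
            # The whole batch failed; step forward item by item until
            # the erroneous item is reached, capturing a useful error.
            for item in batch:
                try:
                    index_batch([item])
                except Exception as exc:
                    failed.append((item, exc))
    return failed
```

All healthy items still get indexed; only the truly erroneous ones surface with their individual error messages.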
I'm aware of this "issue". We use DeleteById() but we can't do so for nested documents.
Unfortunately Search API itself doesn't know anything about nested documents.
But the DeleteByQuery() part recently changed and you should upgrade. And I would be happy if your team would contribute a patch for the "search for IDs and DeleteById()" approach. In general I would avoid such deletes entirely, but search_api_solr has to implement search_api's backend interface.
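The "search for IDs and DeleteById()" approach boils down to paging through the query's matches to collect their IDs, then deleting those IDs directly. A language-neutral sketch in Python (the actual patch would be PHP against Solarium); `fetch_page` is a hypothetical callable, e.g. backed by a Solr `/select` request with `fl=id`:

```python
def collect_ids(fetch_page, page_size=1000):
    """Collect every document ID matched by a query, page by page.

    `fetch_page(start, rows)` returns (ids_on_this_page, total_found).
    The collected IDs can then be removed with deleteById() calls
    instead of a single deleteByQuery().
    """
    ids, start = [], 0
    while True:
        page, total = fetch_page(start, page_size)
        ids.extend(page)
        start += page_size
        if start >= total:
            break
    return ids
```

For nested documents the ID query would have to match the children as well, since deleting only the parent by ID leaves orphaned child documents behind.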
I would prefer something like https://www.drupal.org/project/search_api_solr/issues/3150654 (✨ Swap cores when reindexing, Active), but this isn't supported by most Solr hosting providers. Atomic updates work very well with Solarium, but again Search API doesn't know that concept.
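For reference, the core-swap idea relies on Solr's CoreAdmin SWAP action: reindex into a standby core, then atomically exchange it with the live one. A small sketch that only builds the request URL; the core names are assumptions, and CoreAdmin is available on standalone Solr only, which is why many hosting providers can't offer it:

```python
def core_swap_url(solr_base, live_core, standby_core):
    """Build the CoreAdmin SWAP request for the core-swap approach.

    SWAP atomically exchanges the two cores, so the freshly
    reindexed standby core becomes the live one without downtime.
    """
    return (f"{solr_base}/admin/cores"
            f"?action=SWAP&core={live_core}&other={standby_core}")
```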