- Issue created by @djdevin
- Status changed to Needs review
8 months ago 9:30pm 17 June 2024 - Merge request !8436 Issue #3455151: bulk delete from field and revision tables → (Open) created by djdevin
- Status changed to Needs work
8 months ago 2:08pm 18 June 2024 - 🇺🇸United States smustgrave
MR should be against 11.x as the latest development branch
Feels like a change that needs some kind of test coverage also.
- 🇺🇸United States djdevin Philadelphia
Rebased against 11.x (applies to 10.3.x cleanly as well)
deleteFromDedicatedTables() doesn't have any test coverage - since this isn't a bug I wonder if having coverage through SqlContentEntityStorageSchemaTest is enough?
- First commit to issue fork.
- 🇪🇸Spain vidorado Logroño (La Rioja)
I've added a kernel test to ensure that a single delete query is executed against both the data table and the revision data table.
I'm not sure if directly testing the protected deleteFromDedicatedTables() method is the best approach, for the sake of simplicity, or if I should have mocked even more components to test it only through the public interface of SqlContentEntityStorage.
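Roughly, the test does something like this (a simplified sketch of the approach; the class name, field name, and exact assertion are illustrative, not the exact MR code):

```php
<?php

namespace Drupal\Tests\system\Kernel\Entity;

use Drupal\Core\Database\Database;
use Drupal\entity_test\Entity\EntityTestRev;
use Drupal\field\Entity\FieldConfig;
use Drupal\field\Entity\FieldStorageConfig;
use Drupal\KernelTests\Core\Entity\EntityKernelTestBase;

/**
 * Sketch: asserts a bulk delete hits each dedicated table only once.
 */
class BulkDedicatedTableDeleteTest extends EntityKernelTestBase {

  protected function setUp(): void {
    parent::setUp();
    $this->installEntitySchema('entity_test_rev');
    // A configurable field gets dedicated data and revision tables.
    FieldStorageConfig::create([
      'field_name' => 'field_test',
      'entity_type' => 'entity_test_rev',
      'type' => 'string',
    ])->save();
    FieldConfig::create([
      'field_name' => 'field_test',
      'entity_type' => 'entity_test_rev',
      'bundle' => 'entity_test_rev',
    ])->save();
  }

  public function testBulkDeleteIssuesOneQueryPerDedicatedTable(): void {
    $entities = [];
    for ($i = 0; $i < 5; $i++) {
      $entity = EntityTestRev::create(['field_test' => 'value ' . $i]);
      $entity->save();
      $entities[] = $entity;
    }

    // Log all queries issued during the storage-level delete.
    Database::startLog('bulk_delete');
    $this->entityTypeManager->getStorage('entity_test_rev')->delete($entities);
    $queries = Database::getLog('bulk_delete');

    // Collect DELETE statements against the dedicated field tables.
    $field_deletes = array_filter($queries, fn (array $entry) =>
      str_starts_with($entry['query'], 'DELETE')
      && str_contains($entry['query'], 'field_test')
    );
    // One query for the data table and one for the revision table,
    // regardless of how many entities were deleted.
    $this->assertCount(2, $field_deletes);
  }

}
```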
- 🇺🇸United States nicxvan
Any chance you can compare deleting that many entities with and without this change?
Also memory usage would be helpful.
Also 100M, is that 100 million?
- 🇪🇸Spain vidorado Logroño (La Rioja)
Thanks for the review, @smustgrave! I've applied your suggestions.
@nicxvan: I haven't performed the test suggested in the IS, but I believe it refers to 100 million. In my opinion, that's an extremely unlikely case. In most scenarios, there would be only a few bulk deletions, and the gain would be minimal, but I still think it's worth doing things the right way when the fix is so simple.
- 🇺🇸United States djdevin Philadelphia
It was part of a multifaceted approach so I'm not sure exactly how much this was part of the gained performance. I used a Drupal queue and ran multiple instances of queue-process to delete a batch of 1000 entities at a time in parallel. A custom progress bar running periodically looked at the number of items at the start and the number of items left in the queue.
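Roughly, the worker looked like this (a hypothetical sketch; the module name and plugin ID are made up, not the actual code used):

```php
<?php

namespace Drupal\mymodule\Plugin\QueueWorker;

use Drupal\Core\Queue\QueueWorkerBase;

/**
 * Deletes one batch of entity IDs per queue item.
 *
 * @QueueWorker(
 *   id = "mymodule_entity_purge",
 *   title = @Translation("Entity purge"),
 * )
 */
class EntityPurgeWorker extends QueueWorkerBase {

  /**
   * Processes one queue item holding ~1000 entity IDs.
   */
  public function processItem($data) {
    $storage = \Drupal::entityTypeManager()->getStorage($data['entity_type']);
    // A single storage-level delete per batch; several queue
    // processors can run in parallel on separate items.
    $storage->delete($storage->loadMultiple($data['ids']));
  }

}
```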
I didn't have significant issues with memory; I believe the slowdown was purely the efficiency loss from running thousands of delete queries instead of one.
I was exaggerating a little bit and don't have hard benchmarks, but at 10M (million) it went from ~200 hours to 6 hours or so. They were also entities with ~200 fields (paragraphs), so there were a ton of single-field delete queries, which is where I identified the issue. Deleting a single entity cost 402 queries: 2 for the entity and revision tables, plus 2 × 200 for the field data and field revision tables. Deleting 1,000 of them cost 402,000 (2,000 entity/revision queries plus 2 × 200 × 1,000 field queries).
With the fix, deleting 1,000 entities costs 2,400 queries (1,000 × 2 entity/revision queries plus 200 × 2 bulk field and field revision queries).
At 402,000 queries per batch of 1,000, the issue is exacerbated if you are using something like AWS Aurora or MySQL with replication in a durable ACID configuration, and if the database isn't on the local network. Both add latency when writing large amounts of data.
tl;dr: Bulk queries are recommended anyway.
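To illustrate, the per-entity vs. bulk pattern inside deleteFromDedicatedTables() looks roughly like this (a simplified sketch, not the actual MR diff; $table_name and $ids stand in for the real variables):

```php
// Before: one DELETE per entity against each dedicated field table.
foreach ($ids as $id) {
  $this->database->delete($table_name)
    ->condition('entity_id', $id)
    ->execute();
}

// After: a single DELETE per dedicated table for the whole batch.
$this->database->delete($table_name)
  ->condition('entity_id', $ids, 'IN')
  ->execute();
```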
- 🇺🇸United States nicxvan
nicxvan → changed the visibility of the branch 11.x to hidden.
- 🇺🇸United States nicxvan
I did a quick pass and left a couple of comments.
One has been addressed, one has not.
The test looks good, and the test-only job fails as expected.
Needs work for the last comment.
- 🇪🇸Spain vidorado Logroño (La Rioja)
@nicxvan, GitLab may have misled you (and me too, for a moment). The MR overview doesn't show that the code was changed, but it indeed was! :)