- 🇷🇺Russia skylord
Hm. After upgrading to PHP 8.1 and MariaDB 10.5, #17 started to give a WSOD. I have no time to investigate, so I just reverted it. Keep it in mind.
- Status changed to Needs work
5 months ago 8:00am 13 June 2024 - 🇬🇧United Kingdom james.williams
A performance issue was reported, with a patch, at ✨ Increase performance when exporting many rows Closed: duplicate, saying:
When trying to export many rows (10000+), even using a batch, the first step may take several seconds.
In some environments this may result in a timeout.

On closer inspection, the patch works on the code being introduced in this issue. So I think that's a report suggesting more work is needed here? I've taken a guess that an updated patch combining the patch from ✨ Increase performance when exporting many rows Closed: duplicate is all that's needed, but someone else will need to test this. I haven't included an interdiff, because the patch from that other issue is the interdiff when compared to the patch from comment 27 on this ticket.
- Status changed to Needs review
5 months ago 8:02am 13 June 2024 - last update
5 months ago: 22 pass
- Status changed to Needs work
5 months ago 9:36am 13 June 2024 - 🇬🇧United Kingdom steven jones
Thanks for the patch everyone, and @james.williams thanks for adding in the work from ✨ Increase performance when exporting many rows Closed: duplicate.
It's great that it's working for a number of people on this issue; it shows that the approach has promise.
However, as it stands I think the patch in #31 will introduce a fundamental regression:
At the moment, the creation of the table and the insertion of the data are a single atomic operation performed entirely by the database.
With this patch, we first create the table, then SELECT all the data into the PHP process, and then INSERT it back into our temporary table.
This might cause memory issues in PHP or, as ✨ Increase performance when exporting many rows Closed: duplicate outlined, performance issues from shuffling all that data between the database and PHP and back to the database.

I suspect that the approach can be re-worked slightly, so that:
- The temporary table is created
- An INSERT...FROM SELECT query is built up, and that can be sent to the database (see the sketch below).
- We'll get a quicker copy, with less memory usage all around.
(I assume that GTID replication can handle INSERT...FROM SELECT queries; if not, then urgh. We should check that assumption!)
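For what it's worth, here's a minimal sketch of that second step using Drupal's database API. The table and field names are hypothetical (the real patch would build the SELECT from the view's compiled query); the point is the INSERT...FROM SELECT shape, where the copy happens entirely database-side:

```php
<?php

// Rough sketch only: 'node_field_data' and 'views_data_export_index'
// are placeholder names, not what the module actually uses.
$connection = \Drupal::database();

// Build the SELECT that gathers the rows to export. In the module
// this would come from the view's query, not a hand-written select.
$select = $connection->select('node_field_data', 'n')
  ->fields('n', ['nid', 'title', 'created']);

// Hand the whole SELECT to the INSERT, so rows are copied inside
// the database and never pass through the PHP process.
$connection->insert('views_data_export_index')
  ->from($select)
  ->execute();
```

Because the SELECT and INSERT execute as one statement, peak PHP memory should stay flat regardless of how many rows are exported, which is exactly the regression the #31 approach risks reintroducing.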