Exception caused by MySQL 5.6 gtid replication

Created on 4 March 2014, over 10 years ago
Updated 13 June 2024, 18 days ago

My exports are failing with the following error:

Exception: SQLSTATE[HY000]: General error: 1786 CREATE TABLE ... SELECT is forbidden when @@GLOBAL.ENFORCE_GTID_CONSISTENCY = 1.
PDOException: SQLSTATE[42S02]: Base table or view not found: 1146 Table 'mydb.views_data_export_index_19' doesn't exist: SELECT COUNT(*) FROM {views_data_export_index_19};

The reason is that MySQL 5.6 GTID replication does not allow CREATE TABLE ... SELECT statements: http://dev.mysql.com/doc/refman/5.6/en/replication-gtids-restrictions.html

CREATE TABLE ... SELECT statements. CREATE TABLE ... SELECT is not safe for statement-based replication. When using row-based replication, this statement is actually logged as two separate events: one for the creation of the table, and another for the insertion of rows from the source table into the new table just created. When this statement is executed within a transaction, it is possible in some cases for these two events to receive the same transaction identifier, which means that the transaction containing the inserts is skipped by the slave. Therefore, CREATE TABLE ... SELECT is not supported when using GTID-based replication.

The query that causes this:

$insert_query = 'CREATE TABLE {' . $display_handler->index_tablename() . '}'
  . ' SELECT @row := @row + 1 AS ' . $display_handler->batched_execution_state->sandbox['weight_field_alias']
  . ', ' . implode(', ', $select_aliases)
  . ' FROM (' . (string) $query . ') AS cl, (SELECT @row := 0) AS r';
db_query($insert_query, $args);

After reading the docs and consulting with colleagues, I believe the solution here is to break this query into two: first create the table, then insert the selected rows. Has anyone else run into this?
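To make the proposed split concrete, here is a minimal sketch. SQLite stands in for MySQL so the example is self-contained (the real code would go through Drupal's db_query() against MySQL), and the table name views_data_export_index_19 plus the single name column are made up for illustration:

```python
import sqlite3

# Illustrative stand-in data source; the real query is the views export SELECT.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source (name TEXT)")
conn.executemany("INSERT INTO source (name) VALUES (?)", [("a",), ("b",), ("c",)])

# Step 1: create the index table explicitly, avoiding CREATE TABLE ... SELECT.
conn.execute("CREATE TABLE views_data_export_index_19 (weight INTEGER, name TEXT)")

# Step 2: select the rows, then insert them back, numbering each row as we go
# (replacing the @row := @row + 1 trick from the original single statement).
rows = conn.execute("SELECT name FROM source ORDER BY name").fetchall()
conn.executemany(
    "INSERT INTO views_data_export_index_19 (weight, name) VALUES (?, ?)",
    [(i + 1, name) for i, (name,) in enumerate(rows)],
)

print(conn.execute("SELECT COUNT(*) FROM views_data_export_index_19").fetchone()[0])  # prints: 3
```

Note the trade-off: this pulls every row through the application layer, which matters for large exports.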

๐Ÿ› Bug report
Status

Needs work

Version

3.0

Component

Code

Created by

🇨🇦 Canada Nick Robillard

Comments & Activities


  • 🇷🇺 Russia skylord

    Hm. After upgrading to PHP 8.1 and MariaDB 10.5, #17 started giving a WSOD. I have no time to investigate, so I just reverted it; keep it in mind.

  • Status changed to Needs work 18 days ago
  • 🇬🇧 United Kingdom james.williams

    A performance issue was reported, with a patch, at ✨ Increase performance when exporting many rows Closed: duplicate, saying:

    When trying to export many rows (10000+), even using a batch, the first step may take several seconds.
    In some environments this may result in a timeout.

    On closer inspection, that patch builds on the code being introduced in this issue, so I think it's a report suggesting more work is needed here. I've taken a guess that an updated patch combining the patch from ✨ Increase performance when exporting many rows Closed: duplicate is all that's needed, but someone else will need to test this. I haven't included an interdiff, because the patch from that other issue is effectively the interdiff when compared to the patch from comment 27 on this ticket.

  • Status changed to Needs review 18 days ago
  • Core: 7.x + Environment: PHP 5.6 & MySQL 5.5
    last update 18 days ago
    22 pass
  • 🇬🇧 United Kingdom james.williams
  • 🇬🇧 United Kingdom james.williams
  • Status changed to Needs work 18 days ago
  • 🇬🇧 United Kingdom Steven Jones

    Thanks for the patch everyone, and @james.williams thanks for adding in the work from ✨ Increase performance when exporting many rows Closed: duplicate.

    It's great that it's working for a number of people on this issue, it shows that the approach has promise.

    However, as it stands I think the patch in #31 will introduce a fundamental regression:

    At the moment the creation of the table and the insertion of the data is an atomic operation performed entirely by the database.

    With this patch, we first create the table, and then we SELECT all the data and get it into the PHP process, and then we INSERT it back into our temporary table.
    This might cause memory issues in PHP or, as ✨ Increase performance when exporting many rows Closed: duplicate outlined, performance issues from shuffling all that data between the database and PHP and back to the database.

    I suspect that the approach can be re-worked slightly, so that:

    1. The temporary table is created
    2. An INSERT...FROM SELECT query is built up, and that can be sent to the database.
    3. We'll get a quicker copy, with less memory usage all around.

    (I assume that GTID replication can handle INSERT ... FROM SELECT queries; if not, then urgh. We should check that assumption!)
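A rough sketch of that reworked approach, again with SQLite standing in for MySQL so it runs self-contained; the table and column names are made up, and ROW_NUMBER() (SQLite 3.25+) stands in for the user-variable numbering trick, since MySQL 5.6 itself lacks window functions:

```python
import sqlite3

# Illustrative stand-in data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source (name TEXT)")
conn.executemany("INSERT INTO source (name) VALUES (?)", [("a",), ("b",), ("c",)])

# Step 1: create the temporary index table up front.
conn.execute("CREATE TABLE views_data_export_index_19 (weight INTEGER, name TEXT)")

# Step 2: copy the rows entirely inside the database with INSERT ... SELECT,
# so no row data is shuffled through PHP.
conn.execute(
    "INSERT INTO views_data_export_index_19 (weight, name) "
    "SELECT ROW_NUMBER() OVER (ORDER BY name), name FROM source"
)

print(conn.execute("SELECT COUNT(*) FROM views_data_export_index_19").fetchone()[0])  # prints: 3
```

Whether the target MySQL version replicates this INSERT ... SELECT safely under GTID mode is exactly the assumption flagged in the comment and would still need verifying.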
