- Issue created by @dpi
- 🇮🇹Italy mondrake 🇮🇹
The motivation is clear and sensible, +1.
However, pinging with a query every time the client connection is fetched means a lot of db roundtrips, and as such kills performance - if I am not mistaken, getClientConnection() is called every time a transaction is opened. BTW, that would also mean that non-transactional statement executions would not be covered.

I think it would be preferable to store the client connection state in the Connection object: 'active' at opening, then, through exception handling, changed to 'connection lost' when an issue occurs. Code executing calls to the db could then check that state before executing and ping/close/reconnect as needed. A challenge for sure would be how to distinguish a clear connection-lost exception from other exceptions.

Another aspect to take care of is understanding the implications of a reconnect occurring while a db transaction is open - is the db transaction rolled back, how will the db failure be reflected in the transaction manager, etc.
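The state-tracking idea could be sketched roughly like this (everything here - the property, the constants, and the method names - is hypothetical, not existing core API):

```php
<?php

/**
 * Hypothetical sketch: track the client connection state on the Connection.
 */
class Connection {

  protected const STATE_ACTIVE = 'active';
  protected const STATE_LOST = 'connection lost';

  /**
   * The client connection state; 'active' once the connection is opened.
   */
  protected string $clientConnectionState = self::STATE_ACTIVE;

  /**
   * Flags the connection as lost, e.g. from a driver exception handler.
   */
  public function markConnectionLost(): void {
    $this->clientConnectionState = self::STATE_LOST;
  }

  /**
   * Lets callers check the state before executing a statement.
   */
  public function isConnectionLost(): bool {
    return $this->clientConnectionState === self::STATE_LOST;
  }

}
```

Code executing db calls would check `isConnectionLost()` first and only ping/close/reconnect when needed, avoiding a round trip on every call.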
On the positive side, we could build on the experience from 📌 Move the on-demand-table creation into the database API Needs work and introduce a method that embeds a retry of a statement execution, i.e. in case of failure due to a disconnect, reconnect and retry.
Finally - MySql (and possibly PgSql) would lose the connection, but what should the behavior be for Sqlite? It can also fail, if the database file is not on the same filesystem as the web server.
- 🇦🇺Australia dpi Perth, Australia
However, pinging with a query every time the client connection is fetched means a lot of db roundtrips, and as such kills performance - if I am not mistaken, getClientConnection() is called every time a transaction is opened. BTW, that would also mean that non-transactional statement executions would not be covered.
As the code is written right now, this is not correct. ->ping() is called in ->getClientConnection(), but since ->connection is always a PDO instance, it returns early. It would only continue if the PDO object in ->connection had been set to NULL by an explicit call to ->close().
The behavior is not wrapping every call in a safety net. If the connection was lost and ping was not called by userland code, then indeed the app would crash. Encapsulating all database calls would be quite a scope increase, and FWIW it is not the behavior of Symfony+Doctrine+Messenger.
Another aspect to take care of is understanding the implications of a reconnect occurring while a db transaction is open - is the db transaction rolled back, how will the db failure reflect in the transaction manager, etc.
I'm not sure about this one.
- 🇳🇱Netherlands daffie
@dpi: I get why you need to have a database connection that reconnects itself automatically. Doing an extra database call every time will kill performance and most of the time it is unnecessary. Most of the time we are done with the database connection long before the database connection times out. As an alternative we could add a new method called Connection::getConnectionWithReconnect(). Let the new method do the same as Connection::getConnection() only add the reconnect functionality.
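The suggested opt-in method could look something like this (a rough sketch only; the `SELECT 1` liveness probe and the `open()`/`$connectionOptions` internals are assumptions, not the actual core implementation):

```php
<?php

/**
 * Hypothetical sketch of Connection::getConnectionWithReconnect().
 */
public function getConnectionWithReconnect(): \PDO {
  try {
    // Cheap liveness probe; a real implementation would likely want to
    // distinguish "server has gone away" from other failure modes.
    $this->connection->query('SELECT 1');
  }
  catch (\PDOException $e) {
    // Re-open the client connection with the stored connection options.
    $this->connection = static::open($this->connectionOptions);
  }
  return $this->connection;
}
```

Callers that expect long gaps between db operations (queue workers, long-running processes) would opt into this method, while everything else keeps the cheap existing path.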
- 🇮🇹Italy mondrake 🇮🇹
The more I look at this, the more I think we should build on top of 📌 Move the on-demand-table creation into the database API Needs work, and introduce a generic method that takes care of making db calls 'safe' from different perspectives.
For example, what we have in that issue at the moment is an ::executeEnsuringSchemaOnFailure method that, in case of failure of a db operation, tries to build the missing tables and retries the db operation:

    $execution = $this->connection->executeEnsuringSchemaOnFailure(
      execute: function (): int {
        return do_something();
      },
      schema: $schemaDefinition,
      retryAfterSchemaEnsured: TRUE,
    );
we could make that more generic:

    $execution = $this->connection->safelyExecute(
      execute: function (): int {
        return do_something();
      },
      withSchema: $schemaDefinition,
      withReconnection: TRUE,
      retryOnFailure: TRUE,
    );
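One possible shape for such a safelyExecute() (purely illustrative; `ensureSchema()`, `isConnectionLostException()`, and `reconnect()` are placeholder names, not existing API):

```php
<?php

/**
 * Hypothetical sketch of a generic 'safe execution' wrapper.
 */
public function safelyExecute(
  callable $execute,
  ?array $withSchema = NULL,
  bool $withReconnection = FALSE,
  bool $retryOnFailure = FALSE,
): mixed {
  try {
    return $execute();
  }
  catch (\Exception $e) {
    // Try to repair the failure cause before an optional retry.
    if ($withSchema !== NULL) {
      $this->ensureSchema($withSchema);
    }
    if ($withReconnection && $this->isConnectionLostException($e)) {
      $this->reconnect();
    }
    if ($retryOnFailure) {
      return $execute();
    }
    throw $e;
  }
}
```

This keeps the "classify the exception, repair, retry once" logic in one place, which also concentrates the hard problem mondrake raises - telling a connection-lost exception apart from other failures - into a single method.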