[meta] Just in time updates

Created on 18 June 2025

Problem/Motivation

Spin-off from πŸ› Block plugins need to be able to update their settings in multiple different storage implementations Active. Block plugins are the main use case for this, but it could potentially apply to other cases too, so I think we need a general issue.

In πŸ› Block plugins need to be able to update their settings in multiple different storage implementations Active we noted that block plugins are used in various config entities, in content (via layout builder overrides and experience builder), and in static config objects (navigation).

When a block plugin's settings are updated, all of these places need to be updated. The other issue is mostly concerned with how each of those places could say 'I have a block plugin instance with these settings, please update it to the new settings structure'. Once that is in place, it would cover the 'how' of updating all these different places.

However, what it doesn't cover is the 'when' to update. There is no 'update all the places' event/hook in core, and since there are potentially thousands or hundreds of thousands of places to update, it is very easy to run into scalability issues with that kind of update: either OOM, or a batched update that takes a very long time.

When we add these kinds of very large updates to minor releases, it can create a barrier to upgrading: sites with very old content or config end up hitting other errors when it is resaved during the update, and this completely prevents an update to the next minor version. There were countless examples of this during the 8.x/9.x release cycle. Mostly we've been lucky that we haven't had a lot of big updates in Drupal 10/11, rather than getting particularly better at writing/testing them.

We have an existing example of 'just in time' updates in Drupal core: password hashes. We obviously cannot reverse-engineer password hashes to rehash them, so instead (via the phpass module) we allow passwords to be rehashed when users next log into the site. Sites can keep that module enabled until enough users have logged in, or enough time has passed for those who haven't, that they can uninstall the module; any remaining users would then need to do a password reset. That module will also eventually move out of core or just be marked obsolete.
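The same "rehash on next login" pattern can be sketched with PHP's built-in password API (core's phpass module implements the Drupal-specific equivalent); the `$save_hash` callback and the cost settings here are invented for illustration:

```php
<?php

// Minimal sketch of a just-in-time password update on login. $save_hash
// is a hypothetical callback that persists the upgraded hash for the user.
function login_and_rehash(string $password, string $stored_hash, callable $save_hash): bool {
  if (!password_verify($password, $stored_hash)) {
    return FALSE;
  }
  // Just-in-time update: while the plaintext is available, re-hash if the
  // stored hash was created with outdated settings.
  if (password_needs_rehash($stored_hash, PASSWORD_BCRYPT, ['cost' => 12])) {
    $save_hash(password_hash($password, PASSWORD_BCRYPT, ['cost' => 12]));
  }
  return TRUE;
}
```

As with phpass, the old verification path has to stay available until every stored hash has been seen at least once, which is exactly the "no obvious cut-off" problem described below.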

I think we can potentially do the same thing for content entities, and to a lesser extent config entities.

Block (or other) plugins would need to handle each version of their config. This could be as simple as adding new settings keys as nullable, as is being done in πŸ“Œ Make menu trail behaviour in SystemMenuBlock optional Active.

Renaming keys, etc., would need a translation layer when the settings are both loaded and saved, so that the stale setting becomes valid on load and is saved in the new format. For config entities we have the existing pattern of presave hooks that take config shipped with modules or install profiles and update it on save.
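A framework-free sketch of what such a load-time translation layer could look like; the key names ('depth', 'max_depth', 'follow') are invented for illustration, and in a real block plugin this logic would sit in the configuration loading path (e.g. setConfiguration()):

```php
<?php

// Hypothetical BC layer: upcast stale block settings to the current
// structure on load, so that they are written back in the new format on
// the next save. All key names here are invented for illustration.
function example_upcast_settings(array $settings): array {
  // Renamed key: the old 'depth' setting became 'max_depth'.
  if (array_key_exists('depth', $settings) && !array_key_exists('max_depth', $settings)) {
    $settings['max_depth'] = $settings['depth'];
  }
  unset($settings['depth']);
  // New key added as nullable: absent in old settings, defaults to NULL.
  $settings += ['follow' => NULL];
  return $settings;
}
```

Because the function is idempotent, it is safe to run on every load, whether the stored settings are stale or already current.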

What this really means is that rather than implementing an update function to force all content/config to the new version, modules would be implementing a BC layer instead.

What this leaves, though, is that (much like the phpass module) there is no obvious cut-off at which a module could ever remove that BC layer.

I think we could add, either in core or contrib (or both), a batch/queue/cron-based UI and Drush command that loads all entities and revisions of a type, runs the equivalent of ConfigUpdater::needsChanges() on them, and saves the ones that need to be updated. This could be implemented more like search indexing or node access rebuild, i.e. run as many times as needed on demand rather than tied to code updates and deployments.
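A rough sketch of such an on-demand pass, in plain PHP with hypothetical callbacks ($load, $needs_changes, $save) standing in for entity storage and a ConfigUpdater-style check:

```php
<?php

// Rough sketch of an on-demand batch pass: walk all record IDs in chunks
// (to avoid OOM), and re-save only the records whose stored settings still
// need upcasting. $load, $needs_changes and $save are hypothetical
// callbacks; in Drupal these would wrap entity storage and the BC layer.
function run_jit_update_pass(array $ids, callable $load, callable $needs_changes, callable $save, int $chunk_size = 50): int {
  $updated = 0;
  foreach (array_chunk($ids, $chunk_size) as $chunk) {
    foreach ($chunk as $id) {
      $record = $load($id);
      if ($needs_changes($record)) {
        // Re-saving runs the BC layer, persisting the new settings
        // structure, so a second pass finds nothing left to do.
        $save($id, $record);
        $updated++;
      }
    }
  }
  return $updated;
}
```

Because only stale records are re-saved, repeated runs converge quickly, which is what makes a "run as many times as needed" model (like search indexing) viable.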

Sites would then be encouraged to run this once on every entity type prior to updating to a new major release of core or a contrib module. It would be useful to add this alongside/after #2770417: Revision garbage collection and/or compression is implemented, so that sites can choose to prune older revisions more easily before having to update potentially millions of them.

Steps to reproduce

Proposed resolution

Remaining tasks

Lots.

User interface changes

Introduced terminology

API changes

Data model changes

Release notes snippet

🌱 Plan
Status

Active

Version

11.0 πŸ”₯

Component

database update system

Created by

πŸ‡¬πŸ‡§United Kingdom catch


Comments & Activities

  • Issue created by @catch
  • πŸ‡ΊπŸ‡ΈUnited States luke.leber Pennsylvania

    Setting an official precedent for JIT updates is a great idea, especially given the pending majorly disruptive issues for Layout Builder.

    I've done more than a bit of thinking on this and I think that the existing sequential update framework for database updates in general makes for a good basis. If one thinks about it, JIT upgrades are more or less an unrolled version of that, right?

    It makes a lot of sense to provide a JIT equivalent that can handle untold millions of records without needing to worry about performance.

    Module uninstall requirements can likely handle the more...otherwise prohibitively expensive computations of when it's "safe" for a site to remove a JIT step, yes?

  • One other consideration, besides scale, for content updates such as block settings in layout builder overrides:
    if content moderation is enabled, and there is a published revision and an unpublished forward revision, then both of those likely need to be updated, taking care either to save the revisions in place or use some other method to make sure the revision order is maintained.
