Michigan, USA
Account created on 29 July 2014, almost 10 years ago

Recent comments

🇺🇸 United States mikelutz Michigan, USA

Not the decision maker, but just going to reiterate: I think this is a bad idea. The hacks I see for uninstalled modules seem like just the tip of the iceberg. What about removed modules? Removed libraries? Uninstalled themes? Profiles? It seems like a losing battle for little benefit.

I'm not fully seeing the issue here. What do you mean about removed modules, removed libraries, etc.? If you uninstall or remove code providing a constant/enum used in your active configuration, you will have problems, as you should. The 'hack' here only adds a module to the classloader as its config is read in preparation to install it. This seems appropriate to me, as the config you are installing should be able to use enums defined in the module you are installing. This isn't about loading enums and constants from uninstalled modules; it's about loading enums and constants from a module that we are in the process of installing. I really think the inline comments are scarier than what the code is doing, and I made a couple of suggestions on the comments. I also added a test that I think shows another issue: config referencing constants that are in the module's .module file. I think we will have to include the .module file before parsing the config as well.

Also, I tried to find a more elegant way to prevent double caching of the autoloading storage, but I really couldn't, so I just added a second instanceof check in the Comparer. I don't like it.

🇺🇸 United States mikelutz Michigan, USA

I'm going to move it to a full-fledged constraint in Core/Validation to solve the dependency issue. To make it work for both snippets and full pages, I think we can have the constraint wrap snippets in "" .. "". We could either define whether to do so in an argument to the constraint or try to autodetect, but I'm going with a constraint argument that defaults to snippet, because the schema would want to decide which it allows. The fast_404 html here would want a full page and want to reject a snippet, so I don't see autodetecting being practical.

(Ironically, I was resurrecting 📌 Create a trait and base class to implement \Drupal\Component\Plugin\ConfigurableInterface (Needs review) the other day, and I realized that we nearly made the same inter-component dependency in the patch we committed and reverted all those years ago. We were going to call Component\Utility\NestedArray from Component\Plugin and didn't notice.)

🇺🇸 United States mikelutz Michigan, USA

Okay, I reviewed this pretty thoroughly as it was being built and gave it a final once over. As far as I can tell, we've covered all edge cases, tests look good, documentation looks good. Let's ship it.

🇺🇸 United States mikelutz Michigan, USA

Postponing this on 📌 Allow process plugins to flag a row to be skipped (Needs review), as there are a number of things in here that don't make sense in the context of the changes we are trying to make to skipping rows, such as doing away with using exceptions to do so.

🇺🇸 United States mikelutz Michigan, USA

Alright, I think I've resurrected this from the ashes. New merge request on 11.x; fixed the issues from #123 and #132.

123.1 Mention the interface instead of the trait in ConfigurablePluginBase.

I mention both, since the base class uses the trait anyway.

123.2 In ConfigurableTrait, document that the $configuration property is also defined in \Drupal\Component\Plugin\PluginBase.

done

123.3 Decide on if ConfigurableTraitTest needs adjustments.

It did, though in the opposite direction from the previous suggestion of moving all tests to getMockForTrait: getMockForTrait is being deprecated in PHPUnit, so I converted all tests to use the same concrete test class that uses the trait.

123.4 Make sure that the changes in base classes for certain plugin types are mentioned in the release notes.

I added the list to the release notes snippet and the CR.

123.5 (optional) Check if setConfiguration() still needs to be overridden in WorkflowTypeBase.

It's a choice between overriding it and keeping the tests, or resetting it to the standard implementation and adjusting all the tests to handle the reordered returned array. I also removed use of the trait entirely from ConditionPluginBase for the same reasons: updating it to use mergeDeepArray broke a few tests and would require adjusting the order of items in Umami config to pass (the items would be exported in a different order, causing a test failure). It's better to explore converting both of those plugin bases to use mergeDeepArray in a follow-up, if at all, where we can review the test changes and potential BC issues more thoroughly, in isolation from the trait and base class work being provided here.

123.6 Rebase the merge request.

done.

132 > implement \Drupal\Component\Plugin\ConfigurableTrait directly

Is that the right wording? Does one 'implement a trait'? I'd have said 'use' or 'import'.

I agree, switched the wording to use `use`.

In addition, I added SelectionPluginBase to the plugin bases that we convert (it was added after the original MR was created here), and removed ConditionPluginBase for reasons described above.

I moved ConfigurableTrait from Component/Plugin to Core/Plugin. In the original patch, we didn't notice that having the trait in Component added an undocumented dependency on Component/Utility to Component/Plugin, and we shouldn't do that. The interface can live in Component; the trait can live in Core next to the base class (which was already there because it needed to extend Core's PluginBase and not Component's). Even though the interface lives in Component, it makes more sense to have the implementations live in Core.

I updated the Change Record (and unpublished it, as it was never unpublished after we reverted years ago).

I fixed a few coding standards/PHPStan/PHPUnit 10 issues that have changed over the years.

I think this should be committable now, or close to it hopefully.

🇺🇸 United States mikelutz Michigan, USA

Cross-posting from 📌 Move source_module and destination_module from Migrate to Migrate Drupal (Needs work):

I think we need to rethink this and 📌 Move source_module and destination_module from Migrate to Migrate Drupal (Needs work) in light of 📌 [Policy] Migrate Drupal and Migrate Drupal UI after Drupal 7 EOL (Active).

source_module is a migrate_drupal thing, as this issue points out, but despite the work here, the source_module key is still a thing defined in migrate as part of the source plugin annotations, and 3421014 is moving it over to the attributes defined in migrate. I'm working out the order of operations here, but I think the concept of source_module as a source plugin attribute/annotation key needs to be deprecated in Drupal 11 along with everything else migrate_drupal.

I have not followed the raw details on the annotation to attribute conversion for plugins, and I'm not sure what would break if we did this, but my gut says that here, we should NOT support source_module as a source plugin attribute, nor should we convert the d7 source plugins that we are deprecating in D11 over to attributes. I'm sure there's some core test somewhere that ensures core plugins which support attributes use them, and we may have to find a way around that for the sources we are deprecating, but I feel like it's worth it, and it saves us from having to deprecate source_module as it can just go away with annotations.

🇺🇸 United States mikelutz Michigan, USA

I think we need to rethink this and 📌 Convert MigrateSource plugin discovery to attributes (Needs work) in light of 📌 [Policy] Migrate Drupal and Migrate Drupal UI after Drupal 7 EOL (Active).

source_module is a migrate_drupal thing, as this issue points out, but despite the work here, the source_module key is still a thing defined in migrate as part of the source plugin annotations, and 3421014 is moving it over to the attributes defined in migrate. I'm working out the order of operations here, but I think the concept of source_module as a source plugin attribute/annotation key needs to be deprecated in Drupal 11 along with everything else migrate_drupal.

I have not followed the raw details on the annotation to attribute conversion for plugins, and I'm not sure what would break if we did this, but my gut says that in 3421014, we should NOT support source_module as a source plugin attribute, nor should we convert the d7 source plugins that we are deprecating in D11 over to attributes. I'm sure there's some core test somewhere that ensures core plugins which support attributes use them, and we may have to find a way around that for the sources we are deprecating, but I feel like it's worth it, and it saves us from having to deprecate source_module as it can just go away with annotations.

🇺🇸 United States mikelutz Michigan, USA

mikelutz → changed the visibility of the branch 2852463-create-a-trait to hidden.

🇺🇸 United States mikelutz Michigan, USA

mikelutz → changed the visibility of the branch 9.2.x to hidden.

🇺🇸 United States mikelutz Michigan, USA

mikelutz → changed the visibility of the branch 2852463-configurable-plugin-trait-10.1 to hidden.

🇺🇸 United States mikelutz Michigan, USA

mikelutz → changed the visibility of the branch 11.x to hidden.

🇺🇸 United States mikelutz Michigan, USA

Closing this, as we just use ConfigurableInterface now.

🇺🇸 United States mikelutz Michigan, USA

NW for @quietone's requests. I left a few comments with some thoughts for folks to think about, but none are hard requirements for this MR; they could be done here if they make sense, or they could all be debated and done in a follow-up (or not).

🇺🇸 United States mikelutz Michigan, USA

Oops, yes, thank you.

🇺🇸 United States mikelutz Michigan, USA

I don't want to block this, I just wanted to point it out and ask.

🇺🇸 United States mikelutz Michigan, USA

Right, that was my question. We now get an AutoloadingStorage here. Previously we got a FileStorage, and since FileStorage has its own static cache, we were not setting a storage cache. Now we have an AutoloadingStorage, which is based on FileStorage and uses it behind the scenes, but we are setting the storage cache as well, since it's not a FileStorage. I was wondering if we needed to. As I was stepping through HEAD and the patch trying to find the difference that caused the listAll() bug, this was one of the differences I noticed: the importer was operating on a CachedStorage object in the patch versus a FileStorage object on HEAD. It wasn't the problem, but I'm wondering whether we need the sourceCacheStorage with the AutoloadingStorage, or whether we should carve out the same exception for AutoloadingStorage that we have carved out for FileStorage.

🇺🇸 United States mikelutz Michigan, USA

@deprecated in drupal:11.1.0 and is removed from drupal:11.1.0. Inject

Should this not be removed in 12.0.0?

🇺🇸 United States mikelutz Michigan, USA

https://git.drupalcode.org/project/drupal/-/blob/11.x/core/lib/Drupal/Co.... Found this too while I was debugging the tests. Do we need to add the Autoloading storage here?

In the StorageComparer:

  if ($source_storage instanceof FileStorage) {
    // FileStorage has its own static cache so that multiple reads of the
    // same raw configuration object are not costly.
    $this->sourceCacheStorage = new NullBackend('storage_comparer');
    $this->sourceStorage = $source_storage;
  }

🇺🇸 United States mikelutz Michigan, USA

Lol, took forever to find the listAll() bug, and you fixed it before I could push :-)

🇺🇸 United States mikelutz Michigan, USA

mikelutz → made their first commit to this issue's fork.

🇺🇸 United States mikelutz Michigan, USA

This one seemed like fun.

I preserved BC, so if orderby is null, === 'key', or === 'value', existing behavior is preserved.

If orderby is a string other than 'key' or 'value', it is treated as a key inside the value. I did not require a 'subkey:' prefix, as it seemed unnecessary; if you need to reference a key named 'key' or 'value', it can be done using the advanced syntax that follows. To drill down into deeper items in the sequence arrays, separate keys with a '/' (see tests). This mimics the format we use to access nested array items in the migration system.

If orderby is an array, each item is assumed to be a sort directive, and the sequence is sorted by the first, then the second, and so on. The sequence is finally sorted by the element values (while preserving keys) so as to be deterministic.

Each sort directive can be either a string, which is taken to be an element key as described above, or a mapping containing at minimum a 'sort_by' key (the element key described above) and optionally 'sort_order' and 'sort_flags' keys.

sort_order and sort_flags accept as strings the constant names described at https://www.php.net/manual/en/function.array-multisort.php. sort_flags can be either a string containing a single flag or a sequence containing multiple flags to OR together.
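As a rough illustration of the directive semantics described above (a hypothetical sketch, not the actual patch code; the function name is invented and `sort_flags` handling is omitted for brevity):

```php
<?php

// Hypothetical sketch of multi-directive sequence sorting. Each directive is
// either a string element key or a mapping with 'sort_by' and an optional
// 'sort_order' ('SORT_ASC' or 'SORT_DESC', as constant names in strings).
function sort_sequence(array $sequence, array $orderby): array {
  // Normalize plain string directives into mappings.
  $directives = array_map(
    fn ($d) => is_string($d) ? ['sort_by' => $d] : $d,
    $orderby
  );
  uasort($sequence, function ($a, $b) use ($directives) {
    foreach ($directives as $directive) {
      // 'a/b' drills into nested arrays, mimicking the migrate syntax.
      $left = $a;
      $right = $b;
      foreach (explode('/', $directive['sort_by']) as $key) {
        $left = $left[$key];
        $right = $right[$key];
      }
      $result = $left <=> $right;
      if (($directive['sort_order'] ?? 'SORT_ASC') === 'SORT_DESC') {
        $result = -$result;
      }
      if ($result !== 0) {
        return $result;
      }
    }
    // Fall back to comparing whole values so the sort is deterministic.
    return $a <=> $b;
  });
  return $sequence;
}
```

Here `uasort()` preserves string keys, matching the key-preservation behavior described above.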

It would be nice if #2951046: Allow parsing and writing PHP constants and enums in YAML files → were in, so we could avoid parsing the constants as strings altogether. If we don't wait, we should probably add some error checking here to ensure only the expected strings are used.

A dozen or so tests are provided to test combinations of sort orders, ensure that string keys are preserved, and demonstrate capabilities.

A new `UnsupportedSequenceSortConfigException` is provided and thrown if a sequence element does not contain a key that the schema requests that we sort on.

🇺🇸 United States mikelutz Michigan, USA

mikelutz → made their first commit to this issue's fork.

🇺🇸 United States mikelutz Michigan, USA

This can be committed with review. The distinction mentioned in #19 is the distinction between the plugin setting the value to NULL and the executable setting the value to NULL when the SkipProcessException was thrown. Personally I don't think there is a distinction, because throwing a SkipProcessException set the value to NULL, so I feel like the plugin was responsible for the NULL anyway. But the point is moot, as ✨ Allow process plugins to stop further processing on a pipeline (RTBC) refactored away the exception, and now the process plugin literally does set the value to NULL. I consider the doc change trivial, but it's fine to commit, as it does improve the docs.

🇺🇸 United States mikelutz Michigan, USA

mikelutz → changed the visibility of the branch 3336070-skiponempty-is-ignored to active.

🇺🇸 United States mikelutz Michigan, USA

mikelutz → changed the visibility of the branch 3336070-skiponempty-is-ignored to hidden.

🇺🇸 United States mikelutz Michigan, USA

Also switching this to major based on "The major priority is used for issues that are not critical, but that do have significant impact or are important by community consensus." As recipes are a hot item right now, I would consider having the example recipe functional for 10.3.0 to be important, but feel free to reduce if I am wrong.

🇺🇸 United States mikelutz Michigan, USA

Crediting @Josue2591, as he wrote the code. I just opened a new MR against core, cherry-picked the commits, and (hopefully correctly) fixed some merge conflicts.

🇺🇸 United States mikelutz Michigan, USA

Alright, let's try that one.

🇺🇸 United States mikelutz Michigan, USA

mikelutz → changed the visibility of the branch distributions_recipes-3447994-3447994-example-recipe-isnt-with-test to hidden.

🇺🇸 United States mikelutz Michigan, USA

Posting a relevant slack thread.

Core contrib question: this issue 🐛 Example recipe isn't functional (Needs review) got moved into Needs Work status by a bot. Is there something we can do to get it back to RTBC?

🇺🇸 United States mikelutz Michigan, USA

See https://onlinephp.io/c/faedc

Try/catch is something like 10x slower than doing almost any kind of type check or conversion, because explode ends up having to do the same conversions anyway, PLUS run the unneeded explode call, check the error handler, and find the catch block. It's always going to be more efficient to just do our own type checks.

Admittedly, moving the type check to the constructor negates the real-world impact of this, as we are no longer checking the configuration on every row, but a microsecond is a microsecond.
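For reference, the comparison behind that claim can be reproduced with a rough micro-benchmark along these lines (a sketch, not the linked snippet; absolute numbers vary by machine and PHP version, and the try/catch path assumes PHP 8, where explode() throws a TypeError for an array argument):

```php
<?php

// Rough micro-benchmark sketch: a plain type check versus relying on
// try/catch around explode() to reject a bad delimiter value.
$value = ['not', 'a', 'string'];
$iterations = 100000;

$start = hrtime(TRUE);
for ($i = 0; $i < $iterations; $i++) {
  if (!is_string($value) || $value === '') {
    $ok = FALSE;
  }
}
$typeCheck = hrtime(TRUE) - $start;

$start = hrtime(TRUE);
for ($i = 0; $i < $iterations; $i++) {
  try {
    explode(',', $value);
  }
  catch (\TypeError $e) {
    $ok = FALSE;
  }
}
$tryCatch = hrtime(TRUE) - $start;

printf("type check: %d ns, try/catch: %d ns\n", $typeCheck, $tryCatch);
```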

🇺🇸 United States mikelutz Michigan, USA

I'd prefer not to use a try-catch here; it's certainly not needed, as we know precisely what explode will accept: a string that is not the empty string. Ultimately I would like to require the configuration item to be a string. It's easy enough to force a string in YAML by quoting it, so let's set ourselves up to be strict. Go with this:

if (!is_string($configuration['delimiter'])) {
  trigger_error('Using a non-string as a delimiter in the explode plugin is deprecated in 11.1.0 and will throw a MigrateException in 12.0.0. Ensure your value is quoted in your migration yaml if necessary', E_USER_DEPRECATED);
  if (is_array($configuration['delimiter'])) {
    $configuration['delimiter'] = '';
  }
  $configuration['delimiter'] = (string) $configuration['delimiter'];
}
if ($configuration['delimiter'] === '') {
  throw new \Drupal\migrate\MigrateException('Delimiter must be a non-empty string value.');
}
parent::__construct($configuration, $plugin_id, $plugin_definition);

🇺🇸 United States mikelutz Michigan, USA

Move it to the constructor, since this is an error in the configuration and not the value; keep the MigrateException for now. I'm tempted to build something new with annotations and validators, but there's no time for that now.

The correct check is `if (!is_string($x) || $x === '') { throw ... }`

🇺🇸 United States mikelutz Michigan, USA

mikelutz → created an issue.

🇺🇸 United States mikelutz Michigan, USA

Adding related issues

🇺🇸 United States mikelutz Michigan, USA

I'm not certain that we can do anything here, but we definitely can't do this. All of these sources being moved are scheduled to be deprecated and removed completely by d12 along with the migrate_drupal module, so we can't set a deprecation message telling people to use the migrate_drupal versions before d12 because those aren't going to exist anymore. I think at best we could introduce a copy of the trait to migrate_drupal and switch the sources over to it without doing any deprecation notices about moving things, and I'm not sure we should bother doing that at this point either.

🇺🇸 United States mikelutz Michigan, USA

Reverted the form changes and moved them to a follow-up, 📌 Improve UX of system performance form max-age form input (Active), to determine how best to improve the form UX.

Also reverted the changes to ConstraintManager that exposed DivisibleBy, as we are no longer going to use it here.

Thanks to @catch and @phenaproxima for the Slack discussion. Back to NR.

🇺🇸 United States mikelutz Michigan, USA

The latest push makes the change I recommended above, constraining the value to an integer between 1 and 3156000 and reconfiguring the form to use a number field with the same constraints. I didn't see any specific tests around this form to adjust, but there might be a functional JavaScript test somewhere that will need to be adjusted.

🇺🇸 United States mikelutz Michigan, USA

Fixed coding standards and rebased to see if the validation checker can resolve updated dependencies.

🇺🇸 United States mikelutz Michigan, USA

Yes, that brings drush back into play, do you think it makes an argument to using: `update_skip_future_update`?

I'm not sure what you mean here; I may be missing something. All that function can do is log that we should not run that future update when we eventually see it. If the thing that calls that future update at some point in the future doesn't know to check the logs and see it shouldn't be run, then it's going to be run. There isn't anything we could do in the present to prevent an update from being run in the future without the cooperation of the thing that is calling the future update function, unless, as I noted in the MR, we don't check that from the runner and instead wrap the contents of the future update with some sort of `if (should_be_skipped(__FUNCTION__))` check every time.

The only issue I see with this is you need to know if you're going to backport, I think this solution is meant to solve for the case where you didn't know you were going to backport when you wrote the original update hook.

That's not really an issue, as all the code for this goes only in the backport, at least as things stand until we figure out what to do about drush.

🇺🇸 United States mikelutz Michigan, USA

Regarding #72, that feels like a huge potential issue to require skipping a certain minor version depending where you update from.

In theory, no. In reality, it could be, but with the scheduled support cadence, there is really no way around it. Going from 10.4 or 10.5 to 11.0 would effectively be a downgrade. I'm leaning towards recommending my agency keep clients on 10.3 until we are ready to execute upgrades to 11. My mind could change depending on what happens in the next 6 months, but I'm thinking the sweet spot for upgrades to 11 will be to go from 10.3 to 11.0 in the first half of 2025 followed by a quick update to 11.1, and then 11.2 in the second half of 2025.

Regarding #73 wouldn't the update_skip_future_update function handle drush too?

Well, it looks like we bypassed the update_skip_future_update function in favor of just calling \Drupal::service('update.update_hook_registry')->skipFutureUpdate($module, $number); directly in the module, but either way, no. That method just logs that the future update should be skipped; it's the update runner that is currently responsible for checking that state and deciding not to run the future update when it eventually sees it. But drush has a separate batch runner to actually execute the hooks, which at the moment doesn't know it needs to check whether an update should be skipped before running it, so if you were running drush updb as everything sits now, you would end up re-running the future update hook. Admittedly, update hooks often are, or can be made, idempotent (if thing A exists, convert it to thing B; on rerun, thing A doesn't exist, so the hook does nothing anyway). That would further reduce the times we would need to use this (i.e. only if the hook needs backporting to 10.x and only if things would break if it were re-run).
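To illustrate what idempotent means here, a toy, self-contained sketch (invented names, not real core code):

```php
<?php

// Hypothetical sketch of an idempotent update: "thing A" is converted to
// "thing B" only if it still exists, so re-running the update is a no-op.
function example_update(array $state): array {
  if (isset($state['thing_a'])) {
    $state['thing_b'] = $state['thing_a'];
    unset($state['thing_a']);
  }
  return $state;
}
```

Running the function a second time against already-converted state changes nothing, which is exactly why an accidental re-run by a different runner would be harmless.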

🇺🇸 United States mikelutz Michigan, USA

I left some comments on the first approach wondering how to deal with the drush runner, which has its own version of updateDoOne that would need to be updated as well, and wondering if we could go back to attributes since we are changing updateDoOne. We could keep most of the code the same, but use #[UpdateSkip('11101', '11.1')] on the backported hook and look for it in UpdateDoOne and call the service to save the skip data right after or before we run the hook. It might be cleaner that way.

🇺🇸 United States mikelutz Michigan, USA

+1 on the RTBC

The solution does not specifically mention the process plugins and destination plugins in core modules outside of migrate.

I think we can make some of these decisions as we do the deprecations. I think the destinations can stay where they are for the most part. They are appropriate plugins for the remaining migrate API, and are useful for anyone using migrate to import data from any source. The process plugins should be looked at on a case-by-case basis, but if they are generally useful and specific to the module in question, then they are fine to stay. If they are very specific to converting something from a d7 format to a d8+ format, they can be deprecated along with the migrate_drupal modules. (I think most of the process plugins that aren't in migrate core will fall into this category, but there may be a few worth keeping.)

I tweaked the wording about destinations a bit to indicate that module-specific destinations can stay where they are, and added a note about deprecating field and process plugins outside of migrate_drupal that are migrate_drupal-specific. Leaving RTBC, as I think this just clarifies what we've already decided; obviously, if we deprecate migrate_drupal and its migrate field plugin manager, we would have to deprecate the related plugins throughout core too.

🇺🇸 United States mikelutz Michigan, USA

No, but we've never had a major version receiving bugfixes while the next major's first minor was down to security support or out of support entirely, so we've never needed to.

🇺🇸 United States mikelutz Michigan, USA

For me this is a big disadvantage - you can't tell people what to upgrade to.

Update 11000 is added to 11.1.x, it's backported to 10.4.x as 10401. Someone updates to 10.4.0, then updates from there to 11.0.0, they've now gone 'backwards' to a release where that update never existed. We need to prevent this, but can do so by checking if there are skipped logged updates that were run in 10.4.x, that don't exist at all in the release we're updating to, and also are higher than hook_update_last_removed() - in that case we'd show an updates requirement error and refuse to update, telling people to update to a later minor release instead.

I started coding an attribute and storage to be able to give an actual version number, and I realized we may be completely overthinking this in both approaches by trying to calculate whether we can do an upgrade based on missing update hooks. We don't need to. We already know that if you are on 10.3 you can upgrade to 11.0, if you are on 10.4 you have to go straight to 11.1, and if you are on 10.5 you have to go straight to 11.2.

We are adding code and work to hunt for a situation where, possibly, if you are on an early enough patch version of 10.4, or don't have any modules installed that have a database update hook skipped in 11.0, we could technically let you upgrade to 11.0. But why bother? 11.0 will be in its security phase by then, 11.1 will be the recommended version, and more importantly, regardless of database schema, by the first patch release 11.1 will have bug fixes in it that aren't in 11.0. Regardless of whether the upgrade to 11.0 is technically possible at that point, we would be allowing people to upgrade to a buggier version, which would be a bad experience we want them to avoid anyway.

I think we just need to document and require that 10.4 must upgrade to 11.1 and 10.5 must upgrade to 11.2, enforce that in system_requirements, and not even mess with trying to calculate allowable upgrade versions by finding missing update hooks. We could address that in a separate issue, and it wouldn't even be a blocker until 10.4 beta.
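The mapping being proposed is simple enough to state as a toy rule (a hypothetical sketch with an invented function name, not a real system_requirements implementation):

```php
<?php

// Hypothetical sketch of the proposed gate: a site on 10.3 may go to 11.0,
// 10.4 must go to at least 11.1, and 10.5 to at least 11.2. Returns the
// minimum 11.x minor version allowed for an upgrade from $from_version.
function minimum_target_minor(string $from_version): int {
  [$major, $minor] = array_map('intval', explode('.', $from_version));
  // Only 10.x sources are covered by this rule; $major is kept for clarity.
  return max(0, $minor - 3);
}
```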

🇺🇸 United States mikelutz Michigan, USA

mikelutz → changed the visibility of the branch 3437129- to hidden.

🇺🇸 United States mikelutz Michigan, USA

I thought I would take a stab at the other option, just to see what it looked like. Not tested yet, and likely horribly broken. I'm happy to keep working on it if there is interest, and happy to close it if we are going with #33, but I wanted to at least attempt to put some code where my mouth is, since I was advocating for it.

Besides the potential for testing to reveal an inherent flaw in the plan, there is still a bunch of cleanup to do: update_get_update_function_list should get deprecated and moved into UpdateHookRegistry, update_resolve_dependencies should also probably be deprecated and moved to refactor away from passing $starting_update around, things like that.

🇺🇸 United States mikelutz Michigan, USA

What about of moving to the post_update mechanism here, i.e. instead of numbers start using names & just keep the record of the executed ones.

The issue here is that the numbering of the update hooks is generally important for ordering if you are jumping several versions at once. I'm not completely happy with running the backported ones out of order, but it's unavoidable; we will just have to make sure that we take it into account when deciding what to backport and when. But in the grand scheme of things, we need to try to run them in the right order, and particularly make it possible for a developer to specify the order when they need to.

🇺🇸 United States mikelutz Michigan, USA

This is the kind of thing I would prefer never to have to think about if we can avoid it.

Another problem that goes away if we check for unrun updates on every run instead of just on a major version update.

🇺🇸 United States mikelutz Michigan, USA

Not worrying about the major version would definitely simplify things, allow this to work out of the box for contrib's own multiple supported versions, and solve potential issues with partial updates.

🇺🇸 United States mikelutz Michigan, USA

Re #47 and #50... I guess one thing we could consider is always doing this.. and ignoring the part about major versions. I'm not actually sure that that is necessary.

I agree; I had considered it. It would be a behavior change. I couldn't come up with a reason why someone would introduce a lower-numbered update hook intending it not to be run unless you weren't past that schema number, but somebody out there is heating their office with the spacebar (https://xkcd.com/1172/).

The one very edge case I can imagine is what would happen if we have to add an update to 10.4.x only - say one of the remove contrib modules in 11.x needs an update due to security what would then happen?

Just use the next 110xx hook number and skip that number in 11.x. I would probably recommend committing an empty hook with a comment to 11.x. Or, if we go with the above plan of just doing this all the time, you could just use the next 104xx number.

🇺🇸 United States mikelutz Michigan, USA

When a module is newly installed on 10.4, then updated to 11.x from there, schema_version would record the highest update number available at the time of install, but we wouldn't have a record of any updates being run. I guess we could do what we do for post updates and log all of the updates in the code base as if we'd run them, maybe that would work, but a bit of complexity involved there.

Yeah, I suppose we would have to.

This assumes that we always immediately backport an update to 10.4 when we commit it to 11.x, but if there's ever an issue we decide to backport later, especially if later is in 10.5 instead of 10.4, update numbers would get out of sync. One advantage of the current MR is that it also handles this case.

Curious how often this would happen. If it's more than once or twice in a cycle, we would probably need to come up with a better solution, but if it's only once or twice, I would say:

Given update hooks 1 and 2 in 11.x, where 1 is committed to 11.x first but 2 is backported first:
You would have to make a new commit to 11.x with an empty hook_update_3,
then backport the update code to hook_update_3, and in that update hook add hook_update_1 to the ran list (in some way that doesn't race/conflict with the list already checked out in the updater that would later be resaved, so it probably needs some thought on implementation) so that on upgrade both are skipped in 11.

#3 We need to support sites updating directly from 10.3 to 11.1, these won't have a log of any updates that have been run because we don't log anything yet - this might be fine because we also won't have any backported updates in 10.3 though?

I think we would need to replace the updater in 10.3 as well and start logging there, but that's no different from the current solution. If we don't have this in the 10.3.0 release and someone upgraded from the .0 release, then 11 would just run everything after the last schema number (which would be right) and save the major version as 11. The code would have to be in 10.3 before we backported something after we didn't backport something, which I believe is the same time we would need the current code in 10.3 (i.e. the first time we wanted to flag that some backported hook_update_103xx was equivalent to some future update 110xx).

🇺🇸 United States mikelutz Michigan, USA

So, I'm all for doing something more robust and automated with attributes alongside a deeper rework with OOP updates, but looking at this, I wanted to throw out one other idea as a temporary stopgap that might be easy enough to implement, avoiding both the boilerplate service calls in each update hook and the need to renumber and reroll for every backport.

So instead of calling this service manually in every hook, we just do it automatically: store a list of every update hook that we've run (populating it with all existing schema numbers on the first run-through), and at the end of an update, store the current Drupal major version. When we go to run updates, we check whether the current major version is the same as the one we previously ran under; if it is, we run updates like normal, appending the ones that were run to the list.

If we discover that the major version changed, we do things differently: we pull the list of hooks that we ran, subtract any below LAST_UPDATE_REMOVED, and compare it to a list of all the hooks currently discovered.

If we find one we ran that no longer exists, we are in the situation of #34 and can bow out. Otherwise, we can run everything that exists that isn't in our list of functions that already ran. At the end, we update the ran list and save the new major version number.

With a system like this, we add all new hooks in the 11000 range regardless of backport. We can backport some of them, keeping the same numbers, and they will run in order just fine on the old major. When you update to the new major, hooks that weren't backported will be run, and hooks that were backported will be skipped. (This again means hooks may run out of order, but that's going to be the case with the current solution anyway.)
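The selection logic described above can be sketched roughly as follows. This is Python for brevity, not the real PHP updater, and every name here (`select_updates`, `LAST_REMOVED`, the list shapes) is invented purely for illustration:

```python
# Hypothetical sketch of the proposed selection logic; all names are
# invented for illustration and are not real core APIs.
LAST_REMOVED = 11000  # stand-in for the module's last-removed update number

def select_updates(stored_major, current_major, ran_hooks, available_hooks):
    """Decide which update numbers to run, per the scheme described above."""
    if stored_major == current_major:
        # Same major: run everything discovered that we haven't run yet,
        # in numeric order, exactly like today.
        return sorted(set(available_hooks) - set(ran_hooks))
    # Major version changed: ignore hooks below the removed threshold,
    # then run whatever exists that isn't already in our ran list.
    relevant_ran = {n for n in ran_hooks if n >= LAST_REMOVED}
    if relevant_ran - set(available_hooks):
        # We ran a hook that no longer exists: the unsupported-path case.
        raise RuntimeError("update path not supported")
    return sorted(set(available_hooks) - relevant_ran)

# A backported hook (11001) was already run on the old major; after the
# major bump, only the non-backported hooks are selected.
print(select_updates(10, 11, [11001], [11001, 11002, 11003]))  # [11002, 11003]
```

The key property is that the backported hook keeps its 11xxx number on both branches, and the ran list, not the schema number, decides what is skipped.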

The benefit I see is that the hook has the same number on all branches, so there's less refactoring just to backport, and there's no boilerplate that the developer needs to add to every hook.

Negatives might be the storage of all the updates we've run already, but that list would be basically the same as what's been proposed here: an entry in KV for every backported hook.
We would also probably want to find some way to differentiate core modules from contrib and only apply this system to core modules, since contrib's comparable problem would be related to their version numbers, not core's.
I am not sure we could implement this without also adjusting the drush updater; I looked briefly, and it does hook into core for a lot of the work, but it depends on the implementation.
We'd also have to handle edge cases when there are no update hooks, or when an update errors out in the middle of an upgrade. We'd probably have to store the core version for each module and ensure we only update it if all hooks succeed in a run, but I think it would be doable.

Anyway, just wanted to throw the thought out there in case it triggers any ideas for anybody. It looks like Alex is making good progress on the other solution, which is probably simpler to implement.

🇺🇸 United States mikelutz Michigan, USA

Could we do something with attributes where we tag the backport with the future version (and/or vice versa) and keep all the code to manage it inside the updater? Or am I completely crazy?

🇺🇸 United States mikelutz Michigan, USA

Rebased now that 🐛 InstallerTranslationExistingFileTest fails on 11.x branch Active is in. Tests should pass now.

🇺🇸 United States mikelutz Michigan, USA

I don't think this is the right approach here. The null destination should not have IDs. Declaring one as you have here is a problem, because you are saying the import method is going to return one, and it doesn't. I see three solutions:

1) Have the null destination return 'null' or something as the ID for every row. I don't like this because, while there isn't supposed to be a requirement that destination IDs be unique, there are monsters in the code that rather expect it, particularly around rollbacks and map source lookups by destination ID. Those aren't things you would commonly use with the null destination, but it gives me pause.

2) Have the SQL map require destination plugins to have at least one ID, much like you've done here, but error out as more of a requirements check, i.e. "the SQL map is not compatible with destinations that do not provide at least one ID." Make a new NullIdMap that doesn't store anything and is compatible with the null destination, and fix tests around the null destination to use that. Add a BC layer, deprecation, etc.

3) Fix the monsters in the SQL map around having no destination IDs: store the source IDs, and refuse to look up sources by destination ID. If there is code in drush or rollbacks that has a problem with it beyond what we could fix in the map itself, investigate and find solutions.

I would need to investigate a bit more before I could be certain which of those is best. I'd need to look through any existing null destination tests, and other tests, to see exactly what breaks and what makes sense to happen in those scenarios, but I do think we need to support destinations that have no IDs to the extent we can. They should be able to work for imports, even updates, if the destination knows what it's doing.
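To make option 3 a bit more concrete, here is a rough sketch of the idea (in Python rather than the real migrate SQL map; `SourceOnlyIdMap` and its method names are invented for illustration): rows are keyed only by source IDs, and lookups by destination ID are refused outright.

```python
# Hypothetical sketch of option 3: an id map keyed only by source IDs.
# This is NOT the real migrate Sql id map; all names are invented.
class SourceOnlyIdMap:
    def __init__(self):
        self._rows = {}

    def save_id_map(self, source_ids, destination_ids=None):
        # Store the row keyed by source IDs; destination IDs may be absent,
        # as with a null-style destination.
        self._rows[tuple(source_ids)] = tuple(destination_ids or ())

    def lookup_destination_ids(self, source_ids):
        # Lookup by source IDs still works; it just returns nothing.
        return self._rows.get(tuple(source_ids), ())

    def lookup_source_ids(self, destination_ids):
        # With no destination IDs stored, there is nothing to look up by,
        # so this direction is refused rather than guessed at.
        raise NotImplementedError(
            "destination has no IDs; lookup by destination ID is unsupported")

id_map = SourceOnlyIdMap()
id_map.save_id_map(["node", 5])  # destination returned no IDs
print(id_map.lookup_destination_ids(["node", 5]))  # ()
```

The open question is exactly what the "monsters" (rollback and destination-lookup code paths) should do when they hit that refusal.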

🇺🇸 United States mikelutz Michigan, USA

Moving to critical per https://www.drupal.org/docs/develop/issues/fields-and-other-parts-of-an-... →

Cause tests to fail in HEAD on the automated testing platform for any supported environment (including random failures), since this blocks all other work.

🇺🇸 United States mikelutz Michigan, USA

I'm wondering if we should do something more robust here, like add a query parameter on the redirect indicating that we downloaded the translation file, so when we come back through and think we downloaded it but can't find it, we report the error instead of redirecting endlessly. Or a more structured way of ensuring that whatever is set as the Drupal version will pass the FileTranslation scanner. I'm really not sure; it's been a while since I got deep into the installer code, and I'm not familiar enough with the deep parts of the string translation system and .po files to know what else could be broken by changing the FileTranslation scanner. I suspect the change here is probably safe, but I still don't know if it's the right thing to do.

🇺🇸 United States mikelutz Michigan, USA

Here's the simplest fix I see for this test: making the patch value optional in the regex that FileTranslation uses to detect files.

Also, I was trying to figure out why the version change was committed 9 days ago but we are only seeing this test fail now. The drupal-11.0-dev.xx-lolspeak.po file that is downloaded has metadata indicating it's the file for 11.0.0-beta1, so I suspect that file was added to the server with the beta release late last week. The test only checks that a 200 status is returned after selecting a language; if the download fails, the installer displays a requirements page stating that the file is missing rather than trying to reload the page, so the test passes when the download fails. It wouldn't have been a problem until the download succeeded but the installer couldn't detect the downloaded file.
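The shape of the fix can be illustrated like this (Python regex for demonstration only; the actual pattern in core's FileTranslation differs, and the filenames are just examples): making the patch component optional lets a dev-style version like `11.0-dev` match alongside a full `11.0.0`.

```python
import re

# Illustrative only: not the real core pattern. The point is that the
# patch component "(?:\.\d+)?" is optional, so "drupal-11.0-dev..." is
# detected as well as "drupal-11.0.0...".
pattern = re.compile(r'drupal-(\d+\.\d+)(?:\.\d+)?(-dev)?\.([\w-]+)\.po')

for name in ('drupal-11.0.0.xx-lolspeak.po', 'drupal-11.0-dev.xx-lolspeak.po'):
    print(name, bool(pattern.fullmatch(name)))  # both True
```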

🇺🇸 United States mikelutz Michigan, USA

I'm not sure where it should be fixed (more specifically, I'm not sure what translation file name format we WANT to use for 11.0-dev), but once we know, we need to adjust either of those two regexes above to download and/or find the right file.

🇺🇸 United States mikelutz Michigan, USA

The failing test seems to be broken on HEAD, will rebase once it's fixed.

🇺🇸 United States mikelutz Michigan, USA

That all sounds good to me. I also set 'regex' as a schema type so we don't have to re-add the constraint everywhere we might need it in the future.

As a side note, it seems weird to me that 'exclude paths' can't be empty/null, but I checked the code, and fast404 just doesn't do anything unless you exclude *something*. Seems like that could be adjusted, but it's out of scope here.
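At its core, a "valid regex" constraint like this boils down to attempting to compile the value and treating failure as a violation. A minimal sketch, in Python for illustration (the real constraint would be PHP, validating with preg_match() against the pattern, and `is_valid_regex` is an invented name):

```python
import re

# Sketch of the core of a regex-validity constraint: try to compile the
# pattern and report failure as a violation. Illustrative only.
def is_valid_regex(pattern: str) -> bool:
    try:
        re.compile(pattern)
        return True
    except re.error:
        return False

print(is_valid_regex(r'^/node/\d+$'))  # True
print(is_valid_regex('['))             # False: unterminated character class
```

Whether the constraint wraps a bare snippet in delimiters first, or expects a full pattern, would be decided by the constraint's options, as discussed above.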

All feedback addressed. Back to NR.

🇺🇸 United States mikelutz Michigan, USA

NW for Phenaproxima's comments.
