Looks good to me too. There's going to be some not-strictly-deprecated code to remove when this goes, but that's unavoidable; hopefully we'll find it when we need to remove it.
Committed/pushed to 11.x and cherry-picked to 11.2.x, thanks!
The immediate concern I have with not having that is: data storage of XB would then still change at a later time.
It shouldn't change.
If ✨ Add way to "intern" large field item values to reduce database size by 10x to 100x for sites with many entity revisions and/or languages Needs review follows the current proposed solution, the 'interning' would be a configurable (per field instance), internal detail of the sql storage. It would be transparent to XB and the field definition etc.
It may not be straightforward for existing sites to change that on a field that already exists (would probably need a custom update path, maybe to a new field name), but this is not the same as XB itself having to change its data model and provide an upgrade path.
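To illustrate the shape of the idea, a minimal sketch assuming entirely hypothetical table and column names (this isn't code from the actual proposal):

```php
// Write the large value once, keyed by hash, and store only the hash in
// the per-revision field table. Table and column names are hypothetical.
$hash = hash('sha256', $serialized_value);
\Drupal::database()->merge('field_interned_values')
  ->key('value_hash', $hash)
  ->fields(['value_data' => $serialized_value])
  ->execute();
// The revision row then stores $hash instead of the full value, so 300
// revisions of an unchanged value share a single copy.
```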
If #2770417: Revision garbage collection and/or compression → happens, there is no change to the data model at all, just occasional pruning of older revisions. The two issues also aren't mutually exclusive although if we do one of them, the other becomes lower priority.
Found the revision pruning issue, linking it here.
The only reason for me to prefer @larowlan tackling it here instead of landing #3469082: time (well, and potential scope creep). #3469082 will not land before 11.2, which means it won't be in time for XB's 1.0-goal-by-DrupalCon-Vienna-in-October.
Once a complex schema for data compression is in Experience Builder it will be incredibly hard to change. Once either revision pruning or field value compression is in core, it will be very easy to enable for sites (might be harder to enable field value compression for existing sites, but it could be added for any new xb field added to a site, and migration paths could be added later).
I don't see why revision compression would be a stable blocker, especially if it introduces complex technical debt that will be hard to refactor later.
Re-titling and moving back to active.
Could this be different config entities for different 'versions' of the component, instead of trying to store all version information in a single config entity?
That would potentially make it easier to determine whether any content is using the old version, allow old versions no longer in use to be deleted etc.
Yeah that works for me. I can see the reasoning in #46, but it's like a reference to documentation, not the actual documentation, so I think people will get the hang of it. And #49 lays out the pros and cons very clearly.
#1. I've updated the statcounter link to go directly to https://gs.statcounter.com/browser-market-share, which includes regional filters by continent and country along with a short mention. I think it's good to link to the resources we know about, but don't think we want to be comprehensive/prescriptive here in case something better shows up.
#2. I had a look at the most recent webaim survey to see if there was an example.
https://webaim.org/projects/screenreadersurvey10/#browsers
This shows Firefox at 16% and Edge at 19%, whereas if we look at the last month's global data on statcounter:
https://gs.statcounter.com/browser-market-share/desktop-mobile/worldwide...
Firefox is at 2.5% and Edge is at 5%
So I've added "(e.g. more than 10-15% stable usage when global usage is under 1%)".
The 'stable' in there is so that the 'downward trend' above can take precedence; this was part of the considerations for dropping IE11 support, iirc, because it was clearly on its way out in successive webaim surveys even if still disproportionately used.
Stable 10-15% usage isn't a hard number (and we could change it to a different number), it just means we would need to put some effort into understanding why a browser is disproportionately and consistently used over the years before dropping support for it.
#3. I started writing a comment saying I think versions is already covered in https://www.drupal.org/docs/getting-started/system-requirements/browser-... → , but then I realised that doesn't cover how we decide which browsers get two major vs. only latest major version covered. So yes, we probably do need something.
I've added:
In general, desktop browsers are supported for their two most recent major releases, and mobile browsers are supported only for their most recent release. See https://www.drupal.org/docs/getting-started/system-requirements/browser-... → for details of current support.
I think we'd need a good reason to deviate from that for a specific browser; Firefox ESR and Safari mobile are the only two deviations we have, and I'm not sure we can get around discussing special cases like that.
I think that hopefully addresses the feedback, going to be bold and self-RTBC the changes since they are pretty minimal in the scheme of things.
Flood control doesn't depend on the ban module - it's a core subsystem.
We still don't update to new major versions of some libraries within a major release - e.g. jQuery 4 was only committed to Drupal 11 - but in 2012 we didn't have a two-year major release cycle, and I think that cadence is frequent enough to let us introduce major revisions of js libraries quickly enough.
Supporting major versions for approximately 4 years each allows us to cycle out older versions quickly enough too - e.g. jQuery 3 when Drupal 10 is EOL.
Updating minor and patch versions every six months mostly allows us to keep up with security updates. CKEditor 5 can get tricky because they don't support their own minor versions for 12 months, but the proposal here wouldn't help with a CKEditor 5 release one month before a core minor version goes out of support.
In 2012 a major release cycle was more like four years and we also supported Drupal 7 longer than Drupal 9. So for me this issue is thankfully outdated.
#46 is a good point.
I'm a bit behind but I quite like HookName or HookDocumentation.
Moving back to needs review.
The general approach to solve linking of dynamic hooks seems good.
I found a workaround in lms_h5p. Reverting for now, but we might want to make this controlled by a hook or similar? Either a hook that returns an array of verbs, or allow modules to alter/validate statements just before they get written?
This wouldn't get backported to Drupal 10. If the MR was rebased and test failures fixed, and it was reviewed/RTBCd in the next ten days it might be able to land in 11.2.0 still, more likely early in 11.3. That would be a much better use of time than creating 10.x versions of the patch if someone has time to take a look. This has been quite close to being committable for a while now I think.
This is probably the riskier proposition, since deployments that depend on doing a composer install at the remote end, then reapplying the recipes, will not work
Trying to think of a use-case for this workflow, but struggling a bit.
I would expect that people would apply the recipes locally, export configuration, and commit the configuration to git, then import it on production (same as config changes without recipes). Or.. if they're building a recipe-based hosted demo or similar that they'd commit the recipes themselves to git. As long as those two work, then breaking a workflow of composer install locally + recipe apply remote without either exporting config or committing to git seems fine.
This also has to take place on entity load
Ouch, yeah, I was thinking in terms of config entity presaves, but that relies on them being saved before they're loaded, which isn't applicable here for the exact reason we're discussing this at all.
But doing all these checks on load likely comes with a performance penalty, especially if the amount of updates grows over time.
If it's on hook_entity_load() or an equivalent spot, with a lot of isset() checks etc., then it would be cached in the persistent entity cache for most entity types, so it might not be too bad purely in terms of performance impact.
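A rough sketch of what that could look like; the field name and the missing-key check are entirely hypothetical:

```php
/**
 * Implements hook_entity_load().
 *
 * Hypothetical just-in-time fix-up; field and setting names are made up.
 */
function mymodule_entity_load(array $entities, $entity_type_id) {
  if ($entity_type_id !== 'node') {
    return;
  }
  foreach ($entities as $entity) {
    if (isset($entity->field_example)) {
      foreach ($entity->field_example as $item) {
        // Cheap isset() guard so already-updated (and persistently cached)
        // entities skip straight through.
        if (!isset($item->settings['new_key'])) {
          $settings = $item->settings;
          $settings['new_key'] = 'default';
          $item->settings = $settings;
        }
      }
    }
  }
}
```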
While this is interesting to explore, I'm not sure the final implementation should be in XB itself - we have at least two other options:
1. Having this as an option for any longtext/json field in sql storage - the approach would be applicable to long body fields too (think issue summaries with 300 revisions where the text itself is only updated 5 times).
2. Other approaches for reducing revision table size like purging - e.g. purge all non-default revisions prior to the previous default revision (somewhat implemented in workspaces or workspaces extra iirc). Or purge default revisions with a decay (keep the most recent ten, then purge every other revision, then purge every 9/10 revisions based on thresholds etc.). This could be done by putting the entity into a queue when it's saved with a new revision; the queue would then thin out the older revisions (see the sketch after the next paragraph). There is probably already a core issue for this around but I can't find it immediately.
A big reason to do #2 would be because it's not always only the size of the table on disk that's the problem, but if there are millions or hundreds of thousands of rows, just things like indexes on revision IDs etc can get huge too, increases memory requirements, writes can slow down, allRevisions() queries get slower.
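A hedged sketch of the queue part of #2, with all names hypothetical:

```php
/**
 * Implements hook_entity_update().
 *
 * Hypothetical: queue revisionable entities for pruning on save; a cron
 * queue worker would then thin out older revisions per the decay rules.
 */
function mymodule_entity_update(\Drupal\Core\Entity\EntityInterface $entity) {
  if ($entity->getEntityType()->isRevisionable()) {
    \Drupal::queue('mymodule_revision_prune')->createItem([
      'entity_type_id' => $entity->getEntityTypeId(),
      'entity_id' => $entity->id(),
    ]);
  }
}
```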
#2702061-106: Unify & simplify render & theme system: component-based rendering (enables pattern library, style guides, interface previews, client-side re-rendering) → and downwards is relevant, but the most likely outcome would be that the number of different elements is reduced and eventually #theme gets dropped in favour of #type component; I don't really see any reason this couldn't all be worked on in parallel without explicit dependencies.
Replying to myself here:
A just-in-time update sounds interesting although wouldn't that then mean the update code needs to be maintained indefinitely?
I thought about this some more, and we already sort of have a pattern for just-in-time updates in the phpass module. For phpass we cannot bulk update user password hashes, otherwise we could reverse engineer people's passwords, so we just have to wait for users to log in again and update the password hashes then; one day the site will uninstall the module, and any users that didn't log in for years will need to reset their password.
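Roughly, that pattern looks like this (sketch only - needsRehash() is part of core's PasswordInterface, the surrounding variables are assumed context):

```php
// On successful login we briefly have the plain-text password, so the
// stored hash can be upgraded in place.
if ($password_service->check($plain_password, $stored_hash)
  && $password_service->needsRehash($stored_hash)) {
  $account->setPassword($plain_password);
  $account->save();
}
```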
For content we could bulk update content, but we don't really want to do this in a mandatory hook_post_update_NAME() because we don't want to enforce potentially hours of downtime on sites with lots of content in a specific minor release.
So what we could do is implement a just-in-time update on entity presave, and that will update content one by one when it's updated.
We've then got the question of what to do with content that isn't manually updated, there are several ways to tackle this.
Ideally we'd have a way to detect which entities and revisions need updating, e.g. at least only those that are using layout builder overrides or whatever the criteria is. And once the entity is loaded, we should be able to check whether anything actually needs saving before saving it. This would reduce overall churn.
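Sketched out, with the detection and update helpers left hypothetical:

```php
/**
 * Implements hook_entity_presave().
 *
 * Hypothetical just-in-time update: fix the data up whenever an entity is
 * saved anyway, so no bulk update is forced on sites.
 */
function mymodule_entity_presave(\Drupal\Core\Entity\EntityInterface $entity) {
  if (_mymodule_needs_update($entity)) {
    _mymodule_apply_update($entity);
  }
}
```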
For actually running the update there are several options:
1. Drush or console command which can be manually run, loads and saves everything that needs an update.
2. Admin page that does this in a batch.
3. Admin page that queues all the content on the site, or does something similar to search indexing with a watermark + cron.
We also have open issues discussing revision compression/pruning, so e.g. if we add a way to prune old non-default revisions, or very old default revisions, that would allow sites to cut down on the amount of content that needs to be updated.
This does imply keeping the just-in-time update path around for a couple of major releases, but that code will be easier to maintain than a hook_post_update_NAME(), which can go horribly wrong (all of Drupal 7 and the first few releases of Drupal 8 testify to this).
@Wim 📌 [PP-1] Consider not storing the ComponentTreeStructure data type as a JSON blob Postponed is (I think, still catching up on the latest issues a bit) a row-per-component with a single JSON column for the values in a single table, so it would be mutually exclusive with this issue.
For me, having multiple tables, or multiple rows for a single delta, feels like it would be incredibly complex both from the point of view of having to adapt all SQL storage backends to support it, and also for views integration.
However row-per-component with a JSON column would simplify dependency checking, updates, potentially things like revision compression etc. and might well be useful for 🌱 [META] Support alternative renderings of prop data added for the 'full' view mode such as for search indexing or newsletters Active too. Views integration feels like a very low priority because the data is arbitrary as you say.
I have on occasion added listing filters with CONTAINS on the body field or similar on sites that otherwise don't use the search module, when the dataset is small enough that it won't kill the database. There might be the odd case like that but don't think there will be many.
I could see wanting to list entities that are using component x - that would be easy to do with row-per-component because it doesn't rely on the values. e.g. you could list all articles that have an image gallery in them, things like that.
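For example, a sketch against an entirely hypothetical per-component table:

```php
// With row-per-component, 'all articles containing an image gallery' is a
// plain query on the component ID column - no JSON parsing needed. Table
// and column names are hypothetical.
$nids = \Drupal::database()->select('node__field_xb_components', 'c')
  ->fields('c', ['entity_id'])
  ->condition('c.bundle', 'article')
  ->condition('c.field_xb_components_component_id', 'image_gallery')
  ->distinct()
  ->execute()
  ->fetchCol();
```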
A JSON column would make views integration (at least for the values if not other things like component) dependent on ✨ Add "json" as core data type Active , but that feels like a reasonable limitation to me. No matter how complicated it might be, it is almost certainly going to be less complicated than views integration for the current JSON blob with everything in it, and it might even be less complicated than supporting a fully relational schema here.
So for me personally, I would postpone this issue on 📌 [PP-1] Consider not storing the ComponentTreeStructure data type as a JSON blob Postponed , and if that one works out, then this might not be very necessary to explore.
I think it's worth doing this first, but having ✨ Content templates, part 3b: store exposed slot subtrees on individual entities Active in mind while doing it.
There are probably two ways that issue could be done, assuming row-per-component is implemented here:
1. Keep a single field, but add a column for 'slot' (or whatever else is necessary), meaning deltas would span across multiple slots.
2. Add a separate field for each exposed slot.
If those are indeed the two options and there is not some other third option, then they're both compatible with this issue (I think), so it should be fine to do this first without having to pick one in advance.
Committed/pushed to 11.x and cherry-picked to 11.2.x, thanks!
This isn't possible with the way H5P core currently works - the only place you can modify the statement data is when interacting with the H5P event in js just before you send it to the LRS. I don't see a way to allow that to be modified generically in PHP without a major refactoring.
Note I haven't checked whether the scoring format is the same for answered as passed/failed yet - this just unblocks storing the xAPI statements at all.
It is quite common for core to introduce deprecations in minor branches that contrib can't immediately update to, because the new API isn't in the previous minor version, so I'm not sure why hook_module_implements_alter() being deprecated is a showstopper when many other deprecations aren't? It would still be possible to use it with all supported versions from December 2025, which is still six months before the earliest possible Drupal 12 release.
@jackfoust yes removing that hunk ought to be fine, the latest changes are trying to rely on the libraries system as much as possible.
It would be great if you could rebase https://git.drupalcode.org/project/h5p/-/merge_requests/8 with that change and any of the interim ones - hard to follow what people are doing with patches here.
This information is built in the field formatter, but there's no alter hook. I think it should be straightforward to add one, and it's necessary to cleanly integrate H5P with an LRS, where you need any control over the data that gets sent.
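Something along these lines in the formatter would probably do it (the hook name and surrounding variables are my assumptions, not existing h5p code):

```php
// Let other modules adjust the H5P integration data before it's attached
// to the page / sent on to an LRS. Hook name is hypothetical.
\Drupal::moduleHandler()->alter('h5p_content_integration', $integration, $entity);
```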
Couple of comments on the MR.
I'm missing why so much of this is necessary, after https://www.drupal.org/node/3013865 → already landed.
This could use test coverage for the original steps to reproduce. It seems like the main issue is that new revisions aren't created by default. Also have you tried using nodes + path aliases + workspaces together?
Tagging for 11.2.0 release highlights, I'll try to review this one more time. Mine was the last RTBC on this issue but not the first two, so I think I'm fine to commit this still.
Rebased.
Going ahead and merging this as a follow-up to the 11.x compatibility MR.
#17 that's pretty accurate. If we remove it outright, we'd need to do the following:
1. Copy the CSS to somewhere in stable9 that's always loaded. stable9 doesn't have a catch-all file, only overrides, so either need to pick an existing file that's not a perfect match or add a legacy.css or something.
2. Add the CSS to Claro's tabs CSS somewhere.
https://github.com/Otamay/potracio is a PHP port of potrace that's GPL licensed.
Some good discussion of the various placeholder generation approaches in https://github.com/axe312ger/sqip/issues/116
We originally backported Hook to 10.5.x so that phpstan wouldn't complain it was missing, and for no other reason, but I think slack discussion showed that's not really useful, and I reverted that commit already. The only new attribute backported to 10.5.x at this point is LegacyHook, which is used in the 10.5 runtime.
So... I think the remaining question is whether we want to backport any stubs to 11.1 patch releases. The reason to do this would be so that contrib can adopt e.g. FormAlter before it drops compatibility with 11.1, and not have to stay on either procedural hooks or change a Hook attribute later. It is very tempting to want to do this, but to be honest it feels like a lot of work to figure out, and with every new attribute we add later we'd run into the same problem again.
The last patch release of 11.1 is in a couple of weeks, so I'm starting to think we should not worry about this at all, and just continue on 11.2 and later. People writing brand new modules can require >= 11.2. People who want to convert existing modules to OOP hooks can require 10.5 | >= 11.2 (at least in 2-7 months). Having to add > 11.1.7 in .info.yml already limits the feasibility of using this and maintaining 11.1 compatibility.
I think that @berdir in #15 is right and we should avoid this, modules installing other modules should let the newly installed module handle updates to itself.
Haven't reviewed it yet, but from the title 🌱 [Policy, no patch] Normalize on usage of is_callable() instead of function_exists() Needs review might be enough.
I thought so, but I can't actually find a usage of composer provides in core, I did find 'replaces' but we're not using that for any core modules. Maybe I was thinking of 'replaces' and provides is an old alias.
Ohhh... except now I've written all that out, this is controlled on the Drupal.org side e.g. when we move modules out, we have to file an issue to free up the contrib namespace, like 📌 Ensure that Book does not get special core treatment Needs review .
When a module is brought in the same should happen - I can't remember whether we've moved in an identically named project since big_pipe though.
So this is possible afaik and implemented somehow, but maybe it could use some documentation somewhere?
Having written it up, the most important thing here is the 'inline' - if we can make that work, then SVG hopefully lets us make better placeholders but tiny webp would work too.
For the image style, we don't actually need a queue. When rendering the HTML, if the placeholder derivative file exists on disk we can load it and inline it; if it doesn't exist, we can render the URL, set a max-age of 0 (or 30s), and disable the placeholder. When the URL is visited, it'll create the file on disk, and the next time it's rendered it'll get inlined. This should only happen once in the lifecycle of an individual image, and often immediately after the content is created.
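A hedged sketch of that render-time branch - buildUri()/buildUrl() are real ImageStyle methods, but $style, $image_uri and the render array context are assumed:

```php
// Sketch: inline the tiny derivative if it already exists, otherwise point
// at the derivative URL (visiting it creates the file) and keep this
// render briefly cacheable so the inlined version gets picked up soon.
$derivative_uri = $style->buildUri($image_uri);
if (file_exists($derivative_uri)) {
  $placeholder = 'data:image/webp;base64,' . base64_encode(file_get_contents($derivative_uri));
}
else {
  $placeholder = $style->buildUrl($image_uri);
  $build['#cache']['max-age'] = 30;
}
```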
I can't remember the specific issue, but I think this is handled by composer provides now?
I don't think the routing concern is necessarily a big pipe conversion blocker. Big pipe just uses render placeholders that return an AjaxResponse object, there's no routing involved.
Moving this to navigation module.
I don't think we need an update here, they can just be available on new sites. Even if existing sites don't have formats with the same names, they may have all kinds of custom formats already created which are similar, and then suddenly these new ones would appear alongside them. Similarly we often don't even fix arguable bugs in shipped views like admin/content in update paths, because there is a higher chance of breaking a site than fixing it.
#2364011: [meta] External caches mix up response formats on URLs where content negotiation is in use → was I think the definitive issue for Drupal 8 where accept negotiation got canned. I haven't re-read that issue recently but remember the whole area being extremely painful at the time. So yes, on the basis we would want at least some HTMX responses to be cacheable in edge caches, let's use a query string from the client.
Committed/pushed to 10.5.x, thanks!
phpstan against 10.5.x made my computer very unhappy - maxed out CPUs + 64gb memory + swap etc. which I haven't seen happen since two computers ago. So had to give up and use --no-verify.
Unsure how this interacts with caching as well - if the same route is usable via both standard HTTP and HTMX then we want the cached responses to differ.
If it's a GET request (which it should be for HTMX when possible), then the internal page cache won't differentiate so it could get corrupted, and same problem with varnish/CDNs, so we would have to enforce that the HTMX route is only used for HTMX and throw a bad request exception or something when the header is missing.
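Something like this on the controller would enforce that (a sketch: the header check mirrors the HX-Request header htmx actually sends, everything else is hypothetical):

```php
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpKernel\Exception\BadRequestHttpException;

public function fragment(Request $request) {
  // htmx sends 'HX-Request: true' on its requests; refuse anything else so
  // page cache / CDNs can't store a fragment as if it were a full page.
  if ($request->headers->get('HX-Request') !== 'true') {
    throw new BadRequestHttpException('HTMX requests only.');
  }
  // ...build and return the fragment response.
}
```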
By the way, is there any way to widen the summary output?
I think something like this just landed in 11.2 - iirc it truncates from the right instead of the left, or something like that now - so next minor might pick that up already.
Or is the tactic to always have more concurrency than #slow tests? I will try it anyway, to see what that shows us.
Exactly the same number is fine, so if the 8 slowest tests are all tagged that's great - as soon as one finishes, a process is freed up for a faster test to start.
Once you get to 9 @group #slow tests with 8 concurrency, the risk is that the 9th slow test is the slowest one - say a test that takes 5 minutes, and it's been displaced by a test that takes 45s that runs first; now the test run is going to be at least 5m 45s. If the 9th test takes 90 seconds, then it'd be 90s + 45s, which is usually fine, but this is where it gets into diminishing returns very quickly. Unfortunately speaking from experience here...
So it's best to tag the minimum number of tests as @group #slow to avoid overhangs, and let ordering (improved by the discovery MR) take care of the rest. It may be the case with scheduler that the ideal minimum number of tests to tag is exactly 8 :) at least until the discovery MR lands and changes things again.
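For reference, the tagging itself is just a docblock annotation - the class and base class names here are assumptions about scheduler's test suite:

```php
/**
 * Tests the scheduler 'required' settings.
 *
 * @group scheduler
 * @group #slow
 */
class SchedulerRequiredTest extends SchedulerBrowserTestBase {
  // ...
}
```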
I'd personally be happier here if the issue summary on 🌱 Drupal 10 Core Roadmap for Automatic Updates Active was a bit clearer. I tried to re-organise it a bit this past week, but it's still hard to see what the stable blocking issues are or aren't, and some issues are open but still in the beta blockers list - I duplicated one to the stable list just now.
Added ✨ When it is installed, Package Manager should try to detect the paths of Composer and rsync Active to the 'to be categorized' list.
There is something about the tone of many comments that seems targeted and personal.
I opened this issue because the original promotion of these modules in slack led to two long discussions that reached over 200 comments. I was not heavily involved in the discussions because most of the comments happened while I was offline, but I did read most of them in the end. This and other things prompted me to open the issue.
The reports to the security team mentioned above were primarily responded to in two ways. I am quoting minimally from the security issues in order to avoid breaking the security team disclosure policy, but I've also asked for permission to post longer excerpts.
1. By replying with vibe coded patches on s.d.o that wouldn't fix the vulnerability and would potentially introduce others. catch: "Was it also 'vibe coded'?" @bigbabert "Yes it was".
2. During the same week, two reports were sent by @bigbabert to the security team for modules authored by the same people who had filed the security issues for the modules discussed here (neither of these people were me). At least one of those reports included the phrase 'Above an analysis done on the module using AI'. Despite repeated questions, no valid steps to reproduce a vulnerability were provided. The fact that both reports were against modules written by people who had filed security reports about his own modules does not seem coincidental.
The common theme here is not specifically that code, security reports etc. have been written with LLMs, although that is a factor because it makes it much easier to do this at scale. It's that this is happening without any (apparent) human verification before sending the machine-generated content into either stable, security-coverage-opted-in projects on Drupal.org or reports to the security team. Drupal contributors acting in good faith often report low-severity or hard to reproduce issues to the security team 'just in case', and they often end up as public bug reports in the issue queue, or sometimes people are just mistaken, but this is usually done with that caveat stated fairly clearly from the outset. We do also get lots of other low quality security reports, like tens of pages of PDF generated by automated scanners with no verification, but I don't think I've ever seen two of these from the same person about two different modules in the same week.
Opting a module into security coverage includes the disclaimer "I will cooperate with the Drupal Security Team as needed.". While @bigbabert has co-operated in the sense of replying on issues, the manner in which they're doing so constitutes malicious compliance for me. This has taken hours of security team time that could have been spent on something else.
In the past month, I have only seen one other case of obviously 'vibe-coded' Drupal code - this was in two MRs adding new features to the same module - they contained thousands of lines of code that would never have worked, and were also clearly lacking any human review. However, MRs are not stable projects on d.o opted into security coverage, and neither are they reports to the security team that have to be triaged by a small group of people, so I didn't feel any need to open a site moderators issue in that case. If that makes this issue 'personal and targeted' then I guess we can be thankful that so far not many people are engaging in this behaviour yet.
Committed/pushed to 11.x, thanks!
Now, imagine that those checks happen on every request, wouldn't this mean that we'll be "attaching" the knowledge of those routes to every page in the context of the route preloader?
The route preloader currently loads every non-admin route unconditionally on every request, so it wouldn't make things worse or better, unless we change the behaviour in the issue above, which we might do.
You have the whole configuration set for a given block, which might very well include internal information, maybe you have a block with an API key in the config or something like that.
The lazy builder could take the entity ID of the config or content entity, load it, and then find the config for the block in there. It would need enough information to locate it though, not sure what all of that would be (field name at least on entities etc.) but should be finite.
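i.e. something like this, with the service name, callback and argument list all hypothetical:

```php
// The placeholder only carries scalar identifiers; the lazy builder
// callback re-loads the entity and digs the block config back out at
// render time, so no config (and no API keys) end up in the placeholder.
$build['block_' . $delta] = [
  '#lazy_builder' => [
    'mymodule.block_builder:build',
    [$entity->getEntityTypeId(), $entity->id(), $field_name, $delta],
  ],
  '#create_placeholder' => TRUE,
];
```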
Thank you!
https://git.drupalcode.org/issue/scheduler-3445052/-/jobs/5206786 looks pretty good.
SchedulerTestLegacyHooks still overhangs, and SchedulerMultilingualTest nearly overhangs but does not (e.g. SchedulerLightweightCronTest finishes in-between).
Since SchedulerLightweightCronTest starts sixth from last and takes 30s, I think it's likely that the last few finishing tests all finish within a few seconds of each other.
As soon as we use @group #slow for more than 8 tests, one of them won't start until another has finished, and then you're back to square one - we even considered @group #really_slow in core for that but didn't go that far yet. So if we tagged SchedulerTestLegacyHooks as #slow it might still save 10-30s, but it starts to get towards diminishing returns at this point.
So for me this is showing that we can get good pipeline runtimes with 8 concurrency, but it will probably require manual tweaking for projects with several long-running tests and more than 8 tests in total.
Next thing is to see if 📌 Deprecate TestDiscovery test file scanning, use PHPUnit API instead Active gets us the same or better results with less manual tweaking.
Explicitly marking this with PP-1 on that issue, I think it could make things worse for some projects if we make the switch before that lands. But once it lands, hopefully we can make everything a bit more efficient.
In https://git.drupalcode.org/issue/scheduler-3445052/-/jobs/5198987 it looks like SchedulerRequiredTest might not be marked as #slow yet (it starts in the middle of the job still), is that definitely the right link?
This has merge conflicts.
Just ran into 🐛 Layout builder doesn't support bundle computed field Closed: duplicate
I think we should take this out of key/value.
We could maybe have the field module implement similar logic in its bundle field info hook instead.
Reading comments in related issues, this feels more like a major task than a normal feature request.
I think we could use this as a hint on the settings page which is being added in the other issue.
And we could also locate and cache this as a fallback if nothing is found or configured.
I agree upstream is hostile, every time we've raised anything like this it's been immediately won't fixed.
Two concerns with not upgrading though:
This will still be a dependency in Drupal 12, which we will be supporting until some time in 2030, so if there is a single security release in the next five years it will be annoying. But there is not really any security surface area here, and if necessary we could fork.
Similarly, do we think this will support every PHP version released between now and at least 2028 or 2029? Given we hope nothing will actually be using annotation parsing at that point, PHP deprecations would probably be OK but a hard break would not.
I've added the two issues brought up by @poker10 as stable blockers.
I'm personally not sure about 🐛 Second level menu items can't be reached if they have children Active - can't you navigate via the admin page like admin/config itself? I haven't tried to reproduce it directly yet, but we can always remove it from the blocker list again if it's determined not to be one.
Looks like SchedulerRequiredTest could also use @group #slow.
(sorry for the back and forth, this is what it was like trying to get core test runs down too - constant whack-a-mole).
@agarzola are you intentionally trying to aggregate and minify that external CSS?
Thanks!
Next up for anyone following along is 📌 Pass RenderContext around in the Renderer Active .
This should be fixed by the follow-up commit on 📌 Drupal 11 compatibility fixes Active .
After discussion in slack, I've reverted the commits here from 10.5.x and 10.4.x.
More discussion in 📌 Reduce hook attribute order BC stubs to the minimum necessary Active .
Thanks for testing, this is encouraging.
I think the next tweak would be to mark SchedulerNonEnabledTypeTest with @group #slow, because that one finished last in the run. With eight concurrency, that still leaves three spare processes to run the other tests from the start of the job. Theoretically we might get under 7m58 then, but also at 7m58 this looks like it's up to 4 minutes faster than HEAD?
Once 📌 Deprecate TestDiscovery test file scanning, use PHPUnit API instead Active lands in 11.2, it would be great to see what a result looks like with @group #slow against that issue.
And then once we've got a baseline against that issue, it would be interesting to see if reverting the @group #slow from scheduler still results in faster test runs (because the default ordering should in some cases lead to the same results).
Moving this to fixed - please open follow-ups for any remaining issues as a result of the library upgrade.
I just tried a fresh install of h5p via composer, and it brought in h5p 1.27 and h5p/h5p-editor 1.25, which of course resulted in a fatal error due to the incompatibility in the H5PDrupal class.
It would have been useful to get feedback from H5P developers on this issue, however it's been open for five months already without comment. The MR looks good, tested the upgrade path etc.
I'm going to go ahead and commit this to 2.0.x. Not planning to do a release on that branch for a little while, so there will be time to follow-up with any remaining fixes after this gets wider testing on dev.
I think the installer batch steps and form structure are something we can change in a minor release, if a distribution is shipping multiple install profiles in a single distribution, they would be able to add the page back again (if they're not already replacing it entirely).
Thanks for fixing the bad array_key_exists().
System messages in the AJAX system are rendered via MessageCommand, and the same infrastructure is used by BigPipe since 📌 Use MessagesCommand in BigPipe to remove special casing of the messages placeholder Fixed . I'm not sure how that would compare to #8 in terms of final implementation, although presumably we might need to support an HTMX version of MessageCommand for bc anyway?
Makes sense.
I think the way to do that then would be to create a new demo_umami contributed theme, and a new umami general project recipe based on the work here. Once the policy issue is agreed, that could start alongside the gradual deprecation of Umami in core (which will probably have to work differently to how we've deprecated modules/themes anyway).
Committed/pushed to 11.x, thanks!
Arggh that was from needs review :( Might ask in slack for a posthumous review/RTBC rather than reverting and recommitting.
This would be great, having to phpcs-ignore the phpstan ignore declarations is slightly eye watering.