πŸ‡ΊπŸ‡ΈUnited States @effulgentsia

Account created on 2 September 2006, almost 18 years ago
  • Software Engineer at Acquia

Recent comments

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

https://www.drupal.org/about/core/policies/core-release-cycles/schedule#... β†’ lists RC1 for this week. Should this issue be retitled accordingly, or is a beta2 still on the table?

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

The SDC plugin manager is not like other plugin managers in core, in that it doesn't support derivatives OR fire an alter hook.
This means there is no way to dynamically declare SDC plugins/components. This is by design.

Oh right, I forgot about that. I think what we should do then is either:

  1. Create a contrib module separate from XB that replaces/decorates the SDC plugin manager in order to allow PHP-defined SDCs, or
  2. Create a contrib module separate from XB that defines an abstract "component" concept that can be rendered by SDCs or other things, similar to this MR.

Then hopefully whichever of the above we choose can make its way into core eventually.

The pro of #1 is that modules that have already begun to standardize on SDCs, like https://www.drupal.org/project/sdc_display β†’ , can work unchanged with not-on-disk components.

I guess the con of #1 is then the "SD" part of the "SDC" name becomes less accurate.

I'm concerned about XB defining its own component concept, and a bunch of other modules each defining their own component concept. What I prefer about #1 is that the SDC concept is already in core and has the shape we want (except perhaps if @pdureau's comment #2 is foreshadowing that XB will need to work with components that need things that can't be cleanly represented with props and slots, but I'm not clear yet on how those specific examples will relate to XB).

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

If I'm reading this right, this MR introduces the concept of a "component" that can contain props and slots but is decoupled from SDCs, and can therefore forward those props and slots to a different source plugin, such as blocks and layouts. What does this enable or make better than what you'd be able to achieve by implementing a "block" SDC or "layout" SDC that performs that forwarding? If we already have the concept of SDCs, why introduce yet another concept of a "component"?

#2 gives some examples of where props and slots might be an insufficient data model, but I'm not clear how this MR resolves that.

I guess from my perspective, what's important about SDCs isn't the SD part (though that part is nice for ergonomics) but the props and slots part.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

I agree with #62 that it would be nice for even the static prop values within a node's XB components to be queryable with Views.

If we go with JSON storage for those prop values, this would require writing Views plugins for getting at the data that's inside of that JSON.

If we go with field_union storage for those prop values, we'd get the Views integration for free, but the downsides would be:

  • For every XB-enabled node type, we'd need to create a field_union field per SDC that the site builder wants to use within that node type (and we'd need to programmatically add/remove those fields whenever the site builder (or code) changes which SDCs are enabled for XB usage on that node type).
  • Not all SDC props map to fields. That's why https://www.drupal.org/project/experience_builder β†’ currently maps them to TypedData objects instead and is working on an adapter system to handle non-1:1 cases. I don't know off-hand how we would address these things with a field_union approach, but maybe it's possible.
  • For SDC props that hold non-scalar data (objects/arrays), those values might need to be in JSON, so even with a field_union approach we might want to end up writing Views plugins for data within JSON, at which point we could have just done that to begin with.

On balance, my hunch is that writing Views plugins for JSON will be less hassle than addressing all of the things that we'd need for a field_union solution, but of course my hunch could be wrong.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

@Wim Leers asked me to chime in here. There are a lot of great comments on this issue that I haven't digested yet, so I'll save a longer comment until after I do. What I want to highlight in this comment is that the essence of what I originally proposed is that there are two distinct pieces of data: tree and props.

In this issue's summary, I originally proposed those as two fields. #29 suggested instead making them two properties of a single field, which is what's currently implemented in https://www.drupal.org/project/experience_builder β†’ , and which I'm okay with for now. There might be benefits to changing it at some point to two fields, but I do like the conceptual simplicity of keeping it all in one field, so I think it makes sense to keep it as one field until there are clear benefits to changing that.

tree needs to store a hierarchy of arbitrary depth. JSON seems like the obvious choice for that. #42/#51 point out that it's possible to store a hierarchy purely relationally (as Drupal already does for menus and taxonomy), but ugh, why would we choose that when all the databases that we care about support JSON now? As an analogy, it's technically possible to store a string as multiple records of a position integer and an integer representing a UTF-8 code, but why do that when strings are available as a data type?

Because tree stores the hierarchy, props doesn't have to. props is just a flat array of components and their prop values/expressions. https://www.drupal.org/project/experience_builder β†’ currently implements it as a single JSON blob. If at some point we want to split tree and props into two fields, instead of two properties of a single field, then props could be changed to a multi-valued field, where each item would be the JSON blob for just a single component.
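To make that concrete, here's a purely illustrative sketch (the names and structure are my invention, not what experience_builder actually stores) of how tree holds only the hierarchy while props stays a flat map keyed by component instance ID:

```json
{
  "tree": [
    {"uuid": "c1", "children": [
      {"uuid": "c2", "children": []}
    ]}
  ],
  "props": {
    "c1": {"component": "mytheme:hero", "title": "Hello"},
    "c2": {"component": "mytheme:card", "count": 3}
  }
}
```

Because props carries no nesting of its own, splitting it into one JSON blob per component instance later (e.g., as items of a multi-valued field) wouldn't lose any information.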

Or if we want, instead of a JSON field, props could be a multi-valued field_union field. Except, I think there'd be a few potential problems with this:

  1. Some SDC props are themselves objects/arrays. These would then need to be JSON sub-fields within the field_union. So field_union wouldn't entirely get us away from JSON, it would just push the need for JSON one level down. Pushing JSON one level down might have some benefits, so I'm not saying this necessarily makes field_union a bad suggestion, but I just don't think it actually solves all that much for this use case (it's a fabulous module for plenty of other use cases).
  2. If you want to build a page (node) with 20 different SDCs (not just 20 instances of the same SDC), similar to building a page (node) with 20 different paragraph types, wouldn't you then end up with 20 different field_union fields (one for each SDC) added to that node type? While this can technically work (tree can make sure all the components get rendered in the right place even if their prop values are spread across items in different fields), it just seems like a lot more complexity and hassle than using JSON. However, please correct me if I'm misunderstanding how field_union works.

Like I said, I'll write up more later after I digest the rest of the discussion in this issue, but I wanted to at least touch on the above highlights in the meantime.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Thanks! Merged, and will tag beta3 with it shortly.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

I suspect I'll end up creating a mysql57 contrib module

Done: https://www.drupal.org/project/mysql57 β†’

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

I think the CR ( https://www.drupal.org/node/3444548 β†’ ) is ready to be published, but deferring to release managers to decide when to do that.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

I haven't read all the discussion here, but I noticed some discussion about the DX of JSON schema, and about certain decisions we might want to make regarding the details of the JSON schema we want to use. So I want to quickly mention that https://github.com/vega/ts-json-schema-generator is a pretty great library that converts TypeScript definitions to JSON schema. In other words, if we want to optimize for DX, we might want to consider a DX where people write TypeScript definitions instead of writing JSON schema by hand. The JSON schema creation could then be automated, which would allow us to optimize the JSON schema to suit SDC's needs rather than optimizing it for manual authoring.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Per #33, I still plan on making a sqlite326 (or better name if anyone has ideas) contrib module for D11; I just haven't had the time for it these last few months. If someone wants to beat me to it, go for it! I agree that a 3.45 requirement will be inconvenient for enough people to warrant making the contrib module for them.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

This is the same 'problem' for normal relational SQL field storage like the body field though.

In theory, yes, having a 20kb value for your body field would require the same storage, and scale with languages and revisions the same way, as having 50 paragraphs with 400 bytes each. In practice, I've never had a coworker tell me about a huge database they encountered for a customer where the cause of that size was many nodes/revisions with large values for the body field (or any other text field). It's almost always due to paragraphs. Either there are, in general, a lot more sites building big pages out of many paragraphs than out of a long body field, or perhaps it's just more common for that kind of page building to iterate through more revisions than longform articles do.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

That's 10MB per node

That was off by 10x: I only scaled by the number of languages instead of by the square of that. Languages affect both the number of records per revision and the total number of revisions (since often when you update the content for one language, you want to translate that update to your other languages, which requires a new revision for each language). I updated #8 accordingly.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

is it really that bad if the component field is stored as a large JSON string?

We have customers at Acquia with >50GB databases, mostly filled with duplicate (or nearly duplicate) paragraph revisions.

Part of the problem is that every revision stores records for every language, even though any given revision only edits one translation. That's not unique to paragraphs, it's just that with paragraphs you have so many more entities than you do with nodes alone.

For example, say for one node you have:

  • 50 components
  • 50 revisions per language
  • 10 languages
  • 400 bytes per component

That's 10MB per node. So if your site has 1,000 nodes like that, that would be 10GB.

Whereas, if we can reduce that last one to 20 bytes per component: an 8 byte short hash plus 12 bytes of overhead (quotes, commas, plus also needing to store the component instance ID), that can reduce the storage by 20x.
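Spelled out, the multiplication behind that estimate and the proposed reduction:

```javascript
// Storage estimate from the numbers above.
const components = 50;           // components per node
const revisionsPerLanguage = 50;
const languages = 10;
const bytesPerComponent = 400;

const bytesPerNode =
  components * revisionsPerLanguage * languages * bytesPerComponent;
console.log(bytesPerNode); // 10000000, i.e. ~10MB per node

// With ~20 bytes per component (8-byte short hash + 12 bytes overhead):
const reducedBytesPerComponent = 20;
const reduction = bytesPerComponent / reducedBytesPerComponent;
console.log(reduction); // 20
```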

I think we should use a full SHA1 or equivalent here.

Yeah, given a 12 byte (or who knows, maybe even more) overhead, the difference between an 8-character short hash and a 28-character (base64-encoded 160-bit) SHA1 hash ends up being only a 2x difference (20 vs. 40 total bytes), so perhaps that's worth it if we think SHA1 has sufficient collision resistance for this purpose.

However, the other thing we can do is use a dynamic length hash. For example, start with just the first 8 characters, check the lookup table, and if that key isn't already there, or is there and has the same JSON value that we want to store, then great just use that. But if the lookup table already contains a different JSON for that same short hash, then for the new JSON, try again with more characters (e.g., increase from 8 to 16 and repeat, or maybe just go immediately all the way to a full SHA1 or an even longer and more collision-resistant hash). That way, >99% of the data can be stored with just the short hash, and only occasional data ends up needing the longer hash.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

I opened ✨ JSON-based data storage proposal for component-based page building Active which I think is general enough to be able to support any UI we might end up wanting for this.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Upon further reflection, I think we should switch from prop-types to TypeScript. One reason is that TypeScript has become so widely adopted in the JS ecosystem. Another reason is I think it'll be easier to have a single set of TypeScript definitions that can then work for React, Vue, or other frameworks.

When I first opened this issue, I didn't know how we could generate .template-info.json files from TypeScript definitions. Since then, I learned about https://github.com/vega/ts-json-schema-generator, which generates JSON schema from TypeScript. That makes me realize we should change the JSX theme engine to read JSON schema (which is a standard) instead of .template-info.json (which is just something I made up because for some reason JSON schema didn't occur to me at the time).

I'll add some more detailed comments here later about how to go about implementing the above.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

https://github.com/php-tuf/composer-stager/issues/350 just got completed, making Composer Stager compatible with both Symfony 6 and 7, so that it's compatible with Drupal 10 and 11. So next steps for this issue are:

  • Release Composer Stager 2.0.0-beta5, so the above fix is in a tagged release. Note that Composer Stager 2.0 is only in beta status because it hasn't been sufficiently reviewed yet. But as far as we know, it's complete, so we can release RC or stable at any time.
  • Open an MR for this issue that adds Composer Stager ^2.0.0-beta5 to Drupal core's dependencies. See #50 for the latest patch that did this with an earlier version, but we don't do patches anymore.
  • Update the issue summary of this issue for accuracy (e.g., remove the info about the PHP file syncer and check the rest of the issue summary for whether it accurately reflects the current state of Composer Stager).
  • For anyone following this issue, and especially core committers, please review the Composer Stager codebase and open GitHub issues for any problems/concerns that you find.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

https://github.com/php-tuf/composer-stager/issues/350 got completed, making Composer Stager compatible with both Symfony 6 and 7, so I think the MR in this issue should include it so that tests run properly.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

I think the best next step to move this issue forward is to get reviews of the code in https://github.com/php-tuf/composer-stager. @TravisCarden just recently (2 weeks ago) released 2.0.0-beta4. It's only in beta status because it hasn't been reviewed much by people other than those working on Package Manager. As far as we know, though, it's feature complete and can be marked RC or stable once it's adequately reviewed.

The most noteworthy recent change is that per 🌱 [policy] Require rsync for automatic updates in Drupal core and punt other syncers to contrib RTBC we removed the PHP file syncer, so it now only works if people have rsync installed. This makes the package easier to review and easier to maintain. If there's a significant population on systems that don't already have rsync installed and can't install it, then a separate GitHub repo/package could be created (if someone were willing to maintain it) containing alternate file syncers (e.g., one that uses the Windows robocopy command for people on Windows), but those would not be "part of core" (i.e., maintained by Drupal core committers).

The Drupal issue for discussing stuff related to Composer Stager is πŸ“Œ Add php-tuf/composer-stager to core dependencies and governance β€” for experimental Automatic Updates & Project Browser modules Needs work . Issues and PRs could also be filed in the GitHub repo.

Beyond the above, @tedbow: it would be good to get an updated MR of Package Manager into this issue.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Given #18, this issue is essentially "Fixed", but moving it to RTBC first for visibility. I'll mark it Fixed if/when a few days go by without anyone raising compelling objections.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Given #9, I think there's nothing to document. Sounds like we vetted that the status quo plan is good enough: both in terms of targeting Automatic Updates to be in the core repository, and in terms of working within existing backport policy guidelines while being more likely to actually backport the things that are backport-eligible.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Umami JSX predates some of the exploration we did in https://github.com/drupal-jsx/starter-react. There, we used Vite, so we could use Vite's glob function: https://github.com/drupal-jsx/starter-react/blob/main/src/index.js#L5.

However, there the components are all flat in the src/components directory. Not sure if that's ideal; we might want to prefer Umami's/Olivero's way of organizing templates by category directories.

Also, lately I've been second-guessing trying to use PascalCase for the component filenames, and I wonder if it would be better to just name the files exactly like the Twig files would be named. So, for example, perhaps layout--threecol-section.jsx would be a better name than DrupalLayout--ThreecolSection.jsx? My original reason for wanting PascalCase was to make the JSX components feel more like regular React components and less like a Drupal-quirky thing. The thing is, though, they're already Drupal-quirky, because the shape of the props comes from Drupal. So perhaps by naming them with Drupal's standard kebab-case, we would actually simplify the mental model by even more clearly distinguishing JSX files that represent Drupal templates from the app's other JSX components.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Good idea. I think the ones we should publish are the ones in https://github.com/orgs/drupal-jsx/repositories, which are scoped. I'm curious whether, if a GitHub repo is published to JSR, it automatically stays up to date with pushes to the GitHub repo. That would make it more convenient than figuring out the right workflow for automatically publishing updates to npm, though with as many people using npm as there are, maybe there's already a well-documented way to set up the GitHub repos to do this?

Even before those are published to npm/jsr, they can be installed via their GitHub URIs (e.g., `npm install github:drupal-jsx/hyperscriptify`). There might be some divergence between the packages in the `drupal-jsx` repositories and what's in the Umami demo apps that would be good to reconcile at some point.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

I only reviewed this quickly, but at least on first glance, it looks great!

I'm curious why core_recommended_themes is a single recipe rather than splitting front-end theme (Olivero) and admin theme (Claro) into separate recipes. For example, might people want to reuse the Claro recipe with a different front-end theme, or reuse the Olivero recipe with Gin or some other admin theme? Is there discussion that led to combining them that I can read up on?

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Yes, there's not much conceptually new in HTMX that we didn't have in Drupal 6's AHAH which then became Drupal 7's AJAX. But the HTMX community did a really nice job in creating a more refined API and documenting it well.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Just doing some issue queue organization. Raising the priority to Major because #3396649: Add a demo Next.js app β†’ is critical and could benefit from having at least some examples of use client components, and because #3399991: Add a demo Fresh app and subtheme β†’ is major and could benefit from having at least some examples of island components.

Note that the current MR in #3396649: Add a demo Next.js app β†’ might already have this, so potentially this issue can get closed as part of that one if that MR stays that way.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Just doing some issue queue organization. Raising the priority to Major because while it isn't strictly necessary for an MVP, I think it would add a lot of value. For most websites, Next.js is overkill, and being able to show that this theme engine also works well for a lightweight and no-build-step framework like Fresh would open this up to a wider range of websites.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Just doing some issue queue organization. Raising the priority to Critical just to add transparency that it's a key feature / bug-fix that's needed for a successful MVP.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Just doing some issue queue organization. Raising the priority to Critical just to add transparency that it's a key feature for an MVP.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Closing this because I think the child issues stand on their own and we don't need this separate Plan issue.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Having thought about it more, I don't think we should do this, because I think we should focus the JSX theme engine on exclusive server-side rendering (except for explicit client components). Given that Next.js and Deno Fresh are two different takes on how to do exclusive server-side rendering, I don't think we need to also demonstrate yet another framework (Remix), especially since it doesn't currently support exclusive server-side rendering. People interested in Remix integration can write that themselves without us needing to demonstrate it in this project. We're also free to change our minds about this in the future and reopen this issue then if we want to.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

This was completed in https://github.com/drupal-jsx/drupal-utils-dev a couple months ago, I'm just late updating this issue. The solution was based on using prop-types declarations rather than TypeScript declarations, but the spirit is the same.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Fixed incorrect "Drupal 10" references to "Drupal 11".

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

The ajaxSubmit method is massive

But with modern browsers, we shouldn't need 95% of it. The submitForm() function in #26 conceptually should be a drop-in replacement for ajaxSubmit(). It didn't seem to work, and I haven't had time to look into it further yet, but my hunch is that we shouldn't need to add all that much more code to it to make it work.
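As a sketch of the direction (my assumption of what a minimal modern replacement looks like; this is not the actual submitForm() from #26):

```javascript
// Minimal modern stand-in for jQuery Form's ajaxSubmit(): FormData
// serializes the form (including file inputs) and fetch sends it, which is
// most of what the legacy plugin implemented by hand for old browsers.
async function submitForm(form) {
  const response = await fetch(form.action, {
    method: (form.method || 'post').toUpperCase(),
    body: new FormData(form),
  });
  if (!response.ok) {
    throw new Error(`Form submit failed with status ${response.status}`);
  }
  return response;
}
```

A real integration would still need to feed the response through Drupal's AJAX command processing, which is presumably where the remaining work lies.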

I therefore am thinking, at least to get us onto jQuery 4, we should just fork jQuery form and fix the four jQuery 4 issues in it

Yes, I agree with doing that so as not to block our jQuery 4 upgrade on this issue.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Debian trixie would be fine in the short term, but I don't think its release is any time soon, so it will drift to later SQLite versions when they're released. I think we'll want to keep testing on SQLite 3.45 for the lifetime of Drupal 11, so either Ubuntu Noble or a way to pin SQLite to the desired version regardless of the distro's default would be preferable I think.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

create another docker image using Ubuntu/noble?

Right, the goal is to be able to run Drupal 11 tests on SQLite 3.45 while still also being able to run Drupal 10 tests on SQLite 3.26. I think one way to achieve that would be with an Ubuntu Noble docker image. Not sure if we should just go with that or if it makes sense to consider any alternative ways of getting SQLite 3.45 onto any of our other existing docker images.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

#33 affects the timing of when we commit the change in the 11.x branch, but back to RTBC as far as the policy itself is concerned.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Looks like Ubuntu got it in already: https://packages.ubuntu.com/source/noble/sqlite3.

I think the next step here then is to not make things more difficult than necessary for developers/testers of 11.x prior to when Ubuntu 24.04 and the next macOS are released. I'll try to find some time in the next week or two to create a sqlite contrib module and make sure that it's as easy as `composer require` and that things like the quickstart command and running tests work smoothly.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Re #28, cool! Glad they released that ahead of their initial projection for end of Jan. If someone knows Ubuntu's process (e.g., if it needs to land in Debian trunk first) and can open an issue in the correct queue (Ubuntu's or Debian's) making the case for why they should try to get this in before Ubuntu 24.04's feature freeze, that would help! With only 6 weeks till Ubuntu's feature freeze, we shouldn't assume that this particular SQLite upgrade is on their radar or something they're prioritizing without someone filing an issue in their queue.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Turns out, we might not actually need a subtheme for this.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Thanks for all the feedback. I discussed this with @tedbow and we agreed that as long as sufficiently important bug fixes are backported to the production branch(es), then that resolves the original concern here, so closing this as "works as designed" (even though "will work as designed" would be technically more accurate).

When I have a chance I'll open a followup to discuss some more details about what else might be needed to facilitate testing: both manual and automated, both pre-commit and post-commit.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Something to think about here is upgrade path:

  1. Sites might have active config with keys that this MR makes invalid when status=FALSE. Do we want a post-update function to remove those?
  2. There might be contrib/custom modules/profiles with default config that includes these now-invalid keys. Do we want something that runs on module-enable and/or config-import that removes them?

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

I committed the blocker issue, so this isn't postponed anymore.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Pushed to 11.x, and published the CR. The CR doesn't currently mention that requiredKey can be set to false, so that might be a good thing to add to it.

Also tagging this (not just this issue, but all the stricter validation features) for 10.3.0 release highlights.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

This MR looks really great. My only hesitation with committing it is the various changes in here related to enforcing the lack of any other keys when an editor's image_upload.status is FALSE. For example, the need to remove those other keys from Umami's editor.editor.basic_html.yml. I think that should all be removed from this MR and moved to a follow-up that gets its own commit and its own change record.

I don't think separating that out needs to affect this MR too much, though. For example, could we accomplish it by adding an editor.image_upload_settings.0 config schema object that doesn't add FullyValidatable? We can still leave FullyValidatable in place on editor.image_upload_settings.1.

The reason I want to punt on enforcing FullyValidatable on editor.image_upload_settings.0 is that I think it could be useful in some situations to have default config that adds the upload configuration even with a status=false, so that someone could change the status to true and have the other stuff already properly configured. Maybe I'm wrong and that's not useful, but in either case, I'd like us to discuss that in its own issue without holding up this one.
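For illustration, here's a hedged sketch of what I mean in Drupal config schema terms (keys and labels are illustrative, not the MR's actual schema):

```yaml
# Variant resolved when status is false: no FullyValidatable constraint,
# so the other upload keys may be present without strict enforcement.
editor.image_upload_settings.0:
  type: mapping
  label: 'Image upload settings (disabled)'
  mapping:
    status:
      type: boolean
      label: 'Status'

# Variant resolved when status is true: keeps strict validation.
editor.image_upload_settings.1:
  type: mapping
  label: 'Image upload settings (enabled)'
  constraints:
    FullyValidatable: ~
  mapping:
    status:
      type: boolean
      label: 'Status'
    # ... the remaining upload keys (scheme, directory, etc.)
```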

Other than that, I think this is good to go.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

If we end up setting MariaDB's minimum to 10.11, the next question to figure out would be when to start enforcing that in HEAD. While we should communicate the requirements ~6 months before the first 11.0 release window (in other words, now-ish), actually setting that requirement in HEAD now would affect developers/testers on Ubuntu 22.04. So I'd recommend not setting the requirement until just before beta (which might still be before Ubuntu 24.04 is released, which would be inconvenient, but I think we kind of have to pull that trigger before beta).

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

I don't think we have previous examples, but also the MariaDB release schedule has only existed since 10.6 (July 2021), so it hasn't been in place long enough for us to know how it'll affect us.

Is there any reason to think that it will affect us, though? What changed is that they added a bunch of short-term releases, but has anything significant changed about their long-term releases? Just one piece of anecdotal evidence: Drupal 9 raised the MariaDB minimum to 10.3, and https://www.drupal.org/project/mysql56 β†’ exists, which nominally supports MariaDB 10.0. In the 3.5 years since Drupal 9.0 was released, there are no significant code differences in that module vs. core's driver, and no one has opened an issue with any problems on MariaDB 10.0, 10.1, or 10.2.

However, I'm not necessarily opposed to the idea that we simply want core to be cautious about the promises it makes, and that the most cautious thing to do would be to set MariaDB's minimum to 10.11, so that core doesn't have to take on the risk of dealing with MariaDB 10.6 bugs in 2027/2028. For MySQL I think we should support the last 2 LTS releases of Ubuntu (because people don't upgrade to the newest Ubuntu immediately), but given MariaDB's lower marketshare, I think it's okay to set a more aggressive minimum for MariaDB.

Changing the title to say 10.11 for MariaDB, since it seems like that's what we're recommending, so that it's visible to people scanning the issue queue who might not already be following this discussion.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

What would be the benefit of setting MariaDB's minimum to 10.11 vs. 10.6? Are there any features or syntax in MariaDB 10.11 that both:

  • aren't in MariaDB 10.6, and
  • are in MySQL 8.0

If not, then Drupal core's mysql driver would literally have no code in it for the lifetime of Drupal 11 that won't work on MariaDB 10.6. At which point, we'd be making the adoption of D11 harder for people (e.g., people on Ubuntu 22.04 who use MariaDB) for no reason.

Or, is it not about MariaDB features, and more about MariaDB support that's available once it's EOL? Are we concerned, for example, that in 2027 or 2028 we'll still be supporting D11 and we might find a MariaDB 10.6 bug that doesn't exist in MySQL 8.0 and that MariaDB won't fix and Drupal's core mysql driver will find challenging to work around? This would be a reasonable benefit to setting 10.11 as our minimum, but do we have any prior experience of this actually happening with prior MariaDB EOL versions?

Because MariaDB has low marketshare (not sure how reliable these are, but source1, source2), I think it's acceptable to set the minimum to 10.11. For example, people on Ubuntu 22.04 would still have options, such as switching to MySQL, using a contrib mysql driver per #27, or upgrading to Ubuntu 24.04. However, if we decide to set the minimum this high, I think we should have a clear reason for doing so, such as features we actually intend to use, or actual experience with EOL MariaDB versions giving us problems during the last couple years of supporting a major Drupal version.

It looks like no one from Pantheon has commented on this issue.

This hasn't happened yet, but I found https://docs.pantheon.io/pantheon-yml#specify-a-version-of-mariadb, so looks like Pantheon customers have an easy way to choose their MariaDB version as long as Pantheon offers it, and that doc page says it was last reviewed in March 2022 and it lists 10.6 in its example, so we could perhaps extrapolate that to mean that 10.11 will be available this year.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Setting the title to what I think is RTBC here per #19.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

I can also now confirm that I'm able to reproduce this with the same steps as in the issue summary

That's because I followed the exact steps, including installing Drupal 10.1.5. I think this got fixed in 10.1.7 and 10.2.0 in πŸ› LibraryDependencyResolver::getMinimalRepresentativeSubset() calculates dependencies incorrectly (Fixed). Please re-open this if there's still a bug related to this in those newer versions.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Actually, when I wrote #5, I misunderstood what my coworkers were seeing. In fact, they're seeing the same as the original issue summary, where each response is a 200 (not a 404), but it's a PHP request every time: the hash in the src attribute of the script tag on the page doesn't match the hash that gets generated in AssetControllerBase::deliver, so each page view keeps requesting the former hash, and each request hits PHP instead of a file on disk.

I can also now confirm that I'm able to reproduce this with the same steps as in the issue summary and for the same reason as explained in #2.

Retitling for clarity (I missed that this was the pattern when I first read the issue, despite the issue summary actually explaining it quite clearly), and bumping to Major (frankly, I think Critical could even be justified since this can be awful for high traffic sites).

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

I mostly agree with #7, but with a few tweaks...

For MySQL, I think we should be the most conservative, which means I would prefer instead of:

'the MySQL LTS available before the first major beta window'

something more like 'the MySQL LTS that's been in an Ubuntu LTS for at least a year before the expected Drupal major release date'.
Given the alignment of Drupal's cycle with Ubuntu's cycle, in practice I think that'll often be the same thing, but in cases where MySQL releases an LTS after Ubuntu's feature freeze, or if Drupal changes its cycle to be less aligned with Ubuntu's, then my version is a bit more conservative.

For PostgreSQL:

the most recent PostgreSQL major version that is available prior to the first beta window for a Drupal major version

Except we want to set our requirements 3 months before beta. But given PostgreSQL's Sep/Oct release schedule, I think we could change this to 'the most recent PostgreSQL major version that is available 6 months before the expected Drupal major release date' without changing which version that corresponds to.

For MariaDB, I think we should get Pantheon's input into what their constraints are around adopting MariaDB versions.

For SQLite, I think we could potentially be as aggressive as "the version that's in the latest Ubuntu LTS before Drupal's beta", or as conservative as "the version that's in the latest Ubuntu LTS, Debian, and MacOS before Drupal's beta", or somewhere in between (e.g., Ubuntu and MacOS but not Debian). For any given Drupal major version, we should base where we land within that range on how important the features in the newer SQLite release are.

For Drupal 11, SQLite 3.45 has jsonb, which I think is important, so if that makes it into Ubuntu 24.04, I'd support that minimum even if it means early testers/users on Debian and Mac need to manually upgrade their sqlite or install a contrib driver. However, if Ubuntu's latest ends up being 3.44, then I don't think there's anything between 3.40 and 3.44 that's useful (I don't think json5 in db insert/update queries is useful to Drupal), so I'd go down to 3.40, or even to 3.39 in order to pick up the 2nd latest MacOS.

Basically, where it costs us nothing to make things a bit easier for more people, we should do that; but where core development/maintenance gets a tangible benefit from a higher minimum, we should choose that higher minimum within reason.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

I support requiring MySQL 8.0 and MariaDB 10.5 (at least). I suspect I'll end up creating a mysql57 contrib module, like I did with https://www.drupal.org/project/mysql56 β†’ for Drupal 9 and 10, but I don't know for sure if I will yet.

I also support raising the MariaDB minimum to 10.6 if we decide to do that. RHEL's next minor release might include 10.11. Even if it doesn't, most RHEL users probably use MySQL over MariaDB, and the few that need/prefer MariaDB could use the contrib driver if I create one (which I'll be more likely to do if both Drupal requires 10.6 and RHEL remains on 10.5).

It looks like no one from Pantheon has commented on this issue. Might be good to reach out to them for comment. They're probably the largest Drupal host that uses MariaDB and I don't know their platform constraints with respect to versions that are available to them. (I work for Acquia and we use MySQL.)

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

The biggest problem with 3.45.0 is that we aim to set requirements six months before the first release window for the major version

I forget if we documented our "set platform requirements 6 months before major release" policy, but if so, then FWIW, I think we should change that policy to explicitly mention SQLite as an ongoing exception (not just for 11.0) and allow SQLite's minimum to float up until beta. While I think SQLite is a fantastic db, and I would actually love to see more Drupal sites using it in production (especially if https://sqlite.org/hctree/doc/hctree/doc/hctree/index.html ever makes it beyond its currently experimental state), I don't think there are currently enough Drupal sites using SQLite in production to warrant giving 6 months' notice prior to a major release.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Debian and Ubuntu both have releases scheduled prior to Drupal 11's earliest release date which will support sqlite 3.42.

This is correct for Ubuntu, but I don't think it is for Debian. Debian's latest stable release is Debian 12, released in June 2023, which has SQLite 3.40. Debian has not yet announced a release date for Debian 13, but they typically follow a ~2 year cycle, so mid-2025 is likely. So if we want Drupal 11 with sqlite to work on the latest release of Debian that's available when 11.0 is released, we should set the minimum to 3.40. However, if there's nothing we need in 3.40, I'd suggest lowering it to 3.39, which is the version in MacOS Ventura. If there is something we need in 3.40, then it's okay to set above Ventura's minimum since MacOS Sonoma has 3.43, but some people are slow to update their MacOS (including if they're on old hardware), so if there's no difference to us between 3.40 and 3.39, then we might as well be nice to them.

The argument for 3.42 over 3.40 is json5 support, but I don't envision us having any use case in core for json5. Especially if MySQL doesn't support json5 yet (does it?). json5 is great for human-readable (e.g., adding comments) json, but what would be the use case for us needing to send json5 into the db? Also, sqlite doesn't even retain the json5 format (e.g., comments would be stripped out before storing into the db file).
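To illustrate the stripping point, here's a quick sketch using Python's sqlite3 module, guarded by a version check since (per the discussion above) JSON5 input is only accepted starting with SQLite 3.42:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
version = tuple(int(p) for p in sqlite3.sqlite_version.split("."))

if version >= (3, 42):
    # SQLite accepts JSON5 on input but normalizes it to canonical JSON:
    # the comment and the unquoted keys do not survive into the stored value.
    row = conn.execute("SELECT json('{a: 1, /* note */ b: 2}')").fetchone()
    print(row[0])  # canonical JSON, e.g. {"a":1,"b":2}
else:
    print(f"SQLite {sqlite3.sqlite_version} predates JSON5 support")
```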

I think we should not be constrained by RHEL here

I can accept this for core, but I don't think it's good to drop RHEL support entirely. In other words, if Drupal 11 requires anything more than SQLite 3.26, I will create a contrib module with a sqlite driver that works on 3.26. It makes sense for core's sqlite driver to use the -> and ->> operators (which requires 3.38), but I don't expect there to be any problem with making contrib's sqlite driver change those to the equivalent json functions that are available in 3.26.
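A hypothetical sketch (in Python, purely for illustration) of the kind of rewriting such a contrib driver could do: translate the ->> operator (SQLite >= 3.38) into the json_extract() function that old SQLite already has. A real driver would do this at the query-builder level rather than with a regex over raw SQL, and the function name below is my own placeholder:

```python
import re

# Matches simple `column ->> '$.path'` expressions; a real driver would need
# to handle the full range of operand shapes, not just bare column names.
_ARROW = re.compile(r"(\w+)\s*->>\s*('[^']*')")

def downgrade_json_operators(sql: str) -> str:
    """Rewrite `col ->> '$.path'` into `json_extract(col, '$.path')`."""
    return _ARROW.sub(lambda m: f"json_extract({m.group(1)}, {m.group(2)})", sql)

print(downgrade_json_operators("SELECT data->>'$.langcode' FROM config"))
# SELECT json_extract(data, '$.langcode') FROM config
```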

It would be great to just bump the minimum to 3.45.0

I agree that core being able to use 3.45's jsonb_*() functions would be nice. Given that I'll create a contrib sqlite driver for old sqlite versions, there might be a pathway to us choosing to be this aggressive with core's requirement. I think the key question here is whether 3.45 will make it into Ubuntu 24.04. If SQLite releases 3.45 end of January and Ubuntu's feature freeze is end of February, this is possible, but not guaranteed, especially since I think Ubuntu's process is that it would need to land in Debian's trunk first.

I think the other question is whether Drupal 11.0 will be released in June or July, because the June window means a beta end of March (before Ubuntu 24.04's release) and the July window means a beta end of April (just after Ubuntu's release).

But, let's assume hypothetically that 3.45 makes it into Ubuntu 24.04 and Drupal 11's beta is right after Ubuntu's release. Would it then be reasonable to say that core only needs to support that, and that people on a non-latest Ubuntu or on any other OS (including MacOS, until a new version of it is released later in 2024) need to install the contrib driver? How would this affect the quick-start command (e.g., could we add a check in that script that tells people if their sqlite is too old and gives them the composer command they need to run to install the contrib driver)?
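The check itself would be trivial; sketched here in Python rather than the PHP the actual quick-start script would use, with a hypothetical minimum and a made-up contrib project name:

```python
import sqlite3

# Hypothetical minimum; the thread above debates 3.39 vs 3.45.
MINIMUM = (3, 45, 0)

def sqlite_version_ok(version_string: str, minimum=MINIMUM) -> bool:
    """Compare a dotted version string against the required minimum."""
    return tuple(int(p) for p in version_string.split(".")) >= minimum

if not sqlite_version_ok(sqlite3.sqlite_version):
    print(f"Your SQLite ({sqlite3.sqlite_version}) is older than "
          f"{'.'.join(map(str, MINIMUM))}; install a contrib driver, e.g.:")
    print("  composer require drupal/legacy_sqlite  # hypothetical project name")
```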

Finally, if any of the work in ✨ Add "json" as core data type to Schema and Database API (Needs work) and related issues actually wants to use sqlite 3.45 syntax (the jsonb functions) or even 3.38 syntax (the -> and ->> operators), are we then saying that that work will only be committed to 11.0 and not to 10.3?

In summary, my opinion is that we should choose either 3.39 or 3.45 (or the former now and consider the latter later), but that anything in between the two doesn't make as much sense (unless there's important features between those two other than json5).

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

FrankenPHP has a known issue about crashes when certain PHP constructs and functions are used within Fibers. I think it would be great to open a separate Drupal core issue that tries to find out if Drupal makes any of those calls within fibers. That could happen independently of this issue, which affects all persistent app servers, not just FrankenPHP. However, I don't have any ideas at the moment on what the best way would be to determine if Drupal triggers any of those FrankenPHP crashes without first solving this issue.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

I set #95 to NR just to queue tests, but now back to NW to keep this out of the review queue.

Question from the issue summary: "this is a workaround for drupalCi but what about gitlab?"

It's a good question whether, with GitLab CI, it's now possible to move this issue to the https://www.drupal.org/project/mysql56 β†’ queue while still putting some custom configuration into that project's .gitlab-ci setup that would make it run all of Drupal core's tests. I'll need to look into that when I can carve out some time to do so.
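Conceptually, GitLab CI's include keyword makes this plausible. Something along these lines in the contrib project's .gitlab-ci.yml might work, though the project path, template file, and variable names below are guesses, not the real names from Drupal's CI templates:

```yaml
include:
  - project: project/gitlab_templates          # hypothetical template project
    ref: main
    file: includes/include.drupalci.main.yml   # hypothetical template file

variables:
  # Hypothetical switches telling the template to run core's full test
  # suite against this project's mysql driver instead of core's own.
  _TARGET_CORE: '11.x'
  OPT_IN_TEST_ALL_CORE: '1'
```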

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

@bkosborne: I've heard reports from coworkers seeing something very similar to #3, but haven't found steps to reproduce it. If you (or anyone else seeing this) can figure out a way to get from a newly installed Standard Profile site into a configuration (which magic combo of contrib/custom modules is needed?) that puts you into that 404-every-time state, that would be super helpful to debugging and fixing it!

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

It's that time of year :)

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

The current MR looks really good to me and addresses all my prior concerns.

+    // All values are optional by default (meaning they can be NULL), except for
+    // mappings and sequences. A sequence can only be NULL when `nullable: true`
+    // is set on the config schema type definition. This is unintuitive and
+    // contradicts Drupal core's documentation.
+    // @see https://www.drupal.org/node/2264179
+    // @see https://www.drupal.org/node/1978714
+    // To gradually evolve configuration schemas in the Drupal ecosystem to be
+    // validatable, this needs to be clarified in a non-disruptive way. Any
+    // config schema type definition (that is, a top-level entry in a
+    // *.schema.yml file) can opt into stricter behavior, whereby a property
+    // cannot be NULL unless it specifies `nullable: true`, by adding
+    // `FullyValidatable` as a top-level validation constraint.

I'd like to take an attempt at wordsmithing this a bit more, but not right now. I'll either do that before committing this myself, or if someone else ends up committing it, I'm fine with opening a follow-up for improving code comments.

This issue is tagged "Needs subsystem maintainer review", so would be great to get @alexpott's review if possible. However, if he doesn't have bandwidth for it, I won't block commit of this on that.

       $root_type_has_opted_in = FALSE;
+      foreach ($parent->getRoot()->getConstraints() as $constraint) {
+        if ($constraint instanceof FullyValidatableConstraint) {
+          $root_type_has_opted_in = TRUE;

Since this MR only adds FullyValidatableConstraint to menus and shortcut sets, neither of which has dynamic types (plugins or third party settings), I think this is fine for now. However, before adding FullyValidatableConstraint to config roots that include any descendants with dynamic types, I think we need to work out how to handle those boundaries.

It's impossible for a config root to know that all of the plugins within it are fully validatable, so I think the semantics of FullyValidatableConstraint must be limited to only its descendants with static types. For an element that has an ancestor with a dynamic type, we need to use that ancestor's presence or absence of FullyValidatableConstraint instead of checking ->getRoot().

But again, we can punt that to a follow-up since that's not surfaced for menus and shortcuts.
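A sketch of the lookup order I'm proposing, in Python for brevity (the real code would live in Drupal's typed config system, and every class and attribute name here is hypothetical): instead of always consulting the root, walk up from the element and let the nearest dynamic-type ancestor decide, falling back to the root only when no such ancestor exists.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SchemaElement:
    name: str
    parent: Optional["SchemaElement"] = None
    dynamic_type: bool = False        # e.g. a plugin or third-party settings
    fully_validatable: bool = False   # carries FullyValidatableConstraint

def opted_in(element: SchemaElement) -> bool:
    node = element.parent
    last = element
    while node is not None:
        if node.dynamic_type:
            # The dynamic-type boundary shadows the root's opt-in decision.
            return node.fully_validatable
        last = node
        node = node.parent
    return last.fully_validatable  # `last` is the root at this point

root = SchemaElement("views.view.frontpage", fully_validatable=True)
plugin = SchemaElement("display.default", parent=root, dynamic_type=True)
option = SchemaElement("display_options", parent=plugin)
static = SchemaElement("label", parent=root)

print(opted_in(option))  # False: the plugin boundary has not opted in
print(opted_in(static))  # True: inherits the root's opt-in
```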

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Thanks for finding these. I think the proper place to fix it would be in https://github.com/drupal-jsx/propsify/blob/main/strict.js. Also in the petite.js file in that same repo if the same change is needed for Preact/Solid, but not if it's only for React.

Umami JSX isn't using the above module yet, so until it does, fixing it in this PHP branch is a reasonable stopgap.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

After opening this issue I learned about https://www.npmjs.com/package/prop-types, which is a better fit than TypeScript for this use case. This is now demonstrated in https://github.com/drupal-jsx/starter-react/blob/main/src/components/Dru....

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Very nice, thanks! Merged to 1.0.x.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Hey @cosmicdreams! Great to see you here :)

Are you more interested in HTMX as it relates to Drupal core? Or how it relates to the (still in very early stage development) JSX theme engine (this issue)? If the former, I opened 🌱 [Plan] Gradually replace Drupal's AJAX system with HTMX Active . For that issue, I think a possible next step would be to pick one place that currently uses AJAX in core (I'd probably suggest taking a straightforward example like ConfigSingleExportForm rather than starting with Field UI or Views UI) and trying to convert it to HTMX and see what you end up needing to add to such an MR (besides the HTMX library itself) to make it work.

For this issue, I currently have it postponed on #3397049: Integrate with AJAX β†’ , because the first step is to make the JSX theme engine work with ajax at all. Once we figure out how to do that, it will inform how to make it work with htmx as well. I expect the solution with respect to JSX theme engine to be very similar between ajax and htmx, since the underlying problem is the same: where to intercept the response before it's added to the DOM so that we can manipulate it first (for example, render it with React or whichever JSX renderer is being used).

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

Re #20, since converting to HTMX is too large scope for this issue, I opened 🌱 [Plan] Gradually replace Drupal's AJAX system with HTMX Active as a separate Ideas issue.

πŸ‡ΊπŸ‡ΈUnited States effulgentsia

I haven't tested this at all, so this might fail hard, but something along these lines is what I recommend.
