This looks awesome! Merged to 0.x.
effulgentsia → made their first commit to this issue’s fork.
Or add https://www.drupal.org/project/sqlite337 → to the image :)
Thanks!
Oh that's neat: thanks, @brianperry, for creating that!!
Looks like that module is using 11ty instead of Astro. Not sure if that really matters for us. We started here with Astro because that's a very popular and well maintained framework, but it's quite possible that 11ty is sufficient for our needs here. @brianperry: what are your thoughts on pros/cons between 11ty islands vs. Astro islands?
The MR here is currently using CodeMirror and no strong opinions have yet been raised making the case for why Monaco would be better, so retitling to reflect the current state. We're still open to switching to Monaco if a strong case is made for that, but in addition to comment #7, I'd like to point out that:
- CodeMirror is used as the code editor within the dev tools of Chrome, Safari, and Firefox.
- The experience described in https://sourcegraph.com/blog/migrating-monaco-codemirror also matches my (much more limited) initial positive impressions of CodeMirror and frustrations with Monaco. There's a lot to like about the user experience of Monaco, so I hope Microsoft evolves it into something that can also be made lighter weight and easier to write plugins for.
Updated the proof-of-concept issue for the component code editor from 📌 [exploratory] StackBlitz PoC Active to ✨ [exploratory] PoC of Preact+Tailwind components editable via CodeMirror or Monaco Active .
✨ [exploratory] PoC of Preact+Tailwind components editable via CodeMirror or Monaco Active is looking quite promising, and works without relying on any commercial service, so closing this as outdated.
The Purpose and Expected result sections are incomplete. They don't capture the entirety of the remaining items that are in the Requirements section. When I find a bit of time, I'll fill in those sections so that they add clarity as to what the overarching goal of the 0.2.0 milestone is.
Also descoped (with confirmation from @lauriii) revisions and Workspaces from 0.2.0.
Removed the stretch goals from the list. We're not currently working on them so there's no realistic plan of them getting done by early January.
Just jotting some notes down from a conversation with @balintbrews and @hooroomoo...
When you run `astro build`, it generates `preact.module.js`, `hooks.module.js`, and some other JS files that the in-browser-editable components depend on. So the question is how do we want to get those assets "into Drupal".

I think the most straightforward way would be to just add an `astro build` step to XB's build step. In other words, XB's current build step is `npm run type-check && vite build`, so conceptually we could expand that to `npm run type-check && vite build && astro build`.

However, XB is a React project in the `ui` directory. It would probably be good for the Astro project to be a separate project (meaning, have its own package.json) from the main XB project. This separate project could be either a directory that's a sibling of the `ui` directory, or a child. For example, `astro` or `ui/astro` (or we might want to come up with some other name for it than just calling it `astro`). In which case, we'd want the build script in `ui/package.json` to do whatever it does plus then kick off an `npm run build` command within the astro directory. I'm guessing there are idiomatic conventions for how to structure/implement this type of setup, but I don't know what they are.
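To make that concrete, here's a minimal sketch of how `ui/package.json` could chain into the Astro project's build, assuming the Astro project lives at `ui/astro` (the directory name, script names, and use of `--prefix` are all assumptions, not decisions):

```json
{
  "scripts": {
    "build": "npm run type-check && vite build && npm run build:astro",
    "build:astro": "npm --prefix ./astro run build"
  }
}
```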
We're also looking into whether we can still wrap the Preact components into Astro islands
Yay, #17 proves that we can! Because Astro doesn't insert any special sauce into the JS for the `component-url`: that's just vanilla Preact, so `@swc/wasm-web` is sufficient for compiling that. Meanwhile, all of Astro's special integration code, from the JS for the `renderer-url` to its JS for the `<astro-island>` custom element, can be generated statically rather than in-browser. So that's all very promising!
Later on we also want to change how either SWC or Astro bundles their JavaScript to be mutually compatible
I looked a bit into this, and I think all we need for this is to set SWC's `jsc.transform.react.runtime` configuration to `automatic`. That compiles to the `jsx()` function introduced in React 17, which is more efficient at creating vnodes at runtime than the older `createElement()`/`h()` function. So that's what Astro's Preact integration (and pretty much everything else in the React ecosystem) uses by default at this point, and SWC's default of `classic` is pointless for new projects.
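For reference, a minimal sketch of what that could look like when compiling in the browser with `@swc/wasm-web` (the `importSource: "preact"` part is my assumption about how we'd point the automatic runtime at `preact/jsx-runtime`; it isn't a settled configuration):

```js
import initSwc, { transform } from "@swc/wasm-web";

await initSwc();

// Hypothetical component source; in practice this would come from the code editor.
const componentSource = `export default function Counter() { return <div>0</div>; }`;

const { code } = await transform(componentSource, {
  jsc: {
    parser: { syntax: "ecmascript", jsx: true },
    transform: {
      react: {
        // The setting discussed above: compile to the automatic JSX runtime...
        runtime: "automatic",
        // ...and (assumption) point it at Preact's jsx-runtime module.
        importSource: "preact",
      },
    },
  },
});
```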
Based on a conversation with @effulgentsia, ideally later on we could just rely on the Astro-bundled Preact packages instead of also bringing in Preact via CDN through an importmap.
I think I figured out how to do this. The challenge here is how to make Astro bundle its JS in such a way that it exports the original, rather than the minified, names of the Preact functions that we want components to be able to import. Normally, Astro takes care of bundling the components, so it can minify the export names of library functions since it can compile the components to import those minified names. In our case, since we're compiling the component code separately from the Astro code, we need any Astro bundles that we want to import from to export un-minified names. Astro uses Vite which uses Rollup, but I couldn't find any Astro/Vite/Rollup configuration to accomplish this. However, I found the following indirect way to accomplish it.
Within the Astro project, we can add a `Stub` component like this:
const { useState } = await import("preact/hooks")
const { useSignal } = await import("@preact/signals")
const { jsx, jsxs, Fragment } = await import("preact/jsx-runtime")
export default function () {
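  // Intentionally empty: this stub exists only so that the dynamic imports above
  // force Rollup to export useState, useSignal, jsx, jsxs, and Fragment by name.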
}
And then add a `<Stub client:only="preact"/>` usage of it in the index.astro file. Because this Stub component uses dynamic imports, it results in Rollup exporting the `useState`, `useSignal`, `jsx`, `jsxs`, and `Fragment` functions, with those names, from the corresponding module bundles. If we want to expose additional functions/hooks for in-browser-editable components to be able to use, we can add those to the Stub component as well, but for now, let's just keep it to these.
With this in place, we can then take the output of SWC's compilation and replace `from 'preact/hooks'` with `from './hooks.module.js'`, and similarly for `signals` and `jsx-runtime`.
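For illustration, a minimal sketch of that rewrite (the signals and jsx-runtime bundle file names are assumptions; only `hooks.module.js` and `preact.module.js` have come up so far):

```js
// Rewrite the bare package specifiers in SWC's output so they resolve to the
// statically generated Astro bundles instead of needing an importmap.
function rewriteImports(compiledCode) {
  return compiledCode
    .replace(/from\s*['"]preact\/hooks['"]/g, "from './hooks.module.js'")
    .replace(/from\s*['"]@preact\/signals['"]/g, "from './signals.module.js'")
    .replace(/from\s*['"]preact\/jsx-runtime['"]/g, "from './jsx-runtime.module.js'");
}
```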
One thing I am not sure about in Counter_SWC.js is where I am supposed to call the render function, and into what part of the document I should inject it.
I don't think you need to call `render()`. I think you only need to `export { Counter as default };`.
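In other words, something like this hypothetical host-side sketch (file names follow the ones mentioned earlier in this issue, and I'm assuming `preact.module.js` exports `h` and `render`; the mount target and props are made up): the compiled component module only default-exports the component, and whatever hosts it owns the `render()` call.

```js
import { h, render } from "./preact.module.js";
import Counter from "./Counter_SWC.js";

// The host (e.g. Astro's island runtime or our preview code) calls render();
// the component module itself never does.
render(h(Counter, {}), document.querySelector("#xb-preview"));
```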
Easy: #3462633: Allow copy pasting components with CTRL+C and CTRL+V, but for entire component trees. Precisely because there is nothing structured, it's trivial to go from FreeformPage to Node!
I agree with the part about it being easy to create a new node and copy content from the freeform page into the new node. But then how do you update links and entity references from other nodes/pages/blocks/etc. that were targeting the old freeform page to now target the new node? My suggestion for that is that links will be the easier problem to solve (whether via Redirect module or FreeformPage::toUrl() or something else) and we don't need to solve for it now so long as we're reasonably confident that it's solvable in principle. But that updating entity references throughout the site from targeting a freeform page to targeting a node is a harder problem to solve, and therefore we should instead simply disallow entity references to freeform pages to begin with, at least until we come up with an approach for how to update them or a product decision/UX that somehow avoids the need to update them.
Discussed this with @longwave, and one big next step here besides general review is to create tests that surface the different cases where there isn't a 1:1 map between field values, prop values, and widget values. @longwave offered to start on that.
With 📌 Introduce the `xb_stark` theme that uses the Semi-Coupled theme engine, removes the need for taking over the Twig theme engine for the whole site Active in, we're no longer "taking over the Twig theme engine for the whole site", so retitling the issue to what I understand @larowlan's proposal to be. However, per #5, I still don't really see why we would want to circumvent the theme system when rendering to HTML, and I think trying to do so would lead to a lot of difficulties when the form has a mix of some elements that have XB suggestion overrides and some that don't interspersed through the tree (e.g., either can be a child of the other).
I'm not convinced by the configuration management fragility arguments of node bundles as the reason for a new entity type.
Despite best efforts, someone can still delete a locked down configuration entity and break the Experience Builder integration.
Can't we prevent that with a config import validator?
However, I do think the question is intriguing as to whether the concept of an "un-modeled" page is sufficiently different from "modeled content", such that creating that split at the entity type level ("node" = "modeled content", "machine_name_tbd" = "un-modeled page") would actually make Drupal easier to understand for new Drupal users (content creators and site builders) without taking any power away from existing Drupal experts.
I'm using the term "(un-)modeled" here rather than "(un)structured", because the entity type being proposed here would still have fields (e.g., the XB field itself and SEO-related fields), so the distinction isn't fields or no fields, it's whether or not there's a content model. For example, for content types like Product, Event, JobPost, there's a content modeling process involved in creating those for a site. And most of Drupal's awesomeness derives from letting people do that content modeling in the UI and then being able to use that content model in powerful ways.
But what happens when someone is new to Drupal, or wants to build a quick new site, and doesn't want to first go through a content modeling process, they just want to create a few pages? Currently, Standard Profile provides that via the "Basic Page" content type, but does it actually make sense for "Basic Page" to be one content type among Article, Product, Event, JobPost, etc.? Or is the concept of "Basic Page" somehow special, different from those other content types? And would Drupal be easier to understand if we leaned more into that specialness?
For example, within XB, I think there's designs for making the content creation links say "New page" and "New CMS content", where "New page" lets you create a new un-modeled page and "New CMS content" would then show you links for "Article", "Event", "Product", etc. Now, we can achieve that by having a config setting within XB for which node type is the one that "New page" creates, but the fact that user research led to designs that partition in this way leads me to wonder whether there's a real conceptual difference between un-modeled pages and modeled content, and whether creating a new entity type for un-modeled pages would help us both refine our understanding of that difference, and optimize various Drupal UIs accordingly.
However, my biggest concern with un-modeled pages being its own entity type is then what happens when someone starts by creating some un-modeled pages, but then discovers that those pages could benefit from a content model. There are contrib modules, such as https://www.drupal.org/project/convert_bundles →, for converting a node from one bundle to another, but that wouldn't work for converting a not-a-node to a node. Instead we'd need to create a new node, move the not-a-node's XB field data (and SEO field data) to the node, and move/redirect all links and references from the former to the latter. Links I think we could handle (e.g., by implementing the `toUrl()` method on the class for this not-a-node entity type), but references would be really tricky. However, we can avoid this problem by disallowing references to the proposed not-a-node entity type. Which I think would fit... why would you need an entity reference to an un-modeled page? If you do need an entity reference to it, that's probably a sign that it should actually be a modeled node.
I'm still not 100% clear on whether a separate entity type vs. a special node bundle is desirable or not. But I think the decision should be based on to what extent do we think un-modeled pages are sufficiently distinct from modeled content, rather than on concerns that we won't be able to implement sufficient config import validation to prevent people from making config changes that will break the code.
By `hook_query_entity_query_alter` do you mean `hook_entity_query_alter`?
I think I mean the former. Or in other words, `hook_query_TAG_alter()`, where `entity_query` is the TAG.
Just to be clear though, I'm not suggesting to actually alter the query. I'm suggesting instead that before the query is executed, that we invoke the code to save auto-saved states into revisions, and then let the original query proceed as normal.
Not sure yet about how much that will cover vs. needing additional hooks to also trigger the revision creations.
the two leading options are CodeMirror and Monaco
https://npmtrends.com/@codemirror/view-vs-monaco-editor shows how basically tied they are in terms of npm downloads.
we'll need to choose which one to try out first (we can always switch to the other one later if we learn that our initial choice wasn't the best one)
Monaco has some appeal just because it's backed by Microsoft and it's the "just the editor" portion of VSCode and VSCode is great. However, there's a few downsides:
- Its version is still 0.x and has been for years. I'm not aware of any announcement as to whether a 1.0 release is even remotely close.
- https://microsoft.github.io/monaco-editor/ says it's not supported on mobile.
- https://blog.replit.com/code-editors is a blog post that's a few years old now, so not sure how much of it is still accurate, but it points out some other challenges with Monaco.
I think this makes CodeMirror the better option at least for now, but I'm curious what others think.
Thanks for opening 📌 Update "DX: CI" section of CODEOWNERS Active . That'll be good to resolve. In the meantime, I merged this by temporarily turning off codeowner protection on 0.x. I turned that back on though right after merging this.
effulgentsia → made their first commit to this issue’s fork.
Monaco is a component of VSCode, but VSCode does a lot more than Monaco. https://code.visualstudio.com/docs/editor/vscode-web is an interesting option, but does it work in a cross-origin iframe?
I approved the remaining thing that needed codeowner approval. I tried to merge this, but it said a pipeline is failing. I suspect that's due to 🌱 [Proposal] A different approach to the iFrame preview interactions Active not being merged into this fork yet (or this fork not yet being rebased on top of it), but I'm not sure.
The work that's been done here, and the video in the issue summary that shows it off, is really great!
We may come back to this at some point, but for now, we're pivoting to ✨ [exploratory] PoC of Preact+Tailwind components editable via CodeMirror or Monaco Active .
effulgentsia → created an issue.
I'm not sure if it covers everything that's imagined in #3, but for what's described in the issue summary there's some prior art in https://github.com/drupal-jsx/starter-react/blob/main/src/index.js#L5. Just using Vite's glob function.
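For reference, a minimal sketch of the Vite glob-import pattern used there (the paths are hypothetical):

```js
// import.meta.glob returns a map of file path => lazy import() function.
const componentModules = import.meta.glob("./components/**/*.jsx");

for (const [path, loadModule] of Object.entries(componentModules)) {
  loadModule().then((mod) => {
    console.log(`Discovered component at ${path}`, mod.default);
  });
}
```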
I opened ✨ Lazily create draft revisions from valid auto-saved states on demand rather than on save or only on schedule Active as a child issue, with an idea for how we can make valid auto-saved states essentially equivalent (from a UX standpoint) to real draft revisions, but without the performance cost of doing a full entity save for every auto-save.
effulgentsia → created an issue.
Re #4, if we want to pursue an interim solution where we either retain a partial history of auto-saved non-revisions, or a partial history of auto-saved actual-revisions, I'd recommend we do it as logarithmically-spaced snapshots. That way, even if the content author makes 1000 autosaved edits prior to sending their work into a "for review" state, we'd only need to save/retain 10 or so states/revisions, which actually seems quite manageable even without any of the libraries proposed in this issue. https://madebyevan.com/algos/log-spaced-snapshots/ has a clever algorithm for how to do this in a way that only requires at most one deletion per save. That post shows an example with a "density" of 2, but we might find that even a density of 1 is sufficient.
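To illustrate the general idea (this is not the algorithm from that post, just a cruder sketch of logarithmic spacing: keep only the newest snapshot in each power-of-two age bucket):

```js
// snapshots: [{ version, data }]; currentVersion: the version counter of the newest save.
function pruneSnapshots(snapshots, currentVersion) {
  const seenBuckets = new Set();
  return snapshots
    .slice()
    .sort((a, b) => b.version - a.version) // newest first, so the newest wins each bucket
    .filter(({ version }) => {
      const age = Math.max(1, currentVersion - version);
      const bucket = Math.floor(Math.log2(age));
      if (seenBuckets.has(bucket)) {
        return false; // already keeping a newer snapshot of roughly this age
      }
      seenBuckets.add(bucket);
      return true;
    });
}
```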
XB data would only reference the field union data
If the XB field is a multi-valued field of component instances, and there's a separate multi-valued dynamic_field_union field for the component instances' static values, then how would each component instance reference its corresponding dynamic_field_union item? Doing it with a numeric `delta` that gets re-ordered would be fragile. Each dynamic_field_union item would need a stable ID, which could be the same as the `instance_id` of the component instance, or it could be its own separate ID.
Currently, a regular `field_union` doesn't have the concept of a stable item ID. Would we want to add that concept to `dynamic_field_union` without also adding it to `field_union`? Would we then be adding this to both `field_union` and `dynamic_field_union` solely for the XB use case, or would a stable item ID serve other use cases as well? If it's only for the XB use case, then what makes this better than having the XB field either extend or aggregate the `dynamic_field_union` field?
I'm sure there's basic FOSS code coverage tools available for both Playwright and Cypress. What's unique about Cypress's UI Coverage feature is that it connects the coverage information to the UI that you're building, so you can just look at the UI that you're building and see visually the UI elements and states that aren't getting hit by tests.
I agree that the fact that that's a premium feature makes it less compelling for weighing Cypress vs Playwright. However, I don't think it's out of the realm of possibility that the DA could negotiate a reasonable price for making that available for integration with tests run on drupal.org. Or, that organizations building JS-heavy websites with Drupal would find it worth the price of a Cypress Cloud subscription in order to have that feature for testing their sites.
I suspect that none of this outweighs the simple fact that Microsoft is behind Playwright, and once Microsoft enters a market and gets to 51% market share, then it's only a matter of time before it's game over for the small businesses in that market. In that sense, Playwright is probably the safer choice for Drupal, even though I'm personally fond of some of the unique things that Cypress offers. Also, it does seem like most of the people in this issue find some of the technical advantages of Playwright over Cypress to be compelling.
Note that Cypress Cloud (paid service) recently added a UI coverage feature. We are not currently using Cypress Cloud on XB, so switching to Playwright wouldn't lose us anything relative to our status quo. However, if that feature actually does what it says it does, that seems like a potentially awesome tool. Given that it requires a commercial subscription, not sure how much that should factor into our decision for Drupal core.
I understand that Drupal CMS is using Cypress because XB is (fair enough), but that leaves the XB decision which there is no record of.
For XB, we did not evaluate Cypress vs Playwright. We chose Cypress only because several members of the team had experience with Cypress and liked it, and no one had experience with Playwright. Our experience with Cypress is positive, but that doesn't mean we wouldn't also have a positive experience with Playwright.
Per #14, Cypress has really nice time-travel debugging. Turns out, a couple weeks before I wrote #14, Cypress wrote a blog post about that in https://www.cypress.io/blog/debugging-with-cypress-vs-playwright. For people who've used those debugging tools in Cypress and who've also used Playwright, how would you compare your debugging experience between the two? Obviously that blog post is biased, so I'd love to read some unbiased opinions about this. I haven't used Playwright myself yet, so it's not something I can speak to, other than seeing how nice Cypress is for that and not wanting to lose that without Playwright offering a comparable level of debugging productivity.
effulgentsia → created an issue.
It's an XB issue. I'll open an issue for it when I have a spare moment. Thanks for discovering it!
Did we give any thought to CRDTs here?
The issue in #58 proposes to use a CRDT library for implementing that issue, both to eventually support collaborative editing, and because those are the libraries/algorithms that also implement the most mature, robust, and efficient storage of large histories.
It should be possible to revert back to previous revisions and auto-save states.
I strongly request that we descope "revert back to previous auto-save states" from the initial implementation of Save functionality and from the 0.2.0 milestone. I opened ✨ Implement time travel (undo/redo/revert) of auto-saved states across devices and users Active to discuss and work on just that feature in its own issue.
effulgentsia → created an issue. See original summary → .
How about this?
While the end user might not need to know the difference between an auto-save state and a forward revision, I think it would be very good to use the terms exclusively in this discussion and not mix and match 'draft' to mean both things.
I agree with this sentiment, but I think picking the right words is tricky. I think we need to have terminology for three states. Taking an implementation-perspective, we can call these states:
- an auto-saved state
- a non-live (is "non-live" the more accurate term than "forward"?) revision
- a live (is "live" the more accurate term than "published"?) revision
However, as the UX for this hasn't been fully designed yet, and the product requirements aren't even fully fleshed out yet (though this issue is making good progress towards that), I wonder if it would help the requirements/design process to also have product/UX-oriented terminology for these states rather than jumping straight to the implementation-oriented terminology.
Perhaps something like:
- In-progress stuff (or some other term than "stuff", such as "work", "content", "edits", "changes", etc.)
- Reviewable stuff
- Published stuff
?
That would make the whole client-side JS bundle/app GPL, which is a no-go for most businesses.
Right. And just to be explicit, the key reason for this is that since this is code that runs in the browser, there's no way to not "distribute" it. The owner of a website can have proprietary PHP code alongside GPL PHP code, because the website owner doesn't have to "distribute" the PHP code in order for the website to function, whereas JS code that needs to be sent to each user's browser becomes a "distribution" of that code.
In order to avoid blocking future integrations like this and to support wide-adoption, I'd suggest to pick a more permissive license for JS-components/libraries (e.g. MIT instead of GPL)
Agreed. And examples where this will become relevant for Experience Builder, but that haven't come up yet because we haven't built those parts yet are:
- JS components intended for the front-end of a website, such as SDDS or some other design system (or parts of one) ported to JSX, Vue, Svelte, etc.
- A JS library that Decoupled front-ends can use for fetching the data that's in Experience Builder and rendering a tree of components based on that data (i.e., placing all of the components within the right parents, in the right slots, and in the right order, so as to match what's been laid out by the content editor in Experience Builder).
Both of the above would need to be MIT licensed per #9, so would be separate libraries on GitHub until they're allowed on drupal.org. Or perhaps per #8 they'd already be allowed on drupal.org, so long as they're in separate projects from the primary (GPL) Experience Builder project?
wim leers → credited effulgentsia → .
I don't think that Experience Builder is currently in violation of current d.o. policies around this. We are using MIT licensed code (e.g., React), but that doesn't prevent the Experience Builder module as a whole being licensed under GPL.
Also, I don't think the SDDS use case in #5 (being able to create an MIT-licensed Vue.js port of SDDS) strictly requires this issue to be resolved. From a technical standpoint, if the SDDS authors want to license SDDS under MIT, then so long as the SDDS code isn't derivative of any GPL code, the SDDS authors can release SDDS on GitHub with an MIT license, and then people can create a Vue.js port of that, also released on GitHub with an MIT license.
However, I still think that drupal.org should relax its policies to allow use cases like Drupal API client and SDDS to be hosted on drupal.org with an MIT license, so as not to force those projects off of drupal.org and onto GitHub.
`npm run build` currently encounters 5 errors:
Found 5 errors in 4 files.
Errors Files
1 src/components/sidebar/AssetItem.tsx:12
1 src/components/sidebar/AssetsMenu.tsx:28
2 src/components/sidebar/PrimaryMenu.tsx:18
1 src/components/stackblitz/StackBlitzController.tsx:54
It would be great to fix those so that this can be tested in Tugboat. I'm guessing these errors weren't found during development because perhaps they're not triggered during `npm run drupaldev`.
Thank you for spinning this issue out from 📌 Add Tugboat integration RTBC .
Ideally, this sort of thing would be integrated into d.o itself
Can you open an issue in https://www.drupal.org/project/infrastructure → explaining what would be needed from d.o. to best support this (or if that's still unknown, then however much you do currently know about that)? There's also https://www.drupal.org/project/drupalci → and its subprojects, but I think that's not the relevant issue queue for things related to GitLab CI, or for things that sit outside of both DrupalCI and GitLab CI, so I think https://www.drupal.org/project/infrastructure → would be the right place for the d.o. work needed to better support Tugboat.
Thanks for #27! Retitling to surface the Astro part in case others are looking for that as well.
We could add those additional columns in one of two ways
Given the choice of subclassing or aggregating, I think aggregating would fit the desired mental model better. FileItem subclasses EntityReferenceItem but that's because conceptually a file item is an entity reference item, that also has a description. However, the mental model of a component instance should not be that it's a field union that also has some other stuff; the mental model should be that a component instance is its own thing, where one of the things it has is static values and those static values can be modeled as a dynamic field union.
Would the following reconcile #28 with this issue's current summary?
The field union table
What do you mean by the field union table? Currently, field_union defines a field type (via its deriver) for every field_union config entity. I imagine a "field union json" concept would be implemented as a new field type, `mixed_field_union`, where the properties/columns of this field type are `type` and `values`, where `type` is a reference to the field_union config entity for that item, and `values` is the JSON.
So that's basically the same as this issue's proposed last 2 columns. However, for XB, we also need the first 5 columns. We could add those additional columns in one of two ways:
- Define the XB field type as a subclass of `mixed_field_union` and add the extra columns. Just like how in core FileItem subclasses EntityReferenceItem.
- Define the XB field type as a field union of the first 5 columns plus a `mixed_field_union`. This might actually be nice in terms of making the `component` column a full-fledged EntityReferenceItem (sub)field in its own right.
Did I misunderstand option 3 from that issue? In it @catch wrote:
The field table would store the usual field columns, plus the 'field union type' to identify which field union is stored, plus a single 'values' JSON column holding the actual entered field values.
Isn't that what this issue's summary is also suggesting, except renaming `values` to `static_values`?
For me, the key difference between this issue and how I understood option 3 from that issue is that in this issue I'm suggesting that in addition to `static_values`, we also have the other columns, in particular `parent` and `slot`, so that each item has all of the information about the component instance: both its "Field union JSON" value and its location in the tree.
wim leers → credited effulgentsia → .
It would be nice to remove the step where the user has to type `npm run build` into the terminal. In fact, if we remove this step, then perhaps we could even set terminalHeight to 0 or a small number? The user would still need to be able to run `npm update` if they want to update their dependencies, or `npm install ...` if they want to add additional libraries, but they wouldn't need to do this every time they use the code editor, so ideally we could start the StackBlitz iframe with the terminal as out-of-the-way as possible, as long as there's a way for the user to show/expand it when they do want to use it.

OK, so how can we remove the need to run `npm run build`? Well, what if when the user clicks the "Get Files" button, instead of getting the `dist/_astro/*` files from the StackBlitz iframe, we instead spin up a separate WebContainer, transfer the project/source files from the StackBlitz instance to the WebContainer instance, then tell the WebContainer to run the build step, and then get the `dist/_astro/*` files from there?
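A rough sketch of that flow with the WebContainer API (how exactly we'd get the project files out of the StackBlitz editor, and the exact output paths, are open questions; the file names below are assumptions):

```js
import { WebContainer } from "@webcontainer/api";

// projectFiles: a WebContainer file-system tree built from the StackBlitz project's sources.
async function buildInWebContainer(projectFiles) {
  const container = await WebContainer.boot();
  await container.mount(projectFiles);

  const install = await container.spawn("npm", ["install"]);
  if ((await install.exit) !== 0) throw new Error("npm install failed");

  const build = await container.spawn("npm", ["run", "build"]);
  if ((await build.exit) !== 0) throw new Error("npm run build failed");

  // Hypothetical: read one built asset back out; a real implementation would walk dist/_astro/.
  return container.fs.readFile("dist/_astro/Counter.module.js", "utf-8");
}
```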
I agree with #14, but that wouldn't fully address @larowlan's original goals in this issue. He may still want to pursue the "custom serialization format" part, which I disagree with, but who knows, maybe he'll come up with an MR that convinces me. Therefore, I think we should open a child issue to do just the part that's in #13 and what's minimally required for that, and leave this issue as potentially still having @larowlan's more ambitious goals.
we always negotiate the "Stark" Drupal Theme, which actually means "No Drupal Theme"
I don't get that. If we always negotiate the Stark theme, why can't we instead create an `xb_stark` theme that's just like Stark (whether a fork or a subtheme) but that sets `semicoupled` as its theme engine, and always negotiate that theme? That would allow us to remove `experience_builder_system_info_alter()`, which is the part that's "taking over the twig theme engine for the whole site" that this issue's title is referring to.
I think instead we can use an anonymous function assigned to a local variable: `$convert = function(...) {...}`.
By the way, if I understand the desired UX correctly, we want to render the forms in the right-hand sidebar using a set theme, neither the site's front-end theme nor the admin theme. If this is correct, then even if we did no refactoring to the semicoupled theme engine, we could just use it for what it was initially designed for: as the theme engine of the theme that we use for rendering the forms in the right-hand sidebar, so we wouldn't need to hack-alter the engine of themes we don't control.
That said, I'm not opposed to refactorings that decouple it from being a theme engine, so that it could be more cleanly integrated with any theme.
longwave → credited effulgentsia → .
If we move from JSON structure to a multi-valued field (where each delta represents a component), how do we handle scenarios where there are overrides on top of the desktop breakpoint (e.g. for the mobile breakpoint)?
Is the thinking here that any prop could be overridden by breakpoint? For example, text content such as the quote for a testimonial component could be changed by breakpoint? Or only certain props, identified by the SDC creator as "style" props as opposed to "content" props?
I'd like to understand the email newsletter use case more.
I brought up the case of wanting to show the first four (or however many) components of an article in a newsletter with a link to read more
Is this a common thing to do? I agree we should support it if it is, but I was under the (perhaps incorrect) impression that HTML emails generally required very different considerations from web pages, such that people would generally want to create an entirely different node for the newsletter rather than trying to derive it dynamically from the "for the website" article node.
XB could add the field_union config entities, and reference them from the component config entities, when the field_union module is enabled, and not do that when that module is not enabled
I realized after writing this that the reason this is true is because the `component` config entities have essentially all of the information that would need to be in the `field_union` config entity, which is what would let us generate the field_union config entity at any time that we needed to.

Given that, I wonder if making the field_union module a hard dependency wouldn't actually be that bad. It would let us take a bunch of stuff out of the `component` config entity and instead move that information to the `field_union` config entity.
Added a Caveats section to the issue summary.
Alternatively, we could omit [the `field_union` column] and instead add the field_union reference to the component config entity.
A big advantage of this would be that it would allow the field_union module to be an optional dependency. XB could add the field_union config entities, and reference them from the component config entities, when the field_union module is enabled, and not do that when that module is not enabled, without that affecting the schema of the XB field type.
denormalizing the field_union reference into [the XB field type] might help with querying since Drupal doesn't have great support for JOINing on config entities (though that support could be improved if Drupal core refactored the config table to store as JSON instead of serialized PHP)
Given the advantage above of letting the field_union module be an optional dependency if we keep the field type normalized and only access an item's `field_union` config entity via its `component` config entity, I recommend doing that, and solving the querying use case by changing core's `config` table from serialized PHP to JSON.
#5 is correct. Since JSON queries haven't landed in Drupal 10, and Drupal 11 is already released, they won't land in 10.4, 10.5, etc. either, so we can both support 10.3 now and relax the constraint to `^10` as #6's MR does. Therefore, merged this to 8.x-1.x.
I'll run some tests next week before tagging a release with this fix, but I don't expect that to uncover any major problems.
effulgentsia → made their first commit to this issue’s fork.
This is great. Thank you! Merged to 1.0.x.
effulgentsia → made their first commit to this issue’s fork.
Crediting @joachim who proposed de-jsonifying the tree back in #3440578-51: JSON-based data storage proposal for component-based page building → .
I incorporated this into 📌 Refactor the XB field type to be multi-valued, to de-jsonify the tree, and to reference the field_union type of the prop values Active .
@catch, I, and others met a few days ago to discuss option #3 in the proposed resolution. I included that in an XB refactoring proposal in 📌 Refactor the XB field type to be multi-valued, to de-jsonify the tree, and to reference the field_union type of the prop values Active .
effulgentsia → created an issue.
@hooroomoo, @xinran, and I met with @lauriii, and here's some UX improvement suggestions based on his feedback:
1. Instead of the Open StackBlitz button, create the StackBlitz iframe when the user clicks on the "Assets" button in XB's left sidebar. However, create this as a hidden iframe. This way it can boot up in the background without it appearing slow to the user.
2. On XB's "Assets" tab, show `PreactCounter` and `SvelteCounter` (or whatever their `name` is in their SDC YAML file).
3. When one of those is clicked, make the StackBlitz iframe visible and tell it to open the corresponding component's source code file in the editor.
4. Within the StackBlitz project, in addition to the index.astro file that's there, add a .astro file for each component containing a usage of that component in order to preview it. For example, `pages/preview/PreactCounter.astro` and `pages/preview/SvelteCounter.astro`.
5. For #3 above, in addition to navigating StackBlitz's code editor to the component's source file, also navigate its preview window to the component's preview page.
Added an "In Summary" bulleted list to the Overview.
For this POC, we will create an SDC wrapper so our Astro components are registered as Drupal SDCs.
I added a sentence after this explaining that we'll remove these SDC wrappers after 🌱 [META] Support component types other than SDC Needs work is done.
The title I'm thinking of is maybe something like "Redux-integrated XB-cohesive Field Widgets". Maybe there's a better term to use than "XB-cohesive" but the concept is that we want the field widgets rendered as React components not only for the Redux integration, but also to be able to style them and/or add JS behaviors to them in a way that makes the props edit form look and feel cohesive with the rest of the XB app. While it might be possible to achieve that with Drupal's traditional JS/CSS/library system, I think it's easier to achieve it via the same React toolchain (JSX, CSS modules) as the rest of the app. Maybe this is moot though if the Redux integration alone sufficiently justifies and clarifies the need to render the widgets as first-class React components.
wim leers → credited effulgentsia → .
Closing the MR and re-opening it did it. Thanks!
What's needed for the "view live preview" Tugboat link to show up under an issue's MR? Does it need the issue status to be a certain value? Or the MR to be a non-draft? Or tests to pass? I'd like to have it in 📌 [exploratory] StackBlitz PoC Active but it's not showing up there.
All I did in #9 was rebase the MR against 0.x. I hoped that would add the Tugboat link under the issue fork, but it hasn't yet.
effulgentsia → made their first commit to this issue’s fork.
I haven't caught up on the comments in this issue, so this is just a drive-by idea, but I wanted to post it here in case it leads to any good discussion or helps spark other ideas.
The key problems with an auto-save saving an entity "for real" (whether as a draft revision or in a non-live workspace) as opposed to "just" in the temp store are:
1. It might be unexpected. Unlike with Google Docs, Drupal content editors might not expect their stream-of-consciousness writing to automatically become real draft revisions or real changes in the workspace that they're in.
2. The in-progress work being auto-saved might not be in a valid entity state (e.g., required fields might not be filled in yet).
3. A real entity save (even a draft revision or a save to a non-live workspace) can be slow. Especially if the site has contrib or custom code with hook implementations that call out to 3rd party web services.
So here's a possible idea for mitigating the above by combining Workspaces and Temp store:
1. Every auto-save submission from the client we save to the temp store.
2. We introduce an "Allow auto-saves" setting to Workspaces. This mitigates #1 above.
3. If, while in XB, the editor is also in a workspace that allows auto-saves, then when we get an auto-save submission from the client, in addition to saving to the temp store, we also save the entity for real, if:
   - It passes validation (mitigates #2 above).
   - We're not already in the process of saving it in a different request (i.e., we add a lock around a real entity save) (mitigates #3 above).
This would mean that even without the Workspaces module enabled, we still get auto-saving to the temp store. But with Workspaces enabled, and when in a workspace configured to allow auto-saves, we also get real entity saves happening for valid states and protected from stampedes without having to slow down the rate that we're able to save to the temp store.
In addition to downgrading this from Critical (to either Normal or Major), I also think we should consider postponing it until after we've implemented AutoSave in 🌱 Research: Possible backend implementations of auto-save, drafts, and publishing Active . It's not that we can't work on this until then, but I think it would be more efficient to wait until that's done and there's plenty of other front-end things that need doing in the meantime.
Oh cool. I didn't know about the chart linked to in #10. But it's a shame that chart separates "JS", "TSX", and "TS" as 3 separate languages. In my opinion those should all count as JavaScript.
I think so. Per step 4 in https://www.drupal.org/community-initiatives/starshot-demo-design-system... → .
Thanks @cosmicdreams. Yes, early on in XB's development Same Page Preview gave us a head start (so thank you for that module! It's also a great module outside of XB), but I think at this point XB isn't using it anymore.
One way to find out is to open an MR that removes it from the codebase and see if any tests fail. If none do, I think we're good to remove it.
Adding @hchonov to this issue per #3462360-20: Partial preview updates: update preview of modified component only, not entire component tree → .
@tedbow helpfully opened 🌱 Research: Possible backend implementations of auto-save, drafts, and publishing Active for discussing/brainstorming auto-save and manually saving drafts or published revisions.
Do folks here think 📌 Should we add SDDS to XB's Tugboat? Needs work is a good idea or bad idea? Please chime in over there with what you see as pros or cons of that.
I opened 📌 Should we add SDDS to XB's Tugboat? Needs work as a followup.
effulgentsia → created an issue.
Is it 1) possible, and 2) desirable to add a link to the XB project page → to let people view the current state of XB (branch tip of 0.x) in Tugboat? For example, perhaps a new item #4 under "Track progress"? Are there docs somewhere for how to link to a Tugboat deployment like that?
This looks great. Thank you, @q0rban, for driving this, and everyone else for your help!
+1 to #17. I think the "submit form" part we might want to do differently, but otherwise +1 to reusing a good chunk of Autosave Form.