- Issue created by @loze
- 🇺🇸 United States loze Los Angeles
I can take a stab at doing this, but I'm unsure how to hook into Gutenberg's duplicate functionality (i.e., act when a block is duplicated). If someone can point me in the right direction, I can work on this.
- 🇺🇸 United States loze Los Angeles
We are also going to run into an issue when a block is copied/pasted or added manually in the code editor. I'm not sure of the best approach here.
Maybe we can show a warning if a content block has the same ID as another on the same page, with a button to create a duplicate, which would clone the block entity and rebuild the preview.
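A minimal sketch of that same-ID check, assuming blocks are plain objects shaped like what Gutenberg's `getBlocks()` returns (an `attributes` object and a nested `innerBlocks` array); the function name and attribute shape are illustrative, not this module's actual code:

```javascript
// Collect contentBlockIds that appear on more than one block in the editor.
// `blocks` mirrors the array returned by Gutenberg's getBlocks(): each entry
// has `attributes` and an `innerBlocks` array. Illustrative only.
function findDuplicateContentBlockIds(blocks) {
  const seen = new Map(); // contentBlockId -> occurrence count
  const walk = (list) => {
    for (const block of list) {
      const id = block.attributes && block.attributes.contentBlockId;
      if (id) {
        seen.set(id, (seen.get(id) || 0) + 1);
      }
      if (block.innerBlocks) {
        walk(block.innerBlocks);
      }
    }
  };
  walk(blocks);
  return [...seen.keys()].filter((id) => seen.get(id) > 1);
}
```

Any ID this returns could trigger the warning notice with the "create a duplicate" button.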
- 🇵🇹 Portugal marcofernandes
Hmm... I don't think it's enough to just clear the `contentBlockId` attribute. Here are my thoughts on how it should work:
When duplicating or copy/pasting a content block, a new content block must be created on the Drupal side (cloned from the source) and its `contentBlockId` passed to the Gutenberg block. Unfortunately there isn't a filter/hook to handle the duplicate action, nor copy/paste: https://stackoverflow.com/questions/67667923/how-to-detect-when-a-block-....
A possible implementation would be creating a filter for `editor.BlockEdit` (example at `/filters/mapping-fields.jsx`) to somehow handle the content block duplication.
- 🇺🇸 United States loze Los Angeles
Also, what if a node is cloned completely? Editing a block on one would affect the other.
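The `editor.BlockEdit` filter suggested above would need React and the `@wordpress` packages, but the per-block decision it would make can be sketched as a pure function. This assumes a flat block list for simplicity; the names and shapes are hypothetical:

```javascript
// Decision helper a hypothetical editor.BlockEdit filter could call when a
// block mounts: given every block in the editor and the current block's
// clientId, decide whether this block's contentBlockId also belongs to some
// other block (i.e. it was just duplicated or pasted) and therefore needs a
// fresh clone on the Drupal side. Block shape mirrors getBlocks().
function needsClone(allBlocks, clientId) {
  const current = allBlocks.find((b) => b.clientId === clientId);
  if (!current || !current.attributes.contentBlockId) {
    return false;
  }
  return allBlocks.some(
    (b) =>
      b.clientId !== clientId &&
      b.attributes.contentBlockId === current.attributes.contentBlockId
  );
}
```

When this returns true, the filter's wrapper component would request a clone from Drupal and overwrite the block's `contentBlockId` attribute.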
- 🇺🇸 United States loze Los Angeles
Perhaps we need to track content block usage in a custom table? That way, if a contentBlockId used in one node appears in another node, a duplicate is created, and the two can be edited independently.
This would also allow us to clean up orphaned blocks on cron. Consider someone adding a block to a node, not saving the node, deleting the block, then saving the node. That block would still exist in the database.
Some effort is already made to delete unused blocks on node save, where we compare the old body field to the new body field and delete any blocks that were removed, but in this case the deleted block was never saved in the body field at all.
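The old-body vs. new-body comparison described above can be sketched like this, assuming block attributes are serialized as JSON inside Gutenberg's `<!-- wp:... -->` comments; the regex and the `contentBlockId` attribute name are assumptions about this module's markup:

```javascript
// Pull every contentBlockId out of a serialized Gutenberg body. Attributes
// live as JSON inside <!-- wp:... --> comment delimiters, so a targeted
// regex is enough for this sketch (a real implementation might parse blocks).
function extractContentBlockIds(body) {
  const ids = new Set();
  const re = /"contentBlockId"\s*:\s*(\d+)/g;
  let match;
  while ((match = re.exec(body)) !== null) {
    ids.add(Number(match[1]));
  }
  return ids;
}

// IDs present in the old body but missing from the new one: candidates for
// deletion on node save.
function removedContentBlockIds(oldBody, newBody) {
  const kept = extractContentBlockIds(newBody);
  return [...extractContentBlockIds(oldBody)].filter((id) => !kept.has(id));
}
```

As the comment above notes, this only catches blocks that made it into a saved body; a block created and deleted before the first save never shows up in either side of the diff.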
- 🇺🇸 United States loze Los Angeles
What about node revisions? Do we want to create revisions of the content blocks that correspond to the node revisions?
Or at least not delete blocks that are present in a revision? Ehh, there's a lot to consider here.
Anyway, I have something somewhat working that handles the original issue of cloning a block when it is copy/pasted/duplicated within a single node. I'll push that up in a bit.
- Status changed to Needs review
6 months ago 9:23am 1 June 2024
- 🇺🇸 United States loze Los Angeles
This is my first stab at the initial issue.
See my comment in content-blocks.jsx; I'm not sure how best to handle this. Any thoughts?

```
// @todo This doesn't feel safe. Someone can just hit the url
// editor/content_block/clone/[ID] and clone a block. Maybe a CSRF token is
// the way to protect against it, but I don't know how to get it in
// JavaScript. Also, as it is now, this will clone any block, even if it's not
// a type that is used in Gutenberg. Maybe we should also pass the node type,
// and check that this block type is enabled in Gutenberg for this node type?
```
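On the CSRF question in that @todo: Drupal core exposes its session CSRF token at `GET /session/token` (plain text), and routes protected with `_csrf_token` expect it back in an `X-CSRF-Token` header. A hedged sketch of the client side, using this issue's clone path (wiring `_csrf_token` onto the Drupal route itself is assumed, not shown):

```javascript
// Fetch Drupal's CSRF token, then call the clone endpoint with it in the
// X-CSRF-Token header. The endpoint path is the one this MR adds; the shape
// of its JSON response is assumed here for illustration.
async function cloneContentBlock(blockId) {
  const token = await fetch('/session/token').then((r) => r.text());
  const response = await fetch(`/editor/content_block/clone/${blockId}`, {
    method: 'POST',
    headers: { 'X-CSRF-Token': token },
  });
  return response.json(); // assumed shape, e.g. { id: <new block id> }
}
```

This only addresses the token half of the @todo; restricting the route to block types enabled for the node's Gutenberg config would still be server-side validation.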
- 🇺🇸 United States loze Los Angeles
I think a table that tracks usage makes sense, with fields like: entity_type, entity_id, block_type, block_id, created_timestamp.
With this we could do the following:
- Change the editor/content_block/clone/[ID] path (that this MR currently adds) to editor/content_block/check_usage/[NODE_ID]/[BLOCK_ID]. This would check against the usage table whether the block is used in any other nodes, duplicate it if it is, and return the new block ID.
- On saving the node, parse the body, write all the used contentBlockIds to the table, and delete any records for contentBlockIds on that node that aren't used anymore.
- When a node is cloned programmatically (is new), check whether any of its contentBlockIds are being used in another node and clone them if they are.
- Then on cron we can delete orphaned blocks older than a certain timestamp
This would address several of the issues I raised but still not deal with revisions, which could be addressed later.
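The save-time sync step above reduces to set arithmetic: compare the usage rows already recorded for a node against the contentBlockIds found in the body being saved. A sketch of just that logic (the actual reads and writes against the custom usage table are omitted):

```javascript
// Given the block IDs already recorded in the usage table for a node and the
// IDs present in the body being saved, compute which usage rows to insert and
// which to delete. Pure set logic; persistence is left to the caller.
function diffUsage(recordedIds, currentIds) {
  const recorded = new Set(recordedIds);
  const current = new Set(currentIds);
  return {
    toInsert: [...current].filter((id) => !recorded.has(id)),
    toDelete: [...recorded].filter((id) => !current.has(id)),
  };
}
```

Rows in `toDelete` whose blocks are referenced by no other node would then become the orphans the cron cleanup sweeps up.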
- 🇺🇸 United States loze Los Angeles
OK, so I think I've got this pretty much there and was able to work out some of my questions.
This latest MR does the following:
- Adds an update hook to create a table to track block usage
- Adds an update hook to batch-process all existing Gutenberg nodes and create usage records for any blocks being used
- Changes my original route for cloning a block to one that checks usage, clones the block if it needs to, and returns the block ID
- Cleans up the JavaScript filter detecting the duplicates
I still need to:
- Clean up orphaned blocks on cron
- Address updating block IDs when programmatically cloning an entire node
Bigger picture:
What to do about node revisions? That feels like a separate undertaking full of gotchas.
It appears to be working well from my initial tests. Be sure to run updb when testing.
- 🇵🇹 Portugal marcofernandes
I read your whole monologue here, @loze. It was great to follow your thinking.
The table for block usage seems the best approach.
IIRC, when saving the layout of a node with revisions, Layout Builder will create new content blocks even if the content block type has revisions disabled. Maybe we should follow the same approach? IMO, since the content blocks are tied to nodes, there's no need to handle block revisions. But we could check how LB handles the insert/update/delete operations regarding content blocks.
- 🇺🇸 United States loze Los Angeles
Thanks for the insight @marcofernandes.
I think this is pretty solid now. Still not addressing revisions, though.
But it appears to be working pretty well. The latest changes require running another updb because I made some edits to the usage table.
- Status changed to Fixed
5 months ago 1:30pm 6 July 2024
- 🇵🇹 Portugal marcofernandes
@loze I reviewed and tested it. Great work!
Automatically closed - issue fixed for 2 weeks with no activity.