Looks great to me.
RTBC!
RTBC
The approach makes sense to me. It feels a little weird that workspace tests need to enable workspaces_ui, but for integration tests the UI is obviously needed.
But one curious observation:
Why don't I see removals matching the new hooks added to workspaces_ui to support the toolbar, etc.?
I had expected a big removal from workspaces.module, but there is nothing.
Maybe that is related to the hook_help problem?
RTBC - the fix is straightforward, and the additional check ensures that we don't regress in case someone else is setting the publication status for the live revision.
I also manually reviewed the snippets of patches applied after the automatic conversion.
They all look good to me as well.
The change record makes sense and even in D7 it was the right way to use module handler to invoke a single hook in another module.
Still RTBC
RTBC + 1 from me - post merge
This implementation is much cleaner.
RTBC - I did not look at every single line of code, but performed isolated spot checks and the conversion seems fully automatic to me.
I am also +1 to do this in one full sweep, so that we can get off .module files ASAP.
This MR also clearly shows that hooks are as easy to implement as before the conversion, so that’s a huge plus.
Now without debug code.
What this does is:
- Copy the __construct() and setPrefix() functions from the pgsql driver
- Fix the schema functions where the prefixInfo schema is not used
Attaching patch for now.
Fantastic work, catch!
RTBC - looks great to me!
The issue summary could explain the approach with the token, at least for the most basic validation.
The patch looks great so far, but if it were me:
- We only allow one hook implementation per module right now (which is fine)
Why don't we just merge the collected listeners with the procedural implementations at the point where
moduleHandler->invokeAll() collects the procedural implementations?
Then the whole module_implements_alter(), etc. would still work (without any changes or before/after!).
That would ensure that for this FIRST iteration we get OOP hooks via event listeners and auto-wired services, while moduleHandler really does not change except for discovering these implementations for the hook.
Before, this was not possible because we did not have getListeners(), so ordering was impossible, but now it comes naturally.
And we still throw an exception if a hook is defined more than once in the same way.
I feel - and I might be wrong - that this would make for the most minimal patch.
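To illustrate what I mean, here is a rough sketch only - not the actual ModuleHandler code - where the $attributeListeners property is purely an assumption for wherever the collected #[Hook] callables end up:

public function invokeAll($hook, array $args = []) {
  $return = [];
  // Procedural implementations from .module files, exactly as before.
  foreach ($this->getImplementations($hook) as $module) {
    $function = $module . '_' . $hook;
    $result = call_user_func_array($function, $args);
    if (isset($result) && is_array($result)) {
      $return = array_merge_recursive($return, $result);
    }
    elseif (isset($result)) {
      $return[] = $result;
    }
  }
  // Hypothetical: callables collected from #[Hook] attributes, keyed by hook.
  foreach ($this->attributeListeners[$hook] ?? [] as $listener) {
    $result = $listener(...$args);
    if (isset($result) && is_array($result)) {
      $return = array_merge_recursive($return, $result);
    }
    elseif (isset($result)) {
      $return[] = $result;
    }
  }
  return $return;
}

That way module_implements() ordering and the alter hooks stay untouched on the procedural side, and the attribute side is simply appended.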
One thing I liked about the Hux-style approach was that you could just add a Hook attribute to any service and it was not limited to things in the hook namespace - you just tag the service as a 'hook_listener' and it works.
BUT this can and should be a follow-up.
I just want to ensure that this use case CAN still be made to work with the current architecture. (even if we then just later decide for or against it)
I apologize if I am missing something, but:
#[Hook(hook: 'user_cancel')]
feels a little bit duplicated (or is that because of alter hooks being different?).
But I would clearly prefer:
#[Hook('user_cancel')]
#[AlterHook('module_implements')]
or
#[AlterHook('module_implements_alter')]
That way we get rid of the magic naming, but don't have Hook(hook: ...) duplicated.
But maybe it is necessary, then please disregard.
Back to RTBC!
The new approach is fully backwards compatible for layout_builder and makes it much easier for forms to mark themselves as safe by default / by definition.
What Kris did here is kinda orthogonal to the approach and does not care where the events come from.
In essence adding support for //[@Hook(entity_insert)] style annotations to the existing patch is very simple:
- We know the service to call
- We know the method to call
- We just map that into the hook_entity_insert event (alongside all other events)
The advantages are:
- Full BC (and the possibility to cleanly deprecate .module-style invocations)
- It does not care whether you use an event listener to listen to the events, an @Hook-style invocation, or a .module file (for now)
We get rid of a lot of run-time work that we cache per hook anyway, and instead build something more like a registry of potential invocations for the .module implementations.
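A rough sketch of that mapping - purely illustrative: the 'hook.entity_insert' event name and the 'my_module.entity_hooks' service are made up, and $container / $entity are assumed to be available; only the Symfony EventDispatcher calls are real API:

use Symfony\Component\EventDispatcher\EventDispatcher;
use Symfony\Component\EventDispatcher\GenericEvent;

$dispatcher = new EventDispatcher();

// From annotation/attribute discovery: we know the service and the method.
$dispatcher->addListener('hook.entity_insert', function (GenericEvent $event) use ($container) {
  $container->get('my_module.entity_hooks')->entityInsert($event->getSubject());
});

// From the .module registry: the legacy procedural implementation.
$dispatcher->addListener('hook.entity_insert', function (GenericEvent $event) {
  my_module_entity_insert($event->getSubject());
});

// Invoking hook_entity_insert() then becomes a single dispatch.
$dispatcher->dispatch(new GenericEvent($entity), 'hook.entity_insert');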
RTBC
My only concern was for existing code that sets workspace_safe = TRUE or FALSE, but the $form_state property always takes precedence, so this is just a way to set the default state of the form.
And it's very nice to use.
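For reference, a minimal sketch of that precedence, using only the 'workspace_safe' $form_state key mentioned above (the module and form ID are placeholders):

/**
 * Implements hook_form_alter().
 */
function mymodule_form_alter(array &$form, \Drupal\Core\Form\FormStateInterface $form_state, $form_id) {
  if ($form_id === 'mymodule_settings_form') {
    // Setting the key on the form state overrides whatever default the form
    // class declares for itself.
    $form_state->set('workspace_safe', TRUE);
  }
}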
RTBC + 1 - I have been using workspaces together with wse in production for a few years now on several enterprise sites.
RTBC - looks great to me and makes sense!
Just to re-iterate that point:
- Automatic enabling is good for the 80% of sites that do not care either way
- Being able to opt out of automatic enabling is important for sites that want to control what gets installed and when
It is important not to just take the stance of "you can disable it again afterwards", because of the privacy concerns once the feed is consumed - which can happen if e.g. cron runs between enabling and disabling the module.
I agree. Let's add some constants that are used in both places.
Looks good, but one typo nit in the docs:
bu -> but
(https://git.drupalcode.org/project/drupal/-/commit/8e5acc674b6394a83249f...)
Hello there!
At Tag1 we have been hard at work creating a drop-in replacement for the Layout Builder UI, which is fully compatible with all existing contrib modules and has already been used in production by our enterprise clients for a while.
It essentially changes the experience in three key ways:
- a) 1 drag instead of 5 clicks (with full best-effort configurable auto-completion of blocks using sample images and lorem ipsum ...)
- b) Create your layout automatically -> Just drag your blocks and then change the auto-generated layout to fit your needs
- c) BETA: Nested layouts including the ability to drag blocks into a sub-layout (think of it as Layout UI for blocks itself)
It's called lb_plus and can be found here including a small GIF to show the awesomeness:
https://www.drupal.org/project/lb_plus →
A more detailed description by Tim Bozeman, who wrote the second version of lb_plus, follows:
Like larowlan said, we've been working on a drop-in replacement for the Layout Builder UI that addresses many of these issues. There has also been interest in adding LB+ to core.
lauriii said:
One idea was to build a community "recipe" that installs targeted contributed modules on top of Layout Builder, to have a more opinionated starting point.
I think LB+ can help fill the gap.
Features include:
- Drag and drop block placement
- See what blocks do at a glance with block icons
- Place blocks quickly with Promoted Blocks
- Sortable sections
- Change a section's layout
- Duplicate blocks
- A cleaner UI that looks more like the published content while editing
- Soon to be released: Nested layouts ✨ (see "Support Inline Layout Blocks", now fixed), which adds whole new block-builder functionality
Click here to see LB+ in action!
RTBC + 1
Back to RTBC!
One thing: we should open a follow-up issue to generally track cache events in the caching layer and not in the database layer.
As the performance data is agnostic to where it comes from, this is straightforward (rough sketch below):
- Collect cache events
- Subscribe to it
- Add to performance data
- Ignore DB cache things instead
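Purely to sketch that direction - none of these event or collector classes exist yet, the names are invented for illustration:

use Symfony\Component\EventDispatcher\EventSubscriberInterface;

final class CacheOperationSubscriber implements EventSubscriberInterface {

  public function __construct(private object $performanceDataCollector) {}

  public static function getSubscribedEvents(): array {
    // Assumed name for the proposed cache event.
    return ['cache.operation' => 'onCacheOperation'];
  }

  public function onCacheOperation(object $event): void {
    // Record hits/misses/sets directly from the caching layer, so the
    // database layer no longer has to infer them from cache_* queries.
    $this->performanceDataCollector->addCacheOperation($event);
  }

}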
RTBC
RTBC! Great fix!
Approved!
I am fine with that change.
I agree with the great review, but I think the additional columns should not be a problem in practice.
RTBC!
Straightforward fix!
Back to RTBC!
RTBC - Code looks good to me, fixes an important oversight from the last MR related to this and overall enhances the performance testing capabilities.
The only caveat is that only pages loaded via the Mink driver can be performance-tested that way right now, but that is by design, exactly the right scope, and therefore fine for this issue.
#9: If this is false, what should happen? Do we assume it got set with the correct value already and silently skip it?
If this is false, then another process has gotten the "lock" and is either calling setWithoutLock() right now, has already called it, or will call it in the near future.
As this is just about avoiding write stampedes to the same item (first step), there is likely even a process that is still before the apcu_fetch() and will write the data again, so we also don't need to worry about the process that got the "lock" crashing.
Someone else will then set it.
Fabianx → created an issue.
I think this is independently useful as it replaces one write in 99% of concurrent cases with two reads and just one non-concurrent write plus one additional cache key.
Here is another optimization of it:
public function set($cid, $data, $expire = CacheBackendInterface::CACHE_PERMANENT, array $tags = []) {
  $key = $this->getApcuKey($cid);
  $lock_key = $key . ':apcu_update_lock';
  $results = apcu_fetch([$lock_key]);
  $old_lock_value = $results[$lock_key] ?? FALSE;
  // Ensure the lock key exists.
  if ($old_lock_value === FALSE) {
    // Initialize the lock with a random value using apcu_add().
    $old_lock_value = mt_rand(0, 2147483647);
    apcu_add($lock_key, $old_lock_value);
  }
  $lock_value = mt_rand(0, 2147483647);
  if (apcu_cas($lock_key, $old_lock_value, $lock_value)) {
    // Lock acquired, you can perform your update here.
    parent::set($cid, $data, $expire, $tags);
  }
}
Here is the ChatGPT analysis of this function:
You’ve highlighted an important aspect of how these APCu functions interact with the underlying data store, and you are correct in your understanding of the different lock mechanisms they employ.
### apcu_cas()
- `apcu_cas()` operates with a lighter locking mechanism.
- It only requires a reader lock on the entry.
- It performs a compare-and-swap (CAS) operation, which is atomic.
- This means that it can check the current value and, if it matches the expected value, update it in a single atomic operation.
- Because it only needs a reader lock, it doesn't block other reads from happening simultaneously.

### apcu_store()
- On the other hand, `apcu_store()` requires a writer lock.
- A writer lock is more restrictive; it blocks both other writers and readers until the operation is complete.
- This can lead to longer wait times and potential bottlenecks, especially under high load or when dealing with large data sets.

### Why Your Approach Is Beneficial
- By initializing the lock with `apcu_add()` (which uses `apcu_store()` internally but is only called once at the start), and then using `apcu_cas()` for subsequent lock acquisitions, you ensure that the heavier writer lock is acquired less frequently.
- Most of the synchronization relies on `apcu_cas()`, which uses the lighter reader lock, resulting in less contention and better performance.
- This is especially beneficial in a high-concurrency environment, where many processes are trying to acquire the lock simultaneously.

By understanding and leveraging the underlying mechanisms of these APCu functions, your approach efficiently reduces lock contention, leading to better performance and scalability. This kind of optimization is crucial in performance-critical applications and high-load scenarios.
As background reading.
Here is the code for set() using just the lock mechanism:
public function set($cid, $data, $expire = CacheBackendInterface::CACHE_PERMANENT, array $tags = []) {
  $key = $this->getApcuKey($cid);
  $lock_key = $key . ':apcu_update_lock';
  $results = apcu_fetch([$lock_key]);
  $old_lock_value = $results[$lock_key] ?? FALSE;
  // Ensure the lock key exists.
  if ($old_lock_value === FALSE) {
    apcu_add($lock_key, 0);
    $old_lock_value = 0;
  }
  $lock_value = mt_rand(0, 2147483647);
  if (apcu_cas($lock_key, $old_lock_value, $lock_value)) {
    parent::set($cid, $data, $expire, $tags);
  }
}
This, however, raises the question why APCu itself could not protect its apcu_store() mechanism in this way.
But it would take quite some time to get that into APCu, so this is still useful here.
Lastly, to explain the lock:
The lock ensures that only one process ever writes to the same cache key at the same time.
This can again be done independently.
It works like this:
- Retrieve the old value of the lock entry
- Create random value
- Do a CAS on the old value (compare-and-swap)
If we win the race to swap, then we will write the new entry.
Note: There can still be several items written after each other, but at least not concurrently.
Again, the check whether the data is still the same can help here.
An optional optimization that the patch in the issue summary also did:
- Because we already retrieved the items from the fast backend, we can continue to make use of them.
No need to fetch the timestamp and cache item again when we just did exactly that.
This is why ChainedFastBackend.php was patched to give ->set() an additional argument.
It would probably be better to make this explicit with a separate set() method for items that have just been retrieved from the cache - if we want to do that at all.
Let me quickly write up the idea behind the code:
When a cache entry is retrieved from the primary cache backend and it has not changed in the fast backend, all we end up doing is writing the invalidation timestamp again and again.
We might also have several processes writing to the same item at once - again: the same data, except for the timestamp. Where before the conflict was resolved naturally by "last writer wins", it is now resolved using an APCu lock.
This lock is the second part of the patch, but should probably be a follow-up as it makes an already non-trivial architecture harder to understand.
Given these properties we can split up each cache item into two parts:
- The data to be written
- The timestamp of the cache item
By splitting it up, we can "store" the new timestamp using a compare-and-swap operation without having to write the whole item.
So what we do on a set() from the chained fast backend (see the sketch after this list):
- Retrieve the items from APCu: [$old_cache_item, $timestamp] = apcu_fetch(...)
- Compare whether $old_cache_item->data === $data [and probably the same for tags now]
- If it is not the same -> write $cache_item normally [todo: locking code for sets]
- If it is the same -> compare-and-swap the new timestamp in, because we know the data is correct right now
- On gets:
  [$cache_item, $timestamp] = apcu_fetch(...)
  Overwrite the timestamp in $cache_item with the $timestamp from APCu - unless it's 0.
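Here is a rough sketch of that split - the key layout, the ->data comparison and the use of time() are my assumptions here, not the actual patch code:

function apcu_set_split(string $key, $data): void {
  $timestamp_key = $key . ':timestamp';
  $items = apcu_fetch([$key, $timestamp_key]);
  $old_item = $items[$key] ?? FALSE;
  $old_timestamp = $items[$timestamp_key] ?? 0;

  if ($old_item !== FALSE && $old_item->data === $data) {
    // Same payload: only swap the integer timestamp in. Losing the race is
    // fine - it means another process refreshed it even more recently.
    apcu_cas($timestamp_key, $old_timestamp, time());
    return;
  }
  // Payload changed: write the full item (subject to the lock shown earlier).
  $item = new stdClass();
  $item->data = $data;
  $item->created = time();
  apcu_store($key, $item);
  apcu_store($timestamp_key, time());
}

function apcu_get_split(string $key) {
  $timestamp_key = $key . ':timestamp';
  $items = apcu_fetch([$key, $timestamp_key]);
  $item = $items[$key] ?? FALSE;
  // Overwrite the item timestamp with the separately stored one - unless it's 0.
  if ($item !== FALSE && !empty($items[$timestamp_key])) {
    $item->created = $items[$timestamp_key];
  }
  return $item;
}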
While we never got #94 in, it also has other problems.
That said, I think we can fix the remaining edge cases by doing this:
diff --git a/includes/module.inc b/includes/module.inc
index 494924f..24e5dd7 100644
--- a/includes/module.inc
+++ b/includes/module.inc
@@ -833,6 +833,11 @@ function module_hook_info() {
* @see module_implements()
*/
function module_implements_write_cache() {
+  // The list of implementations includes vital modules only before full
+  // bootstrap, so do not write the cache if we are not fully bootstrapped yet.
+  if (drupal_static('drupal_bootstrap_finished') !== TRUE) {
+    return;
+  }
and then set the drupal_static() at the end of the full bootstrap function.
That way, the fact that a bootstrap phase is marked as reached when it is entered, not when it is completed, would no longer bite us here.
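For completeness, the counterpart would be something like these two lines at the very end of the full bootstrap function (in D7 that is _drupal_bootstrap_full() in includes/common.inc):

// Mark the bootstrap as actually finished, so module_implements_write_cache()
// only starts writing its cache from this point on.
$finished = &drupal_static('drupal_bootstrap_finished', FALSE);
$finished = TRUE;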
Fabianx → created an issue.
Back to RTBC! Changes look great to me!
Nice API!
Oh - I love that idea of just storing the returnValue in the performanceData.
++ to less boilerplate.
if ($return) {
  $performance_data->setReturnValue($return);
}
$return = $performance_data->getReturnValue();
Both feel really good.
And given it's an edge case that probably is not needed in 90% of cases, it's good to optimize for the common case.
Re-reviewed and the changes look good to me.
Originally I thought that the collectPerformanceData() could just return the $performance_data object, but as it also needs to return the result of the inner closure, passing it as an argument is the best solution indeed.
I agree that a trait is the best compromise between an inflexible class and a hard-to-register middleware.
A middleware can easily be created in the future to test normal page requests using the new trait.
Back to RTBC!
RTBC!
Thanks, Nat! The new API looks splendid and works fantastically well locally.
Tested locally!
Works really well.
For those who want to do that on an M1/M2 from the google-drupal repo, here are the steps:
cd .ddev
rm docker-compose.chromedriver.yml
ddev get ddev/ddev-selenium-standalone-chrome
The rest works normally.
Reviewed.
There were four things I flagged:
- a) One nit / question
- b) A suggestion to add a proper API, so that object properties do not need to be used as API
- c) I now read that loading the JS directly on the page is meant to be avoided, so this is probably answered, but it would still be good to make it explicit (as this design decision is only documented in the Chromium bug report)
- d) It's unclear how spans work in relation to scopes; it might be good to add some comments explaining that
Overall this looks really, really good, and once these questions are answered we can go back to RTBC!
Thanks for the answer!
New version does not need the patch anymore.
This should be fixed now, but for LAT / LON you will need 'additional_fields' => TRUE.
WKT is best.
Looks good to me as well!
This looks great to me now.
Assigning to David for review as this is base system core functionality.
Looking at this again with my core committer hat on, I think we should revert the string changes.
While it is technically more correct, we should avoid changing strings as much as possible in D7.
The rest still looks good to me.
Back to RTBC, thanks for fixing
RTBC - looks good to me.
Should that not then also be done for all the other query parameters like include/exclude?
+++ b/core/lib/Drupal/Core/Asset/JsCollectionOptimizerLazy.php
@@ -125,13 +125,13 @@ public function optimize(array $js_assets) {
- $ajax_page_state = $this->requestStack->getCurrentRequest()->query->get('ajax_page_state');
+ $ajax_page_state = $this->requestStack->getCurrentRequest()->get('ajax_page_state');
Is that an intentional change?
Overall the patch looks great.
One thing I wondered:
Do we want to make this the default and remove / deprecate the old ways, or do we want to keep the old code around and refactor it a little so that, via nice sub-classing, we can provide both ways?
Given how critical this is, it might be prudent to allow via configuration to switch back to the old way.
#185: Just ping me once you have something you want to have reviewed :) - I am also silently and eagerly following.
+++ b/core/modules/system/src/Controller/CssAssetController.php
@@ -0,0 +1,282 @@
+ 'Cache-control' => 'private, no-store',
We will need to put these headers into a helper function / service, or make them configurable via the container or settings.
Also, I really think we should add Edge-Control: bypass-cache if we are going to these lengths, or at least give an akamai-8.x module a chance to influence those headers ... (if we don't want to add proprietary headers)
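Something along these lines, as a sketch only (the 'generated_asset_headers' settings key is made up):

// In the asset controller, instead of hard-coding the headers:
$headers = \Drupal\Core\Site\Settings::get('generated_asset_headers', [
  'Cache-Control' => 'private, no-store',
]);
foreach ($headers as $name => $value) {
  $response->headers->set($name, $value);
}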
--
Overall looks like a great start to me! Needs some more work obviously ...
No, there will be one app per command for now and a shell script to wrap it.
Conversions won't happen until 8.1 - per this being a feature request.
You can play all you want for 8.1, but let's get D8 shipped first, please.
No, that critical will just use a simple application for that one command in core/scripts/some-app.sh, which is three lines of code.
My favorite example is still:
https://github.com/msonnabaum/XHProfCLI/blob/master/src/XHProfCLI/App.php
Clear, easy and concise.
Wow, this would be great to have in ...