Account created on 12 January 2010, almost 15 years ago

Recent comments

🇩🇪Germany Fabianx

With that in place, we can simplify our strategy significantly:

Let's just add a getMultiple() to RenderCache as well (help from ChatGPT).


/**
 * Retrieves multiple cached render arrays at once, following redirects until resolution.
 *
 * Steps:
 * - For each render element, compute a cache ID (CID) based on its keys and cacheability.
 * - Group elements by their cache bin.
 * - For each bin, use getMultiple() to fetch all items in one go.
 * - Follow any redirects discovered, updating CIDs and refetching until resolved.
 * - If the current HTTP request method is not cacheable, skip items tagged
 *   with 'CACHE_MISS_IF_UNCACHEABLE_HTTP_METHOD:'.
 *
 * Only items that resolve to a final cached value are returned. Misses or 
 * disallowed items (due to request method constraints) are omitted entirely.
 *
 * @param array $multiple_elements
 *   An associative array keyed by arbitrary identifiers. Each value should be
 *   a render array that includes '#cache' with 'keys' and optionally 'bin', e.g.:
 *   $multiple_elements = [
 *     'item_a' => [
 *       '#cache' => ['keys' => ['key1', 'key2'], 'bin' => 'render'],
 *       // ... additional render array data ...
 *     ],
 *     'item_b' => [
 *       '#cache' => ['keys' => ['another_key']], // defaults to 'render' bin
 *       // ... additional render array data ...
 *     ],
 *   ];
 *
 * @return array
 *   An associative array keyed by the same keys as $multiple_elements for items that
 *   are found and allowed. Items that never resolve or violate request conditions 
 *   are not returned.
 */
public function getMultiple(array $multiple_elements): array {
  $bin_map = [];
  foreach ($multiple_elements as $item_key => $elements) {
    if (!$this->isElementCacheable($elements)) {
      continue;
    }

    $bin = $elements['#cache']['bin'] ?? 'render';
    $keys = $elements['#cache']['keys'];
    $cacheability = CacheableMetadata::createFromRenderArray($elements);

    $bin_map[$bin][$item_key] = [$keys, $cacheability];
  }

  $results = [];
  $is_method_cacheable = $this->requestStack->getCurrentRequest()->isMethodCacheable();

  foreach ($bin_map as $bin => $items) {
    $cache_bin = $this->cacheFactory->get($bin);
    if (!$cache_bin) {
      continue;
    }

    $fetched_items = $cache_bin->getMultiple($items);

    foreach ($fetched_items as $item_key => $cache) {
      if (
        !$is_method_cacheable &&
        !empty(array_filter($cache->tags, fn (string $tag) => str_starts_with($tag, 'CACHE_MISS_IF_UNCACHEABLE_HTTP_METHOD:')))
      ) {
        continue;
      }
      $results[$item_key] = $cache->data;
    }
  }

  return $results;
}

Then our CachedStrategy looks like:


// \Drupal::service('placeholder_strategy')
protected $placeholderStrategy;

// \Drupal::service('render_cache')
protected $renderCache;

function processPlaceholders($placeholders) {
  $new_placeholders = []; 

  $cached_items = $this->renderCache->getMultiple($placeholders);

  foreach ($cached_items as $placeholder => $elements) {
    // getMultiple() already returns the cached render arrays directly.

    // Recursively process placeholders.
    if (!empty($elements['#attached']['placeholders'])) {
      $elements['#attached']['placeholders'] = $this->placeholderStrategy->processPlaceholders($elements['#attached']['placeholders']);
    }
    
    $new_placeholders[$placeholder] = $elements;
  }

  return $new_placeholders;
}

🇩🇪Germany Fabianx

While this is untested, here is the code that Grok created with some direction for VariationCache::getMultiple():

/**
 * Retrieves multiple cached items, handling redirects for each key set.
 *
 * @param array $items
 *   An array where each element is an array containing:
 *   - keys (array): An array of cache keys to retrieve entries for.
 *   - cacheability (CacheableDependencyInterface): Cache metadata for these keys.
 *
 * @return array
 *   An array where keys are the cache IDs, and values are the final cache items 
 *   or NULL if not found.
 */
public function getMultiple(array $items): array {
  $results = [];
  $cids_to_process = [];

  // Generate all cache ids for $items
  foreach ($items as $item) {
    [$keys, $cacheability] = $item;
    $cid = $this->createCacheIdFast($keys, $cacheability);
    $cids_to_process[$cid] = $keys;
  }

  // While there are still cache ids to process:
  while (!empty($cids_to_process)) {
    $current_cids = array_keys($cids_to_process);
    $fetched_items = $this->cacheBackend->getMultiple($current_cids);

    $new_cids_to_process = [];

    // Generate new list of cache ids to process further
    foreach ($cids_to_process as $cid => $keys) {
      $result = $fetched_items[$cid] ?? NULL;

      if ($result && $result->data instanceof CacheRedirect) {
        // If there's a redirect, generate a new CID to follow the redirect
        $new_cid = $this->createCacheIdFast($keys, $result->data);
        $new_cids_to_process[$new_cid] = $keys;
      } else {
        // If not redirected or no result, store the result directly
        $results[$cid] = $result;
      }
    }

    // Update $cids_to_process with new CIDs for the next iteration
    $cids_to_process = $new_cids_to_process;
  }

  return $results;
}
🇩🇪Germany Fabianx

Whoever works on CachedPlaceholderStrategy: here is how the raw architecture should look (untested code):



// \Drupal::cache('render')
protected $cache;

// \Drupal::service('placeholder_strategy')
protected $placeholderStrategy;

// \Drupal::service('render_cache')
protected $renderCache;

function processPlaceholders($placeholders) {
  $cids = [];
  $new_placeholders = []; 

  foreach ($placeholders as $placeholder => $elements) {
    $cid = $this->renderCache->getCacheID($elements);
    if ($cid) {
      $cids[$cid] = $placeholder;
    }
  }

  $cached_items = $this->cache->getMultiple(array_keys($cids));

  foreach ($cached_items as $cid => $cache) {
    $elements = $cache->data;

    // Recursively process placeholders
    if (!empty($elements['#attached']['placeholders'])) {
      $elements['#attached']['placeholders'] = $this->placeholderStrategy->processPlaceholders($elements['#attached']['placeholders']);
    }
    
    $placeholder = $cids[$cid];

    $new_placeholders[$placeholder] = $elements;
  }

  return $new_placeholders;
}

The reason for the recursion is to support the following scenario (part of ORIGINAL 2014 big pipe demo):

- There is a sleep(1) on the page in cacheable block content, which sets max-age=0
- Then this makes the whole block uncacheable.
- As soon as you create a placeholder within the block for the current_time, then the block becomes cacheable again. You still want to create a placeholder for the block as it contains other content that is independent of the page.

With the CachedPlaceholderStrategy with above implementation:

- If the block is uncached then the whole block is streamed via big_pipe.
- If the block is cached then just the placeholder is streamed via big_pipe.

Without the recursive calling of the strategy, this would fail and the `current_time` would again make the whole page slow.
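The scenario above can be pictured as a render array roughly like this (a sketch with hypothetical cache keys and lazy builder name, not the actual demo code): once the max-age=0 part lives in a placeholder, the block's own render array is cacheable again.

```php
<?php
// Hedged illustration: the block is cacheable again because the
// uncacheable current_time part was moved into a placeholder, which
// is the only thing that has to be streamed on a cache hit.
$block = [
  '#cache' => [
    'keys' => ['demo_block'],
    // Permanent again (-1 is Cache::PERMANENT): the max-age=0 part
    // now lives in the placeholder below.
    'max-age' => -1,
  ],
  '#markup' => 'Other block content that is independent of the page.',
  '#attached' => [
    'placeholders' => [
      '<drupal-render-placeholder-token>' => [
        // Hypothetical lazy builder that renders the current time.
        '#lazy_builder' => ['demo.lazy_builders:currentTime', []],
      ],
    ],
  ],
];
```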

🇩🇪Germany Fabianx

To comment myself:

The reason we have a ChainedPlaceholderStrategy in core by default is not only to support big_pipe, but because I envisioned that a lot of things that are not bound by URL would become placeholders in the first place.

As can be seen from the linked issue in #7 I had this whole idea of making it super easy to put e.g. a shopping cart placeholder on the page and replace it via JS, AJAX, etc.

And therefore the natural way to go about this is to use the chain:

- CachedPlaceholderStrategy
- BigPipePlaceholderStrategy
- SingleFlushPlaceholderStrategy

The reason is that the cached placeholder strategy can do a cache->getMultiple() [for anything that can be cached] while the SingleFlushPlaceholderStrategy is just rendering them one by one.

So even for just core this can save performance.
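The hand-off between chain members can be sketched like this (a simplified sketch in the spirit of core's ChainedPlaceholderStrategy, not its actual implementation): each strategy returns only the placeholders it could process, and the remainder flows down to the next strategy.

```php
<?php
// Hedged sketch: a chain of placeholder strategies. The cached strategy
// can answer many placeholders at once via a getMultiple(), while the
// last strategy in the chain handles whatever is left one by one.

interface PlaceholderStrategyInterface {
  public function processPlaceholders(array $placeholders): array;
}

final class ChainedPlaceholderStrategy implements PlaceholderStrategyInterface {

  /** @param PlaceholderStrategyInterface[] $strategies */
  public function __construct(private array $strategies) {}

  public function processPlaceholders(array $placeholders): array {
    $processed = [];
    foreach ($this->strategies as $strategy) {
      if (!$placeholders) {
        break;
      }
      $handled = $strategy->processPlaceholders($placeholders);
      $processed += $handled;
      // Whatever was handled no longer goes down the chain.
      $placeholders = array_diff_key($placeholders, $handled);
    }
    return $processed;
  }

}
```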

🇩🇪Germany Fabianx

Looks great to me.

RTBC!

🇩🇪Germany Fabianx

RTBC

The approach makes sense to me. It feels a little bit weird that workspace tests need to enable workspaces_ui, but for integration tests the UI is obviously needed.

But one curious observation:

Why don’t I see removals of the new hooks added to workspaces_ui to support the toolbar, etc.?

I had expected a big removal from workspaces.module, but there is nothing.

Maybe that is related to the hook_help problem?

🇩🇪Germany Fabianx

RTBC - fix is straightforward and the additional check ensures that we don’t regress in case someone else is setting the publication status for the live revision.

🇩🇪Germany Fabianx

I also reviewed the snippets of patches applied after the automatic conversion manually.

They all also look good to me.

The change record makes sense and even in D7 it was the right way to use module handler to invoke a single hook in another module.

Still RTBC

🇩🇪Germany Fabianx

RTBC + 1 from me - post merge

This implementation is much cleaner.

🇩🇪Germany Fabianx

RTBC - I did not look at every single line of code, but performed isolated spot checks and the conversion seems fully automatic to me.

I am also +1 to do this in one full sweep, so that we can get off .module files ASAP.

This MR also clearly shows that hooks are as easy to implement as before the conversion, so that’s a huge plus.

🇩🇪Germany Fabianx

Now without debug code.

What this does is:

- Copy function __construct & setPrefix from pgsql driver
- Fix schema functions, where the prefixInfo schema is not used

🇩🇪Germany Fabianx

RTBC - looks great to me!

Issue summary could explain the approach with the token - for the most basic validation.

🇩🇪Germany Fabianx

Patch looks great so far, but if it was me:

- We only allow one hook implementation per module right now (which is fine)

Why don't we just merge the collected listeners with the procedural ones when moduleHandler->invokeAll() collects the procedural implementations?

Then the whole module_implements_alter(), etc. would still work (without any changes or before/after!).

That would ensure for this FIRST iteration, we get OOP hooks via event listeners and auto-wired services, but moduleHandler really does not change except for discovering these implementations for the hook.

Before, this was not possible because we did not have getListeners(), so ordering was impossible, but now it's natural.

And we still throw an Exception() if a hook is defined more than once in the same way.

I feel - and I might be wrong - this would make for the most minimal patch.
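The merging idea can be sketched like this (hypothetical class and property names, a sketch and not the actual ModuleHandler code): invokeAll() combines the procedural implementations with the listeners collected from the container, so everything afterwards (ordering, module_implements_alter(), etc.) keeps operating on one list.

```php
<?php
// Hedged sketch: merge OOP hook listeners into the procedural
// implementation list at collection time, so the rest of the
// invocation pipeline does not need to change.

final class ModuleHandlerSketch {

  public function __construct(
    private array $proceduralImplementations, // hook => list of callables
    private array $hookListeners,             // hook => list of callables
  ) {}

  public function invokeAll(string $hook, array $args = []): array {
    // Both kinds of implementations go through the exact same
    // ordering/altering pipeline after this merge.
    $implementations = array_merge(
      $this->proceduralImplementations[$hook] ?? [],
      $this->hookListeners[$hook] ?? [],
    );
    $return = [];
    foreach ($implementations as $callable) {
      $result = $callable(...$args);
      if (is_array($result)) {
        $return = array_merge($return, $result);
      }
      elseif ($result !== NULL) {
        $return[] = $result;
      }
    }
    return $return;
  }

}
```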

🇩🇪Germany Fabianx

One thing I liked about the Hux-style approach was that you could just add a Hook attribute to any service and it was not limited to things in the hook namespace - you just tag the service as a 'hook_listener' and it works.

BUT this can and should be a follow-up.

I just want to ensure that this use case CAN still be made to work with the current architecture. (even if we then just later decide for or against it)

🇩🇪Germany Fabianx

I apologize if I am missing something, but:

#[Hook(hook: 'user_cancel')]

feels a little bit duplicated (or is that because alter hooks are different?).

But I would prefer clearly:

#[Hook('user_cancel')]
#[AlterHook('module_implements')]

or

#[AlterHook('module_implements_alter')]

That way we get rid of the magic naming, but don't have Hook(hook:) duplicated.

But maybe it is necessary, then please disregard.

🇩🇪Germany Fabianx

Back to RTBC!

The new approach is fully BC compatible for layout_builder and makes it much easier for forms to mark themselves as safe by default / definition.

🇩🇪Germany Fabianx

What Kris did here is kinda orthogonal to the approach and does not care where the events come from.

In essence adding support for //[@Hook(entity_insert)] style annotations to the existing patch is very simple:

- We know the service to call
- We know the method to call
- We just map that into the hook_entity_insert event (alongside all other events)

The advantage is that:

- Full BC (and possibility to clearly deprecate module style invocations)
- Does not care if you use event listener to listen to the events or @Hook style invocation or .module file (for now)

We get rid of a lot of run-time stuff, which we cache per hook anyway, and for the .module files instead create more of a registry of potential invocations.

🇩🇪Germany Fabianx

RTBC

My only concern was for existing code that sets workspace_safe = TRUE or FALSE, but the $form_state property always takes precedence, so this is just a way to set the default state of the form.

And it's very nice to use.

🇩🇪Germany Fabianx

RTBC + 1 - Been using workspaces together with wse in production for a few years on several Enterprise sites.

🇩🇪Germany Fabianx

Just to re-iterate that point:

- Automatic enabling is good for 80% of sites, which do not care either way
- Being able to opt out of automatic enabling is important for sites that want to control the process of what gets installed when

It is important to not just take the stance that "you can disable it afterwards again", because of privacy concerns once the feed is consumed, which can happen when e.g. cron runs between module enabling and disabling.

🇩🇪Germany Fabianx

I agree. Let's add some constants, which are used in both places.

🇩🇪Germany Fabianx

Hello there!

At Tag1 we have been hard at work creating a drop-in replacement for the Layout Builder UI, which is fully compatible with all existing contrib modules and has already been used in production by our Enterprise clients for a while.

It essentially changes the experience in three key points:

- a) 1 drag instead of 5 clicks (with full best-effort configurable auto-completion of blocks using sample images and lorem ipsum ...)
- b) Create your layout automatically -> Just drag your blocks and then change the auto-generated layout to fit your needs
- c) BETA: Nested layouts including the ability to drag blocks into a sub-layout (think of it as Layout UI for blocks itself)

It's called lb_plus and can be found here including a small GIF to show the awesomeness:

https://www.drupal.org/project/lb_plus

Some more detailed description by Tim Bozeman, who wrote the second version of lb_plus follows:

Like larowlan said, we've been working on a drop-in replacement for the Layout Builder UI that addresses many of these issues. There has been interest in adding LB+ to core as well.

lauriii said:

One idea was to build a community "recipe" that installs targeted contributed modules on top of Layout Builder, to have a more opinionated starting point.

I think LB+ can help fill the gap.


Features include:

  • Drag and drop block placement
  • See what blocks do at a glance with block icons
  • Place blocks quickly with Promoted Blocks
  • Sortable sections
  • Change a section's layout
  • Duplicate blocks
  • A cleaner UI that looks more like the published content while editing
  • Soon to be released: Nested layouts ("Support Inline Layout Blocks"), which adds whole new block builder functionality

Click here to see LB+ in action!

🇩🇪Germany Fabianx

Back to RTBC!

One thing is that we should open a follow-up issue to generally track cache events in the caching layer and not in the database layer.

As the performance data is neutral to where it comes from, this is straightforward:

- Collect cache events
- Subscribe to it
- Add to performance data
- Ignore DB cache things instead

🇩🇪Germany Fabianx

Approved!

I am fine with that change.

I agree with the great review, but think that the additional columns should not be a problem in practice.

🇩🇪Germany Fabianx

RTBC - Code looks good to me, fixes an important oversight from the last MR related to this and overall enhances the performance testing capabilities.

The only caveat is that only pages loaded via the Mink driver can be performance tested that way right now, but that is by design, hence the perfect scope and fine for this issue.

🇩🇪Germany Fabianx

#9: If this is false, what should happen? Do we assume it got set with the correct value already and silently skip it?

If this is false, then another process has gotten the "lock" and either calls setWithoutLock() right now, has called setWithoutLock() already or will call setWithoutLock() in the near future.

As this is just about avoiding write stampedes to the same item (first step), there is likely even a process that is still before the apcu_fetch() and will write the data again, so we also don't need to worry about the process that got the "lock" crashing.

Someone else will then set it.

🇩🇪Germany Fabianx

Here is the ChatGPT analysis of this function:

You’ve highlighted an important aspect of how these APCu functions interact with the underlying data store, and you are correct in your understanding of the different lock mechanisms they employ.

### apcu_cas()

- `apcu_cas()` operates with a lighter locking mechanism.
- It only requires a reader lock on the entry.
- It performs a compare-and-swap (CAS) operation, which is atomic.
- This means that it can check the current value and, if it matches the expected value, update it in a single atomic operation.
- Because it only needs a reader lock, it doesn't block other reads from happening simultaneously.

### apcu_store()

- On the other hand, `apcu_store()` requires a writer lock.
- A writer lock is more restrictive; it blocks both other writers and readers until the operation is complete.
- This can lead to longer wait times and potential bottlenecks, especially under high load or when dealing with large data sets.

### Why Your Approach Is Beneficial

- By initializing the lock with `apcu_add()` (which uses `apcu_store()` internally but is only called once at the start), and then using `apcu_cas()` for subsequent lock acquisitions, you ensure that the heavier writer lock is acquired less frequently.
- Most of the synchronization relies on `apcu_cas()`, which uses the lighter reader lock, resulting in less contention and better performance.
- This is especially beneficial in a high-concurrency environment, where many processes are trying to acquire the lock simultaneously.

By understanding and leveraging the underlying mechanisms of these APCu functions, your approach efficiently reduces lock contention, leading to better performance and scalability. This kind of optimization is crucial in performance-critical applications and high-load scenarios.

As background reading.

🇩🇪Germany Fabianx

I think this is independently useful as it replaces one write in 99% of concurrent cases with two reads and just one non-concurrent write plus one additional cache key.

🇩🇪Germany Fabianx

Here is another optimization of it:


public function set($cid, $data, $expire = CacheBackendInterface::CACHE_PERMANENT, array $tags = []) {
  $key = $this->getApcuKey($cid);
  $lock_key = $key . ':apcu_update_lock';

  $results = apcu_fetch([$lock_key]);
  $old_lock_value = $results[$lock_key] ?? FALSE;

  // Ensure lock key exists.
  if ($old_lock_value === FALSE) {
    // Initialize the lock with a random value using apcu_add().
    $old_lock_value = mt_rand(0, 2147483647);
    apcu_add($lock_key, $old_lock_value);
  }

  $lock_value = mt_rand(0, 2147483647);
  if (apcu_cas($lock_key, $old_lock_value, $lock_value)) {
    // Lock acquired, you can perform your update here.
    parent::set($cid, $data, $expire, $tags);
  }
}


🇩🇪Germany Fabianx

Here is the code for set() using just the lock mechanism:

public function set($cid, $data, $expire = CacheBackendInterface::CACHE_PERMANENT, array $tags = []) {
  $key = $this->getApcuKey($cid);
  $lock_key = $key . ':apcu_update_lock';

  $results = apcu_fetch([$lock_key]);
  $old_lock_value = $results[$lock_key] ?? FALSE;

  // Ensure lock key exists.
  if ($old_lock_value === FALSE) {
    apcu_add($lock_key, 0);
    $old_lock_value = 0;
  }

  $lock_value = mt_rand(0, 2147483647);
  if (apcu_cas($lock_key, $old_lock_value, $lock_value)) {
    parent::set($cid, $data, $expire, $tags);
  }
}

This however brings to mind the question why APCu itself could not protect its apcu_store() mechanism in this way.

But it would take quite some time to get it into APCu, so still useful here.

🇩🇪Germany Fabianx

Lastly, to explain the lock:

The lock is for ensuring that only one item ever is written at the same time to the same cache key.

This can again be independently done:

It works like this:

- Retrieve the old value of the lock entry
- Create random value
- Do a CAS on the old value (compare-and-swap)

If we win the race to swap, then we will write the new entry.

Note: There can still be several items written after each other, but at least not concurrently.

Again, the check whether the data is still the same can help here.

🇩🇪Germany Fabianx

An optional optimization that the IS patch had done was also:

- Because we already retrieved the items from the fast backend, we can continue to make use of them.

No need to re-get the timestamp and cache_item when we just did the same before.

This is why ChainedFastBackend.php was patched to have an additional argument to the ->set().

It is probably better to make this explicit with a different set() method for items that have just been retrieved from cache - if we wanted to do that.

🇩🇪Germany Fabianx

Let me quickly write up the idea behind the code:

When a cache entry is retrieved from the primary cache backend and it has not changed in the fast backend, then all we do is write the invalidation timestamp again and again.

We also might have several processes at once writing to the same item - again: the same data, except for the timestamp. While before the conflict would be resolved naturally by "last writer wins", now the conflict is resolved using an APCu lock.

This lock is the second part of the patch, but should probably be a follow-up as it makes an already non-trivial architecture harder to understand.

Given these properties we can split up each cache item into two parts:

- The data to be written
- The timestamp of the cache item

By splitting it up, we can "store" the new timestamp using a compare-and-swap operation without having to write the whole item.

So what we do on a set() from the chained fast backend:

- Retrieve items from APCu: [$old_cache_item, $timestamp] = apcu_fetch(...)
- Compare if $old_cache_item->data === $data [and probably same for tags now]

If it is not the same -> write: $cache_item normally [todo: locking code for sets]

compare-and-swap the new timestamp in, because we know the data is correct right now

- On gets:

[$cache_item, $timestamp] = apcu_fetch(...)

Overwrite the timestamp in $cache_item with the $timestamp from APCu - unless it's 0.
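The read path described above can be sketched like this (the ':timestamp' key suffix and function name are hypothetical, a sketch rather than the patch's actual code). Note that apcu_cas() operates on integer values, so the split-out timestamp would need to be stored as an integer for the compare-and-swap on set() to work.

```php
<?php
// Hedged sketch of the split-item read path: the payload and its
// validity timestamp live in two separate APCu entries, so a later
// set() can refresh the timestamp via apcu_cas() without rewriting
// the whole payload. Requires the APCu extension.

function chained_fast_get(string $key): ?object {
  $timestamp_key = $key . ':timestamp';
  $results = apcu_fetch([$key, $timestamp_key]);
  if (!isset($results[$key])) {
    // Cache miss: the payload itself is not in APCu.
    return NULL;
  }
  $cache_item = $results[$key];
  // Overwrite the embedded timestamp with the separately stored one,
  // unless it is 0 or absent (meaning it was never split out).
  if (!empty($results[$timestamp_key])) {
    $cache_item->created = $results[$timestamp_key];
  }
  return $cache_item;
}
```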

🇩🇪Germany Fabianx

While we never got #94 in, it also has other problems.

That said I think we can fix the remaining edge cases, by doing this:

diff --git a/includes/module.inc b/includes/module.inc
index 494924f..24e5dd7 100644
--- a/includes/module.inc
+++ b/includes/module.inc
@@ -833,6 +833,11 @@ function module_hook_info() {
  * @see module_implements()
  */
 function module_implements_write_cache() {
+  // The list of implementations includes vital modules only before full
+  // bootstrap, so do not write cache if we are not fully bootstrapped yet.
+  if (drupal_static('drupal_bootstrap_finished') !== TRUE) {
+    return;
+  }

and then set the drupal_static at the end of the bootstrap full function.

That way, the problem that a phase is marked as reached when it is entered, not when it is completed, would no longer bite us here.

🇩🇪Germany Fabianx

Oh - I love that idea of just storing the returnValue in the performanceData.

++ to less boilerplate.

if ($return) {
  $performance_data->setReturnValue($return);
}

$return = $performance_data->getReturnValue();

both feels really good.

And given it's an edge case that probably is not needed in 90% of cases, it's good to optimize for the common case.

🇩🇪Germany Fabianx

Re-reviewed and the changes look good to me.

Originally I thought that the collectPerformanceData() could just return the $performance_data object, but as it also needs to return the result of the inner closure, passing it as an argument is the best solution indeed.

I agree that a trait is the best compromise between an inflexible class and a hard-to-register middleware.

A middleware can easily be created in the future to test normal page requests using the new trait.

Back to RTBC!

🇩🇪Germany Fabianx

RTBC!

Thanks, Nat! The new API looks splendid and works fantastically well locally.

🇩🇪Germany Fabianx

Tested locally!

Works really well.

For those who want to do that on an M1/M2 from the google-drupal repo, here are the steps:

cd .ddev
rm docker-compose.chromedriver.yml
ddev get ddev/ddev-selenium-standalone-chrome

The rest works normally.

Reviewed:

There were four things I flagged:

- a) One nit / question
- b) A suggestion to add a proper API, so that object properties do not need to be used as API
- c) I read now that loading JS directly on the page is to be avoided, so it's probably answered, but it would still be good to make this clear (as this design decision is only in the Chromium bug report)
- d) It's unclear how spans work in relation to scopes; it might be good to add some comments to explain how that works

Overall looks really really good and once these questions are answered, we can go back to RTBC!

🇩🇪Germany Fabianx

Thanks for the answer!

New version does not need the patch anymore.

🇩🇪Germany Fabianx

Should be fixed now, but for LAT / LON you will need 'additional_fields' => TRUE.

WKT is best.

🇩🇪Germany Fabianx

This looks great to me now.

Assigning to David for review as this is base system core functionality.

🇩🇪Germany Fabianx

Looking at this again with my core committer head on, I think we should revert the string changes.

While it is technically more correct, we should avoid changing strings as much as possible in D7.

The rest still looks good to me.

🇩🇪Germany Fabianx

Should that not then also be done for all the other query parameters like include/exclude?

🇩🇪Germany Fabianx
+++ b/core/lib/Drupal/Core/Asset/JsCollectionOptimizerLazy.php
@@ -125,13 +125,13 @@ public function optimize(array $js_assets) {
-      $ajax_page_state = $this->requestStack->getCurrentRequest()->query->get('ajax_page_state');
+      $ajax_page_state = $this->requestStack->getCurrentRequest()->get('ajax_page_state');

Is that an intentional change?

🇩🇪Germany Fabianx

Overall the patch looks great.

One thing I wondered:

Do we want to make this the default and remove / deprecate the old ways, or do we want to keep the old code around and refactor it a little, so that by nice sub-classing we can provide both ways?

Given how critical this is, it might be prudent to allow via configuration to switch back to the old way.

🇩🇪Germany Fabianx

#185: Just ping me once you have something you want to have reviewed :) - I am also silently and eagerly following.

🇩🇪Germany Fabianx
+++ b/core/modules/system/src/Controller/CssAssetController.php
@@ -0,0 +1,282 @@
+      'Cache-control' => 'private, no-store',

We will need to put these headers into a helper function / service, or make them configurable via the container or settings.

Also, I really think we should add Edge-Control: bypass-cache if we go to these lengths here, or at least give an akamai-8.x module a chance to influence those headers ... (if we don't want to add proprietary headers)

--

Overall looks like a great start to me! Needs some more work obviously ...

🇩🇪Germany Fabianx

No, there will be one app per command for now and a shell script to wrap it.

Conversions won't happen until 8.1 - per this being a feature request.

You can play all you want for 8.1, but let's get D8 shipped first, please.

🇩🇪Germany Fabianx

No, that critical will just use a simple application for that one command in core/scripts/some-app.sh, which is three lines of code.

My favorite example is still:

https://github.com/msonnabaum/XHProfCLI/blob/master/src/XHProfCLI/App.php

Clear, easy and concise.

Production build 0.71.5 2024