Re #99. That makes sense. Static cache is the fastest cache possible, so it would not be wise to throw it away. I think the potential performance gain from the static cache might be comparable to the gain from skipping the bootstrap phase. The application needs to have control over what gets reset.
I wonder if it's better to create a new cache backend.
Relevant conversation: https://www.reddit.com/r/PHP/comments/vlyw1g/benchmarking_serialization/
I did some benchmarking locally. Getting cached items back is almost instant when using VarExporter.
Here are some results from profiling a Drupal site with cache.container implemented with PhpBackend:
For a local Drupal 11 installation, the performance boost was about 5-10% for anonymous visitors.
I didn't expect that someone still uses it.
Still trying to comprehend how this event loop will work with the new MySQL driver (📌 [PP-1] Create the database driver for MySQLi, Postponed). As I understand it, revolt/event-loop is based on streams: stream_select() is essentially the backbone of its async operations. That means the DB driver would have to be implemented through PHP streams, like amphp/mysql.
Did I miss something?
This would all happen in the non-async database connection
I meant a single HTTP request without any concurrency. In that case, with a non-async database, all queries happen sequentially, so no locks are expected.
I created a module to test the options described in #20. It builds a sort of landing page with blocks that can be slowed down.
https://github.com/Chi-teck/sample_catalog
Results are quite interesting, though predictable.
When blocks are very slow, it doesn't matter which way you do the parallel processing; the results are always quite good. I managed to get a 12x boost using 12 CPU cores. However, when blocks are relatively fast, the cost of spawning new processes becomes significant. In that case the best results were achieved with a "co-pilot" server powered by RoadRunner. It demonstrated its usefulness even when building each block takes only about 5-10 ms.
Overall, landing pages are not the only use case for this. For instance, I have a project with a very heavy API endpoint for a collection of entities. Each item in that collection is personalized and frequently updated, so caching is not possible. Building the items in parallel using one of the above-mentioned options can potentially improve API performance a great deal.
Re #18: There are a couple of things that can potentially break that calculation.
1. Database locks
When the cache is empty, the request will likely trigger lots of cache updates that may cause race conditions, especially when the same cache items need to be updated in different blocks.
2. Building vs rendering
Those terms are often misused. Building means creating a render array, while rendering means creating an HTML representation of the content (a string or a Markup object). The problem here is that blocks typically rely on lazy builders, pre_render callbacks, theme functions, etc. That stuff delegates the rendering process to the theme layer.
Consider this render array. It costs the CPU nothing to produce such content. The main work happens later, when Drupal renders the content. That means the async orchestration has to cover theming as well.
$build['content'] = [
  '#theme' => 'example',
];
Anyway, for sites where SQL queries are not the bottleneck, the only solution would be additional PHP processes.
Could think of a few options here:
1. Spawning new processes with proc_open (Symfony Process). For example, for the landing pages you described, those could be very simple PHP scripts that do just one thing: render a single block.
2. Rendering blocks with PHP-FPM through hollodotme/fast-cgi-client. That should be faster than CLI workers, as it is compatible with Opcache and APCu, and it supports async requests as well. Also, FPM has options to control the number of workers in the pool.
3. Having a few workers connected to a queue. They should listen for jobs (block rendering) and reply instantly. That means Drupal's DB-based queue is not quite suitable, as it would require frequent polling.
4. The most extreme option: a multi-threaded socket server written in PHP with Drupal bootstrapped. It should be able to handle multiple connections simultaneously, so when a Drupal site needs to render a dozen heavy blocks it just sends tasks to that server to render the blocks in parallel.
I suppose none of that can be fully implemented in Drupal core. However, core could provide some async API to allow delegating block rendering to external systems.
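Option 1 could be sketched with Symfony Process roughly as follows. This is only a sketch under assumptions: render-block.php is a hypothetical script that bootstraps Drupal and prints the markup of a single block, and the block IDs are made up.

```php
<?php
// Sketch of option 1: render each block in its own PHP process.
use Symfony\Component\Process\Process;

require __DIR__ . '/vendor/autoload.php';

// Hypothetical block IDs.
$blockIds = ['slow_block_1', 'slow_block_2', 'slow_block_3'];
$processes = [];

// Start all renderers concurrently.
foreach ($blockIds as $blockId) {
  $process = new Process(['php', 'render-block.php', $blockId]);
  $process->start();
  $processes[$blockId] = $process;
}

// Collect the markup as the processes finish.
$markup = [];
foreach ($processes as $blockId => $process) {
  $process->wait();
  $markup[$blockId] = $process->getOutput();
}
```

The total wall time is then bounded by the slowest block plus the process spawning overhead, which is exactly the overhead that makes this option lose to FPM or a persistent worker for fast blocks.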
Re: #18. We can start rendering blocks with fast queries while waiting for query results from blocks with slow queries. Is that correct?
Additionally, other CPU intensive tasks (like rendering results) could be happening while waiting for all three queries to come back.
That's unclear. How do we render results without having those results yet?
Drupal is both i/o and CPU bound, but we can do both database query execution and CPU-intensive tasks in parallel once we have async database queries implemented.
With an async DB driver in place we can run queries in parallel. But is that really possible for CPU tasks like rendering?
@catch Those numbers are not for measuring site performance. The point was to check how many IO bottlenecks we potentially have. And again, checking it in the CLI SAPI is not correct.
As for benchmarking, testing just the front page may not be sufficient. I think Drupal needs a comprehensive set of load tests that can help figure out performance gains and track performance regressions.
I created a few K6 scenarios for testing the performance of a Drupal site. I wonder if Drupal core could implement something similar as part of its CI workflow.
https://github.com/Chi-teck/k6-umami
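For CI, the useful k6 feature is thresholds: the run exits non-zero when a performance budget is exceeded, which is what turns a load test into a regression gate. A minimal scenario along these lines (assumed structure, not taken from the k6-umami repository; the URL and budget are placeholders) might look like:

```javascript
// Minimal k6 scenario sketch: hit the front page with 10 virtual
// users for 30 seconds and fail the run on a p95 latency regression.
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  vus: 10,
  duration: '30s',
  thresholds: {
    // Performance budget: 95th percentile under 500 ms.
    http_req_duration: ['p(95)<500'],
  },
};

export default function () {
  const res = http.get('https://example.com/');
  check(res, { 'status is 200': (r) => r.status === 200 });
}
```

A CI job would then run `k6 run scenario.js` and let the threshold decide pass or fail.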
Re #13. It was a custom project with a few million entities. However, the front page is just a user login form without extra blocks.
Here are the results with a brand new D11 installation.
Cold cache
real 0m0.638s
user 0m0.346s
sys 0m0.103s
Warmed cache
$ time php index.php > /dev/null
real 0m0.123s
user 0m0.086s
sys 0m0.030s
Though the Standard profile does render many things on the front page.
Yes, there are lots of IO operations. I guess those were mostly file operations performed by the Composer autoloader, because in CLI Opcache and APCu do not make much sense. In a real HTTP request served by FPM, the difference between real and user will be much smaller.
It's not clear how Drupal will benefit from this.
This command can give a very rough estimate of how much request time we could potentially save. Compare the real and user times.
$ time php index.php > /dev/null
real 0m0.242s
user 0m0.152s
sys 0m0.055s
Yes, there are lots of IO operations, but not many of them can be taken out of the main code flow, because subsequent steps typically depend on previous ones. I believe the earlier mentioned cases (Cache Prewarm and BigPipe) won't give a big performance gain. They need benchmarks to prove their usefulness.
Re #18. It happens without this MR as well, right?
a separate Twig extension
An example of such an extension:
https://github.com/jawira/case-converter-twig
Integrating it into Drupal should be as simple as declaring a service in a YAML file.
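A sketch of such a service declaration, assuming the package exposes a single Twig extension class (the module name is hypothetical and the class name should be checked against the package):

```yaml
# mymodule.services.yml (hypothetical module name).
services:
  mymodule.case_converter_twig:
    # Assumed extension class; verify it in the package source.
    class: Jawira\CaseConverterTwig\CaseConverterExtension
    tags:
      - { name: twig.extension }
```

The `twig.extension` tag is what makes Drupal register the class with its Twig environment.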
There are a couple of reasons why this shouldn't be part of Twig Tweak.
1. Twig Tweak does not have to incorporate all possible Twig functions and filters. There is no goal to become the only Twig-related module in Drupal. Specific use cases should be addressed in specific modules, and Twig casings cover quite a specific use case. BTW, I did similar filters in Drupal Code Generator.
2. Twig Tweak does not aim to improve Twig itself. It is primarily focused on integration with Drupal. Check out the list of our functions and filters; they are almost all related to Drupal. I think general things like string transformation should be implemented in Twig core or in a separate Twig extension (an ordinary Composer package, not a Drupal module).
Thanks.
I wonder if $entity->toUrl()->access() is more appropriate here than $entity->access('view').
I actually think all cache backends should use currentTime.
chi → changed the visibility of the branch 3483370-drupalcorecachememorybackend-should-not to hidden.
chi → created an issue.
alexpott → credited chi → .
Once core is converted, it will be possible for a site to run without any procedural hook implementations at all.
This requires that none of the installed contributed modules implement procedural hooks. Right?
@nicxvan yes.
The stack trace looks as follows:
(
[0] => Array
(
[file] => /web/core/lib/Drupal/Component/Annotation/Doctrine/StaticReflectionParser.php
[line] => 332
[function] => parse
[class] => Drupal\Component\Annotation\Doctrine\StaticReflectionParser
[type] => ->
)
[1] => Array
(
[file] => /web/core/lib/Drupal/Core/Hook/HookCollectorPass.php
[line] => 164
[function] => getMethodAttributes
[class] => Drupal\Component\Annotation\Doctrine\StaticReflectionParser
[type] => ->
)
[2] => Array
(
[file] => /web/core/lib/Drupal/Core/Hook/HookCollectorPass.php
[line] => 122
[function] => collectModuleHookImplementations
[class] => Drupal\Core\Hook\HookCollectorPass
[type] => ->
)
[3] => Array
(
[file] => /web/core/lib/Drupal/Core/Hook/HookCollectorPass.php
[line] => 74
[function] => collectAllHookImplementations
[class] => Drupal\Core\Hook\HookCollectorPass
[type] => ::
)
This somehow broke tests in a custom module. I cannot reproduce it locally, but the CI output is full of such errors.
Notice: file_get_contents(): Read of 12288 bytes failed with errno=21 Is a directory in web/core/lib/Drupal/Component/Annotation/Doctrine/StaticReflectionParser.php on line 184
After some research I found a contributed module that faced the same issue: 🐛 phpunit (next minor) failing (Active).
Right. It needs an update hook.
probably because that output was not in PHPUnit 9
In PHPUnit 9 that line is printed only when the --verbose option is provided.
I wonder if we really need to update the version manually. Can we fetch the version from Composer instead?
Composer\InstalledVersions::getVersion('drupal/core')
@socialnicheguru this issue is about storing keys in settings not state.
Could not reproduce.
This might somehow be related to your site's timezone.
Is there at least one contributed project that ships with its own phpunit.xml file?
Maybe somehow the phpunit.xml config is not being loaded?
It's certainly not loaded.
Confirm the date value was updated to tomorrow in the display and the edit form.
Did you check the date value in the database? I wonder if it's only related to the formatter.
I just created one more generator for preload script.
https://www.drupal.org/project/opc →
However, while profiling my project I found another way to optimize opcache.
There are a couple of opcache settings whose default values are not suitable for production:
opcache.validate_timestamps=1
opcache.revalidate_freq=2
A typical Drupal site loads about 1k-2k files per request. With the above configuration, opcache will revalidate those files against the file system every 2 seconds.
Turning off opcache.validate_timestamps gave me almost the same speed boost as preloading. As long as you don't edit code on production, that setting can be disabled. The invalidation can then happen through a php-fpm reload on deployment.
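The production override described above would look like this in php.ini or the FPM pool configuration (a sketch of the settings discussed, not a complete opcache tuning):

```ini
; Never stat files for changes; deployed code is immutable.
; Code changes then require a php-fpm reload to take effect.
opcache.validate_timestamps=0
; Irrelevant once timestamps are not validated, but harmless to keep.
opcache.revalidate_freq=2
```

With `validate_timestamps=0`, the 1k-2k stat calls per revalidation window simply disappear from the request path.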
For those who might be stuck:
The currently used constraint should not block you from using twig/twig v3.14.0
https://semver.madewithlove.com/?package=twig%2Ftwig&constraint=%5E3.10.3
I suppose core_version_requirement in twig_twig.info.yml needs to be updated accordingly.
chi → created an issue. See original summary → .
I get 404 at https://www.drupal.org/project/dtt/releases/new →
What about https://www.drupal.org/node/add/project-release/3433020 → ?
- Downloading nusje2000/phpstan-monolith (v0.1.4)
- Downloading php-parallel-lint/php-parallel-lint (v1.4.0)
- Downloading psr/http-server-handler (1.0.2)
- Downloading psr/http-server-middleware (1.0.2)
- Downloading riverline/multipart-parser (2.1.2)
- Downloading lstrojny/fxmlrpc (0.22.0)
- Downloading supervisorphp/supervisor (5.1.0)
- Downloading drupal/supervisor (2.0.0-rc2)
- Downloading league/openapi-psr7-validator (0.21)
- Syncing weitzman/drupal-test-traits (2.3.0) into cache
In my experience, Composer typically hangs on this step.
I think this would require configuring groups for each test
https://git.drupalcode.org/project/date_point/-/jobs/2463696
For me it's now failing with the following message:
Could not use "\Drupal\Tests\Listeners\HtmlOutputPrinter" as printer: class does not exist
https://git.drupalcode.org/project/date_point/-/jobs/2377976
After I switched to MR-236 the error disappeared.
https://git.drupalcode.org/project/date_point/-/pipelines/247639
Something like this.
{{ drupal_field('created', 'node', nid, {settings: {date_format: 'long'}}) }}
You've granted all permissions to this module. Feel free to commit whatever you think is appropriate.
I am not currently working on this project. If you would like to take ownership create an issue here → .
Thank you.
As you hardcode the alt in your Twig template, you need to pass it through the t filter.
"the media's image's alt"|t
The changed field is not configurable for nodes. You have to pass the formatter options in the Twig template.
{{ drupal_field('changed', 'node', 1, {type: 'timestamp', label: 'above', settings: {date_format: 'medium'}}) }}
{# Short version if you are happy with the defaults. #}
{{ drupal_field('changed', 'node', 1, {}) }}
drupal_field renders the field using the formatter configured for the given display mode.
You need to enable this field in the display settings. Alternatively, you can pass the formatter settings as an argument to drupal_field.
As stated on project page the module does not provide new widgets in Drupal 10. It offers an entity selection plugin instead.
I think it's out of scope for this module. Twig Tweak is about integration between Twig and Drupal. Given that url_decode has nothing to do with Drupal, it should be submitted to Twig core.
Thank you.
I cannot tell from the origin commit why this is needed.
I think the idea was to move the menu content into a cache placeholder so its cache metadata would not bubble up and affect the render cache of the entire page. This is how it works when a menu is rendered via Drupal blocks.
Is there a way we can get the "drupal_view" and "drupal_view_results" calls to properly type these values?
Those Twig functions reference the corresponding Drupal core functions directly. We can create wrappers for them. However, the issue itself needs to be clearly identified. I mean, we need to figure out the exact change in Drupal core that triggered this problem.
For now you can try to escape the argument as follows.
{{ drupal_view('display_media', 'thumbnail', nid|e) }}
Merged. Thank you.
I didn't spot the activity in this issue, as it was labeled "Automated Drupal 11 compatibility fixes". Most of the deprecations have been fixed in 🐛 Drupal::service('renderer')->renderPlain is deprecated since 10.3.0 (Active). For the other things, a different approach will be taken: we need to drop support for some outdated PHP and Drupal versions.
Thank you.
This needs tests and documentation.
Twig Tweak is for rendering content; it does not provide helpers for loading entities anymore.
The entity field can be rendered as follows.
{{ drupal_field('created', 'node', 1) }}
Once 📌 Allow parsing and writing PHP constants and enums in YAML files (Needs work) is resolved, we could start using these constants in YAML files.
To summarize: the requirement for an upgrade path made this bug unfixable. Updating existing configuration, especially Views, is tricky and can cause more problems. I think changes without an upgrade path should be allowed in major Drupal releases. Otherwise this bug will likely exist forever.
The code below will go into an infinite loop.
$lock = \Drupal::lock();
$result = $lock->acquire('example', 5);
while ($lock->wait('example', 0.5)) {
  echo '.';
}
echo 'Done!', \PHP_EOL;
I know that, per \Drupal\Core\Lock\LockBackendInterface::wait(), the $delay parameter must be an integer, but it doesn't have a native typehint, so nothing stops you from passing a float. Note that the timeout parameter in \Drupal\Core\Lock\LockBackendInterface::acquire() is a float, even though it has the same default value. That's confusing.
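Given the current interface, a minimal workaround is to make sure the delay is an integer before calling wait(). A sketch using the same lock as above:

```php
<?php
// Sketch: same scenario as the broken example, but passing an
// integer number of seconds, as the wait() docblock requires.
$lock = \Drupal::lock();
$lock->acquire('example', 5);
// wait() returns FALSE once the lock becomes available, so this
// loop terminates after the 5-second lock expires.
while ($lock->wait('example', 1)) {
  echo '.';
}
echo 'Done!', \PHP_EOL;
```

The real fix, of course, is a native int typehint on $delay so floats fail loudly instead of looping forever.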
I think this is neither a bug nor a feature. It's a task to improve the DX of this service.
Removing the lock file resolved the issue.
Package operations: 1 install, 4 updates, 0 removals
- Downgrading symfony/finder (v7.1.1 => v6.4.8)
- Upgrading symfony/psr-http-message-bridge (v2.3.1 => v6.4.8)
- Installing symfony/mailer (v6.4.9)
- Downgrading symfony/filesystem (v7.1.2 => v6.4.9)
- Upgrading drupal/core (10.1.8 => 10.3.0)
14 package suggestions were added by new dependencies,
Following the steps from #6.
$ composer why-not drupal/core 10.3.0
drupal/core 10.3.0 requires symfony/filesystem (^6.4)
drupal/recommended-project - does not require symfony/filesystem (but v7.1.2 is installed)
drupal/core 10.3.0 requires symfony/finder (^6.4)
drupal/recommended-project - does not require symfony/finder (but v7.1.1 is installed)
For this build I set environment variables through the GitLab UI pointing to this MR, so it can be considered the AFTER link.
https://git.drupalcode.org/project/date_point/-/pipelines/213599
You may download the artifacts of the Composer job to see how the composer.json file is encoded.
Basically, a link to any other build of the module can be considered the BEFORE link.
{{ node|entity_link('Add new comment'|t, 'canonical', {
fragment: 'comment-form'}) }}
Don't you think it looks weird? That's why Twig Tweak uses its own PHPCS configuration.
Confirmed. The MR fixed the issue.
"php": ">=8.3",
"symfony/validator": "^6.4",
"psr/clock": "^1.0"
Re #43. The Merge query does not support multi-value queries.
The use case for closing and reopening connections is to keep their number under the limit. The limit can easily be exceeded when you have many PHP processes doing slow operations, e.g. HTTP requests to external servers.
So, theoretically, you could deal with this as follows.
$this->database->closeClientConnection();
$this->httpClient->get('https://some.very/slow/service');
$this->database->openClientConnection();
Also this workaround needs to be removed.
https://git.drupalcode.org/project/drupal/-/blob/11.0.0-beta1/core/lib/D...
This needs steps to reproduce on fresh Drupal environment.
Actually, I think the default behavior is wrong. The tableExists() method should not look for views. First, that does not comply with its name. Second, it causes errors when doing schema alterations, as other Schema methods rely on tableExists().
A simple example to demonstrate the problem.
$schema = \Drupal::database()->schema();
if ($schema->tableExists('some_view')) {
$schema->dropTable('some_view');
}
The above code produces the following error on a MySQL database.
SQLSTATE[42S02]: Base table or view not found: 1965 'd10.some_view' is a view
Filed an issue for #29: 🐛 Schema::tableExists does not find views for PostgreSQL (Active).
I wonder if {{ node.field_image|view }} works for your case.
How many commands in Drush core can work on a non-booted Drupal installation?
I found just three.
- site:install
- archive:dump
- archive:restore
I believe those commands deserve their own console endpoint.
+1
I built a POC a few years ago. It proves how easy it is to create a CLI endpoint for commands that only work on a fully booted Drupal site. That's an example of the Pareto principle: just a few commands (like drush site-install) that need to support a non-booted environment brought so much complexity to Drush.
One thing I'd like to see in this implementation is support for command discovery in the vendor directory, so you would be able to bundle console commands in a Composer package (not a Drupal module).
If the template has a node variable, then you can print any of its fields using the Twig Tweak module.
@catch all other classes in that project follow that pattern. For example, https://github.com/typhoon-php/typhoon/blob/0.3.x/src/Reflection/Metadat...::fromFileContents() has a @param tag for the $file parameter only because it needs extra clarification for Psalm (non-empty-string). All other parameters in that class have no tags. Also note the missing @return tags, as the return type is clear from the typehint.
The scope of this issue is a bit unclear. Initially it was about the @var tag, but the comments mostly reference the @param tag. I think it should apply to the @return tag as well.
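To illustrate the pattern being argued for, here is a small made-up example (the function and its parameters are hypothetical): tags appear only where they add information beyond the native typehints.

```php
<?php

/**
 * Loads the beginning of a file.
 *
 * @param non-empty-string $path
 *   Documented only because the static analyzer needs the
 *   non-empty-string refinement; $limit needs no @param tag,
 *   and no @return tag is needed since the return type is clear
 *   from the typehint.
 */
function loadFileHead(string $path, int $limit = 100): string {
  return substr(file_get_contents($path), 0, $limit);
}
```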