I'm seeing whitescreens with ai_automators active when ECA Helper is enabled: 'TypeError: Drupal\ai_automators\Plugin\AiAutomatorProcess\DirectSaveProcessing::__construct(): Argument #3 ($messenger) must be of type Drupal\Core\Messenger\Messenger, Drupal\eca_helper\Decorate\Messenger given, called in /var/www/html/freelock.com/modules/contrib/ai/modules/ai_automators/src/Plugin/AiAutomatorProcess/DirectSaveProcessing.php on line 58 in Drupal\ai_automators\Plugin\AiAutomatorProcess\DirectSaveProcessing->__construct() (line 48 of /var/www/html/freelock.com/modules/contrib/ai/modules/ai_automators/src/Plugin/AiAutomatorProcess/DirectSaveProcessing.php).'
... this MR fixes it, and it looks like the correct fix to me.
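For context, the usual fix for this kind of decorator type error -- and, I believe, what the MR amounts to, though the sketch below is mine and the parameter list is abbreviated -- is to type-hint the interface rather than the concrete class, so decorated services like eca_helper's Messenger are accepted:

```php
use Drupal\Core\Messenger\MessengerInterface;

// Sketch only: widen the constructor parameter from the concrete
// Messenger class to MessengerInterface, which the eca_helper
// decorator also implements.
public function __construct(/* earlier parameters unchanged, */ MessengerInterface $messenger) {
  // ...
}
```

Type-hinting concrete service classes breaks any module that decorates the service, so the interface is the right target regardless of ECA Helper.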
We are now successfully adding local extensions on several projects, including:
- PHP extensions (redis, sqlsrv, others)
- PHP configuration (ini settings)
- Other nix packages (weasyprint, python packages)
- Extra devShell items (env vars, with other additions possible)
... so general extensibility is all working fine.
What's next: new services added to the Services Flake config for process-compose -- Mailpit and Solr are the top contenders. I think those will be implemented in a different way -- probably as optional plugins activated by setting values in the .env file.
Marking this issue as done.
This is substantially easier with the Group PURL module. The 3.x-dev version supports Group 3 now, and is just waiting for a compatible release of the PURL module so the tests pass.
With Group PURL active, you configure pathauto for your groups, and set content types that go in groups to "keep context." And then you have a Views contextual filter default argument for the active group ("Group ID from Purl"). No need to muck about with relationships -- if a block is placed on a page in a particular group, this will set the default argument for you.
Aha! Our CI/CD logs show the problem...
> [notice] Update started: redirect_update_8110
> [error] Exception thrown while performing a schema update. SQLSTATE[01000]: Warning: 1265 Data truncated for column 'enabled' at row 1: ALTER TABLE "redirect" CHANGE "enabled" "enabled" TINYINT NOT NULL; Array
> (
> )
>
> [error] Update failed: redirect_update_8110
> [error] Update aborted by: redirect_update_8110
> [error] Finished performing updates.
... so it looks like the schema ALTER went through but threw an error, so the update did not complete.
The site where I experienced this is running Drupal 11.2.4, PHP 8.3.25, and MariaDB 11.4.7.
This does not work for existing redirects.
Looking at function redirect_update_8110(), it does add the field and set the default value for the field -- but THIS ONLY AFFECTS NEW VALUES GOING FORWARD -- it does not set the value on existing redirects. Looking in my database, I see the new "enabled" column is NULL on 2000+ rows, which caused some consternation on our team when redirects people rely upon suddenly quit working.
There should be a post update hook that sets existing redirects to Enabled.
A quick workaround for anyone affected by this is an SQL query in the db:
`UPDATE redirect SET enabled = 1;`
... that will enable all your existing redirects.
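A more durable fix would be a post-update hook in the module itself. A minimal sketch -- the hook name is illustrative, not actual redirect module code -- using Drupal's Database API to only touch the rows left NULL by the schema update:

```php
/**
 * Enable existing redirects left NULL by redirect_update_8110().
 *
 * Hypothetical hook name, shown for illustration only.
 */
function redirect_post_update_enable_existing_redirects(): void {
  \Drupal::database()->update('redirect')
    ->fields(['enabled' => 1])
    ->isNull('enabled')
    ->execute();
}
```

Restricting the update to NULL rows means it stays safe to run even on sites where some redirects have already been deliberately disabled.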
Successfully added WeasyPrint, a Python package, using this local-extensions.nix mechanism -- I did have to add support for adding to the php-fpm process's $PATH and fix a couple of other internal issues.
Ok! This is now working -- but you need to apply the fix in 🐛 PHPUnit functional tests cannot connect to MySQL database with a specified unix_socket Active to run any functional tests that require a database. So far I have only used this on a Drupal 11.2 site.
What's now added to the flake, and how you use this:
1. Get your site up and running using Drupal flake.
2. In your shell, run nix develop or use direnv with the current flake installed.
3. If you have not installed phpunit or other necessary code to your site, use phpunit-setup -c to install the composer dependencies.
4. If you have not applied the patch, run the command given to you by phpunit-setup -c to quickly add the patch to composer.json, and then run composer install.
5. Run phpunit-setup with no flags to set up the phpunit.xml file and the test directories.
You're now set up to run tests! Inside this shell you can point phpunit at a directory of tests, use --filter to run particular tests, and otherwise use phpunit as-is.
There are two additional helper commands available:
- phpunit-module [modulename] - specify a module whose tests you want to run; this will search the current project for the module, find its tests, and run them.
- phpunit-custom - run all phpunit tests in all custom modules and themes.
The code for this is committed and pushed -- it should be available right now after using "refresh-flake" to update your local flake, or "nix flake init -t git+https://git.drupalcode.org/project/drupal_flake ."
Leaving this issue in Needs Review for two other features that I have not checked yet:
1. Setting up the environment correctly when the flake is not used -- e.g. on a dev server that uses Docker -- the setup is supposed to pick up the correct database connection from the running environment.
2. Browser-based tests -- have not tried running any of those yet.
It is working successfully for unit and functional tests!
Fixed code standard issues, moved the tests into the existing UrlConversionTest class, limited the scope of the change to just "unix_socket" to fix test failures, and sorted out the GitLab MR challenges - thanks @cilefen!
Struggling with GitLab here... when I first opened an MR, it picked up entirely the wrong commit. And now when changing the target branch it's showing 1000+ commits instead of just the one!
Fix in merge request, along with test... still needs a little cleanup for code standards.
freelock → changed the visibility of the branch 3546633-phpunit-functional-tests to active.
freelock → changed the visibility of the branch 3546633-phpunit-functional-tests to hidden.
Created a merge request, including the fix for #9.
freelock → made their first commit to this issue’s fork.
This is working, and tests are passing when run locally with a patched version of PURL. The main test failures are due to the Purl module not yet being compatible with Drupal 11.
Tests are currently failing mostly because the Purl module is not yet compatible with Drupal 11. @rbrandon says he can review/publish a release with the various merge requests next week, which should get our tests passing.
OK, I'm not sure whether this was an issue in the original, but the one I hit was that the form alter was not correctly altering the menu item content form.
Fixed the form alter for the correct form id pattern, and now I get the ability to set the PURL context per menu link (using group_purl, at least).
It seems like this might be better handled as a field widget? As a form alter, I have no control as a site builder over where it appears on the menu item form.
So this is just a minimal fix for broken existing code -- I think it should be refactored to provide a better UX (but probably as a new issue).
freelock → made their first commit to this issue’s fork.
Added a merge request to make this easier, bump up the list. Also sent to the collection route.
freelock → made their first commit to this issue’s fork.
Fixed the whitescreens and other validation issues that were breaking this module in Drupal 11. With these fixes, the module works for me!
Several things broken in D11. Updating the MR...
3.x-dev is working very well -- it does depend on the D11 fixes for the Purl module. Waiting to tag a release until we have the tests running successfully.
This was a big lift, greatly helped along by defining clear tests, and using Claude.ai for much of the legwork.
The result is far better than what we had before -- where we're using this, it has eliminated the corner-case failures and made this module work as expected!
This is done -- although there might be some of the compatibility changes included in the fixes for 🐛 Refactor URL generation into its own processor to run after Purl Active. I tried to isolate the Group 3 compatibility to this issue/commit to make the other issue easier to backport for Group 1/2 support if desired.
This is done. The .module file has the suggested wrappers to maintain Drupal 10 compatibility.
Initial set of tests -- I gave Claude a bunch of scenarios to test to get this working for Group 3, covering all the issues we've seen in this module about multiple group contexts, switching in and out, generating entity urls, etc. And then I worked through fixing them. Some of the corner cases I don't have passing yet, so these are in the @failing test group -- but these are all corner cases not needed for actual correct functioning.
This is working, for adding php redis support. Leaving open to test other kinds of additions.
freelock → created an issue.
Hi,
I don't think the code in #4 will fix this issue -- the issue is not with the setNewRevision(), it's with the updateLoadedRevisionId() call.
Workspaces compares the loadedRevisionId with the RevisionId. It might work if we remove the $entity->updateLoadedRevisionId(); line entirely.
I think the loadedRevisionId is meant to contain the revision_id before the entity got saved, so changing that during the update hook is the crux of the problem here.
This might be a good use for a new Action plugin, which you could use with ECA -- a "send webhook" action could then be used with any conditions you want before submission...
Hi,
I think until recently, handling incoming webhook payloads involved code. The webhooks module simply published an event with the data, and any other module could subscribe to that event and do whatever it wanted with the data. I was using a custom module to do this, sending the data to code to sync with content in Drupal.
It looks like the 4.x development version of this module is turning each incoming webhook submission into a new Drupal entity, which can then be shown in views, hooked up to actions in ECA, or do anything you can do with any other entity. I don't know the status of that version, have not tried it out yet to know whether it works or not.
I just created an ECA plugin on 🌱 I suggest planning an ECA integration for D10 Active so you can create an ECA model that gets any incoming webhook, and can do whatever you want from there. If you create that event in an ECA model, you then get the payload in [event:webhook:payload], which you can then deserialize and do as you please!
This is working as expected! Tested with two different inbound webhooks.
The new Webhook Received event provides the full webhook structure as [event:webhook], with the payload under [event:webhook:payload]. You can use the "deserialize" action to turn the payload into a data structure for access to what's sent in the payload. And when you add the event, you can select from a dropdown which webhook to trigger on, or all webhooks.
Pushing up to a production instance for internal testing now.
So I wanted to actually use this, and am vibe-coding a plugin -- and I see a bunch of work going into the 4.0.x-dev branch around the new webhook entities? That seems like a great improvement -- and I see the text on the project page saying this now works with standard entity events so we get ECA integration for free.
How far along is this? Is it ready to go? Is a generic webhook handler event still useful?
Have a prototype created, going ahead and sharing. This is probably more relevant for the 3.x branch, so setting back to that...
Another option would be to change the module weight to make the update hook run after Workspace's hook...
I'm not sure what the correct solution is here -- but I'm thinking that the behavior here is wrong. This is inside the update hook, after an entity has been saved -- it says it's preventing any further changes from creating another new revision. Is that necessary to prevent infinite recursion?
If not, it seems like maybe it's best to go ahead and create a new revision, since at this point the node has already been saved? Is this a concern in many models? Generally it's a really bad idea to save a node inside its own update hook -- is this too defensive to have here?
freelock → created an issue.
Closed.
freelock → created an issue.
freelock → created an issue.
Hi,
Plugin looks basically fine, but should use dependency injection to get the bsky.post_service service on line 24...
Should be along the lines of implementing the ContainerFactoryPluginInterface with a create() method and a constructor...
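Roughly like the following sketch -- the class name and base class are placeholders, only the service id comes from the plugin in question:

```php
use Drupal\Core\Plugin\ContainerFactoryPluginInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;

// Illustrative only: a plugin that receives bsky.post_service via
// dependency injection instead of calling \Drupal::service().
class BskyPostAction extends ActionBase implements ContainerFactoryPluginInterface {

  public function __construct(array $configuration, $plugin_id, $plugin_definition, protected $postService) {
    parent::__construct($configuration, $plugin_id, $plugin_definition);
  }

  public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition) {
    return new static(
      $configuration,
      $plugin_id,
      $plugin_definition,
      // The plugin manager calls create(), so the container is
      // available here without a global lookup.
      $container->get('bsky.post_service'),
    );
  }

}
```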
Cheers,
John
Added some options to the nix run .#demo target, along with the start-demo script.
You can now pass a package name, project name, and composer args to the start-demo script to install packages for other site templates/projects. This does support the one-line approach, though to do anything further with the project (such as using Drush) you will need to install the flake to get the appropriate environment.
For example:
One-liner to start XB Demo
Make a new directory and cd into it, and then run this command:
nix run github:freelock/drupal-flake#demo -- phenaproxima/xb-demo xb-demo --stability=dev
On my laptop that takes between 2 and 3 minutes to run before it opens a browser window with the Experience Builder demo loaded up. At the end of the "cms" job in your terminal window, it prints the admin username and password you can enter at /user, and then the dashboard shows the XB page you can open and edit, and you're in Experience Builder!
Start a new project from this point
The one-liner is a slick demo trick, but to continue development on a new project, you need the nix flake installed. You can do this after running the one-liner, but you might need to reinstall to use a different project name. Installing the flake first gives you more options:
nix flake init -t github:freelock/drupal-flake
direnv allow (if you have Direnv installed), or
nix develop (if you don't have Direnv available)
... Then you can use the start-demo script to install/configure a project:
start-demo drupal/core-recommended my-new-project
Renamed the custom phpfpm service to php-fpm, to avoid name collision with services flake.
Keeping the custom service configuration because the upstream one for services flake does not currently support selecting the php version or adding custom modules. See https://github.com/juspay/services-flake/issues/569 .
Looking at services-flake closer, their version of php-fpm does not provide any way to select a different version of PHP -- which is one of our core needs with Drupal Flake.
So at this point I'm going to keep our current php service, but rename it to avoid the naming collision.
The upstream uses phpfpm, so I'm renaming our local custom service to php-fpm.
abenbow → credited freelock → .
freelock → created an issue.
I'm struggling to get these working in Drupal's CI infrastructure -- the tests pass when I run them locally, but in Drupal CI I'm wondering if it's not allowed to fork the processes needed to make this all work. If anyone has thoughts on how to get the tests to run successfully, I would love some help here!
Ok! Whole new process handling is now working. See the new commands available in the devshell with '?'.
In short, after refreshing the flake to get the latest, and entering the dev shell with `nix develop` or direnv, there are new commands for starting without the process-compose TUI (text user interface). This is built for running in existing Drupal projects -- the demo still uses the TUI, at least for now.
So, to get started with this, go into a local Drupal copy and either:
nix flake init -t gitlab:project/drupal_flake?host=git.drupalcode.org (new install)
or
nix develop
refresh-flake (existing install).
Then:
start-detached
and then you can follow the URL printed to see your site!
There are a few new shell scripts that start with "pc-", which are wrappers for process-compose:
- pc-status -- check the status of process-compose
- pc-attach -- attach to the TUI (you can detach by pressing F10 -- when attached this way, F10 only closes the TUI; it does not shut down the project).
- pc-stop -- Stop the current project
- stop-all -- stop all process-compose processes, including other projects
Finally, there's an extra feature added here -- if you use the Starship prompt - https://starship.rs/ - you can install a starship module that will show when you are in a Drupal-flake project that is running. When process-compose is running in your project, you'll see a waterdrop and snowflake emoji, followed by your project name, right in the prompt:
... you can install it in your user profile using "setup-starship-prompt".
This is now working locally. Going to test more broadly.
I've also stubbed in a Starship prompt module, but it's not working...
freelock → created an issue.
freelock → created an issue.
The pushed commit did work for profiling, but broke debugging. Pushing a new commit that turns off the xdebug.log and restores debugging -- profiling is not working with this now.
Rebased for 2.0.0-beta3.
Needs testing across multiple projects, scenarios.
freelock → created an issue.
Preliminary support added. Still need to add documentation and test that it works as expected.
Added timeout to dev.
freelock → created an issue.
chadhester → credited freelock → .
Fix in MR.
freelock → created an issue.
freelock → created an issue.
Fixing on new 3.0.x branch.
freelock → created an issue.
The cacheability comment here reminds me of the Access Policy API - https://www.drupal.org/docs/develop/drupal-apis/access-policy-api -- the main point being that if you can define a cache context as a policy, you can fully use the cache. And use it to define access.
And... this seems like a great thing to integrate (somehow -- I don't fully understand how this works) with feature toggles. For example, I want to roll out a new chat widget on an enterprise site, and they don't want it to go live until a particular date that's in the next fiscal year. Putting it behind a feature flag and rolling it out now makes it very quick to enable when they are ready -- but if we could "enable" the feature for a particular set of users, activated by a magic link or user role, this would allow for much more complete testing in the production environment before turning it on to the world...
The page cache kill switch is a sledgehammer to prevent the current request from being cached in the page cache. I think it could be called on any event.
@jurgen any concern on the event id?
Fixed, rolling, and lifetime are the 3 basic models I've seen.
Fixed means a specific calendar term -- usually monthly or yearly, sometimes quarterly or weekly -- and all memberships expire at the same time. This model typically needs pro-rating if there's a payment involved.
Rolling can start any time, and just repeats every week/month/quarter/year. Generally the main issue here is purchases on the 29th-31st of the month on a monthly basis -- handling those dates when the next month doesn't have that many days.
Lifetime terms don't expire -- but this is really hard to get working in the same views with other membership terms -- needs a lot of special casing. When I've had to do these, I've punted on handling the hard stuff and just set the expiration out to a date 25 years in the future, sometimes a specific one so an alter hook can display it as "lifetime".
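The rolling-period pitfall is easy to demonstrate. Here's a minimal bash sketch (GNU date assumed; next_renewal is a hypothetical helper for illustration, not module code) that clamps the renewal day to the length of the next month:

```shell
# Naive "+1 month" overflows short months -- GNU date shows the
# exact bug (Jan 31 + 1 month normalizes Feb 31 into March):
date -d '2025-01-31 +1 month' +%F    # prints 2025-03-03

# Clamp instead: if the anchor day doesn't exist in the next month,
# fall back to that month's last day.
next_renewal() {
  local start=$1                # renewal anchor, YYYY-MM-DD
  local day=${start:8:2}        # anchor day of month
  local first_next days_next
  first_next=$(date -d "${start:0:7}-01 +1 month" +%F)      # 1st of next month
  days_next=$(date -d "$first_next +1 month -1 day" +%d)    # length of next month
  if [ "$day" -gt "$days_next" ]; then day=$days_next; fi
  echo "${first_next:0:7}-${day}"
}

next_renewal 2025-01-31    # 2025-02-28
next_renewal 2024-01-31    # 2024-02-29 (leap year)
next_renewal 2025-03-31    # 2025-04-30
```

The same clamping logic applies whatever language the membership module uses; the only real decision is whether a clamped renewal stays anchored to the original day (31st) for subsequent periods or drifts to the clamped day.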
ECA largely uses the token system to access data. So if there's an Event plugin that is dispatched when a webhook is received, that plugin would define what tokens are available -- presumably the headers, payload, etc.
Likewise, for sending a webhook, the action plugin(s) might have different options for taking a pre-serialized JSON string, or automatically serialize an object or array to JSON -- those are the main decisions that need to be made. Can start with a simple case...
This is working now.
Some PHP packages aren't available for php74 -- CodeSniffer and PHPUnit among them. So there's more to doing this than originally thought...
Merge request is working for us, for users using the global token at least.
freelock → created an issue.
Should also consider if/how SimpleNews might fit into this...