Tested locally: applying the mentioned MR changes eliminates the warning log and resolves the issue.
I tried to fix the test locally, but it didn't work for me. Trying it here.
Still a WIP MR.
I need to add tests and doc.
However, most of the new form element work is complete. Some cleanup and refinement still need to be done. I will move it to review after that.
I just wanted to share my progress.
Tests are green now and I have addressed some of the feedback. This is up for review once again.
I see the formatting mistake I was making. I was following the example `title: "[node:title]"` and didn't wrap it in '{}' (i.e., `{title: "[node:title]"}`).
I realized that after seeing the code that uses `yamlParser`.
Thank you for the prompt response.
Added @jurgenhaas to the credits, as this was managed by them in the 📌 No extra event required, simplify overall code and execution Needs review issue.
Thank you for reporting this issue. This is resolved now, and a new release has been created.
abhisekmazumdar → made their first commit to this issue’s fork.
abhisekmazumdar → made their first commit to this issue’s fork.
Reviewed this locally; it needed a small change, which I updated in the MR.
Closing this in favor of: 📌 No extra event required, simplify overall code and execution Needs review
Thank you for reporting the issue. I'm working on a fix.
I followed the steps and was able to start chunking the content.
I took a quick glance at the code. Everything appears good, except for a single inline comment.
Sharing some progress on the concept of a custom module, "AI Experience Wizard".
Created a concept of an AI Provider Registry (a single source of truth for all supported AI providers; see the sketch below):
This makes it easy to adopt a new AI provider.
- One page which lists all the supported AI providers
- A batch process which pulls the module and installs it
- A final form which creates a key and sets up the AI provider
This is the same form that any AI provider shows to set up / select the key.
The key_select-type field gets replaced with a text input field which, in one click, creates the key and sets it for the selected AI provider.
The next step is to apply any AI recipe based on the selected AI provider.
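To make the registry idea a bit more concrete, here is a minimal sketch of what the registry data could look like, assuming a plain info array keyed by provider machine name (the function name and keys are placeholders, not an existing API):

```php
<?php

/**
 * Hypothetical registry: one entry per supported AI provider.
 *
 * The function name and keys are placeholders, not an existing API.
 */
function ai_experience_wizard_provider_registry(): array {
  return [
    'openai' => [
      'label' => 'OpenAI',
      // Composer package the batch step would pull in via Package Manager.
      'package' => 'drupal/ai_provider_openai',
      // Module to install once the package is available.
      'module' => 'ai_provider_openai',
    ],
    'anthropic' => [
      'label' => 'Anthropic',
      'package' => 'drupal/ai_provider_anthropic',
      'module' => 'ai_provider_anthropic',
    ],
  ];
}
```

The listing page, the batch install step, and the key form would all read from this one array, which is what makes adding a new provider a single-entry change.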
Please note: this is still a concept, and any suggestions and inputs are always welcome.
As a sub-module maintainer of AI Validations, I lean toward following the core deprecation policy, with some additions:
- Release 1.3.0 with deprecation notices + composer suggestions (notice sketched after this list)
- Update hooks to assist migration
- Keep a transition period where both old and new work
- Clear documentation with step-by-step migration guides
Also creating a specific issue for migration tooling/documentation that applies to all sub-modules.
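For the deprecation notices in 1.3.0, I'm assuming the usual core pattern of triggering a deprecation error from the old code path; a minimal sketch (plugin name and wording are placeholders):

```php
<?php

// Triggered from the old code path in 1.3.0, so existing sites keep working
// through the transition period while still surfacing the migration path.
// Plugin name and wording are placeholders.
@trigger_error('The "example_validator" plugin is deprecated in ai_validations:1.3.0 and will be removed in ai_validations:2.0.0. Use the replacement from the new sub-module instead. See the migration guide in the change record.', E_USER_DEPRECATED);
```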
Nicee 🎉
Thank you for resolving the mystery. I can take care of the rest of the changes.
Thank you for the review. This is merged now.
Functionally, it resolves the issue. Code changes are also good.
@nikro
I just reviewed the https://git.drupalcode.org/project/ai_provider_dropsolidai/-/commit/38ae... changes.
The code changes look good, and they work fine locally. I see the "LangFuse module not installed" warning message, and if it's not configured, it asks to configure it.
So, RTBC from me.
Thank you, @jurgenhaas, for the quick feedback.
I missed the ai_integration_eca_install hook. I reverted those suggested changes and made some improvements.
I followed the steps mentioned in comment #13 by @littlepixiez (thanks for the steps; that saved a lot of time figuring out the issue).
I feel we should keep the plugin ID as eca_ai_trigger_agent rather than changing it to follow the ai_integration_eca_* pattern.
I made some changes and added a commit.
The changes basically add support for migrating the ai_eca_agents submodule too.
I realized we were only handling the main ai_eca submodule, and I feel we need to do the same for ai_eca_agents, so I added migration support for that submodule as well.
Also cleaned up the code a bit: instead of having separate loops for each submodule, I made it use an array so it's easier to maintain.
The eca_ai_trigger_agent plugin stays the same; since it doesn't start with "ai_eca_", it won't get touched by the migration anyway.
I see that no release was made which covers these changes. I'm making one right now: 3.0.0-beta6.
I see this was merged. Marking this as Fixed.
I made some updates for the phpcs and phpstan issues. Created an issue to take care of the deprecations as well: www.drupal.org/project/mautic_eca/issues/3550474
abhisekmazumdar → created an issue.
@nikro, thank you for the inputs.
Here's what I'm thinking; I have started working on a POC as per the suggestion you gave:
- A contrib module 'AI Experience Wizard' sounds good to me.
- But yes, we need to write a single install task to hook into the installer (sketched below) so that this custom module, or a form for selecting the providers, is the next thing a user sees after the site is installed.
- After the installer does its job of getting the provider name, this custom module can kick in and use Package Manager to pull the required module and set it up.
I will work on this concept and let you know what is possible and what is not. I still have some open questions, but they should get resolved as I start working on it.
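For the installer hook-in, I'm currently looking at something along the lines of hook_install_tasks(); since only install profiles can implement it, the exact shape may change. A rough sketch with placeholder names:

```php
<?php

/**
 * Implements hook_install_tasks().
 *
 * Placeholder names; only install profiles can implement this hook.
 */
function myprofile_install_tasks(&$install_state) {
  return [
    // Shown right after core finishes installing the site.
    'ai_experience_wizard_provider_form' => [
      'display_name' => t('Choose an AI provider'),
      'type' => 'form',
      'function' => 'Drupal\ai_experience_wizard\Form\ProviderSelectionForm',
    ],
    // A later batch task would then use Package Manager to pull and install
    // the selected provider module before the key setup form runs.
  ];
}
```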
abhisekmazumdar → created an issue.
abhisekmazumdar → created an issue.
Also unassigned the issue from @arisha so that anyone else can pick this up for review.
This shouldn't be marked as fixed. It should be marked as needing review so that the maintainer and others are aware that it requires attention before the MR can be merged.
Just one phpstan error. I don't see any issues with the code.
I'm not sure what I'm missing here:
------ --------------------------------------------------------------------
Line modules/field_widget_actions/src/Hook/FieldWidgetAction.php
------ --------------------------------------------------------------------
38 Attribute class Drupal\field_widget_actions\Hook\Autowire does not
exist.
🪪 attribute.notFound
------ --------------------------------------------------------------------
abhisekmazumdar → created an issue.
The new changes done by @a.dmitriiev are good and something we should have in this action.
Thank you for the improvements. I hope this gets merged soon :-)
Thank you for the inputs. I have merged this and created a new release.
Nice, made me so excited to test this.
I needed to pull "drupal/modeler_api": "1.0.x-dev", as ai_agents needs that.
Other than that, I was able to override an existing AI agent by following the mentioned steps.
I have a few questions:
Should we add a new config settings page for this?
Or keep it under the existing config page: https://git.drupalcode.org/project/chromium_tool/-/blob/1.0.x/chromium_t...
That form will have a select option for the different image styles available on the site.
We could add an image style on installation that scales the height or width so the largest dimension stays under 1500px.
This will be the default config for this module.
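A rough sketch of the default image style the module could create on install (machine name and label are placeholders):

```php
<?php

use Drupal\image\Entity\ImageStyle;

// Scale proportionally so the largest dimension stays under 1500px;
// upscale is off so smaller images are left alone.
$style = ImageStyle::create([
  'name' => 'chromium_tool_max_1500',
  'label' => 'Chromium Tool (max 1500px)',
]);
$style->addImageEffect([
  'id' => 'image_scale',
  'weight' => 0,
  'data' => [
    'width' => 1500,
    'height' => 1500,
    'upscale' => FALSE,
  ],
]);
$style->save();
```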
Current industry standards, after comparing a few providers (an example request mapping is sketched after this list):
OpenAI
- auto (default): The model decides whether to call functions and which ones to use
- none: Disables function calling, ensuring text-only responses
- required: Forces the model to call at least one function
- function: Forces the model to call a specific function by name
Then there are parallel tool calls. Not sure how, or whether, we will do this.
Groq
- auto: Model decides tool usage autonomously
- none: Text-only responses, no tool calls
- required: Mandates tool usage
Anthropic
- auto: Equivalent to OpenAI's auto mode
- any: Similar to OpenAI's required mode
- tool: Forces specific tool usage
OpenRouter
- auto: Let model decide (default)
- none: Disable tool usage
- function: Force specific tool
LiteLLM
- It has something similar to OpenAI itself: auto, none, required
- It can also do parallel tool calls
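To make the comparison concrete, this is roughly what the OpenAI side looks like in a chat request payload (illustrative only; how we normalize and pass this through in the AI module is still open):

```php
<?php

// Illustrative OpenAI chat request payload; the matching 'tools' definition
// is omitted here for brevity.
$payload = [
  'model' => 'gpt-4o',
  'messages' => [
    ['role' => 'user', 'content' => 'What is the weather in Berlin?'],
  ],
  // 'auto' | 'none' | 'required', or an array forcing one specific function:
  'tool_choice' => [
    'type' => 'function',
    'function' => ['name' => 'get_weather'],
  ],
  // OpenAI-specific: set to FALSE to disable parallel tool calls.
  'parallel_tool_calls' => FALSE,
];
```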
I have updated the MR. Please review.
I'm also unsure about #9 from @murz. I'll leave it to @marcus_johansson to decide if we still need a library here.
abhisekmazumdar → created an issue.
I see that comment #18 was addressed in the latest commit. Everything else has already been reviewed and approved, so I'm moving this to RTBC.
This is reviewed and merged. Thank you.
Yes, I like the new approach. Both make the summary preview in a Slack link more useful and readable.
I will try to get this merged next week. Cleaning up the linting issues is good to have.
Thank you all for moving this forward. I have created a new release: field_validation 3.0.0-beta5.
With some AI-assisted coding, I tried to find the possible cases that needed to be tested. Please review.
abhisekmazumdar → made their first commit to this issue’s fork.
abhisekmazumdar → created an issue.
abhisekmazumdar → created an issue.
I can take care of that. Yes, I agree the test cases should be broader.
Thank you for clearing up my confusion. I tested it again and it looks good to me.
I merged these changes on top of 📌 Use a different model for LLM evaluation Active
Here's what I see when running the drush command:
Command: ddev drush agetes --group_id=1 --uid=1 --eval_provider=openai --eval_model=o3 --detailed
[notice] Running tests as user: admin
[notice] Running test group: Basic Pages Test Group
[notice] Overriding LLM evaluation model: openai/o3
[notice] Running test 1 of 3: Test ID 1 (nomic-embed-text:latest)
[error] Error invoking model response: "nomic-embed-text:latest" does not support chat
[notice] Running test 2 of 3: Test ID 2 (nomic-embed-text:latest)
[error] Error invoking model response: "nomic-embed-text:latest" does not support chat
[notice] Running test 3 of 3: Test ID 3 (nomic-embed-text:latest)
[error] Error invoking model response: "nomic-embed-text:latest" does not support chat
[notice] Test group completed. Results available at: /admin/content/ai-agents-test/group/result/9
------------- --------- ---------------------- --------------------------------------------------------------------------
Id Result Label Message
------------- --------- ---------------------- --------------------------------------------------------------------------
test_1 Error Is Basic Page Error: Error invoking model response: "nomic-embed-text:latest" does not
content type sticky support chat, Line: 309, File:
by default /var/www/html/web/modules/contrib/ai/src/Plugin/ProviderProxy.php
test_2 Error Is Basic Page Error: Error invoking model response: "nomic-embed-text:latest" does not
content type support chat, Line: 309, File:
promoted by default /var/www/html/web/modules/contrib/ai/src/Plugin/ProviderProxy.php
test_3 Error Is Basic Page Error: Error invoking model response: "nomic-embed-text:latest" does not
content type support chat, Line: 309, File:
published by default /var/www/html/web/modules/contrib/ai/src/Plugin/ProviderProxy.php
group_total Failure Group: Basic Pages 0/3 tests passed | Duration: 0s | Group Result ID: 9 | Success rate:
Test Group 0.00%
------------- --------- ---------------------- --------------------------------------------------------------------------
I think I can move this forward, at least with the initial work. Later on, we can decide about the validator thing.