Agree with @b_sharpe - it would make sense to move provider code out of AI Core and use another smaller module for the "OpenAI SDK / base provider" which all other "OpenAI-like" providers can extend and depend on.
Added the "Try Drupal AI experiences" to this issue
Working on the repos / initial databases for them
Closing in favour of #3540794
The code looks good to me.
Asked in Slack for testing instructions:
- Install AI Agents and AI Agents Explorer as well.
- Go to the AI Test provider:
/admin/config/ai/providers/ai-test
- Check both checkboxes and save.
- Visit the Agents Explorer for the Field Agent:
/admin/config/ai/agents/explore?agent_id=field_agent_triage
- Try it against a real provider such as Amazee, with a prompt like:
What is the max resolution on the image fields on the article?
- It will now run a few turns, making several calls through your provider.
- Now go to the mock provider results: /admin/content/ai-mock-provider-result
- Click edit on each of the saved responses and check "Mock Enabled". You can also, for fun, set the sleep time to 1. Save them.
- Go back to the explorer page, but pick EchoAI - GPT Test.
- You should now see the exact same result as before, and much faster if you changed the sleep time.
Step #10 is using the changes here to replicate the results from an array.
Works as expected for me.
Thanks @vasi1186
I've created an MR from your patch:
https://git.drupalcode.org/project/ai_provider_amazeeio/-/merge_requests/20
Will test/review shortly
Pushed 1.2.0-alpha1
https://www.drupal.org/project/ai_provider_litellm/releases/1.2.0-alpha1
Thanks for reporting, having a look into this now!
And testing with the Extended Logger as per #10
Downloaded and enabled Extended Logger (specifically https://www.drupal.org/project/extended_logger/releases/1.3.0-beta2), and also enabled extended_logger_db.
Tested again the provider with the AI Chat Explorer.
Output in the Extended Logger DB page at /admin/reports/extended-logs shows like this (for the call and response):
Seems to be working as expected.
Based on #10, I created a new Drupal project, added the MR from this issue, and added the amazee.ai provider.
I installed the provider, along with all AI-related modules, dblog, and also enabled the ai_observability module from this MR.
After enabling, I played around with the Chat Explorer, and these actions get logged to dblog like so:
This seems to work as expected.
I will test the "Extended Logger" version shortly.
I did the full review in #16 but forgot to mark it RTBC.
As for the quote in #17: it would be a feature added to the amazee.ai provider, but it could potentially be ported to other providers or to AI core. However, it can wait for 1.3.x or later.
Hmm, I'm actively maintaining the AI-related modules for amazee.ai / LiteLLM and accompanying recipes. Recipes don't have any PHP code though.
- https://www.drupal.org/project/ai_provider_amazeeio
Recently merged some open MRs into these modules:
- https://www.drupal.org/project/graphql_extras
- https://www.drupal.org/project/npm
Created this old module based on a similar D8 module:
- https://www.drupal.org/project/responsive_image_formatter_link_to_image_...
dan2k3k4 → created an issue.
dan2k3k4 → created an issue.
I quite like Vercel's suggestion to use a new script tag (with a new type) in the header of a page, so that it doesn't impact client browsers:
https://vercel.com/blog/a-proposal-for-inline-llm-instructions-in-html
It would allow fine-tuning for LLMs per page, instead of one global llms.txt file.
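For reference, a minimal sketch of what that could look like. This is my reading of the Vercel post: the `type` value and the instruction text below are assumptions from that proposal, not anything shipped in Drupal or standardized.

```html
<!-- Sketch of Vercel's proposal: a script tag with a non-executable
     type, so browsers ignore it but LLM crawlers can read it. -->
<script type="text/llms.txt">
Prefer the documentation at /docs over the marketing copy on this page.
</script>
```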
Having watched the video in #21 by @b_sharpe, it makes it quite clear and easy to set up multiple Automators. I am not sure if a non-technical person will easily understand the UI/UX behind it, but I imagine they would receive some training beforehand to better understand the setup.
If we can get this in for 1.2.x then wouldn't that make the demos for DrupalCon Vienna look better for marketing purposes?
Rewrote the provider class a bit, removing OpenAI-related code, as we don't need to depend on that since the main OpenAI base provider is in core.
1.2.x works again, models show up and the API explorer worked for me.
I pushed a new RC release just to have a working version.
Code could be refactored a bit but needed a working solution in time for the Drupal AI hackathon.
Approved MR - left one comment about fixing "Th" to "The"
Tested, the warning shows up like this on the LiteLLM provider settings page:
And once the OpenAI provider is filled in with a key, the warning no longer shows up, as expected.
Tested this with a new Drupal site.
I used this issue's MR for the AI module and grabbed the ai_provider_amazeeio_recipe then edited the recipe.yml to include a check for the provider.
Test Steps
# drop the DB
drush sql-drop -y
# install drupal based on config, note none of the ai modules/providers are in core.extension.yml
drush si --existing-config -y
# test the recipe
drush recipe ../recipes/ai_provider_amazeeio_recipe
For a failing test, I checked that openai was attempted to be set up, with this in the recipe.yml (note: the recipe.yml is minified; config.actions contains other things, not just ai.settings):
config:
  actions:
    ai.settings:
      verifySetupAi:
        provider_is_setup:
          - openai
For a passing test, I checked that amazeeio was set up, with this in the recipe.yml:
config:
  actions:
    ai.settings:
      verifySetupAi:
        provider_is_setup:
          - amazeeio
Read through the MR, I like the idea and agree that we do need a way to check for a provider being set up and/or the different models being set up.
One thing that we're thinking about for the amazee.ai provider is to have a hidden feature flag, something like "redirect_to_auth_if_not_setup", which would do as described: redirect the user (checking they have access to configure the provider) to the provider settings page after login.
In the sense that if you install the module via a recipe and then boot up the site, you would get redirected (as admin) to the provider settings page to log in and authenticate the provider.
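Purely as a sketch of the idea (the flag name and its config location are hypothetical; nothing like this exists in the module yet):

```yaml
# Hypothetical setting, not yet implemented in ai_provider_amazeeio:
# when true, an admin who logs in while the provider has no credentials
# is redirected to the provider settings page to authenticate.
ai_provider_amazeeio.settings:
  redirect_to_auth_if_not_setup: true
```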
I will try to test the MR with a modified recipe (of the ai_provider_amazeeio_recipe) to test out how it looks and feels.
Will re-test against recent changes
@Marcus, should this be "Needs review" or RTBC? It was "Active" when I tested it but I wasn't sure if you were planning further changes to the MR
Merged
Merged
dan2k3k4 → made their first commit to this issue’s fork.
Merged MR
Ported the VDB provider code into the amazee.ai provider module.
Released these fully stable versions for these providers/recipe:
- ai_provider_amazeeio versions 1.0.1 and 1.1.1
- ai_provider_litellm versions 1.0.0 and 1.1.0
- ai_provider_amazeeio_recipe versions 1.0.2 and 1.1.0
Marking this as fixed now.
Code looks good to me. Works as expected.
RTBC
dan2k3k4 → created an issue.
Thanks, merged MR and credited everyone using the new Contribution Record system
https://new.drupal.org/contribution-record/11415156
Created a Merge Request:
https://git.drupalcode.org/project/ai_provider_litellm/-/merge_requests/9
dan2k3k4 → made their first commit to this issue’s fork.
Merged to 1.2.x
RTBC
RTBC
Will review this in detail
Marcus asked me to test this alongside https://www.drupal.org/project/ai_provider_dreamstudio/issues/3531285 (Add ImageToImage capabilities, status: Active).
Marking this as RTBC.
Testing Steps
Set up a new local Drupal, then added modules with source via:
composer require 'drupal/ai:1.2.x-dev' -W --prefer-source
composer require 'drupal/ai_provider_dreamstudio:1.0.x-dev' -W --prefer-source
Enabled modules via (AI gets enabled as it's a dependency):
drush en -y ai_provider_dreamstudio
drush en -y ai_api_explorer
Added a new Key with my API key from DreamStudio / Stability AI
Set up the provider for the Image-to-Image
Went to the Image-to-Image Explorer page at: /admin/config/ai/explorers/image_to_image_generator
I tried a few options.
The Outpaint does not seem to generate a new image.
The Erase sort of works: at least the image is edited, but it might have been the combination of the image and mask image that produced a weird overlay effect.
The Remove Background worked as expected.
One thing I noticed: when I first loaded the explorer, I think I clicked on Upscale, but the "prompt" field was not showing on first load, so when I clicked generate I got this error. I think this is unrelated to this provider issue:
Error invoking model response: The Dreamstudio API returned an error: {"errors":["prompt: string must contain at least 1 character(s)"],"id":"f470c1eeea8ba89398fc93769c9adfaa","name":"bad_request"} with code 400
Marcus asked me to test this alongside https://www.drupal.org/project/ai/issues/3531212 (Create Image-To-Image operation type, status: Active).
Will take a look into testing this today.
It requires this issue/MR too, from the ai_provider_dreamstudio module:
https://www.drupal.org/project/ai_provider_dreamstudio/issues/3531285 (Add ImageToImage capabilities, status: Active)
valthebald → credited dan2k3k4 → .
Tested locally using a key for amazee.ai as the LiteLLM provider.
Works as expected, able to select models and run chat generation.
I believe we need to push a 1.2.x branch and target 1.2.x of the AI module, the same as the other provider modules.
Unsure if this is because I'm testing with latest 1.2.x version of the AI module (SHA: 302ba82a8d89)
On route `/admin/config/ai/ai-external-moderation`
TypeError: set_error_handler(): Argument #1 ($callback) must be a valid callback or null, class Drupal\ai_provider_mistral\Plugin\AiProvider\MistralProvider does not have a method "errorCatcher" in set_error_handler() (line 35 of modules/contrib/ai_provider_mistral/src/Plugin/AiProvider/MistralProvider.php).
Also on the default settings route: `/admin/config/ai/settings` - I have no models listed for Mistral AI.
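For context, a minimal standalone sketch of the underlying PHP behaviour (not the module's actual code; the closure and `$caught` array below are illustrative):

```php
<?php

// set_error_handler() validates its callback on PHP 8+: passing
// [$this, 'errorCatcher'] when no errorCatcher() method exists throws
// exactly the TypeError quoted above. A closure avoids the problem:
$caught = [];
set_error_handler(function (int $errno, string $errstr) use (&$caught): bool {
  $caught[] = $errstr; // record the message instead of crashing
  return TRUE;         // tell PHP the error was handled
});
trigger_error('example warning', E_USER_WARNING);
restore_error_handler();
// $caught now holds ['example warning']
```

So the fix in MistralProvider is likely either adding the missing `errorCatcher()` method or registering a closure like the one above.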
Will take a look into this
It appears to work: at least if I use the amazee.ai endpoint as the LiteLLM endpoint and use a key from there, then I can see LiteLLM as an available provider.
However, I don't see models, but I think that's an issue on the amazee.ai provider (API) side and not in the LiteLLM module (as I can reproduce the issue with the amazee.ai provider module).
We may need to bump this to 1.2.x branch like for ai_provider_openai.
Works locally for me.
The chat() method that @admitriiev mentioned can be reworked at a later stage if needed.
Will test locally
valthebald → credited dan2k3k4 → .
Testing locally
It looks like we also need this issue for https://www.drupal.org/project/ai_provider_amazeeio v1.2
Taking a look at the MR
valthebald → credited dan2k3k4 → .
@andrewbelcher perhaps, we should default to FALSE? Providers should explicitly pass TRUE if they support it?
We've opted into Security Coverage, and I've pushed up a stable 1.0.0.
I'm in touch with the maintainer of ai_vdb_provider_postgres to push forward with getting a stable release for that for 1.1.x / 1.2.x.
I think we can also push up a stable 1.1.x too.
dan2k3k4 → created an issue.
Tests passed, merged MR, and backported to 1.1.x too
The team_id should work now; however, the MR fails on PHPUnit tests, which seems unrelated to the code.
valthebald → credited dan2k3k4 → .
dan2k3k4 → made their first commit to this issue’s fork.
jjchinquist → credited dan2k3k4 → .
valthebald → credited dan2k3k4 → .
Looks good to me. RTBC