@marcus Would I be right in thinking this needs to be resolved in https://git.drupalcode.org/project/ai_provider_openai/-/blob/1.0.x/src/P... rather than this module?
Can you confirm which provider you are using?
All good :)
I suspect there is something wrong with your setup, as the provider should never be null: your workaround is probably not going to be something we'd want in the module unless there's a more generic bug to be addressed. Can you recreate this issue on a clean install on a D10 test site?
I don't think your change is ideal as the $response variable isn't created when there's an error, which may cause other errors.
Looks OK, but do you have any idea why you are getting definitions that don't appear in the form state?
I'll close this as it relates to a third party module then.
Is there anything in this that is particular to providing an AI search? I'm just wondering if it can be achieved in custom code using the existing SAPI hooks or events, or if it's something that might be better suited to the main SAPI module.
Looks fine to me
I suspect that will be different on every installation so I'm not sure there's a lot that can be done about it, unless we have a maximum weight we can use?
@marcus Does this need a change in the code, or possibly a trouble-shooting section in the docs?
@valthebald I'm an idiot and we've both over-engineered this when there's a much easier solution that should fix it for all translatable entities.
This is caused by the assumptions we have to make in https://www.drupal.org/project/ai/issues/3498696 (RouteSubscriber for automator chain assumes node types) about the structure of entities' translation routes, so there may well be other modules that cause similar errors. The MR adds special handling for the commerce variation entity so should resolve this specific instance.
mrdalesmith → made their first commit to this issue’s fork.
@anjaliprasannan Your screenshot shows the button being visible so I'm a little unsure of the issue? Can you confirm you've followed the instructions in https://project.pages.drupalcode.org/ai/modules/ai_ckeditor/ - especially item 5 under Installation & configuration which explains the individual plugins have to be enabled as well as the AI Assistant being added to the ckeditor toolbar?
I didn't say it wasn't on purpose - I just said that you couldn't do it :) If you have a text field, you can't fill it with the output of telling a chat operation type to describe the image - the best you can currently do is set the alt text on the image in an image field, unless I'm misunderstanding the current setup. This feels like something someone would want to do (the media entity for example is often set up with a separate "caption" field which is different to alt text) but currently can't.
LGTM :)
There are failing tests https://git.drupalcode.org/issue/ai-3505528/-/jobs/4303939. The maintainers have mentioned that any code changes that might cause other custom code or contrib modules to receive errors can't go into the module until it has a 2.0 release, so this may need some caution.
Added some queries and noted missing documentation.
You can't select a media field as the base field in basic mode: working on a Drupal CMS set up with the "Blog" recipe, the only things I can use as context for a text field are other text fields.
mrdalesmith → created an issue.
I'm not sure that's the way to go: if $this->prePrompt() is empty you're just going to get the same error. It might be better if we check that the values exist before passing them to str_replace()?
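A minimal sketch of the guard I mean - the function and placeholder names here are illustrative, not the module's actual API:

```php
<?php

// Sketch only: coalesce potentially-missing values to empty strings so
// str_replace() always receives strings, instead of failing when
// prePrompt() returns nothing. Names are hypothetical.
function buildPrompt(?string $prePrompt, ?string $context, string $template): string {
  $replacements = [
    '{{ pre_prompt }}' => $prePrompt ?? '',
    '{{ context }}' => $context ?? '',
  ];
  return str_replace(array_keys($replacements), array_values($replacements), $template);
}
```

That way an empty pre-prompt just produces an empty substitution rather than an error.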
You can expose views settings as part of the viewsreference field settings - like I say, it's going to be better to hand over the "views in a field" side of it to a module that's dedicated to doing it, otherwise you're just going to duplicate and/or possibly introduce issues.
Looks like the Automators are quite picky about what types of field they'll let you pick as the base field in the basic mode, but what I have so far is:
1. Install and enable Viewsreference
2. Add viewsreference field to your node.
3. Add a text_area field to your node.
4. Enable an Automator on the text_area field: use Advanced mode (tokens) and a prompt similar to: "Summarise the following context to a description of not more than 500 words in a light and breezy tone. Context: [node:field_views]"
When you create a new node, you are able to select a view: on save, the view is rendered and passed to the AI as context to the prompt. The AI response is then added to your text_area field.
It all works as above with no code changes required: possibly we might want to make some changes so that the Basic mode will let you select an extended range of fields as context, but I'm not sure whether the basic mode sends the plain field value (which would just be the view and display name) or actually renders it (which is what we'd need for it to work there).
I'm going to knock this back to needs review so someone can test it who didn't do the code changes.
@marcus Can I double check what it is you are expecting from this automator? I think you're saying an automator that can:
1. Allow a user to select a view
2. Pass the output of that view to the AI as context
3. Fill in the value of a field based on the AI output.
If that is what you're after, I *think* the existing automators should already be able to handle that if you use a https://www.drupal.org/project/viewsreference → field as your input for the automator without anything new needing coding. This would also mean the permissions on whichever view you were using would be respected.
The easiest fix for this would be to allow the model to be a null value, but it feels like this would lead to more issues further down the line. Rather than this being a separate issue to how we handle keys that can't access model lists, I think that's the issue we need to focus on: if we decide that these keys can't be used, this specific issue goes away.
I think we need a discussion here about whether there are circumstances under which a key that can't list models can still be used by the AI modules, and if not block them from being used. If we decide they can be used, that's going to need a bit of thought about how we handle the fact that we might not know which model can be used.
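To make the option concrete, this is roughly the kind of up-front check I mean - a sketch only, with an assumed function name and exception type rather than anything in the module today:

```php
<?php

// Illustrative sketch: reject a key at configuration time when the
// provider cannot list any models for it, so a NULL model never
// surfaces further down the line. The function signature and the
// exception used are assumptions for discussion purposes.
function assertKeyCanListModels(array $models, string $keyId): void {
  if (empty($models)) {
    throw new \RuntimeException(sprintf(
      'The key "%s" cannot list any models and cannot be used.',
      $keyId
    ));
  }
}
```

If we instead decide such keys are usable, this check would need replacing with some way of choosing a model without a list to pick from.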
If you have changes to push you need to click the green "get push access" button above and make sure you've followed the instructions at https://www.drupal.org/docs/develop/git/using-gitlab-to-contribute-to-dr... → for whichever method you want to use to contribute.
OK there's a couple of things wrong with your field configuration that explains why it wasn't working:
1. You should use the LLM: Taxonomy Reference AI Automator Type to populate a taxonomy term, rather than the generic LLM Entity Reference one.
2. Your Automator Base Field is currently the Revision Log message - presumably this is where you're getting the error if the automator is trying to extract the value of this field and it hasn't been set when you save your node. The Automator Base Field should be the field that contains the context you want to send to the LLM.
3. The LLM: Taxonomy Reference and LLM: Entity Reference automator types don't support sending an image as context to the AI - that requires using a different type of AI to analyse it and currently the automators for most fields only support using the "chat" operation type. Because of that, what you're trying to do won't currently be possible.
4. The prompt you are using is using Drupal tokens to embed the value of the field: the Automator Prompts don't currently decode Drupal tokens so the image won't get sent to the AI this way either.
5. Your prompt doesn't include the {{ context }} placeholder, so the base field you have selected won't get sent to the AI either.
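For reference, a prompt that actually sends your selected base field to the AI would look something like this (the wording is illustrative - the important part is the {{ context }} placeholder):

```text
Choose the taxonomy terms that best describe the following text: {{ context }}
```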
You might need to read through the instructions for the automators to help you configure them correctly - https://project.pages.drupalcode.org/ai/modules/ai_automators/
However, I'd agree that the automator should handle misconfiguration of this kind in a more graceful manner, so I'll leave this issue open so we can prevent the fatal error.
The Deepchat chatbot requires an additional, optional library to render markdown - if you check your site's status page, it should tell you if the library is not present - can you check that for us?
Can you provide more details of which specific AI functionality you are using, and where you are expecting the markdown to display? Bear in mind that if the markdown is being set as the value of a text_format field you will need something like https://www.drupal.org/project/markdown → for Drupal to actually be able to render it: markdown is not supported by default in Drupal.
Can you provide the settings you put on the field for the automator? We'll need to see what the AI was being asked to do to properly recreate the issue.
Can be merged ASAP
Code LGTM
LGTM
LGTM - needs testing by someone on a sub-directoried site.
I'd be happy with two separate recipes, but I suspect that would go against the ethos behind them: a chatbot recipe would necessarily require the "basic AI" recipe so you'd end up with a confusing UX, I think. The use case of this specific issue is fairly straightforward: if you want to use the recipe to put an AI chatbot on your site but you want it to use the new DeepL AI instead of OpenAI, you currently can't. You'd have to pretend you were going to use OpenAI or Anthropic, and then manually disable those providers after you'd got the chatbot configuration you wanted. This seems to go against the idea of people being able to set up Drupal to do things without having to know how.
I don't see the issue with some stages of configuration being optional - many modules have features that are optional, so why can't recipes reflect that? But perhaps this is just a limitation of the recipes idea in general: ideally what you'd be able to do is pick your modules and have them downloaded and configured through the recipe, but the reliance on composer require means - in this specific instance - you will probably get at least one module downloaded and enabled that you don't want.
That all looks good to me - it probably needs a follow up ticket to then use this new provider with the other AI translation methods, but I don't think that needs doing here urgently. @marcus are you happy with this?
LGTM
I'm aware @valthebald, but not everyone using the module will be: I am suggesting that the documentation for the module in the docs folder needs updating to explain this as part of this MR before it is merged so that everyone can be aware of why this is needed when they come to configure the module.
@rhristov should this be in needs review?
mrdalesmith → changed the visibility of the branch 3503980-the-translation-submodule to hidden.
This looks OK to me, but I'll let a maintainer review the actual message.
@wouters_f this can't be in Needs Review until you create an MR to review.
Is there a reason the plugin name is given as THE_NAME_OF_THE_CHOSEN_ACTION when all the other placeholder values use spaces instead of underscores?
Looks good to me
You have a PHPCS fail from using the backslash within your t() declaration. https://git.drupalcode.org/issue/ai-3503853/-/jobs/4215205
I believe the easiest way would be to create a custom exception and throw it, allowing other code to react to the event if it needs to.
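Something along these lines is what I have in mind - a minimal sketch, with the exception class name and surrounding function invented for illustration:

```php
<?php

// Sketch only: a dedicated exception type lets calling code catch this
// specific failure and react to it, rather than matching on a generic
// \Exception. The class and function names are hypothetical.
class AiProviderNotAvailableException extends \RuntimeException {}

function requireProvider(?object $provider): object {
  if ($provider === NULL) {
    throw new AiProviderNotAvailableException('No AI provider is configured.');
  }
  return $provider;
}
```

Other modules could then wrap their calls in a try/catch for AiProviderNotAvailableException and degrade gracefully.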
You have PHPCS fails because you aren't properly injecting the request: these will need to be fixed.
There would also need to be some consideration about what to do where multiple references were found, and limiting the search to only appropriate words ("and", "if" and "I" being examples of words it wouldn't be appropriate to carry out a search on).
What we did for the providers may be useful here: ai_update_10301() checks the various possibilities for whether the deprecated or replacement module exist and are enabled and reacts accordingly. Something similar would be useful for ai_eca as well as deprecating the module.
Still nothing in the way of documentation as to why you would want a provider for translation that duplicates any other you might be using.
That error comes if you don't clear caches after adding the new cache context: can you confirm if you did that?
Can confirm the issue exists, and is because the entire translation process has been coded on the assumption that all fields have only one translatable string that is contained in a "value" key. As the text with summary field stores any custom summary in a "summary" key, this gets skipped.
The solution would seem to be to create a specific FieldTextExtractor plugin that specifically handles text_with_summary field types for pushing the data into the translated field, but we will also need to make changes to the AiTranslateController so that it actually translates both values: I wonder if we should include in the extractor plugins some kind of field template that the controller can use to identify what values need to be sent for translation? Otherwise we might need to include a lot of if statements in translateSingleText().
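The "field template" idea could look something like this - a rough sketch with invented names, just to show the shape of it:

```php
<?php

// Sketch only: each extractor declares which keys of a field item hold
// translatable strings, so the controller can loop over them instead of
// hard-coding 'value'. Function names and the template map are
// hypothetical, not the module's real API.
function getTranslatableKeys(string $fieldType): array {
  $templates = [
    'text_with_summary' => ['value', 'summary'],
    'text_long' => ['value'],
    'string' => ['value'],
  ];
  return $templates[$fieldType] ?? ['value'];
}

function extractTranslatableStrings(array $fieldItem, string $fieldType): array {
  $strings = [];
  foreach (getTranslatableKeys($fieldType) as $key) {
    if (isset($fieldItem[$key]) && $fieldItem[$key] !== '') {
      $strings[$key] = $fieldItem[$key];
    }
  }
  return $strings;
}
```

The controller would then translate every returned string and write each one back to its own key, which avoids the pile of if statements in translateSingleText().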
I'm a little nervous of doing anything with this until https://www.drupal.org/project/ai/issues/3478721 (Discuss: Unify translate operations over ai) and possibly https://www.drupal.org/project/ai/issues/3480683 (Use core Json::encode() and Json::decode everywhere possible) have been merged, however. We may be best to postpone whilst those are reviewed?
Can you confirm that you set the Entity Reference Translation options at the bottom of the form at /admin/config/ai/ai-translate?
Tested and translation works without error using the ai_translate module's form. The issue description mentions unifying the provider with the AI CKEditor module as well, but the settings for that still only allow me to set the chat provider for my OpenAI provider, not the spoofed Chat Proxy to LLM provider that I set up in the AI Default settings. Not sure if it's out of scope for this work or not though, which as I say works fine.
I too was confused about why we wanted to have this new provider for translation, until I read the explanation in the MR discussion: it would be good if we could find somewhere to show this on screen, but at the very least I think we need to update the documentation for ai_translate as part of this work.
It sounds to me that what you are looking for is too specific an implementation for this general module and probably needs to be its own module.
If I'm understanding you correctly, what you're looking to do is provide an AI with a known "good" paper so it can "mark" students papers against that. This might be something that a general AI could do if provided with example data, but for more robust results you'd really need to be training a custom LLM on a large number of "good" papers to achieve this, which would definitely be out of scope for this module.
Your first step would be to do some testing against a few of the existing LLMs with your own prompts defining what you want it to do, and assess the quality of the results you get back. That would then give you some basis for deciding which way to go next.
I may be confused about the issue you're raising here, but can you not set up the automators on the media entity's image field (probably at /admin/structure/media/manage/image/fields/media.image.field_media_image depending on your specific field names)?
You have test failures: https://git.drupalcode.org/issue/ai-3478721/-/jobs/4148041
LGTM
LGTM
I've gone a little beyond the original scope on this to also include updating all the files in any /src folder to have proper PHP 8 create statements and to document as many of the functions and methods as I possibly can, in the hope that this will make the project a little more manageable.
Tests are passing, but I'd recommend a couple of people test their own specific areas to make sure the code was actually doing what they thought - there were a lot of base classes and interfaces that specified return types for messages in doc comments (so not enforced) that conflicted with what implementations were actually returning, so I may have chosen the wrong side in those battles.
As part of this, I *think* that the test in tests/src/Kernel/OperationType/ImageClassification/ImageClassificationInterfaceTest::testImageClassificationBroken() may be giving a false positive: working through it, it is supposed to check for a specific error if a binary isn't provided to a provider doing image classification, but it is also not specifying a model in its call: if you do set a model, then the test fails. I've amended the code to replicate the existing test pass, but it may need someone who knows the intention behind the test to rejig it to be sure that's what is being tested.
It would be good for this to get merged ASAP as it's going to go stale very quickly.
Tests failing
No problems with the code and works as designed. Might want some work to tidy up the form in future, as AI generated content wouldn't be sensible for a lot of the tags (I can only really see it being useful for summarising articles into a meta description?) but I appreciate that will be a lot of work.
Feels like a SAPI issue rather than an AI issue (what would the index do if it was set up without fields?) but code looks good and works so RTBC.
It could, although I've put reservations about how manageable that would be on that ticket too. TBH nested paragraphs and complex content type structures are put in place to assist human content creators and make sure they stick to proper semantic markup: I don't see why you would be using them if you were also getting an AI to generate your text.
Both tickets feel to me like a lot of complicated difficult work without a robust general use case.
I put the paragraph support in, and I deliberately didn't recurse because I felt it would be a lot of additional processing that created a very confusing UI: think of a node that references a paragraph that itself references a paragraph with the same fields - in the UI we would need to present a clean, simple process to the user where they could select the field within the correct paragraph and generate AI content for it. That would - to my mind - get unwieldy very quickly.
Given there are currently multiple ways of getting AI generated content into fields using just the core AI module, I'd be minded to say this particular module isn't suited to managing nested entities beyond the first level (and possibly not even that): if we needed a method of doing this that wasn't for WYSIWYG fields (for which it would be better and cleaner to use the AI CKEditor sub-module) I think we'd need a strong use case to make it worth the effort.
I decided we'd be best off not trying to automate this because the existing form alter includes string field types and the majority of content entities have at least one string field: filtering out all the ones we assume it won't make sense to use the AI on would add a lot of conditionals and mean we might block users from doing something they want to do.
Instead I've added a settings form to let users decide for themselves, and an update hook that sets the settings to preserve the previous behaviour for anyone who was already using the module.
Raised https://www.drupal.org/project/ai/issues/3501661 (Documentation incorrect for AI CKEditor) to resolve.
mrdalesmith → created an issue.
You wouldn't need to invalidate the context - it invalidates itself whenever any new provider is added - but if we need to have a cache clear if the plugin settings are changed as well, then we'd need to add the cache tags of the VDB providers to the form (and create cache tags for them if plugins don't get those by default). I think that would be the most Drupal way to resolve the issue?
@marcus_johansson I don't think it makes sense to merge both, does it? The first MR adds help text that says you have to enable the Source Editing plugin, and the second makes it so you don't have to enable the Source Editing plugin. Won't the documentation be confusingly wrong now?
All looks good to me.
Looks good
mrdalesmith → made their first commit to this issue’s fork.
I'm not 100% on how the model form gets built, but the only mention of response_format having a default setting I can find is in definitions/api_defaults.yml where it looks to me like it is already set to a default of url.
If I'm wrong about that, can you indicate where you're expecting to see a change? I see the field isn't marked as required, so are you instead wanting to make it required so that a value MUST be set?
mrdalesmith → made their first commit to this issue’s fork.
The planning agent sounds like it would be the first place to start. I think both those tickets though should cover the issue raised here and since the issue is with a separate module, we might be best to close this issue now.
TBH at some point we need to bite the bullet and make these a Drupal Plugin, surely?
We've got ai/src/Cache/Context/AIProvidersCacheContext.php that gets invalidated every time a new provider is added (VDB or otherwise) so it should just be a case of ensuring that context is added to the search api form?
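In render array terms it should be roughly this - a sketch, and note the context id string is my assumption of how AIProvidersCacheContext is registered, so check the service definition:

```php
<?php

// Sketch only: attach the providers cache context to a form/render
// array so it is invalidated whenever the provider list changes.
// The 'ai_providers' context id is an assumption - confirm it against
// the service definition for AIProvidersCacheContext.
function addProviderCacheMetadata(array $form): array {
  $form['#cache']['contexts'][] = 'ai_providers';
  return $form;
}
```

If the plugin settings also need to bust the cache, we'd add the VDB providers' cache tags alongside the context in the same '#cache' array.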
If the AI recipe was only setting up the AI Provider then I'd agree, but it also sets up the configuration for the chatbot. Given how complicated and difficult it is to get the prompts right for the chatbot, there will be cases where people want that configuration but don't want to use the two providers configured. The only options I can see are to allow people to set up a provider separately afterward (the request made here) or to have the wizard show every possible provider, which will add a lot of modules users don't need (given that as I understand it the recipes are intended to enable every module included in the recipe regardless of whether they are needed).
So this is different to GA - a recipe that has no additional configuration you might want if you aren't also using the mandatory GA key - and may well be something that only applies to the AI recipe. But it does apply, and represents a barrier to a section of people being able to use the recipe.
It would be good if we could come up with a solution that works for all cases, but I'm not seeing how making some stages of the wizard optional would be that much of a breaking change.
mrdalesmith → created an issue.
Yep, I can recreate the issue using those instructions. Interestingly, if I recreate MY test, the Featured Image field is correctly copied across as a media reference field.
What I notice going through the code itself is that the FieldType Agent does not provide details of existing fields to the AI when it makes its initial assessment of how to respond to the query. In fact the FieldType agent is never given details of existing fields, which means it is very unlikely that it is using existing settings to create the new field: it looks like my original test may have accidentally passed because for unknown reasons the AI determined in this case that a Media Entity Reference field was required and it was just coincidence that it was right.
It may be that the simplest fix for this would be to provide the AI with details of existing fields along with the rest of the context, but that may be too much data for most use cases. Possibly we might be better to create a "copyField" Agent that the AI can use when it is asked to copy fields.
Either way, this looks to be an issue at the AI Agent level so we'll need to move this issue over there at some point, but I'll leave it here for now to give the maintainers a chance to pitch in on possible solutions.
Changes all good. Tested the new plugin and it correctly found the address and set it in an address field - managed to get the full address from a mention of "10 Downing Street, SW1A 2AA" in the text which is good (but might suggest it's augmenting the addresses from its own training data, but I think we should cross that bridge if we reach it).
mrdalesmith → made their first commit to this issue’s fork.
Changes look good and tests all passing.