wouters_f → created an issue.
The Mistral API errors were not caught by the underlying library.
It had nothing to do with the translations.
@Marcus: I'm not sure why the Mistral integration throws this error:
PHP message: Uncaught PHP Exception TypeError: "array_map(): Argument #2 ($array) must be of type array, null given" at /var/www/html/vendor/openai-php/client/src/Responses/Chat/CreateResponse.php line 49
So it works now.
This is the translation screen if translations are enabled for the Content type.
This is how it looks after the translation.
Remarks
Tested with:
- OpenAI: works
- Mistral: does not work
wouters_f → created an issue.
The Danswer default models have a short description, like:
- "If you're looking for multilingual content and search, this is probably what you want."
- "If you want a really light, fast model with only English content, this is probably the right model for you."
That really helped when choosing (at least for me).
Also, adding a small notification like:
"If you change embedding models, it's best to do a re-index of the site. Otherwise your search might react in strange ways."
wouters_f → created an issue.
Tested. Seems to work now.
Yes Marcus, you can do that (sub README files).
I've added some sub pages, check it out.
Is it something like this you're looking for?
We should add this also to the list:
https://www.drupal.org/project/search_api_aais →
Altered it a tiny bit so that it also allows the following (type as an input parameter).
I've only tried out the chat version so far, however.
$response = ai('Who built you'); // Defaults to chat.
$response = ai('Who built you', 'chat');
$response = ai('Who built you', 'embedding');
$response = ai('Who built you', 'moderation');
$response = ai('Who built you', 'text_to_image');
$response = ai('Who built you', 'text_to_speech');
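To make the idea concrete, a minimal sketch of how such a type parameter could dispatch; this is not the branch code, the config keys and the non-chat branches are assumptions (only chat() and textToImage() appear elsewhere in this thread):
// Rough sketch only. Assumes the ChatInput / ChatMessage classes from the
// ai module are imported; config keys and most operation calls are guesses.
function ai(string $input, string $type = 'chat') {
  $config = \Drupal::config('ai_function');
  $provider = \Drupal::service('ai.provider')->getInstance($config->get('ai_function_provider'));
  $model = $config->get('ai_function_model');
  switch ($type) {
    case 'chat':
      $messages = new ChatInput([new ChatMessage('user', $input)]);
      return $provider->chat($messages, $model)->getNormalized()->getMessage();
    case 'text_to_image':
      return $provider->textToImage($input, $model)->getRawOutput();
    default:
      // embedding, moderation and text_to_speech would dispatch the same way
      // to the matching provider operation.
      throw new \InvalidArgumentException("Unsupported type: $type");
  }
}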
Looking forward to meeting and brainstorming!
Maybe I'm still in an old paradigm/mindset :D
I've cooked up a simple version that allows one to do:
$response = ai('Who built you');
Check the branch / merge request for the code, it works!
Tested to work with:
- Mistral
- OpenAI
So the abstraction layer works very well :D
// Example implementation of the first one.
// Assumes the ChatInput and ChatMessage classes from the ai module are imported.
function ai_based_on_settings($input) {
  // Load the default provider and model from configuration.
  $instance = \Drupal::config('ai_function')->get('ai_function_provider');
  $model = \Drupal::config('ai_function')->get('ai_function_model');
  $service = \Drupal::service('ai.provider');
  $provider = $service->getInstance($instance);
  // Build the chat input and send it to the configured provider.
  $messages = new ChatInput([
    new ChatMessage('system', 'You are a helpful assistant.'),
    new ChatMessage('user', $input),
  ]);
  $message = $provider->chat($messages, $model)->getNormalized();
  return $message->getMessage();
}
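Calling it is then just (hypothetical usage of the function above):
$summary = ai_based_on_settings('Summarize the following text in 2 sentences: ' . $input_text);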
I'll try to make a submodule that does this on the dev days, so we can play around with it.
I love the analogy with, for example, cache_get(). I don't want to know if we're using the memcache or db cache system (or Redis).
It would be great if things would keep working if I switch to another AI provider.
Calling ai(): ideas, good and bad
You can go multiple directions with this.
Point directly to the provider and model.
ai('openai', 'gpt-4o', 'Summarize ' . $value . ' in 2 sentences');
This would not allow switching to another LLM and keeping things working. But it's more verbose and clear.
No provider
I don't think going to the model without provider would be a good idea (different vendors with same name models).
ai($model, 'Summarize ' . $value . ' in 2 sentences');
We should not do this.
Just take the first available
If you abstract away the providers and models, you could think of this:
ai('text_completion', 'Summarize ' . $value . ' in 2 sentences');
It's the simplest, with the most assumptions.
This takes the defaults you've configured (I've configured OpenAI and GPT-4o, for example).
Coming back to the cache analogy, swapping out LLMs would not break this integration.
I absolutely agree that multiple outputs are possible here (generate text / generate an asset).
In short: It takes the first available (or configured) text model (LLM) and gives it the prompt.
I like this approach best.
I also think that most users of ai() will just use one LLM.
Expected behavior (ideas)
Then whatever it returns will be returned (maybe even "dumbed down") so that only the output or some basic wrapping comes back.
I'm not knowledgeable on the "normalisation" process (yet), so it could be naive.
If you want to choose providers or have more options, I absolutely agree that you should not be doing things with ai().
In other words, ai() can be your training wheels (we'll hold your hands); anything else should be done differently.
MEDIA
If you start working with media, I think you're not in the target group for the ai() function, and you should be doing things differently.
But if you would really want to make that work however:
You could test if the input is not a string (prompt) but an array (prompt, asset) and "just" work with that.
ai('image_to_image', [$asset, $prompt]);
But I don't know enough about what is expected in the back to give sound examples here.
This does feel off.
I could be oversimplifying things here, and I'm sorry for that.
Also: if it's not the direction we want to go, I also understand. (I can just make a submodule that provides the function call to try it.)
wouters_f → created an issue.
If you end up on this page, apart from the breadcrumb it looks like you're on a generic page. Just adding "AI" to the title improves awareness of where you are in the docs.
Completely agree with mindaugasd here.
I'll do a suggestion commit.
Let me know what you think.
wouters_f → made their first commit to this issue’s fork.
Should I make a separate issue for the visual validation?
I created this ticket in google_vision to plug that one in:
https://www.drupal.org/project/google_vision/issues/3456401#comment-1565...
"Add google vision validators to AI_validators (ai submodule)" (status: Active)
wouters_f → created an issue.
wouters_f → created an issue.
Marcus_Johansson → credited wouters_f → .
I've created the module and a plugin for textual validation.
You can now configure the AI text validator
And then you can select a prompt and error message for this field
If you then submit the form
You will see the validation being triggered.
1 task left
In AiTextConstraintValidator.php there is one TODO left.
I was not able to use the AI with the example code.
So if you could replace this with the call to the LLM, I'd be happy to test it.
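For reference, a rough sketch of what the TODO could look like, reusing the chat call from earlier in the thread; the constraint property names and the yes/no interpretation of the reply are only assumptions, not the real code:
// Hypothetical sketch inside AiTextConstraintValidator::validate(); config
// keys and constraint properties ($constraint->prompt, $constraint->message)
// are assumptions.
$config = \Drupal::config('ai_function');
$provider = \Drupal::service('ai.provider')->getInstance($config->get('ai_function_provider'));
$messages = new ChatInput([
  new ChatMessage('system', $constraint->prompt),
  new ChatMessage('user', (string) $value),
]);
$reply = $provider->chat($messages, $config->get('ai_function_model'))->getNormalized()->getMessage();
// Assume the prompt asks the LLM to answer "yes" (valid) or "no" (invalid).
if (stripos(trim($reply), 'no') === 0) {
  $this->context->addViolation($constraint->message);
}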
Configuring the keys went fine.
Finding the Huggingface models was less straightforward (I'm not a Huggingface expert).
I saw this on Huggingface:
https://huggingface.co/intfloat/multilingual-e5-small
So I enter this in the embedding autocomplete, but it automatically goes to
hotchpotch/vespa-onnx-intfloat-multilingual-e5-small
So I'm not sure if this is to be expected.
When I test this in the chat interface (model ReBatch/Reynaerde-7B-Instruct):
I see this:
`POST https://api-inference.huggingface.co/models/ReBatch/Reynaerde-7B-Instruct` resulted in a `400 Bad Request` response: {"error":"Authorization header is correct, but the token seems invalid"}
Of the following exception type:
Drupal\ai\Exception\AiBadRequestException
Apparently, after configuring Huggingface, you should check these boxes.
Find the settings at User > Settings > Access tokens > Inference:
Might be interesting to add a little instruction on the Huggingface config page (/admin/config/ai/providers/huggingface).
Something along the lines of: "Make sure your tokens have permission to call the Inference API; this is not enabled by default." (or similar).
I'm no Huggingface expert but can imagine other people bumping into this.
Tested and working!
(Apart from the cache clearing issue I told you about in Slack, but that's not related to Mistral, rather something more general, I suppose.)
Jean-Paul Vosmeer → credited wouters_f → .
Marcus_Johansson → credited wouters_f → .
I'm also not sure if automators is the place to be.
Inspiration
I know a safe search has been built in
https://www.drupal.org/project/google_vision →
I have integrated it and have some code snippets in a slideshow I did in 2019:
https://drive.google.com/file/d/1dNVaFdjeEnMceFC-SEcUwQfk55FXXE-O/view?u...
(skip to slide 55 and onwards)
Might serve as inspiration?
The simplest version
- Widget (only shows an error after inputting wrong content, shows nothing if correct). Later we could provide custom widgets.
- Overview of AI validations (entity?) (could be a bit like metatag, have a separate place).
- Add validation form
1. Select field (could later even be multiple fields using the same validation)
2. If TXT: show an input field for the prompt (this is the MVP case, I think)
2. If IMG: select the vision API and an output evaluation "rule" (vision modules should provide these, e.g. "nudity detection" or "person is smiling", based on what the API allows).
- ai_validation would then just call the validation rule in the vision module.
Some more examples
Rules (for images) that should be provided to the validation module (by the vision module):
- "image has label [inputfield] "
- "image should not have label [inputfield] "
- "image main color [inputfield] "
- "image without color [inputfield] "
- "image (ocr) contains text "
- "image (ocr) contains text [inputfield]"
- "image (ocr) does not contain text "
- "image (ocr) does not contain text [inputfield]"
wouters_f → created an issue.
wouters_f → created an issue.
I'm missing the following modules:
Google vision:
- https://www.drupal.org/project/vision →
- https://www.drupal.org/project/google_cloud_vision →
- https://www.drupal.org/project/google_vision →
Inspired by the intro from search_api (and a bit from token and metatag)
This module provides a framework for easily applying Artificial Intelligence into Drupal, using any kind of AI model.
Editors can use their LLM/GPT/model of choice for generating or manipulating content, and developers will find the framework easy to use and extend. The AI module enables very simple AI implementations (enable and forget) as well as complex workflows (ECA / AI Automator).
AI submodules
The explanation of all the submodules goes here.
Features
- List of features, each linking to more details or docs →
- LLM in CKEditor
- Image generation (in CKEditor)
- Image generation (image field)
- Image generation (media field)
- Image manipulation
- RAG (chat with your content)
- ECA integration
And much much more.
Using AI Programmatically
An example for the old-school developers:
ai('text_completion', 'Summarize the following text in 2 sentences: ' . $input_text);
How it should be done using the service:
\Drupal::service('ai.provider')->getInstance('dreamstudio')->textToImage('A cow with earrings', 'sd3')->getRawOutput();
Make sure to check the developer documentation or the api.php file for more examples.
It's really cool. I like the colors a lot too.
Very strong logo!
Personally, I think image-to-image transitions (DreamStudio) could be a separate module (personal gut feeling).
Marcus_Johansson → credited wouters_f → .
wouters_f → created an issue.
wouters_f → created an issue.
I have added the hooks to make this configurable in the code in the BRANCH.
Please review.
wouters_f → created an issue.
wouters_f → created an issue.
BramDriesen → credited wouters_f → .
"The default language is require to make the chatgpt translation available"
wouters_f → created an issue.
Interested in picking up this module as we intend to do a project that uses this.
It should be "dall-e-3", and even better, it should be configurable.
wouters_f → created an issue.
Nice, thanks!
So the building blocks would be:
1. A form with a question field and a submit button.
2. A backend server on sponsored infrastructure providing responses.
We don't want every Drupal site to crawl drupal.org for these responses.
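A minimal sketch of building block 1, assuming the backend endpoint URL and the response keys are placeholders (nothing here is decided yet):
// Hypothetical question form; the backend URL is a placeholder, not a real endpoint.
class AskDrupalForm extends \Drupal\Core\Form\FormBase {

  public function getFormId() {
    return 'ask_drupal_question_form';
  }

  public function buildForm(array $form, \Drupal\Core\Form\FormStateInterface $form_state) {
    $form['question'] = [
      '#type' => 'textarea',
      '#title' => $this->t('Your Drupal question'),
      '#required' => TRUE,
    ];
    $form['submit'] = [
      '#type' => 'submit',
      '#value' => $this->t('Ask'),
    ];
    return $form;
  }

  public function submitForm(array &$form, \Drupal\Core\Form\FormStateInterface $form_state) {
    // Send the question to the sponsored backend (placeholder URL).
    $response = \Drupal::httpClient()->request('POST', 'https://example.com/ask-drupal', [
      'json' => ['question' => $form_state->getValue('question')],
    ]);
    $answer = json_decode((string) $response->getBody(), TRUE);
    $this->messenger()->addStatus($answer['answer'] ?? $this->t('No answer received.'));
  }

}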
We are shifting the goal of this module.
Ask Drupal will be more of a Drupal COPILOT.
I talked with Borisson and we are thinking of making a help menu where you can prompt any Drupal question:
- "How to make a view"
- "Which module do I need to X"
- "How do I do Y"
I have a little Google Cloud server that will give the responses in a semantic way.
So anybody that installs this can prompt drupal (and thus consume the drupal handbook and information) in a logical way.
I think if you leave the namespace empty (with a description like "leave empty for no namespace"), you could refrain from sending it.
It seems it's not allowed on any API call (at first sight; I haven't checked all of them).
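Something like this could do it (a sketch only; the request array keys are an assumption based on typical vector-store APIs, not this module's actual code):
// Sketch: only send the namespace when one is configured.
$payload = [
  'vector' => $vector,
  'topK' => 10,
];
if (!empty($namespace)) {
  $payload['namespace'] = $namespace;
}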
The 70 euros (it was more, I think) is what it said on the pricing pages.
But I see that it has already changed to some sort of pay-as-you-go pricing.
wouters_f → created an issue.
wouters_f → created an issue.
borisson_ → credited wouters_f → .
And that would also mean ditching the hook mechanism then - right?
Sorry, I haven't worked with that mechanism before.
I assume you mean that per bot you can configure the request paths with
$this->condition->setConfiguration($config->get('request_path'));
like https://git.drupalcode.org/project/bothive/-/blob/1.0.x/src/Form/Bothive....
and then on the page:
$condition = $this->manager->createInstance('request_path');
$condition->setConfiguration($this->configFactory->get('request_path'));
if ($condition->evaluate()) {
  // Show the bot.
}
but can we differentiate this so it works for the multiple bots?
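Maybe something like this (just a sketch; the 'bothive.settings' config name and the per-bot 'bots' structure are guesses, not how the module actually stores it):
// Sketch: evaluate a separately stored request_path configuration per bot.
$bots = \Drupal::config('bothive.settings')->get('bots') ?? [];
$visible_bots = [];
foreach ($bots as $bot_id => $bot_settings) {
  $condition = $this->manager->createInstance('request_path');
  $condition->setConfiguration($bot_settings['request_path']);
  if ($condition->evaluate()) {
    // This bot is allowed on the current page.
    $visible_bots[] = $bot_id;
  }
}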