Once this issue is ready to go, we would also need to create follow-up tickets for every provider in order to clean the openai-php/client library out of them.
Thanks @svendecabooter!
The fix will be added in the next release!
Thanks @andreasderijcke and @mrdalesmith for the effort here.
The fix will be included in the next release.
Thanks @jofitz and @anjaliprasannan!
The fix will be included in the next release.
Moving this issue to Fixed, as it has already been released in 1.1.0-rc1.
Thanks @ayrmax and @scott_euser!
I've been testing the issue and it seems to be working as expected, so moving to RTBC.
@mtalt
Yes, you are totally right. Sorry for not double-checking it.
gxleano → created an issue.
Or maybe we should specify that it should only be updated if the default provider is not selected.
Same issue as https://www.drupal.org/project/ai_provider_openai/issues/3528590#comment... 📌 Add update hook for Chat with Tools and Chat with Structured Output Active
// If it's set, we just return false.
if (!empty($default_providers[$operation_type])) {
  return FALSE;
}
This piece of code is not doing what we expect: right now it detects $default_providers[$operation_type] as non-empty even when the model_id is not selected. We should check instead:
// If it's set, we just return false.
if (!empty($default_providers[$operation_type]['model_id'])) {
  return FALSE;
}
Because the provider_id will always be there, see:
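To make the difference between the two checks concrete, here is a minimal standalone sketch; the array shape is an assumption based on the description above, not necessarily the module's actual config structure:

```php
<?php

// Hypothetical shape of the saved default providers config: the
// provider_id is always stored, but model_id may be left empty.
$default_providers = [
  'chat_with_tools' => [
    'provider_id' => 'openai',
    'model_id' => '',
  ],
];

$operation_type = 'chat_with_tools';

// Original check: TRUE, because the sub-array itself is not empty,
// even though no model has been selected yet.
var_dump(!empty($default_providers[$operation_type]));

// Proposed check: FALSE, because model_id is an empty string.
var_dump(!empty($default_providers[$operation_type]['model_id']));
```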
After testing the changes, I’ve identified two important points:
- The index has become approximately twice as long compared to the previous version.
- The number of items indexed depends on the value set in the "batch" option. For example, if it's set to 5, indexing stops after 5 items. In my opinion, this is not an optimal solution: indexing should work as usual by default, while batching should be handled in the background without needing to re-run it after each batch finishes.
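To illustrate the behaviour I would expect, here is a rough sketch in plain PHP; the item list and loop are purely illustrative and not the module's actual indexing code:

```php
<?php

// Illustrative only: keep processing batches until the index is
// complete, instead of stopping after the first batch of N items.
$batchSize = 5;
$items = range(1, 12); // Pretend these are items queued for indexing.
$indexed = 0;

while ($indexed < count($items)) {
  // Process up to $batchSize items per pass.
  $batch = array_slice($items, $indexed, $batchSize);
  foreach ($batch as $item) {
    // ... send $item to the embeddings backend here ...
  }
  $indexed += count($batch);
}

echo "Indexed $indexed items\n"; // All 12 items, not just the first 5.
```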
See evidence:
I find the behavior of the index during the processed items progress bar quite misleading. In the default version, it’s clear when the index begins processing content, what exactly it’s processing, and when it finishes. However, in the current version, the process is more complicated and it’s harder to understand what’s actually happening.
marcus_johansson → credited gxleano → .
Reviewed latest changes, everything works as expected.
See evidence:
Deepchat chatbot configuration
Chatbot verbose response
After following the testing steps on #6 ✨ Add Javascript orchestration for each loop in the Chatbot/Assistants API Active, it duplicates the Deepchat form creation.
Testing stack:
- Drupal 11 (latest) with Umami
- Drupal AI modules (all modules)
- OpenAI provider
- Issue branch
I've been testing this issue on 1.1.x-dev and everything works fine now when the Allow history option is enabled.
See evidence:
The response also appears immediately in https://www.drupal.org/project/ai/issues/3526074 🐛 Deepchat response not displayed until page reload when stream option is enabled Active; the problem is that the front end doesn't show the answer until the page is reloaded.
I think that both issues are reporting the same problem https://www.drupal.org/project/ai/issues/3526074 🐛 Deepchat response not displayed until page reload when stream option is enabled Active , adding it as related.
I believe Drupal's navigation is a special case and shouldn't be handled the same way as other components. Its structure is unique, and supporting the before/after/no text options could introduce inconsistencies.
Looking at the UI Icons Menu logic, I noticed that the icon is currently wrapped inside <span class="toolbar-button__label">, which doesn't align well with the new navigation requirements. For it to work correctly, the icon should be placed outside this <span>. The previous implementation introduced by @plopesc seems more appropriate here, as this behavior is specific to the Navigation component.
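A small before/after markup sketch of what I mean; the button structure here is illustrative, not the exact template output:

```html
<!-- Current (illustrative): icon rendered inside the label span. -->
<button class="toolbar-button">
  <span class="toolbar-button__label">
    <svg class="icon"><!-- ... --></svg>
    Menu item
  </span>
</button>

<!-- Suggested (illustrative): icon moved outside the label span. -->
<button class="toolbar-button">
  <svg class="icon"><!-- ... --></svg>
  <span class="toolbar-button__label">Menu item</span>
</button>
```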
Thanks @mogtofu33 for the feedback.
I've been testing the latest changes, but they break the logic: we are back to the current "buggy" state.
It will be included in release 1.2.40
Thanks @robbiehobby for reporting the issue!
I have been checking your changes and everything seems to be working fine, but now we are getting an error in the browser console when the updated field is in use.
See:
Thanks Simon!
It will be included in release 1.2.40
Would it be needed in 1.1.x?
gxleano → changed the visibility of the branch 3521601-server-context to hidden.
Tests are failing, so we should check this before moving to RTBC.
Moving changes from 1.0.x to 1.1.x in order to move this topic forward.
Someone is trying to embed a piece of content and the Embeddings call fails due to the moderation API. Right now the OpenAI module is hardcoded to run moderation checks if you have moderation enabled. When this fails, that tag should be forwarded into the moderation call so it can be logged somehow, for editors to check where embedding is failing.
Could we consider that this is going to be handled by https://www.drupal.org/project/ai/issues/3526710 🐛 [Error] The Prompt is unsafe: The prompt was flagged by the moderation model, stop the indexation Active ?
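A rough sketch of the kind of handling described above; all of the names here (the function and its callable parameters) are hypothetical and do not reflect the actual AI module API:

```php
<?php

// Hypothetical wrapper: run moderation before embedding and log
// flagged content so editors can find it. Names are illustrative.
function embedWithModerationLogging(
  string $text,
  string $entityLabel,
  callable $moderate,
  callable $embed,
  callable $log,
): ?array {
  if ($moderate($text)) {
    // Flagged: log which content failed instead of failing silently.
    $log("Embedding skipped: '$entityLabel' was flagged by the moderation model.");
    return NULL;
  }
  return $embed($text);
}
```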
gxleano → changed the visibility of the branch 3525311-1.0.x-fix-gitlab-ffi to active.
gxleano → changed the visibility of the branch 3525311-1.0.x-fix-gitlab-ffi to hidden.
After applying the changes everything works as expected.
At the end of the indexation we get an error message pointing to the logs, where we can check which content has been flagged by moderation.
See evidence:
Closing this issue; for now we are going to use the LoggerChannelTrait in the extended class.
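For reference, a minimal sketch of what using LoggerChannelTrait in the extended class could look like; the class names and the logger channel are illustrative, only the trait and its getLogger() method are Drupal core API:

```php
<?php

use Drupal\Core\Logger\LoggerChannelTrait;

// Both class names are illustrative placeholders.
class ModeratedEmbeddingsBackend extends SomeBaseBackend {

  use LoggerChannelTrait;

  protected function logModerationFailure(string $label): void {
    // getLogger() is provided by LoggerChannelTrait.
    $this->getLogger('ai_provider_openai')
      ->warning('Content @label was flagged by the moderation model.', [
        '@label' => $label,
      ]);
  }

}
```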
This issue needs to come together with #3526710: [Error] The Prompt is unsafe: The prompt was flagged by the moderation model, it stop the Search API indexation
gxleano → created an issue.
Here we have an example of wrong output: