Cáceres
Account created on 9 October 2018, over 6 years ago

Recent comments

🇪🇸Spain gxleano Cáceres

Once this issue is ready to go, we will also need to create follow-up tickets for every provider in order to clean up the openai-php/client library from them.

🇪🇸Spain gxleano Cáceres

gxleano made their first commit to this issue’s fork.

🇪🇸Spain gxleano Cáceres

Thanks @svendecabooter!

The fix will be added in the next release!

🇪🇸Spain gxleano Cáceres

gxleano made their first commit to this issue’s fork.

🇪🇸Spain gxleano Cáceres

Thanks @andreasderijcke and @mrdalesmith for the effort here.

The fix will be included in the next release.

🇪🇸Spain gxleano Cáceres

Thanks @jofitz and @anjaliprasannan!

The fix will be included in the next release.

🇪🇸Spain gxleano Cáceres

Moving this issue to Fixed, as it has already been released in 1.1.0-rc1.

🇪🇸Spain gxleano Cáceres

Thanks @ayrmax and @scott_euser!

I've been testing the issue and it seems to be working as expected, so moving to RTBC.

🇪🇸Spain gxleano Cáceres

Or maybe we should specify that it should only be updated if the default provider is not selected.

🇪🇸Spain gxleano Cáceres

Same issue as https://www.drupal.org/project/ai_provider_openai/issues/3528590#comment... 📌 Add update hook for Chat with Tools and Chat with Structured Output Active

🇪🇸Spain gxleano Cáceres
    // If its set, we just return false.
    if (!empty($default_providers[$operation_type])) {
      return FALSE;
    }

This piece of code is not doing what we expect: right now it detects $default_providers[$operation_type] as not empty even when the model_id is not selected. We should check instead:

    // If the model_id is set, we just return false.
    if (!empty($default_providers[$operation_type]['model_id'])) {
      return FALSE;
    }

Because the provider_id will always be there, see:
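
As a rough illustration (the operation type key and the values below are hypothetical, not taken from the real config), the stored settings look roughly like this:

    // The provider_id key is saved as soon as a provider is chosen, so the
    // operation type entry is never empty, even when no model is selected.
    $default_providers = [
      'chat_with_tools' => [
        'provider_id' => 'openai',
        'model_id' => '',
      ],
    ];

    // empty($default_providers['chat_with_tools']) is FALSE here, while
    // empty($default_providers['chat_with_tools']['model_id']) is TRUE,
    // so only the second check catches the missing model.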

🇪🇸Spain gxleano Cáceres

After testing the changes, I’ve identified two important points:

  1. The index has become approximately twice as long compared to the previous version.
  2. The number of items indexed depends on the value set in the "batch" option. For example, if it's set to 5, indexing stops after 5 items. In my opinion, this is not an optimal solution: indexing should work as usual by default, while batching should be handled in the background, without needing to re-run it after each batch finishes.

See evidence:


I find the behavior of the processed-items progress bar during indexing quite misleading. In the default version, it's clear when the index begins processing content, what exactly it's processing, and when it finishes. However, in the current version, the process is more complicated and it's harder to understand what's actually happening.

🇪🇸Spain gxleano Cáceres

Reviewed the latest changes; everything works as expected.

See evidence:

Deepchat chatbot configuration

Chatbot verbose response

🇪🇸Spain gxleano Cáceres

After following the testing steps in #6 Add Javascript orchestration for each loop in the Chatbot/Assistants API Active, the Deepchat form creation is duplicated.

Testing stack:
- Drupal 11 (latest) with Umami
- Drupal AI modules (all modules)
- OpenAI provider
- Issue branch

🇪🇸Spain gxleano Cáceres

I've been testing this issue on 1.1.x-dev and everything works fine now when the Allow history option is enabled.

See evidences:

🇪🇸Spain gxleano Cáceres

The response also appears immediately in https://www.drupal.org/project/ai/issues/3526074 🐛 Deepchat response not displayed until page reload when stream option is enabled Active; the problem is that the front end doesn't show the answer until the page is reloaded.

🇪🇸Spain gxleano Cáceres

I think that both issues are reporting the same problem as https://www.drupal.org/project/ai/issues/3526074 🐛 Deepchat response not displayed until page reload when stream option is enabled Active, so I'm adding it as related.

🇪🇸Spain gxleano Cáceres

I believe Drupal's navigation is a special case and shouldn't be handled the same way as other components. Its structure is unique, and supporting the before/after/no text options could introduce inconsistencies.

Looking at the UI Icons Menu logic, I noticed that the icon is currently wrapped inside <span class="toolbar-button__label">, which doesn't align well with the new navigation requirements. For it to work correctly, the icon should be placed outside this <span>. The previous implementation introduced by @plopesc seems more appropriate here, as this behavior is specific to the Navigation component.
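
To make the point concrete, here is a rough sketch of the markup difference (the actual toolbar button template is more involved, and the icon placeholder below is purely illustrative):

    <!-- Current: the icon ends up inside the label span. -->
    <button class="toolbar-button">
      <span class="toolbar-button__label"><!-- icon -->Label</span>
    </button>

    <!-- Suggested: the icon sits outside the label span, so the Navigation
         component can style and position it independently. -->
    <button class="toolbar-button">
      <!-- icon -->
      <span class="toolbar-button__label">Label</span>
    </button>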

🇪🇸Spain gxleano Cáceres

Thanks @mogtofu33 for the feedback.

I've been testing the latest changes, but they break the logic; we are back to the current "buggy" state.

🇪🇸Spain gxleano Cáceres

It will be included in release 1.2.40

🇪🇸Spain gxleano Cáceres

Thanks @robbiehobby for reporting the issue!

I have been checking your changes and everything seems to be working fine, but now we are getting an error in the browser console when the updated field is in use.

See:

🇪🇸Spain gxleano Cáceres

Thanks Simon!

It will be included in release 1.2.40

🇪🇸Spain gxleano Cáceres

gxleano made their first commit to this issue’s fork.

🇪🇸Spain gxleano Cáceres

gxleano changed the visibility of the branch 3521601-server-context to hidden.

🇪🇸Spain gxleano Cáceres

Tests are failing, so we should check this before moving to RTBC.

🇪🇸Spain gxleano Cáceres

Moving changes from 1.0.x to 1.1.x in order to move this topic forward.

🇪🇸Spain gxleano Cáceres

gxleano made their first commit to this issue’s fork.

🇪🇸Spain gxleano Cáceres

gxleano made their first commit to this issue’s fork.

🇪🇸Spain gxleano Cáceres

Someone is trying to embed a piece of content and the Embeddings call fails due to the moderation API. Right now, the OpenAI module is hardcoded to run moderation checks if you have moderation enabled. When this fails, that tag should be forwarded into the moderation call so it can be logged somehow, allowing editors to check where the embedding is failing.

Could we consider that this is going to be handled by https://www.drupal.org/project/ai/issues/3526710 🐛 [Error] The Prompt is unsafe: The prompt was flagged by the moderation model, stop the indexation Active ?

🇪🇸Spain gxleano Cáceres

gxleano changed the visibility of the branch 3525311-1.0.x-fix-gitlab-ffi to active.

🇪🇸Spain gxleano Cáceres

gxleano changed the visibility of the branch 3525311-1.0.x-fix-gitlab-ffi to hidden.

🇪🇸Spain gxleano Cáceres

After applying the changes, everything works as expected.

At the end of the indexing we get an error message pointing to the logs, where we can check which content has been flagged by moderation.

See evidence:

🇪🇸Spain gxleano Cáceres

Closing this issue; for now we are going to use the LoggerChannelTrait in the extended class.
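
For reference, a minimal sketch of the LoggerChannelTrait approach mentioned above; the class name, method, and channel are placeholders rather than the actual code:

    use Drupal\Core\Logger\LoggerChannelTrait;

    // Hypothetical extended class; only the trait usage matters here.
    class ExampleExtendedBackend {

      use LoggerChannelTrait;

      protected function logModerationIssue(string $message): void {
        // getLogger() comes from LoggerChannelTrait and lazily fetches the
        // channel from the logger.factory service, so no extra injection is
        // required in the extended class.
        $this->getLogger('example_channel')->warning($message);
      }

    }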

🇪🇸Spain gxleano Cáceres

Here we have an example of wrong output:
