Hey @herved
The jQuery dependency here is quite minimal, but we'd need to refactor the once()
usage and update all the jQuery calls.
So, if you have the time, feel free to try it :)
Thank you, Dieter, for keeping an eye on this!
It will be included in the next release, 1.2.42.
gxleano → made their first commit to this issue’s fork.
Thank you @saidatom @dieterholvoet @alorenc for your contributions!
It will be added to the next release, 1.2.42.
Thanks @herved and @pcambra,
It will be included in the next release, 1.2.42.
Testing AI and AI Agents 1.1.x with Drupal 10.5.2, the changes make it work.
Setup to reproduce it:
- Drupal 10.5.2
- AI 1.1.x
(AI Assistant API, AI Search and AI Chatbot)
- AI Agents 1.1.x
- Action (Core module, so it just needs to be enabled)
- Key (latest version)
- Search API (latest version)
- Ollama Provider (ddev addon, latest version)
- Milvus VDB Provider (ddev addon, latest version)
After applying the MR to the AI module, the error on the following pages is gone:
/admin/config/ai/explorers/tools_explorer
/admin/config/ai/explorers/chat_generator
/admin/structure/ai_agent/add
Moving the issue to RTBC, thanks to everyone involved.
Tested with https://www.drupal.org/project/ai/issues/3543112 📌 Make model id required on the OpenAiBasedProviderClientBase Active applied to AI module and everything works as expected.
The code also looks good to me, Marcus.
Thanks!
It looks good to me, Marcus!
Thanks!
Remember that you need to run the drush updb command after checking out the MR and before testing the functionality.
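For reference, the usual commands (assuming a standard Drush setup; adjust the flags to your environment) are:

```shell
# Run pending database updates after checking out the MR branch
# (-y answers the confirmation prompts automatically):
drush updb -y
# Rebuilding caches afterwards is usually a good idea as well:
drush cr
```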
Regarding #35 📌 Improve AI Search Module Indexing to Handle Long-Running Chunk Embedding Processes Needs work , I would move forward with a working version as it is right now, then open a follow-up ticket to improve how the progress bar should look.
I agree that your approach would be more descriptive for the end user, but we would need to spend some time to make it work.
And when everything is working and validated, we could also go with #33
What do you think?
I've just pushed the fix for #36. Thank you very much, Scott, for pointing this out.
Tested and everything works as expected.
See: https://www.drupal.org/project/ai/issues/3541470#comment-16239682 📌 Remove canChatStream Active
Tested with:
- https://www.drupal.org/project/ai/issues/3541471 📌 Add finished reason to stream iterator Active
- https://www.drupal.org/project/ai/issues/3541472 🐛 Return streamed message and set tokens on normal message on the OpenAiBasedProviderClientBase Active
- https://www.drupal.org/project/ai/issues/3541473 📌 Add token usage to OpenAiProviderClientBase Active
And everything works as expected.
Steps followed:
1. Create a test branch from the ai module including the mentioned MRs
2. Install AI API Explorer module
3. Go to Chat Generation Explorer
4. Test the output using Streamed option
Evidence
Output
Logging
Thank you very much, Dieter and Christian, for your effort here.
It will be included in the next release, 1.2.42.
Thanks for reporting this issue, @damienmckenna.
I've been testing it with the following configuration:
- Drupal 11.2.2
- AI Core 1.1.x
- AI Search 1.1.x
- AI Assistant API 1.1.x
- AI Agents 1.1.x
- Huggingface Provider 1.0.x
- Sqlite Provider 1.0.x
- Search API 1.38
From the Huggingface provider I have used the following models:
Chat -> Meta Llama-3-8B
Embeddings -> MxBai Embed Large v1
And I needed to apply the following patch to Search API in order to create the view:
https://www.drupal.org/project/search_api/issues/3531256 🐛 Cannot create Search API based views On Drupal 11 Active
I was NOT able to reproduce the error that you are describing. Maybe the problem is the fields you are using in the Search API index, or the filter criteria added to the view.
I am using the following fields in the Search API index:
- rendered_item (type Fulltext, indexing option: Main content)
- url (type String, indexing option: Contextual content)
- title (type Fulltext, indexing option: Contextual content)
Then in the view I am adding Fulltext search as the filter criterion; when I perform a search query, everything works as expected.
Moving this issue to Postponed until we get more info about the issue on your side.
I will take over this issue.
gxleano → changed the visibility of the branch 3487487-improve-ai-search-table to hidden.
Thanks, Scott, for taking a look at this!
Regarding #27 📌 Improve AI Search Module Indexing to Handle Long-Running Chunk Embedding Processes Needs work , we're currently displaying the item being indexed and the chunks being processed from that item. We're also showing the progress percentage for both. From my perspective, this provides clear and continuous feedback to the user about where we are in the process, which I believe is quite helpful.
Do you have any specific suggestions for improvement?
Thanks for reviewing, Jibran!
Updated MR
On second thought, the Tagify Select widget also allows dragging and sorting the tags when the field is unlimited, so this information should be there to let the user know about this functionality.
I will try to think of a better way to get rid of this if you don't want it there, but for now I will keep it.
Thanks @harlor for the fix.
It will be included in release 1.2.41
This fix will be part of release 1.2.41
Thanks for opening this discussion, @yaqbick.
You're absolutely right, the description shouldn't appear in the Tagify Select widget.
However, it's intentionally included in the Tagify widget to inform users that they can drag and sort the element. This is important for UX purposes.
gxleano → changed the visibility of the branch 1.2.x to hidden.
gxleano → made their first commit to this issue’s fork.
Thanks @chr.fritsch for the contribution.
It will be added to the next release, 1.2.41.
Thanks @volkerk for the contribution!
It will be added to release 1.2.41
Thanks for the review, Artem!
As far as I can see, the chat() method in this case could just be taken from the base class; I do not see any specific logic in the OpenAI provider.
What do you think, Marcus?
Thanks, Artem, for taking a look at this!
If it doesn't have specific logic, then I would say we can just remove it and take the logic from the base class.
Thanks @pmelab!
BTW, the latest changes look good to me @pmelab.
Just waiting until the 1.2.x branch is created, then we can move it to RTBC.
Yes, in order to avoid breaking changes, we should create the 1.2.x branch, then point the changes in this MR there.
This issue should be tested together with MR https://www.drupal.org/project/ai/issues/3526390 ✨ Improve the AI Search recursive retrieval of a specific quantity of results Active , which introduces some new methods this change relies on.
Changes include the logic to handle the limitations of recursive vector search in scenarios involving:
- Large content split into many small chunks
- Numerous access-controlled nodes
- Insufficient retrieval due to the 10-iteration (maxAccessRetries) cap
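To illustrate the last point, the retry-capped retrieval pattern can be sketched roughly as below. This is not the module's actual PHP code; all names (retrieve_accessible, max_access_retries, the search and access_check callables) are illustrative.

```python
def retrieve_accessible(query, wanted, search, access_check, max_access_retries=10):
    """Keep fetching vector-search candidates until `wanted` accessible
    results are collected or the iteration cap is hit. With many
    access-controlled items or heavily chunked content, the cap can
    leave the result set short, which is the limitation described above."""
    results, offset = [], 0
    for _ in range(max_access_retries):
        batch = search(query, offset=offset, limit=wanted)
        if not batch:
            break  # index exhausted, no more candidates
        # Keep only items the current user may actually view.
        results.extend(item for item in batch if access_check(item))
        offset += len(batch)
        if len(results) >= wanted:
            break
    return results[:wanted]
```

The sketch shows why raising the cap (or grouping chunks before the access check) helps: each iteration can discard most of its batch, so ten iterations may not be enough to collect the requested quantity.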
In the meantime, I've also added related changes in https://www.drupal.org/project/ai_vdb_provider_milvus/issues/3526393 ✨ Make use of Milvus' Grouping functionality Active .