AI Chatbot does not search the database

Created on 25 October 2024

My AI Chatbot is not searching the database for answers, and it gives me generic AI answers.

I use a Zilliz database. When creating the database in Drupal, I pass the connection test. I indexed the content, and the cluster on Zilliz updated to 11. I went to the Vector DB, and when I run a search, the threshold and the titles of the items appear. I set the database on the AI Chatbot, but when I ask the chatbot questions about items that I know are in the database, I don't get a reply matching those items; instead, I get generic AI answers.

πŸ› Bug report
Status

Active

Version

1.0

Component

AI Assistants API

Created by

πŸ‡ΊπŸ‡ΈUnited States fessouma


Comments & Activities

  • Issue created by @fessouma
  • πŸ‡¬πŸ‡§United Kingdom scott_euser

    If you turn on the AI Explorer submodule, do you get results? Just checking whether by results for Zilliz you mean directly in the Zilliz UI or within Drupal.

    The AI Explorer submodule, then the VDB section, will confirm you can get results.

    If it's the AI Assistant submodule, it's best to check that you have the RAG search action enabled and a relevance threshold that is low enough.

    You can also use the AI logging submodule to check what's going on.

    If it's AI Chat, I don't know much about it myself, but someone else might be able to help. In any case, I would suggest starting by confirming in the AI Explorer first.

  • πŸ‡ΊπŸ‡ΈUnited States fessouma

    Hi, when I go to the AI Explorer everything works fine. I am able to go to the Vector DB Explorer, and when I run a search I get answers that come from the database and match the RAG specifications. I am not sure what I am missing; RAG on the AI Assistant is enabled and the index is set up correctly. I tried 0.1, 0.2 ... up to 0.6 for the RAG threshold, and it only works for the Vector DB Explorer and NOT the AI Assistant.

  • πŸ‡©πŸ‡ͺGermany marcus_johansson

    Could you share what you have set as the pre-action prompt and assistant message, unless they are a business secret?

    If not, could you check with AI Logging whether the embeddings call happens when you write something?

    The process for a working end-to-end message is:
    1. Figure out what to do with the LLM via the pre-action prompt.
    2. An embeddings call for the question(s).
    3. A vector search for the questions.
    4. The Assistant Message takes the results and tries to answer.

    If you get the latest dev, you can set the chatbot configuration to not stream; then it will log each response, so you should be able to figure out where it goes wrong.
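    A rough Python sketch of the four-step flow above, just to make the call order concrete. All function bodies are illustrative stubs, not the AI module's real API:

```python
# Illustrative stubs for the four-step assistant flow; none of these
# names exist in the Drupal AI module itself.

def pre_action_prompt(message):
    # Step 1 (first chat call): the LLM decides whether to answer
    # directly or to run a RAG search action. Stub: always search.
    return {"action": "rag_search", "question": message}

def embed(text):
    # Step 2 (embeddings call), stubbed as a trivial vector.
    return [float(len(text))]

def vector_search(vector, threshold):
    # Step 3: vector search; only chunks at or above the threshold
    # survive. Scores here are made-up example values.
    hits = [{"title": "Chicken curry", "score": 0.42}]
    return [h for h in hits if h["score"] >= threshold]

def assistant_message(message, chunks):
    # Step 4 (second chat call): answer from the retrieved chunks,
    # or admit there is nothing to answer from.
    if not chunks:
        return "I do not have enough information to answer that."
    return "Based on: " + ", ".join(c["title"] for c in chunks)

decision = pre_action_prompt("Do you have a chicken recipe?")
chunks = vector_search(embed(decision["question"]), threshold=0.3)
print(assistant_message("Do you have a chicken recipe?", chunks))
```

    The point of the sketch: if step 3 returns no chunks, step 4 can only give a generic "not enough information" style answer, which matches the symptom described here.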

  • πŸ‡ΊπŸ‡ΈUnited States fessouma

    The pre-prompt system role is "You are an assistant that look through the database and gives the best possible answer to the user with a professional tone."

    Here is the assistant message: "Based on the users question, you will first be given a result that were fetched from a database and the chat thread, check if you can answer the question truthfully. If you can not answer the question, please respond that you do not have enough information to do so. If there is an error from the agent, please just forward it. Do NOT make up information, but you may answer fairly freely based on the database lookup. You may reframe words that appear there and concise them or express them, but not make up stuff. Answer in a laidback and informal manner. If a link is provided with the article, use markdown to link to the article using the articles title. Please also answer with the author name at the end if its known. Use american english.

    When you get assistant messages of results from RAG use them when you answer.
    Please answer using markdown with the link as this [title](uri). Always link when you found a chunk useful.

    Please use paragraphs, bolded and italic texts and lists to make the answer more readable."

  • πŸ‡©πŸ‡ͺGermany marcus_johansson

    Based on your log, it seems like it's searching for "chicken recipe" and then the assistant message is not triggered. This usually means that no responses hit the threshold.

    Can you use the AI API Explorer, search for "chicken recipe", and check whether there are responses with a higher weight than the threshold?
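    To illustrate why a too-high threshold silently produces generic answers: chunks scoring below it are dropped before the assistant message ever runs. A minimal sketch with made-up scores (the helper name is hypothetical, not the module's API):

```python
# Hypothetical helper: keep only vector-search hits at or above
# the relevance threshold. Scores below are invented examples.

def filter_chunks(results, threshold):
    return [r for r in results if r["score"] >= threshold]

results = [
    {"title": "Chicken curry", "score": 0.42},
    {"title": "Roast chicken", "score": 0.18},
    {"title": "Chicken stock", "score": 0.12},
]

print(len(filter_chunks(results, 0.1)))  # 3 chunks reach the assistant
print(len(filter_chunks(results, 0.5)))  # 0 chunks -> generic answer
```

    So the same question can work in the Vector DB Explorer (which shows all hits) yet fail in the assistant if the assistant's own threshold is set higher.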

  • πŸ‡ΊπŸ‡ΈUnited States ultimike Florida, USA

    I think I'm in a similar situation as the OP (@fessouma) where I can't figure out what I'm missing. Here are some details:

    • I just pulled the latest -dev.
    • I have "Stream" unchecked in the "AI Chatbot" block configuration.
    • I can put my chatbox prompt in the "AI Vector DB Explorer" and get 3 results that are above my context threshold (0.1).
    • I am only getting the generic answers from the chatbot ("I'm sorry, I can not find a database to look this up in.")
    • Looking at the logs, after I use the chatbot, I only see a call to OpenAI to the "gpt-4o" model - I do not see a call to the "text-embedding-3-small" model, which makes me think that the chatbox prompt is not getting embeddings-ized in order for the RAG search of my local Milvus database to happen. I should always see an initial call to the "text-embedding-3-small" model for the initial prompt, right?

    In my head (which may be incorrect), I'm thinking that the following things should happen:

    1. The chatbox prompt gets embedding-ized.
    2. The local Milvus database is searched using the embedding-ized chatbox prompt.
    3. The results of the previous step are added to the prompt instructions for the gpt-4o call.

    So, when I look in the AI Logs, I am expecting to see the "Extra data" section contain the prompt instructions text along with results from the local RAG search. Am I not thinking about this the right way?

    thanks,
    -mike

  • πŸ‡©πŸ‡ͺGermany marcus_johansson

    So here is what is supposed to happen with the AI Assistants API.

    You have a setup that can perform specific types of actions; in your case, if you enabled RAG search, that is it. This then happens in three steps, which you can follow in the logs:

    1. It takes the pre-prompt, replaces the placeholders with specific values, and then wraps the user query in it. The pre-prompt asks the LLM to either answer directly with a raw answer or respond with actions to take. This is the first chat call you see in the logs.
    2. If the question aligns with the actions of the pre-prompt, the pre-prompt will respond with an action to search the RAG database. This is the embeddings call you see in the logs.
    3. If the answer contains any chunk with a higher score than the minimum threshold, those chunks are forwarded to the Assistant Message to use to answer the question. This is the second chat call you see in the logs.

    If everything works correctly, you should see three calls in the end.
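    A tiny sketch of how you could reason about the logged call sequence when diagnosing where a run stops. The entry names are illustrative, not what AI Logging literally records:

```python
# A healthy end-to-end run logs three provider calls in this order.
# Entry names here are illustrative placeholders only.
EXPECTED = ["chat", "embeddings", "chat"]

def diagnose(logged_calls):
    """Report which step of the three-call sequence was reached."""
    for i, expected in enumerate(EXPECTED):
        if i >= len(logged_calls) or logged_calls[i] != expected:
            return f"stopped before call {i + 1} ({expected})"
    return "all three calls happened"

print(diagnose(["chat"]))                        # failed before embeddings
print(diagnose(["chat", "embeddings", "chat"]))  # healthy run
```

    In ultimike's case above, seeing only a single "gpt-4o" chat call would correspond to the run stopping before the embeddings call.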

    The reason for it to fail before the first chat call can be:
    1. A bug in the AI module.

    Some reasons for it to fail after the first chat call, but before the embeddings call, can be:
    1. A bug in the AI module.
    2. A bad preprompt.
    3. A question that doesn't align with the actions of the pre-prompt. It would then give a textual answer back in the chatbot.
    4. Using an LLM that is simply not advanced enough to handle JSON output.

    Some reasons for it to fail after the embeddings call, but before the second chat call, can be:
    1. A bug in the AI module.
    2. A too high threshold.
    3. A question that doesn't find any relevant information.
    4. Something wrong with the AI Search setup.

    Some reasons for it to fail after the second chat call are:
    1. A bug in the AI module.

    I hope that this answers the question. There should be a fairly generic pre-prompt inside the AI Assistants API module under resources that works in most normal RAG cases; however, it might need tweaking to fit your specific use case.

    Hope that gives some insights in why it could be failing for you.
