- Issue created by @yautja_cetanu
- Status changed to Closed: outdated
9:35am 13 May 2025 - 🇩🇪Germany marcus_johansson
We have vector search in Automators now, and you can use Automators as a tool, so this is possible.
When answering a user's query, the site may want to use RAG. However, there may be a number of services that could attach extra metadata to the retrieved results to help the LLM provide a better answer.
For example, for the query "I want to recreate reddit in Drupal", the system could identify the module attached to each retrieved chunk, then use AI Automators to look up that module's issue queue health or number of downloads. The LLM would then be prompted to take this metadata into account before answering (for instance, if the problem is quite niche, then a low number of downloads might not be a problem).
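A minimal sketch of that enrichment step in Python (the actual module is PHP, and `fetch_module_stats`, the chunk fields, and the sample numbers are all hypothetical; a real Automator chain might call drupal.org APIs instead):

```python
from dataclasses import dataclass, field


@dataclass
class Chunk:
    """A retrieved chunk of documentation plus the module it came from."""
    text: str
    module: str
    metadata: dict = field(default_factory=dict)


def fetch_module_stats(module: str) -> dict:
    """Hypothetical lookup standing in for an Automator chain."""
    fake_stats = {
        "views": {"downloads": 120_000, "open_bugs": 340},
        "forum": {"downloads": 4_500, "open_bugs": 25},
    }
    return fake_stats.get(module, {"downloads": 0, "open_bugs": 0})


def enrich_chunks(chunks: list[Chunk]) -> list[Chunk]:
    """Attach download/issue-queue metadata to each retrieved chunk."""
    for chunk in chunks:
        chunk.metadata.update(fetch_module_stats(chunk.module))
    return chunks


def build_prompt(question: str, chunks: list[Chunk]) -> str:
    """Inline the metadata next to each chunk so the LLM can weigh it."""
    context = "\n\n".join(
        f"[{c.module}: downloads={c.metadata['downloads']}, "
        f"open_bugs={c.metadata['open_bugs']}]\n{c.text}"
        for c in chunks
    )
    return (
        "Answer using the context below. A niche problem can justify a "
        f"low-download module.\n\nContext:\n{context}\n\nQuestion: {question}"
    )
```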
Instead of just a "pre-prompt" before asking, maybe it's possible to insert whole Automator chains before and after the vector search?
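One way to picture those hook points, again as a hypothetical Python sketch rather than the module's API (the chain names and the `vector_search`/`llm` callables are assumptions): arbitrary Automator-style steps run before the query is embedded and after the chunks come back.

```python
from typing import Callable, Sequence

# A "chain" is just an ordered list of steps: pre-chains rewrite the
# query before embedding, post-chains transform the retrieved chunks.
PreStep = Callable[[str], str]
PostStep = Callable[[list[str]], list[str]]


def rag_answer(
    question: str,
    vector_search: Callable[[str], list[str]],
    llm: Callable[[str], str],
    pre_chain: Sequence[PreStep] = (),
    post_chain: Sequence[PostStep] = (),
) -> str:
    """Run configurable chains around the vector search step."""
    for step in pre_chain:    # e.g. expand acronyms, add entity filters
        question = step(question)
    chunks = vector_search(question)
    for step in post_chain:   # e.g. the metadata enrichment sketched above
        chunks = step(chunks)
    context = "\n\n".join(chunks)
    return llm(f"Context:\n{context}\n\nQuestion: {question}")
```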
Status: Active
Version: 1.0
Component: AI Search