- Issue created by @marcus_johansson
- Merge request !1490 "Resolve #3543583 'Actionable hallucinations happen'" → (Open) created by marcus_johansson
While testing other issues I saw that around 10-20% of the time the orchestration agent would say that it will process or do something and then just get stuck there, without actually making any tool call.
See the following screenshot as an example:
An end user without developer tools or other debugging aids might just sit and wait for the AI to act, and get frustrated when it says it will do something but never does.
This is reproducible using OpenAI gpt-4.1. It is not reproducible using Anthropic models.
There were two direct causes I could identify. First, the tool description for the experience_builder_component_agent is very unclear. Tool descriptions are generally more important than system prompts for picking the right tool, and overly large prompts can lead to context overflow, where the model stops following instructions.

The other issue is that the system prompt needs to state more explicitly that the agent must always take action (use a tool) whenever it claims that it is doing, or has done, something.
The fix is to update both the system prompt and the tool description.
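Beyond prompt changes, the orchestrator could also detect this failure mode at runtime. A minimal sketch, assuming we have access to the reply text and the list of tool calls (the phrase list and function name are illustrative, not part of the module):

```python
import re

# Illustrative phrases an LLM uses when it claims it is about to act.
# In practice this list would need tuning per model and language.
ACTION_CLAIMS = re.compile(
    r"\b(i will|i'll|let me|i am going to|i'm going to|processing)\b",
    re.IGNORECASE,
)


def is_actionable_hallucination(message_text, tool_calls):
    """Return True when the reply promises an action but made no tool call.

    The orchestrator could then retry the request (or force tool choice)
    instead of leaving the end user waiting on a promise that never runs.
    """
    return bool(ACTION_CLAIMS.search(message_text)) and not tool_calls
```

A caller would check each assistant turn, e.g. `is_actionable_hallucination("I will now create the component.", [])` flags the stuck case, while a turn that actually carries a tool call passes through untouched.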
- Status: Active
- Version: 1.0
- Component: AI