- Issue created by @narendraR
- 🇮🇳India narendraR Jaipur, India
Moving it to Needs work, as the message loading icon disappears in case of looping.
- 🇮🇳India narendraR Jaipur, India
narendrar → changed the visibility of the branch 3531000-integrate-incremental-agent to hidden.
- Merge request !1282#3531000: Integrate incremental agent loop execution in XB AI → (Open) created by narendraR
- First commit to issue fork.
- 🇮🇳India kunal.sachdev
kunal.sachdev → made their first commit to this issue’s fork.
- 🇮🇳India kunal.sachdev
kunal.sachdev → changed the visibility of the branch 1.x to hidden.
- 🇩🇪Germany marcus_johansson
Note that this will most likely be possible to stream in 1.2.0 as well; see ✨ Allow tool calling in streamed chat (Active).
- 🇺🇸United States Kristen Pol Santa Cruz, CA, USA
Switching to the correct tag
- Merge request !1395#3531000: Integrate incremental agent loop execution in XB AI → (Open) created by kunal.sachdev
- 🇮🇳India kunal.sachdev
Adding the video to show how it's working currently →
Currently, only messages from executing agents are displayed. I think we should also find a way to display messages from the tools that are executed.
- 🇩🇪Germany marcus_johansson
@kunal.sachdev - the issue is twofold here. OpenAI usually does not provide text and a tool usage in the same response unless very specifically prompted to do so, and even then it happens in maybe 25% of cases. Anthropic, on the other hand, you usually have to tell not to include it.
What we did in the AI Assistants API is that, if you do not get a text message back, we write "Calling X tool" as the text message, so there is some feedback.
In this case you could even make assumptions about what it's trying to do and write something more intuitive.
We have thought about, in the AI Assistants API, asking with a simple prompt: "Look at the following request and the following response and explain in one sentence what it will do." Since token generation is usually what takes the time, the extra call adds little overhead anyway. But that is just an idea - the approach above should be a good start.
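For illustration, a minimal sketch of that fallback, assuming a hypothetical response object; getText() and getToolCalls() are illustrative names, not the AI module's actual API:

```php
<?php

/**
 * Returns user-facing text for a chat response, falling back to
 * "Calling X tool" style feedback when the LLM sent only tool calls.
 *
 * $response is a hypothetical object; getText() and getToolCalls()
 * are illustrative names, not the AI module's real API.
 */
function feedback_text(object $response): string {
  $text = trim((string) ($response->getText() ?? ''));
  if ($text !== '') {
    return $text;
  }
  // No text came back, so synthesize "Calling X tool" style feedback.
  $tools = array_map(
    static fn (object $call): string => $call->getName(),
    $response->getToolCalls()
  );
  return $tools !== []
    ? 'Calling ' . implode(', ', $tools) . ' tool…'
    : 'Working…';
}
```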
- 🇫🇮Finland lauriii Finland
Great to see some progress on this! Some feedback:
- We should include implementing the designs for this as part of this issue; it currently looks too unpolished. I've attached a video of the designs.
- We should never show the user "Calling X tool" or "Calling X agent". We need to always convert these to more user-friendly messages.
- 🇬🇧United Kingdom yautja_cetanu
There are a couple of things we need to make this a nice "plan of tools" status:
- One issue is the pure UI, as discussed (whether or not we use the word "tool", etc.).
- Another issue is: can we make it so that the AI creates a plan (a real plan, not a fake plan in the prompt)? (We can look at minikanban.)
- Another issue is: what happens if the plan breaks halfway through (e.g. it tries to create something that already exists)? Do we update the plan?
These might be three separate issues.
Next steps:
- It would be good to come up with a prompt that produces a plan we think is good, to see if we can make the AI write up the plan and then execute it.
- One example: "Go through all 100 pieces of content and check if any of them states this incorrect fact."
- 🇺🇸United States tim.plunkett Philadelphia
See also #3533079-4: Introduce AI Agents and tools to create entire page templates using available component entities → for a potential incremental improvement from #20 that isn't quite as nice as #21
- 🇮🇳India kunal.sachdev
I worked on the feature allowing the AI to generate an execution plan, which is then displayed on the screen. The next step is to figure out how to check off each item in the execution plan as the corresponding tool completes its task. The main challenge is that tool results are only provided once the entire agent called from the orchestrator has finished.
For example, for a page builder task, the AI creates a plan something like:
- Adding components to the page:
  - component 1
  - component 2
  - component 3
- Updating the title of the page
- Updating the metadata of the page
However, all tool results become available only when the page builder agent completes, so we miss out on intermediate progress updates for each item as it finishes.
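As a toy sketch of the gap (all names hypothetical, nothing like this exists yet): per-item check-off would need the agent loop to yield each tool result as it happens, instead of returning one batch at the end:

```php
<?php

/**
 * Toy sketch, not existing XB AI code: $runAgent yields one tool result
 * at a time instead of returning a single batch when the whole agent
 * finishes, so the matching plan item can be checked off immediately.
 */
function run_with_progress(callable $runAgent, array &$plan): void {
  foreach ($runAgent() as $toolId => $result) {
    foreach ($plan as &$item) {
      if ($item['tool_id'] === $toolId) {
        // Mark the item done as soon as its tool completes, rather
        // than when the page builder agent returns everything at once.
        $item['status'] = 'done';
      }
    }
    unset($item);
  }
}
```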
- Status changed to Needs work
11:40am 19 August 2025
- 🇬🇧United Kingdom yautja_cetanu
I've uploaded the above mp4 as an unlisted YouTube video. It just makes it easier when sharing the designs.
- 🇬🇧United Kingdom yautja_cetanu
Created this related issue: 📌 Allow the Assistant and Chatbot access to the tool calling within the sub-agents behind agent tool calls (Active).
- 🇬🇧United Kingdom yautja_cetanu
From the XB AI Meeting we discussed how to make the above mp4 happen:
There are three architectural options for how we get the status information:
- Looped HTTP calls (will be hard across multiple layers)
- Polling
- A streamed-response-style approach (see the sketch after this list)
(Anand has a fourth option he will look into.)
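A rough sketch of the streamed-response option using plain server-sent events; the generator and the payload shape are assumptions for illustration, not an existing XB AI format:

```php
<?php

// Rough sketch of the streamed-response option: the agent loop yields
// status updates, and each one is forwarded as a server-sent event.
// The payload shape is an assumption, not an existing XB AI format.
function agent_status_updates(): \Generator {
  // Stand-in for the real agent loop emitting progress as it runs.
  yield ['step' => 'add-component-1', 'status' => 'running'];
  yield ['step' => 'add-component-1', 'status' => 'done'];
}

header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

foreach (agent_status_updates() as $update) {
  echo 'data: ' . json_encode($update) . "\n\n";
  flush();
}
```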
Four options for how we get the plan:
- 1. Pure text - We provide a written plan to the end user (like Claude Code). (We already do this; we should move away from it.)
- 2. Tree of summarised tool calls (Ananda's suggestion) - We take the plan from each agent one at a time and update the plan shown to the end user as each sub-agent tells Drupal its plan. (This is an extension of what exists now to all levels of agents; it means the plan will change in real time and grow as the agents do more.)
- 3. True abstraction of a plan created by AI and orchestrated by Drupal (Akhil's idea) - We get the orchestrator to tell us a plan in a structured format that we store in Drupal, keeping a record of the steps. We get the plan in some kind of JSON blob with IDs and statuses (see the sketch after this list), get Drupal to orchestrate the steps, and keep the stored record updated. (The hard part is tying an item on the plan back to what Drupal is actually doing; we would almost need a constant LLM call looking at the logs and updating the plan.)
- 4. True blueprints approach - The entire plan is an actual deterministic list of commands (YAML, JSON) built by AI, but nothing is implemented or run until, at the end, a human clicks a button and it all runs deterministically; it's like writing a drush script on the fly. (We think this will never work, as too many agents depend on the outcome of the previous agent.)
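For option 3, the stored JSON blob could look something like the sketch below; the field names are assumptions for illustration, not a settled format:

```php
<?php

// Sketch of option 3's stored plan blob; field names are assumptions.
$plan = [
  'id' => 'plan-demo',
  'steps' => [
    ['id' => 'step-1', 'title' => 'Add hero component', 'status' => 'done'],
    ['id' => 'step-2', 'title' => 'Update page title', 'status' => 'running'],
    ['id' => 'step-3', 'title' => 'Update page metadata', 'status' => 'pending'],
  ],
];

// Drupal would persist this and update statuses as it orchestrates the
// steps; tying each tool call back to a step id is the hard part.
echo json_encode($plan, JSON_PRETTY_PRINT);
```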
Issues with different options:
2. Tree of summarised tool calls
- This will be useful to do regardless - in the API Explorer, just for prompt engineering if nothing else.
- This will feel weird, as we won't have a "full plan"; the first plan will usually just be an assistant asking an agent to come up with a plan. So we'd show that, and then the list of tasks would grow.
- It might be tough to make the plan make sense to an end user; it might have lots of noise. We have to figure out how to filter the noise out. (We could ask LLMs to provide metadata with the tool call for how to report a title for the task and whether or not an end user would want to see it.)
- It means that if the plan changes, we have a method of making that happen, but we need the UI to make it clear; otherwise it will get confusing if items on the plan keep disappearing.
3. True abstraction of a plan created by AI and orchestrated by Drupal
- We never really know if the plan is an accurate picture of what it's doing, and Drupal might start diverging from the plan for good or bad reasons (just because, or because it had to create a content type that already exists).
- It's going to be difficult to know which specific item on the plan a given thing the AI is doing (a tool call, for example) belongs to.
- The orchestrator already gives us a plan, so it's just a case of capturing it, showing it to the end user as something that can be updated, and keeping it updated in real time.
- Long term, this will be awesome if we can do it right.
Below is an image.
Akhil has a video that I'll upload to YouTube.
-------
- The plan is to move forward in one big branch that we never intend to merge, because it will have all the options in one branch and a picker between them.
- We definitely want to do something with option 2; it needs to happen at least for devs. So we want it in the API Explorer, and therefore in the AI module anyway.
- We have a basic way forward with option 3 as well - Akhil plans to put something together to extract what it already does.