- Issue created by @sushichris
- 🇩🇪Germany marcus_johansson
Which provider are you using for it? The Deepseek provider, the Fireworks provider or the Azure provider?
The provider would have to take care of this, unless it's a common architecture.
- 🇺🇸United States sushichris
Ah, I am using Ollama. I was going to create a bug issue for the Ollama provider, but I thought it might be more associated with the AI Assistant sub-module, since the assistant module appears to be responsible for the output format. The only Ollama model that behaves this way is deepseek-r1; phi4, llama3.1, and llama3.2 don't produce the "thinking" output, so they behave as expected.
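
For context: deepseek-r1 wraps its chain-of-thought reasoning in `<think>...</think>` tags before the final answer, which the other models don't emit. A minimal sketch of the kind of post-processing a provider or the assistant module could apply (the function name and sample text are hypothetical, not the module's actual code):

```python
import re

def strip_think_blocks(text: str) -> str:
    """Remove the <think>...</think> reasoning blocks that deepseek-r1
    emits before its final answer, leaving only the visible reply."""
    # DOTALL lets the pattern span newlines inside the reasoning block.
    cleaned = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    return cleaned.strip()

raw = "<think>The user asked about X, so I should...</think>Here is the answer."
print(strip_think_blocks(raw))  # -> "Here is the answer."
```

Note this only works for non-streaming responses; a streaming provider would need to buffer or track tag state across chunks.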