
🇺🇸 United States sushichris

Ah, I am using Ollama. I was going to file a bug issue against the Ollama provider, but I thought the problem might be more closely associated with the AI Assistant submodule, since the assistant module appears to be responsible for the output format. The only Ollama AI model that behaves this way is deepseek-r1; phi4, llama3.1, and llama3.2 don't produce "thinking" output, so they behave as expected.
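For context, deepseek-r1 typically wraps its chain-of-thought in <think>...</think> tags before the final answer, which is what can confuse downstream output formatting. A minimal sketch of a post-processing workaround (the function name and sample text here are my own, not part of the module):

```python
import re

def strip_thinking(text: str) -> str:
    """Remove <think>...</think> reasoning blocks that deepseek-r1
    emits before its answer, leaving only the final response."""
    return re.sub(r"<think>.*?</think>\s*", "", text, flags=re.DOTALL).strip()

raw = "<think>Working through the question step by step...</think>The answer is 42."
print(strip_thinking(raw))  # -> The answer is 42.
```

Something like this in the provider or assistant layer would make deepseek-r1 behave like the non-reasoning models.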
