Error when streaming a chat response

Created on 14 February 2025

Problem/Motivation

I'm using an AI Assistant `/admin/config/ai/ai-assistant` with GPT-4o, which then utilises the `AiAssistantApiRunner` service to render the streamed responses, and everything was working great. Then, all of a sudden, I started getting this error:

Error invoking model response: Connection refused for URI https://api.openai.com/v1/chat/completions

With the same setup, if I switch the response to non-streaming, it works as expected and returns a normal non-streamed response. I've tested a similar setup using the DeepChat block, and it has the same error with streaming; non-streamed works fine.

What could be happening here?
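For context, the only wire-level difference between the two cases above should be the `stream` flag in the request body sent to the OpenAI Chat Completions endpoint. A minimal sketch of the two payloads (the field names follow OpenAI's documented API; the message content here is just a placeholder) can help when reproducing the problem outside Drupal, e.g. with `curl`, to rule out the OpenAI side:

```python
import json

# Shared request body for https://api.openai.com/v1/chat/completions.
base = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}],
}

# Non-streamed request: works in the reported setup.
non_streamed = dict(base)

# Streamed request: identical except for the "stream" flag,
# yet fails with "Connection refused" in the reported setup.
streamed = dict(base, stream=True)

print(json.dumps(streamed, indent=2))
```

If a direct request with `"stream": true` succeeds from the same server, the problem is more likely in the local HTTP client configuration (proxy, firewall, or timeout settings that treat long-lived streaming connections differently) than in the API itself.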

🐛 Bug report
Status

Active

Version

1.0

Component

AI Assistants API

Created by

🇩🇪Germany dotist


Comments & Activities
