Support Fibers for cooperative multitasking during LLM I/O waiting

Created on 25 July 2025

Problem/Motivation

LLM calls are typically slow and present a bottleneck in processing, leaving the main thread idle while waiting on I/O. Fibers provide a mechanism for cooperative multitasking, allowing us to support multiple concurrent LLM calls where appropriate.
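To illustrate the mechanism, here is a minimal, self-contained sketch of PHP's Fiber API: a fiber suspends where it would otherwise block on I/O, and the caller resumes it once data is available.

```php
<?php

// A fiber suspends instead of blocking, handing control back to the
// caller, which resumes it when there is something to consume.
$fiber = new Fiber(function (): string {
  // Pretend we are waiting on a slow LLM response: suspend rather
  // than block, and let the caller resume us with the data.
  $chunk = Fiber::suspend('waiting');
  return 'received: ' . $chunk;
});

// start() runs the fiber up to its first Fiber::suspend() call.
$status = $fiber->start();
var_dump($status); // string(7) "waiting"

// Resume with a value; the fiber runs to completion.
$fiber->resume('hello');
var_dump($fiber->getReturn()); // string(15) "received: hello"
```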

Fibers are already supported in the render pipeline (in particular BigPipe), and may be coming to other areas of Drupal core.

Fibers could be implemented for specific use cases, e.g. a Drush command that processes background AI tasks, or something like Automators, where we may have multiple non-chained LLM calls that could be processed concurrently rather than sequentially (see the scheduler sketch below).
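As a sketch of how such a use case could drive several concurrent calls, here is a naive round-robin scheduler, assuming each task suspends while waiting on I/O. The function name and structure are illustrative, not an existing module API.

```php
<?php

// Run several non-chained LLM calls as fibers and interleave them.
// Wall time trends towards the duration of the slowest call rather
// than the sum of all calls.
function runConcurrently(array $tasks): array {
  $fibers = array_map(fn (callable $task) => new Fiber($task), $tasks);
  $results = [];

  // Start every fiber; each runs until its first suspension.
  foreach ($fibers as $fiber) {
    $fiber->start();
  }

  // Keep resuming suspended fibers until all have finished.
  while ($fibers) {
    foreach ($fibers as $i => $fiber) {
      if ($fiber->isTerminated()) {
        $results[$i] = $fiber->getReturn();
        unset($fibers[$i]);
        continue;
      }
      // Each resume lets the fiber consume whatever I/O has arrived
      // and then suspend again.
      $fiber->resume();
    }
  }

  return $results;
}
```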

Proposed resolution

Support Fiber suspension in OpenAiBasedProviderClientBase, allowing LLM calls to cooperatively multitask.

Note that the PHP OpenAI SDK doesn't currently support async API requests, but we can work around that by using a streamed response, which is effectively asynchronous because it consumes a generator backed by asynchronous I/O.
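A minimal sketch of what that could look like, assuming an openai-php style client whose createStreamed() returns an iterable stream. The method name and surrounding class are illustrative, not the final OpenAiBasedProviderClientBase API.

```php
<?php

// Illustrative method inside a provider client class that holds an
// openai-php $this->client instance.
protected function doChatStreamed(array $params): string {
  $stream = $this->client->chat()->createStreamed($params);
  $text = '';

  foreach ($stream as $response) {
    $text .= $response->choices[0]->delta->content ?? '';

    // If we are running inside a fiber, yield control between chunks
    // so other pending LLM calls can consume their own I/O. Outside
    // a fiber this is a no-op and the call degrades to blocking.
    if (Fiber::getCurrent() !== NULL) {
      Fiber::suspend();
    }
  }

  return $text;
}
```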

We should also include some test coverage for it!
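One possible shape for that coverage, assuming a mocked streamed response; the test and helper names here are hypothetical.

```php
<?php

// Illustrative PHPUnit test method: with the client's stream mocked
// to yield three chunks, a fiber-wrapped call should suspend once
// per chunk before terminating.
public function testChatSuspendsPerChunk(): void {
  $fiber = new Fiber(fn () => $this->client->chat(['prompt' => 'Hi']));

  $suspensions = 0;
  $fiber->start();
  while (!$fiber->isTerminated()) {
    $suspensions++;
    $fiber->resume();
  }

  // Three mocked chunks should produce three suspensions.
  $this->assertSame(3, $suspensions);
}
```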

Impact

See the attached video for a PoC implemented directly in the OpenAI module (as "Remove OpenAI SDK dependency and extend it from Drupal AI core module" isn't merged yet). The highlight: four AI requests that total 20s when executed sequentially instead ran in 10s. The requests are deliberately of different durations; 10s is the duration of the longest request.

📌 Task

Status: Active
Version: 1.2
Component: ...to be triaged
Created by: 🇬🇧 United Kingdom andrewbelcher


