- Issue created by @marcus_johansson
- 🇧🇪Belgium aspilicious
I'm investigating this at the moment.
Did you start coding?
- 🇧🇪Belgium aspilicious
Here is a starting point; it allows us to discuss whether this is the direction you want.
- First commit to issue fork.
- Merge request !557: Applied provided patch for testing purposes. #3517618 (Merged) created by MrDaleSmith
- 🇬🇧United Kingdom MrDaleSmith
Added as a fork to allow tests to run. There are some test failures, so this will need further work.
- 🇧🇪Belgium aspilicious
I learned a lot about contributing 2.0.
The token functions are only available on chat level at this moment.
If it's needed on other output classes, we should probably move these to a trait.
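For discussion purposes, a minimal sketch of what such a trait could look like; the trait and method names are hypothetical, not the module's current API:

```php
<?php

/**
 * Hypothetical trait for sharing token usage getters/setters across
 * output classes. All names are illustrative, not the module's real API.
 */
trait TokenUsageTrait {

  /**
   * Input tokens reported by the provider, or NULL if never set.
   */
  protected ?int $inputTokenUsage = NULL;

  /**
   * Output tokens reported by the provider, or NULL if never set.
   */
  protected ?int $outputTokenUsage = NULL;

  public function setInputTokenUsage(int $tokens): static {
    $this->inputTokenUsage = $tokens;
    return $this;
  }

  public function getInputTokenUsage(): ?int {
    return $this->inputTokenUsage;
  }

  public function setOutputTokenUsage(int $tokens): static {
    $this->outputTokenUsage = $tokens;
    return $this;
  }

  public function getOutputTokenUsage(): ?int {
    return $this->outputTokenUsage;
  }

}
```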
- 🇮🇳India vakulrai
Just to add to the above: can we also think of a helper method to add tracking for retry token usage and retry reasons in AI responses?
My thinking is this:
We are tracking the properties mentioned above, but we do not explicitly track retries, which can silently increase token usage and costs when outputs are malformed or invalid (e.g., bad JSON, failed function/tool calls, hallucinated responses, timeouts). These retries consume additional tokens and can skew both performance and cost reporting if left untracked.
While the total input and output tokens might include retries, they don't tell us:
- How many times a retry occurred
- Why each retry happened
- Which prompt caused it
Can we do this as a feature in AI and take it forward in a separate ticket, if it really would be a good addition? (A rough sketch of the idea follows below.)
Open for suggestions. Thanks!
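Purely to illustrate the idea, the captured data could look something like this; every key name here is an invented assumption, nothing the AI module records today:

```php
<?php

// Hypothetical retry-tracking structure attached to a response.
// All field names are invented for illustration.
$retryUsage = [
  'retry_count' => 2,
  'retries' => [
    [
      'reason' => 'malformed_json',
      'prompt_id' => 'chat-request-42',
      'input_tokens' => 512,
      'output_tokens' => 128,
    ],
    [
      'reason' => 'failed_tool_call',
      'prompt_id' => 'chat-request-42',
      'input_tokens' => 530,
      'output_tokens' => 96,
    ],
  ],
];

// Cost reporting could then separate first-attempt tokens from retry
// overhead instead of lumping everything into the totals.
```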
- 🇧🇪Belgium aspilicious
This merge request, together with https://www.drupal.org/project/ai_provider_openai/issues/3519302 (✨ Abstract token usage support),
allowed me to create this module: https://www.drupal.org/project/ai_usage_limits
- 🇩🇪Germany marcus_johansson
@vakulrai - we already have something in review that keeps track of a parent and child unique id; maybe we should create a retry id or something similar as well, so you can track when multiple requests are redone because of validation errors. This means genuine errors where the model fails to write the response as requested, not an agent loop where the output quality is bad and a validation agent asks it to retry (that falls under the normal parent/child hierarchy).
Edit: I should also link :) https://www.drupal.org/project/ai/issues/3515879 (📌 Add thread id and parent id to AI calls)
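As a rough illustration, assuming invented field names rather than the actual schema from that issue, a retry id could sit next to the thread/parent ids like this:

```php
<?php

// Hypothetical log entry: a request that was redone because the
// previous response failed validation. Field names are assumptions.
$logEntry = [
  'request_id' => 'request-003',
  'thread_id' => 'thread-abc',  // Groups the whole interaction.
  'parent_id' => 'request-001', // Set when an agent spawns a sub-request.
  'retry_of' => 'request-002',  // The request this one redoes after a
                                // validation error; NULL on first attempts.
];
```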
- 🇩🇪Germany marcus_johansson
Thanks @aspilicious - I added one small comment. Could you have a look and fix it? Then we will merge.
- 🇩🇪Germany marcus_johansson
Set the tag to ddd2025, so we can track user contributions that happened during Leuven :)
- 🇩🇪Germany marcus_johansson
Hi @aspilicious, based on your comment I added some comments - it's probably good if we allow null to be returned, so we can tell the difference between 0 being an actual value and the value not being set at all by the provider.
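A small sketch of why that distinction matters for consumers, reusing the hypothetical nullable getter from earlier in the thread:

```php
<?php

// $output is any chat output exposing the nullable getter sketched
// earlier (a hypothetical name, not the module's confirmed API).
function reportUsage(object $output): string {
  $tokens = $output->getOutputTokenUsage();
  if ($tokens === NULL) {
    // The provider never reported usage at all.
    return 'usage unknown';
  }
  if ($tokens === 0) {
    // The provider explicitly reported zero output tokens.
    return 'zero tokens';
  }
  return "$tokens tokens";
}
```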
- 🇧🇪Belgium aspilicious
Tests are failing but I don't think this patch is causing it.
- 🇧🇪Belgium aspilicious
At the moment the token usage can't be found on streamed responses.
Any guidance would be helpful... See:
https://www.drupal.org/project/ai_provider_openai/issues/3519302#comment... (✨ Abstract token usage support)
- 🇩🇪Germany marcus_johansson
The error can be fixed by merging with the 1.1.x-dev branch.
Regarding having it work for streaming, it would first require changing StreamedChatMessage and StreamedChatMessageInterface to take similar methods. They have the same issue: currently they just dump the metadata if it exists there.
The issue is that when you do streaming, the ChatOutput is already returned when the streaming starts. But since it is not finished, it will not be aware of how many tokens it is using, which is why the count keeps growing.
So if you add it there, it would be possible in OpenAI, for instance, to use it with the added methods before it yields: https://git.drupalcode.org/project/ai_provider_openai/-/blob/1.1.x/src/O...
Then whoever consumes that event has to be aware that the event is triggered on every chunk being sent and that only the last chunk matters.
So:
1. Add the methods to StreamedChatMessage and StreamedChatMessageIterator.
2. Implement these methods in, for instance, the OpenAI iterator object before it yields (a sketch follows below).
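A rough sketch of step 2, assuming the hypothetical setters discussed above and a simplified chunk shape; the real OpenAI iterator in the linked file will differ:

```php
<?php

/**
 * Sketch of step 2: attach token usage before each yield. The chunk
 * shape, factory, and setter names are assumptions, not the real API.
 */
function streamWithTokenUsage(iterable $chunks, object $message_factory): \Generator {
  foreach ($chunks as $chunk) {
    // Build the streamed message for this chunk (stand-in factory).
    $message = $message_factory->fromChunk($chunk);
    // Providers typically attach usage data only to the final chunk,
    // so these values stay NULL for every earlier chunk.
    if (isset($chunk->usage)) {
      $message->setInputTokenUsage($chunk->usage->promptTokens);
      $message->setOutputTokenUsage($chunk->usage->completionTokens);
    }
    yield $message;
  }
}
```

A consumer of this stream then has to treat usage on intermediate chunks as absent and only trust the final chunk, as noted above.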
- 🇩🇪Germany marcus_johansson
I created a follow-up issue for streaming here: 📌 Add token usage to streamed chat.
I have merged this with the latest 1.1.x to remove any CI errors and will go ahead and merge this now. Thank you for all your efforts, @aspilicious.
This will go into 1.1.0 of the AI module, and the providers that implement it can target 1.1.0 as well.
@vakulrai - please create an issue for it; issues don't hurt :)
- marcus_johansson committed 3927a0d5 on 1.1.x, authored by mrdalesmith:
Applied provided patch for testing purposes. #3517618
- Automatically closed - issue fixed for 2 weeks with no activity.