- Issue created by @marcus_johansson
- Status changed to Fixed
- Automatically closed - issue fixed for 2 weeks with no activity. (3:36pm, 18 June 2024)
The current logging function only logs the configuration and prompts, and optionally the responses.
There is metadata and there are other parameters that could be important to log and would need to be stored in their own system. These are variables like tokens used, response speed, response body size, etc., which could be interesting for statistics and cost approximations.
This does not necessarily need to live inside the core AI module and would be doable in an external module. However, the current implementation will not work for this, because the wrapper function that receives the request only gets normalized values when this is set. Instead, we could opt for both values being set in the actual implementation, with the wrapper taking care of it. This causes issues with large responses like images.
Ideally, the metadata would be normalized and enforced via an interface, but that is well outside the current scope, so for now the suggestion would be the following.
The proposed resolution might make things a little more complex for people writing providers, but that matters less; consumers and developers using the APIs should have an easy time.
- Create an interface for the response of the generateResponse function.
- Create one implementation per type of operation.
- Answer with an object of response and metadata.
- The response is the raw response or the normalized response.
- The metadata is an array with key-values of things that may be important for the logs, like usage_prompt_tokens, usage_completion_tokens, etc.
Status: Fixed · Version: 1.0 · Component: Code