Create a Prompt Logger

Created on 14 June 2024

Problem/Motivation

Currently, many different modules make AI calls using all kinds of clients; the idea is that anything going through "normal" calls will use the AI abstraction layer.

Being able to log certain calls, or even all of them, would make it easier to understand the actual prompts that modules like AI Interpolator or Augmentor use in the background, with all the context assembled from Tokens or similar. Together with the Explorer modules, it should even be possible to move a logged call into one of the Explorer modules with the push of a link. That way you could iterate on these prompts with the configuration exactly as it was.

Proposed resolution

- Create a module that uses the PostGenerateResponseEvent from the AI module to log requests (see the sketch after this list).
- Make it possible to log responses as well.
- Make it possible to log configurations.
- Make it possible to log the type of operation (Chat, TextToImage, etc.).
- Make it possible to log the provider.
- Make it possible to log the model.
- Make it possible to filter logging by operation type.
- Make it possible to filter logging by request tags.
- If the AI API Explorer module is enabled and dblog is turned on, make it possible to link directly to the appropriate explorer with the values set as in the original prompt/generation.
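
For orientation, a minimal sketch of what such a subscriber could look like. The module namespace, the event-name constant, and the accessor methods on the event (getOperationType(), getProviderId(), getModelId(), getConfiguration()) are assumptions for illustration and should be checked against the AI module's source:

```php
<?php

namespace Drupal\ai_prompt_logger\EventSubscriber;

use Drupal\ai\Event\PostGenerateResponseEvent;
use Psr\Log\LoggerInterface;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;

/**
 * Logs AI prompts and responses after a generation call completes.
 */
class PromptLogSubscriber implements EventSubscriberInterface {

  public function __construct(protected LoggerInterface $logger) {}

  public static function getSubscribedEvents(): array {
    // The EVENT_NAME constant is an assumption; the module may instead
    // dispatch the event by its class name.
    return [PostGenerateResponseEvent::EVENT_NAME => 'onPostGenerate'];
  }

  public function onPostGenerate(PostGenerateResponseEvent $event): void {
    // Accessor names mirror the resolution list above and are assumptions.
    $this->logger->info('AI @operation call via @provider (@model), config: @config', [
      '@operation' => $event->getOperationType(),
      '@provider' => $event->getProviderId(),
      '@model' => $event->getModelId(),
      '@config' => json_encode($event->getConfiguration()),
    ]);
  }

}
```

Filtering by operation type or request tags would then just be a matter of checking the event's values against the logger's configuration before writing the entry; the subscriber itself is registered as a tagged event_subscriber service in the module's services.yml.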

📌 Task
Status

Active

Version

1.0

Component

Code

Created by

🇩🇪Germany marcus_johansson


Comments & Activities

  • Issue created by @marcus_johansson
  • Status changed to Needs review 6 months ago
  • 🇩🇪Germany marcus_johansson

    Can be tested in DEV.

  • 🇩🇰Denmark ressa Copenhagen

    Thanks for adding this feature, it will be really useful for understanding what goes on behind the scenes, like which prompt and parameters are being sent (role, max. tokens, and so on).

    This feature will probably help answer questions like the one I posed recently:

    Bonus question: Does anyone have any experience with getting fairly fast and short answers (max. 500 characters) from HuggingFace, but longer processing times, and thereby longer answers, after upgrading to a paying account?

    HuggingFace answers after only 5 seconds with a short reply, whereas OpenAI, which I used previously, took up to 30 seconds, returning very elaborate replies of up to 4000 characters ...

    From #3454202-4: Enable token support for AI Interpolator Rule Huggingface Text Generation.

  • 🇬🇧United Kingdom yautja_cetanu

    Probably should log:

    - Response time
    - Tokens
    - Moderation response
    (Maybe tokens per minute)

    As you'll want to run tests and then model how long/expensive something will be when it scales up (see the sketch below).
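
    A response time could be derived by correlating the AI module's pre- and post-generation events. A minimal sketch, assuming a PreGenerateResponseEvent exists and that both events expose a shared correlation id (getRequestThreadId() here is a hypothetical accessor):

```php
<?php

namespace Drupal\ai_prompt_logger\EventSubscriber;

use Drupal\ai\Event\PostGenerateResponseEvent;
use Drupal\ai\Event\PreGenerateResponseEvent;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;

/**
 * Sketch: derive a response time by pairing pre/post generation events.
 */
class ResponseTimeSubscriber implements EventSubscriberInterface {

  /**
   * Start times keyed by a per-request correlation id.
   */
  protected array $startTimes = [];

  public static function getSubscribedEvents(): array {
    // Dispatching by class name is an assumption; the module may use
    // EVENT_NAME constants instead.
    return [
      PreGenerateResponseEvent::class => 'onPreGenerate',
      PostGenerateResponseEvent::class => 'onPostGenerate',
    ];
  }

  public function onPreGenerate(PreGenerateResponseEvent $event): void {
    // getRequestThreadId() is hypothetical; any id shared by both
    // events would do.
    $this->startTimes[$event->getRequestThreadId()] = microtime(TRUE);
  }

  public function onPostGenerate(PostGenerateResponseEvent $event): void {
    $id = $event->getRequestThreadId();
    if (isset($this->startTimes[$id])) {
      $elapsedMs = (microtime(TRUE) - $this->startTimes[$id]) * 1000;
      unset($this->startTimes[$id]);
      // Store $elapsedMs (and, if available, token counts) with the
      // logged prompt/response so throughput can be modelled later.
    }
  }

}
```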

  • 🇩🇪Germany marcus_johansson

    Highlighting this comment as a reminder to still fix response time.

  • Status changed to Needs work 27 days ago
  • 🇬🇧United Kingdom MrDaleSmith

    Work still pending
