Create simple way to iterate for rule

Created on 12 June 2024
Updated 26 August 2024

Problem/Motivation

When trying to debug or modify a prompt, the workflow is complex and I think it could be simplified. (Note: this experience is based on ai_interpolator.)

Right now, to modify a rule you need to:
• Edit field setting prompt
• Save field setting
• Edit content with the field
• Resave content

Ideally there would be an interface where you could give the input, have a prompt, set which rule / field it should follow, then a button to trigger the call.

This would significantly improve the developer experience when trying to finalize or modify a prompt.

Steps to reproduce

Proposed resolution

Expose backend configuration to the frontend, so you could tweak settings as you execute interpolation itself. All within the same form, or in popup modals where needed (assuming I am not missing important details).

Remaining tasks

User interface changes

API changes

Data model changes

📌 Task
Status

Active

Version

1.0

Component

AI Automators

Created by

🇺🇸 United States nicxvan

Comments & Activities

  • Issue created by @nicxvan
  • 🇱🇹 Lithuania mindaugasd

    Ideally there would be an interface where you could give the input, have a prompt, set which rule / field it should follow, then a button to trigger the call.

    I am not sure about this.

    This is developer side:

    • Edit field setting prompt
    • Save field setting

    This is user side:

    • Edit content with the field
    • Resave content

    (User side can be optimized by the developer creating UX for a particular app and use-case)

    Maybe you could expand more (for example, on exactly how you are using this), or better, Marcus will probably comment since he has the most experience with the interpolator.

  • 🇱🇹 Lithuania mindaugasd

    One way you could do it:

    • Create a dedicated text field "field_prompt" and have it on the user side, so you could constantly edit it
    • And in the backend: configure to interpolate this field with another field as prompt.
  • 🇺🇸 United States nicxvan

    The thing is, iterating over the prompts feels like a longer feedback loop than it needs to be. For example, in the OpenAI playground I send a message, then I can very quickly iterate on the prompt.

    As an example, when I was testing the right way to get a specific JSON response back, I was able to submit a new prompt a few seconds after seeing I got the wrong result: I just add detail to the prompt and run it again. I can go through several iterations very quickly.

    In ai_interpolator I have to have one window open with the field settings and another tab open with the content.

    I then have to:
    • Scroll down and open the advanced settings
    • Change the prompt
    • Save the field (wait for load)
    • Go to the content tab and set up a rule trigger
    • Wait for load
    • Check the result and realize I need a new prompt
    • Go back to the field settings page
    • Find the field with the rule
    • Click edit (wait for load)
    • Scroll down to advanced settings and open it
    • Find the prompt and change it again

    This is far more steps than is convenient for testing.
    As mentioned in the issue summary, having an input, a prompt, a field type (for the ruleset), and a response field would allow for quick iteration.

    I think having these settings on the fields still makes sense, this is just for iterating.
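The tight feedback loop being asked for could be sketched roughly like this. This is a hypothetical illustration only; `run_rule` and everything else here are made-up names standing in for whatever would execute an interpolator rule, not existing ai_interpolator code:

```python
def run_rule(prompt: str, input_text: str) -> str:
    """Hypothetical stand-in for executing one interpolator rule:
    in Drupal this would run the configured rule against the chosen
    entity/field and return the raw field output."""
    return f"output for prompt={prompt!r} on input={input_text!r}"

# Fixed context, chosen once -- no content re-save between attempts.
input_text = "Some source field content"

prompt = "Summarise the input."
for _ in range(3):
    result = run_rule(prompt, input_text)
    print(result)
    # Inspect the result, tweak only the prompt, and run again.
    prompt += " Return only valid JSON."
```

The point is that only the prompt changes between iterations; the input, rule, and response field stay fixed, so each attempt is seconds rather than a multi-tab save cycle.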

  • 🇩🇪 Germany marcus_johansson

    Hi Nic, I can't share much info on this yet; I will follow up with a ticket when we release more info on the AI project. But one part of it will be evaluation, where you can basically rerun the same workflow using X different prompts Y amount of times and compare results. This ticket was part of the preparation for that: https://www.drupal.org/project/ai_interpolator/issues/3446240 ✨ Add events for important actions Active

    However, this might not be exactly what you are looking for; it sounds more like you want a "Rule Explorer" or something where you can test a single rule with some specific input? It is a little more complex than what OpenAI does, because there are so many moving parts, and more will come when you run a rule.

    There are three components to this:
    • The rule settings - these are more than just the prompt; even with LLM prompting, something like temperature might be as important as changing the prompt. This would be easy to change dynamically thanks to the events system, but the system itself doesn't necessarily know what kind of context will be ingested until runtime, which means it's hard to show test fields for this.
    • The inputs - this is the hard problem; these are extremely flexible for certain types of inputs. With Tokens you can have something like AET, where you load input from completely different entities, and once the input plugin system is working, anything goes. This means these fields have to be filled out or somehow spoofed.
    • The actual output - this might sound easy at first, but think of, for instance, the Entity Reference → rule. This is in theory a rule where you can trickle down and generate all the ideation content for a whole website from a single prompt. No one wants to wait that long when exploring. The only way I can see this working is that you get the raw field output, but then you get things like a base64 string for images or audio, for instance.

    The only way this could work with that flexibility is if you have an entity created with all the context fields you need, use that as the context for the Rule Explorer, and output the raw fields; if something happens to be a file, it tries to understand that and showcase it. Then it could be possible. I'm not sure that is the easiest UX, though.

    The other idea comes back to widgets - if we create widgets 🐛 Field widgets? Active for generation, you can also have a "verbose" mode or something next to every widget that would open a modal with the configuration, and you can change it before running it.

    Any ideas or better suggestions?

  • 🇱🇹 Lithuania mindaugasd

    A "prompt widget" could expose more powerful prompt configuration to the user side, assuming that would be a common use case, to avoid "so many moving parts".
    There should be only a few concepts a person has to understand. "Widgets" could be one of them if we manage to group the most common functionality under it.
    I think you should create an "AI tool" for yourself for testing: expose the widgets you need to the user side, and use it in a simple way.
    And this way, anyone could create any tool or interface they need. These user interfaces could probably be shared and installed between developers using Drupal recipes.

  • 🇺🇸 United States nicxvan

    @mindaugasd you're missing my request: this isn't about the user; this is a tool for the developer to make iterating on the prompt simpler.

    @Marcus_Johansson I think you're right that a more complex widget like entity reference will be harder, but in my specific case, which may not be typical: I'm giving text into the prompt as context and trying to get JSON back. When I work in the playground that iterative feedback loop is quick.

    Then when I try to move the prompt to Drupal, sometimes something is different because of the layers of the rules. I then have to iterate on the prompt again and it's fairly painful.

    Even just a way to have the prompt, identify the entity to use as a source, and a button to trigger it would help solve the developer experience issue.

    The root issue is that you have to save entities and field widgets, open accordions, and wait for page loads to iterate on a prompt.

  • 🇱🇹 Lithuania mindaugasd

    @nicxvan sorry to hear that.

    The developer is a user when using (prompting) AI. I think it is an important (and doable) separation, architecturally.

    Since you mentioned the OpenAI playground, my point is that you could build such a tool with widgets. So a similar separate tool would exist in Drupal as in OpenAI, built using Drupal's flexible underlying building blocks.

    I'm giving text into the prompt as context and trying to get JSON back. When I work in the playground that iterative feedback loop is quick.

    Since you are solving a specific problem, could you solve it in the "playground", which as you say works quickly, and then come back to Drupal for the rest? By extension, we could replicate this "playground" in Drupal as it is in OpenAI with this architecture, but it would be outside the interpolator configuration side of things.

    Another improvement to the idea: the developer could create a separate "form display" called "for developer", where they could expose the widgets and controls needed to test the interpolation on the frontend side, this way keeping it separated from the general workflow the developer might be creating for the real user.

  • 🇺🇸 United States nicxvan

    I am specifically talking about the experience when integrating with the rest of the rules.

    I don't need the playground in Drupal; the playground already exists for OpenAI.

    When integrating a rule into Drupal you still need to iterate sometimes, and that is the experience I'm speaking about.

  • 🇱🇹 Lithuania mindaugasd

    layers of the rules

    integrating with the rest of the rules

    I might not be getting it, because I don't have enough experience with interpolator.

  • 🇱🇹 Lithuania mindaugasd

    But maybe this could work architecturally to solve your problem:

    developer could create a separate "form display" called "for developer"

  • 🇱🇹 Lithuania mindaugasd

    That form could even be sorted by weight, for the developer to follow and test the whole execution workflow.

  • 🇺🇸 United States nicxvan

    As long as it has a way to tweak the prompt and execute the workflow without multiple page loads and separate tabs I would be very happy.

    This would make the dev experience much better.

  • 🇱🇹 Lithuania mindaugasd

    As long as it has a way to tweak the prompt and execute the workflow without multiple page loads

    Yes, it should expose backend configuration to the frontend, so you could tweak settings as you execute interpolation itself. All within the same form, or in popup modals where needed (assuming I am not missing important details).

  • 🇺🇸 United States nicxvan
  • 🇩🇪 Germany marcus_johansson

    @nicxvan - check if https://www.drupal.org/project/ai/issues/3454705 πŸ“Œ Create a Prompt Logger Active would solve it for this issue and in general.

  • 🇺🇸 United States nicxvan

    That's part of it, but this issue is more specifically around the developer experience of needing to repeatedly edit the field to change the prompt and then edit content to test it.

  • 🇩🇪 Germany marcus_johansson

    I think the way to fix this before widgets exist is to create a specific developer module where you can open the field configuration for AI Interpolator under any field getting interpolated, and that configuration gets saved before the content is modified. That way you have one form and one submit button, and the chains can still run as they should.

    I have to investigate, but I think such a module should be easy to implement.
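As a rough sketch of what such a one-form flow might do on submit. Every name here is hypothetical, with in-memory stand-ins for Drupal's config system and the chain runner; this is only an illustration of "save config first, then run the chain, in one round-trip":

```python
# Toy in-memory config store; the real thing would be Drupal field config.
_config: dict = {}

def save_rule_config(field: str, prompt: str) -> None:
    """Persist the (possibly edited) rule settings for a field."""
    _config[field] = prompt

def run_chain(field: str, input_text: str) -> str:
    """Placeholder for executing the rule chain with the saved prompt."""
    return f"ran {field} with prompt {_config[field]!r} on {input_text!r}"

def on_submit(values: dict) -> str:
    """One form, one button: save the rule config first, then run the
    chain on the same request, so the whole edit-and-test cycle is a
    single round-trip instead of two tabs and several page loads."""
    save_rule_config(values["field"], values["prompt"])
    return run_chain(values["field"], values["input"])

print(on_submit({"field": "summary", "prompt": "Summarise.", "input": "text"}))
```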

  • 🇬🇧 United Kingdom yautja_cetanu

    Right, you could imagine some kind of debug or prompt module that looks at a chain within AI Interpolator and takes a single snapshot of the specific prompt that was sent to OpenAI. This snapshot is opened on a new page where all tokens, context, etc. are set in stone, but the specific prompt can be iterated on to see how the LLM will respond. Once you have found the prompt you like, you can bring it back into AI Interpolator.

    As it's a developer tool, it's fine if the developer sees the response as JSON (so it doesn't actually end up doing anything).

    I definitely think this needs to be resolved, as iterating over prompts is so important. Similarly, if we do Agents or anything, we need to be able to "undo" an LLM easily.
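The snapshot idea above could be modelled roughly like this. A hypothetical sketch only, with made-up field names; the point is that everything resolved at capture time is immutable and only the prompt varies between iterations:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class PromptSnapshot:
    """Everything the rule resolved at capture time is frozen;
    only the prompt changes between iterations."""
    resolved_tokens: tuple   # token values, set in stone at capture time
    context: str             # the exact context that was sent to the LLM
    prompt: str              # the one editable part

snap = PromptSnapshot(
    resolved_tokens=(("node:title", "Hello"),),
    context="Body text captured from the chain run",
    prompt="Summarise the body.",
)

# Each iteration is a new snapshot sharing the same frozen context.
next_try = replace(snap, prompt="Summarise the body as valid JSON.")
```

A frozen dataclass makes accidental mutation of the captured tokens or context a runtime error, which matches the "saved in stone" requirement.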

  • 🇩🇪 Germany marcus_johansson

    This is an old ticket and some of the things are done already, but more will come with reusable prompts.
