Also noting: we are currently using the AI normalizer in a few places without the Tool API dependency in place, and I don't intend to add that dependency. That makes it extra important to solve this independently of any one tool caller.
Fixed. The solution I landed on varies slightly from the original post. Will get that updated shortly.
michaellander → changed the visibility of the branch 3545828-introduce-dynamic-tools to hidden.
Added an initial set of tools, though some of them will depend on 🌱 Introduce Dynamic Tools. We can still start building them, as it won't cause much backtracking, just additional refinement.
If these go quickly or we get more help, we can definitely consider doing even more from:
https://docs.google.com/spreadsheets/d/18knLUFa2uUll_nOe4yGFPjDBQmDvqJtM...
michaellander → created an issue. See original summary → .
michaellander → created an issue. See original summary → .
I've made some changes to the original issue. The main thing is that the approach won't really change how tool calls happen now; it just gives callers flexibility to take advantage of the added metadata when they want to. If you were to use tool calling exactly as it is now, it really just provides better validation of inputs.
Maybe we should try to see how often the JsonDeserializer is even being used. Originally I added it because I assumed most of the incorrect data coming through would still be JSON encoded, or double encoded, but maybe that's not the case. Ideally we wouldn't need the converter at all.
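For clarity, "double encoded" here means a JSON payload that was itself JSON-encoded again into a string. A minimal sketch of the kind of unwrapping a converter like that has to do (the function name is illustrative, not the module's actual code):

```php
<?php

// Illustrative only: unwrap values that arrive JSON-encoded or double encoded
// instead of as native arrays/scalars.
function example_maybe_json_decode(mixed $value): mixed {
  // Keep decoding as long as the value is a string containing valid JSON.
  while (is_string($value)) {
    $decoded = json_decode($value, TRUE);
    if (json_last_error() !== JSON_ERROR_NONE) {
      break;
    }
    $value = $decoded;
  }
  return $value;
}

// Both calls end up with ['title' => 'Example']; the second input is the
// double-encoded case.
$single = example_maybe_json_decode('{"title":"Example"}');
$double = example_maybe_json_decode('"{\"title\":\"Example\"}"');
```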
I've pushed up a commit that automatically handles 'refining' definitions as input values are set. This means validation should occur against a more well-defined definition than the original generic one.

This change is currently only reflected in the \Drupal\tool_content\Plugin\tool\Tool\FieldSetValue tool. Basically the tool accepts three values, 'entity', 'field_name' and 'value', with 'value' being typed as any. After values are set for 'entity' and 'field_name', the 'value' property then becomes a map, with a definition that matches the actual field definition (multiple, required, property definitions, etc.).
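As a rough sketch of the idea (not the actual FieldSetValue code; the function name and returned array shape are placeholders, though the core field API calls are real), the refinement could look something like this:

```php
<?php

use Drupal\Core\Entity\FieldableEntityInterface;

/**
 * Illustrative only: refine the generic 'value' definition once the
 * 'entity' and 'field_name' inputs are known.
 */
function example_refine_value_definition(FieldableEntityInterface $entity, string $field_name): array {
  $field_definition = $entity->getFieldDefinition($field_name);
  if ($field_definition === NULL) {
    // Field doesn't exist on this entity; keep the generic 'any' typing.
    // A constraint could also flag this as a violation (see below).
    return ['type' => 'any'];
  }

  // Describe each property of the field item (e.g. 'value', 'format').
  $properties = [];
  foreach ($field_definition->getFieldStorageDefinition()->getPropertyDefinitions() as $name => $property) {
    $properties[$name] = [
      'type' => $property->getDataType(),
      'required' => $property->isRequired(),
    ];
  }

  return [
    'type' => 'map',
    'required' => $field_definition->isRequired(),
    'multiple' => $field_definition->getFieldStorageDefinition()->isMultiple(),
    'properties' => $properties,
  ];
}
```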
We could additionally add a constraint to confirm the field actually exists on the entity, but that introduces the next challenge.
How do we best communicate, prior to execution, that a tool (and specific properties) is dynamic?
Using the same field_set_value tool as an example, we could technically show 'entity' as the only starting input definition. After an entity is added, we would refine the tool to append the 'field_name' input definition, with all of the available fields as 'options'. Then, after a 'field_name' is selected, we would append the 'value' definition. This would make the tool truly dynamic, but it means multiple tool calls from the AI just to fully understand the tool, and generating a form for the tool (in the case of ECA) would be impossible when using tokens.

We could alternatively present all top-level inputs from the start and only allow existing inputs to be 'refined'. This helps with the form challenge, since we always have some sort of form element to display, which can be refined as additional values are provided. It also helps the AI by making clear all of the inputs a tool expects, and may reduce the total number of calls required. It does, however, leave definitions in a somewhat ambiguous state, where it's not clear to a form or the AI whether an input definition is complete or still waiting to be refined.
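To make the two options concrete, here's a rough sketch of the starting input definitions each approach would expose (the array shapes are placeholders, not the Tool API's real schema):

```php
<?php

// Option A: truly dynamic. Only 'entity' is exposed at first; 'field_name'
// and 'value' are appended through later refinements.
$option_a_initial_inputs = [
  'entity' => ['type' => 'entity', 'required' => TRUE],
];

// Option B: all top-level inputs exposed up front; only their definitions
// get refined once values are provided.
$option_b_initial_inputs = [
  'entity' => ['type' => 'entity', 'required' => TRUE],
  'field_name' => ['type' => 'string', 'required' => TRUE],
  // Starts as 'any'; refined into a map matching the field definition once
  // 'entity' and 'field_name' are known.
  'value' => ['type' => 'any', 'required' => TRUE],
];
```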
This part is still TBD.
I merged the part for MapContextDefinition; I still need to determine what's necessary for ListContextDefinition, but it's probably similar.
michaellander → created an issue.
michaellander → created an issue.
michaellander → created an issue.
michaellander → created an issue.
michaellander → created an issue.
michaellander → created an issue.
michaellander → created an issue.
michaellander → created an issue.
michaellander → created an issue.
michaellander → created an issue.
I'm going to leave that failing test until I can get another set of eyes to confirm this needs to switch.
We need to test this with the json_as_string and yaml_as_string data types... hmmmm...
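A rough sketch of what such a check could look like via core's typed data manager (the data type IDs come from the comment above; the sample values and expectations are placeholders):

```php
<?php

use Drupal\Core\TypedData\DataDefinition;

// Illustrative only: exercise validation for the two data types mentioned
// above. Real tests would also cover invalid and double-encoded payloads.
$typed_data_manager = \Drupal::typedDataManager();

$samples = [
  'json_as_string' => '{"title": "Example"}',
  'yaml_as_string' => "title: Example\n",
];

foreach ($samples as $type => $value) {
  $data = $typed_data_manager->create(DataDefinition::create($type), $value);
  // Well-formed input should produce no violations.
  $violations = $data->validate();
}
```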
michaellander → created an issue.
michaellander → created an issue.
michaellander → created an issue.
My understanding is that artifacts are generally something the AI creates. In our case we also want pointers to things that may already exist and that we are modifying. If we ask the AI to create a node, to me that's an artifact; if we ask it to load a node, is it still an artifact, even if in both cases we intend to modify and save it? I just want to make sure we're using the correct terminology, and I'd love to find some precedent somewhere.
michaellander → created an issue.
jurgenhaas → credited michaellander → .
jurgenhaas → credited michaellander → .
jurgenhaas → credited michaellander → .