Account created on 20 October 2009, over 15 years ago

Recent comments

🇬🇧United Kingdom yautja_cetanu

From the sounds of things, that operation type looks like it would be good in the module? An embedding-with-LLM endpoint?

Similarly, the reranking endpoint seems like a good one for the AI module.

Maybe it's worth including anyway though, because would it be that bad if, in the future, endpoints were in their own API modules?

Like, you could have a translation endpoint that could swap between LLMs and ML or something.

Actually, I do agree with this change, even though I don't like it: I'd prefer that we thought this through and had some plan for operation types, but it's sometimes good to do things organically and see what happens. I do wonder if operation types are going to morph into something where there are tons of different specific common functions that you want to swap out (an alt-text generation endpoint, say).

🇬🇧United Kingdom yautja_cetanu

Could you provide any insights into why you wanted a new operation type?

It's possible that the hook alter is good, but the abstraction layer stops being one if every provider just has its own operation type.

🇬🇧United Kingdom yautja_cetanu

Hi Poker10, thanks for all your issues! They are very helpful. There are a number of things you've raised here; I'd separate them out into three issues:

  • How do we let the end-user know the scope of the AI Assistant in Drupal CMS?
  • How do we make sure that the AI Assistant always uses the correct links?
  • How can we use the AI Assistant to help you install, configure and make use of new models?

Regarding the first - Scope
The purpose of the AI Assistant in Drupal CMS is razor-focused on helping you use the AI Agents for Content Types, Fields on Content Types, and Taxonomy. It's also aimed at marketeers and sitebuilders with little Drupal experience, not at developers at all. (We have another sub-module in the AI_Agents module for creating a form that allows you to mass-create fields, which is aimed at helping developers, but we think the chatbot is unlikely to be a good UI for developers interacting with AI compared to drush or specific forms.)

I was thinking of specifically saying this in the opening message of the chatbot, but the problem with that is that it kind of scares the marketeer if we tell them the limitations, as they won't know what "Content Types" are. So we have some ideas of how to handle this for 1.1 (for example, having it so that the chatbot can only change fields for the entities of the page you're currently on, so it's more contextually aware and more focused).

Therefore it is not built to resolve your much more advanced issue of "how do I use Advanced Aggregation?", so fixing that specific issue isn't our priority right now. We do have plans for it (creating a Project Browser AI Assistant) and are exploring them internally to decide if that will be a focus, as it relies on a lot of moving parts and people. (More info below.)

Re: Links

We noticed a lot of the hallucinations were due to the links: in particular, when giving relative URLs the assistant would regularly miss out the leading /, which caused issues, so we've tried to solve that. I think links are so important that for 1.1 we need to build an AI Agent or some deterministic validation to check every link and validate it.

We tried to solve it with prompt engineering but I think we need to do more.

Re: Advanced Aggregation and AI helping with modules.

Paul is correct that the specific thing you are asking it isn't something we currently support and can solve. He is right about how those links are generated. There are three ways we can resolve this to get correct links:

  • Train the model on the new accurate links (for OpenAI and the bigger models this is too expensive).
  • Train a small open-source model on the specific links (more likely to work, but it still won't handle constantly changing links on a live site).
  • Provide the information about the links directly in the prompt we give the AI, which is called "in-context learning". This is the real solution to the problem.

As a result, what we would do is use the AI Search module, or bake it into the drupal.org Project Browser search itself (or, in the short term, provide our own new source plugin for Project Browser): take your query, go and search for "advanced aggregation", and pass the results to an LLM, which will either find the module in question, present options to the end-user, or find similar modules and present those options. Then we also need another database of Drupal documentation that the AI Agent can search to find info about how to configure the Advanced Aggregation module. That could be another thing Drupal.org provides, or something else.

We would then tell the AI to ONLY use information from the context we've given it and never its own training. Hallucination with this method is still possible, but greatly reduced. We have a demo of this working, but quite a bit more work needs to be done.
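
As a rough illustration of that in-context-learning step (the helper name and the array shape of the search results are assumptions, not the AI module's actual API), the prompt construction could look something like this:

```php
<?php

/**
 * Builds an in-context-learning prompt from search results.
 *
 * Purely a sketch: a real implementation would pull $results from the
 * AI Search module / a Search API index.
 */
function build_icl_prompt(string $question, array $results): string {
  $context = '';
  foreach ($results as $result) {
    // Each result is assumed to look like
    // ['title' => ..., 'url' => ..., 'snippet' => ...].
    $context .= "- {$result['title']} ({$result['url']}): {$result['snippet']}\n";
  }
  return "Answer using ONLY the search results below, never your own training. "
    . "If the answer is not in them, say you don't know.\n\n"
    . "Search results:\n{$context}\nQuestion: {$question}";
}
```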

Summary

Your proposed resolution is a good one: verify links posted in the chat, and do not provide links to D7 documentation.
We also need the AI Agent to know the Drupal version you're on (at least the major version, if not the specific version, from a security point of view).

🇬🇧United Kingdom yautja_cetanu

📌 Change instructions to add a review stage to the assistant Active Issue to implement the changes to the instructions.

Note: Catch, the Details that you can see in the issue above ("Generating a list of 'created x type with y label' type messages might not be as fluid but it won't misrepresent what happens") still exist.

  • The list is generated and stored in a log so devs can use it for debugging.
  • The AI is writing its recap summarising the details.
  • The Details pane is a setting in the module, so we can turn it on very easily if we want; a site admin can turn it on if they want it. We're just turning it off by default for Drupal CMS 1.0.

For Drupal CMS 1.1 we will be meeting weekly with a UX team to figure out how to improve this further which can include:

  • The generated list of details being written more effectively.
  • Expanding "Blueprints" so that the preview step generates a list of deterministic commands that are implemented with an explicit button press (instead of the AI agents implementing them).
  • Using more JavaScript + streaming so that the AI visibly implements everything before your eyes (it works well in XB but is a bit harder with fields).
  • Workspaces use fields.

🇬🇧United Kingdom yautja_cetanu

Conditional will be 1.1

🇬🇧United Kingdom yautja_cetanu

Yes I think improve the plugin description and leave the title as is. The plugin description can go into a lot of detail about the specifics of why "boost" vs "combine" is important.

I tell everyone about "Boost with AI" now; I think it's SOOOOO much cooler than I thought when we first started speaking, and the phrase works so nicely.

So should we add this as a todo:

- Add Plugin Description?

🇬🇧United Kingdom yautja_cetanu

Re: a small WASM-powered model. The chatbot library we're using now actually has out-of-the-box support for a model in the browser, so it's definitely something we can explore for 1.x.

Decisions for the install experience now:

  • We have AI installation working nicely with the dev version of Project Browser, so when PB reaches alpha7 and we have it appearing in the recipe browser in this issue, this issue can be closed. 📌 [PP1] Add AI recipe to project browser allowed list Active
  • We've decided for 1.0 not to include AI in the installer at all. We will work separately on exploring options for how we can have AI in the installer with no API keys necessary.
  • We will explore whether or not for 1.1 we want to bring the API key back into the installer. There are differing views on how easy getting hold of an API key will be for the target persona. We can test that with the recipe.
  • The recipe should be "AI Support", as people liked that at Singapore (unless there is strong pushback and people prefer "AI Assistant").

🇬🇧United Kingdom yautja_cetanu

Decisions:

  • We are happy with AI generating the Review Stage before it does something and the Recap Summary after it does something. This means there is a chance of hallucinations, but it means things will be more readable.
  • We will turn off "Details" for now. The deterministic logs of actions will instead be stored in the logs, not directly available to the chatbot user.
  • We have decided that the "Review Stage" and "Recap" SHOULD use accurate Drupal terminology. However, if the end-user uses less exact terminology (such as "categories"), the AI will explain to the user how and why Drupal uses the specific systems it does.
  • The Recap stage should have links to everything, so the end-user can verify everything themselves.
  • We have decided that for Drupal CMS 1.0 we will expect the end-user to do undos manually, with the AI helping them do it for themselves. If an undo doesn't involve deleting things, the AI can perform a new action that reverses the previous action, but it is not technically an "undo".
  • Attached is an example of a workflow everyone really likes. We need to edit the instructions so Drupal outputs the below model workflow without the user having to type those instructions in to force it.

Post Drupal CMS v1.0

  • We want to work on an approach for Review and Recap that is more accurate. For example, we may create forms for creating 100s of taxonomy terms that have a single purpose and an undo. We may use Workspaces. We may extend Experience Builder to forms and use their tools. We may also take the YAML file generated by "Blueprints" that the AI follows and design a UI around that.
  • Explore approaches to undo, but we need to remember that multiple people can work on the site at one time. If the AI creates 15 taxonomy terms, someone else could delete one before the chatbot user says undo. Need to explore that. (With Experience Builder or Views these issues are sidestepped, as they are handled by the editing framework.)

Model Conversation:

- Below is a picture that is formatted nicely, and text that can be copied and pasted but isn't formatted nicely.

AI: Hello! How can I help you today?

User:
I would like to categorize the events into terms of price ranges.

Can you do that for me?
Before you do anything, please explain exactly what you will be doing. If you use Drupal terms, can you explain what those terms mean in terms of the language I have chosen to use?

AI:
Certainly! Here's what I plan to do to categorize the events into terms of price ranges:
1. Create a Vocabulary: In Drupal, a "taxonomy" is used to categorize content. I'll create a new vocabulary named "Price Ranges". A vocabulary in Drupal is similar to a category type, where you can add terms/options (like Cheap, Moderate, Expensive) that can be selected to categorize content.
2. Add Terms to the Vocabulary: I'll populate the "Price Ranges" vocabulary with terms that represent different price levels (e.g., Cheap, Moderate, Expensive).
3. Add a Field to Events: Once the vocabulary is created, I'll attach it to the "Events" content type. This means I'll add a field to the Event content type that allows the person creating or editing an event to select one or more of the price range terms.
Would you like me to go ahead and set this up for you?

User:
Yes please and when you do it please give me a review step where you explain exactly what you've done and give me links to all the places where you've created things so I can check it myself.

AI:
I have set up the categorization for the events based on your specified price ranges. Here's what I did:
1. Created a Vocabulary:
○ A new vocabulary called "Price Ranges" was created to categorize events based on price.
○ You can review this vocabulary here: Price Ranges Vocabulary.
2. Added Terms to the Vocabulary:
○ I added three terms to the "Price Ranges" vocabulary: Cheap, Moderate, and Expensive.
○ You can view these terms here: Price Range Terms.
3. Added a Field to Events:
○ I attached the "Price Ranges" vocabulary to the Events content type by creating a new field.
○ This field allows event creators to select one of the price range terms.
○ You can review the field settings here: Event Fields and check the specific configuration here: Field Configuration.
Feel free to explore these links and let me know if you want any changes or need further assistance!
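
For reference, a minimal sketch of what those three steps map to in Drupal core's entity API (the machine names price_ranges, field_price_range, and the event bundle are assumptions for illustration):

```php
<?php

use Drupal\taxonomy\Entity\Vocabulary;
use Drupal\taxonomy\Entity\Term;
use Drupal\field\Entity\FieldStorageConfig;
use Drupal\field\Entity\FieldConfig;

// 1. Create the vocabulary.
Vocabulary::create(['vid' => 'price_ranges', 'name' => 'Price Ranges'])->save();

// 2. Add the terms.
foreach (['Cheap', 'Moderate', 'Expensive'] as $name) {
  Term::create(['vid' => 'price_ranges', 'name' => $name])->save();
}

// 3. Attach a term-reference field to the Events content type.
FieldStorageConfig::create([
  'field_name' => 'field_price_range',
  'entity_type' => 'node',
  'type' => 'entity_reference',
  'settings' => ['target_type' => 'taxonomy_term'],
])->save();
FieldConfig::create([
  'field_name' => 'field_price_range',
  'entity_type' => 'node',
  'bundle' => 'event',
  'label' => 'Price Range',
  'settings' => [
    'handler' => 'default:taxonomy_term',
    'handler_settings' => ['target_bundles' => ['price_ranges' => 'price_ranges']],
  ],
])->save();
```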

🇬🇧United Kingdom yautja_cetanu

I can't get any of the Fireworks AI models working with images either.

🇬🇧United Kingdom yautja_cetanu

When we go to Anthropic Sonnet 3.5, it doesn't seem to allow for images

🇬🇧United Kingdom yautja_cetanu

We created an approach to the installer, which we showed at DrupalCon Singapore, where when you click AI Support you get "Set up AI in the dashboard". But this doesn't work, because the dashboard might not be there. Ideally, we need to find an agreed-upon approach to install prior to building it:

Next steps:

Create a Google Slides deck that shows the UX of each approach to installing AI:

  • The proposed one we had as an MR
  • Adding a new "block" into the dashboard just for AI
  • Creating a "Warning" across the whole site
  • Having the API key selector built into the installer itself

Complicated thing: how can we assume a marketeer or site builder with no Drupal experience will know to "click a link on a dashboard", and if they accidentally click away, how will they know how to get back?

🇬🇧United Kingdom yautja_cetanu

I keep calling it "Boost with AI" and it just sounds so cool!

Also, "Combine with AI" is dangerous, as it sounds like it's using AI to add information to the result, imo. Technically what you mean is "take a set of results obtained through traditional search and combine it with some results found from a vector search, where AI was used to generate the vectors that make the list work".

You're not actually "combining with AI"; it's just the results of the vector search, where AI isn't used at all. But "Boost with AI" feels more general, as what's happening is that the embedding enables you to make the results "better" with AI.

🇬🇧United Kingdom yautja_cetanu

Could you do boost and combine?

Boost with AI sounds way cooler than combine.

Also, whilst combine is what is happening in some sense, the user experience is more that the search results are boosted. The experience is that the search results feel magically better.

🇬🇧United Kingdom yautja_cetanu

Things we can do:

Error Handling

  • Static Analysis
  • Start at a high level and make sure we have it

Tests

  • Focus on Areas that are most complex or more important (Especially for Drupal CMS)
  • Look at reports of errors we've received and are they commonly happening in similar areas?
  • Agents - All tools should have at least one test
  • Automated tests for validating schemas

Documentation Personas

  • Sitebuilder - With the existing tools, how do I achieve what I am trying to do? What do we support as a module, and how do they configure those things?
  • Developers - How do I use the code and make use of it?

Site builder questions:

  • How do I set up a provider?
  • Where do I find a list of available providers?
  • How do I set up a "Hello World", Explorer, Automator, AI Search + Boosted Views, AI Search + Chatbot, Assistant + Agents?
  • A page that goes through all configuration options and what they do for every form in the AI module?
  • Examples of set ups
  • Example prompts for different situations
  • Prompt Engineering help with Agents
  • Prompt Engineering help with AI Search + Chatbot

Developer questions:

  • What APIs are there?
  • What does the module do?
  • How do I add new providers, agents, automators, CK Editor Assistants, VDB providers, chatbot front-ends, operation types(??), logging types, and various other plugins?
  • What is logged and how it works
  • AI Module Event types that can be used, and how they can be used
  • How do I integrate it with "This part of our site we don't currently support"
  • Hello world, where your module uses the AI module to call an LLM, and where you expose a helper form in your module to allow changing models (see the sketch after this list).
  • How do I contribute?
  • How do automated tests work (Unit Tests, Kernel Tests, Tests that need Mock LLM calls, Tests that need real LLM calls)
  • How do I get started locally (Eg DDEV recipes for Milvus, LMStudio)?
  • How do I get evaluations running and test Drupal CMS?
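
To anchor the hello-world item above, here is a rough sketch of calling an LLM through the AI module; the class and service names are taken from memory of the AI module's developer docs, and the 'openai' provider and 'gpt-4o' model are assumed to be installed and configured, so verify against the current API:

```php
<?php

use Drupal\ai\OperationType\Chat\ChatInput;
use Drupal\ai\OperationType\Chat\ChatMessage;

// Load a provider plugin and send a single chat message.
$provider = \Drupal::service('ai.provider')->createInstance('openai');
$input = new ChatInput([
  new ChatMessage('user', 'Hello world!'),
]);
$response = $provider->chat($input, 'gpt-4o')->getNormalized();
$text = $response->getText();
```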

General Process

  • Where do we put this documentation?
  • Where do we manage the Project Management of the documentation?
  • Where do we manage the Project Management of Tests and Error Handling?

🇬🇧United Kingdom yautja_cetanu

https://www.drupal.org/project/ai/issues/3485449#comment-15884039 🌱 [Meta] Bring AI Search, Assistants and Chatbot out of Experimental Active - This comment has some details about the "Lexical vs semantic search stuff"

🇬🇧United Kingdom yautja_cetanu

So the situation I'd like with AI Search + Chatbot is that I search for something and it does the traditional search, where it searches for the words as-is, and then also the semantic search, and then automatically combines them.

I think it's probably the case that your "Boost with AI", but provided as context to a chatbot, would solve this. So it's probably just taking what you've already done and making it so the LLM can use it as context, not just a view.

🇬🇧United Kingdom yautja_cetanu

That UI for Search API is soooooo much nicer!!!

🇬🇧United Kingdom yautja_cetanu

For Drupal CMS v1 we want to avoid anything "destructive", which is defined as "any change that permanently deletes content we can't get back".

- We don't allow deleting any content, any terms, or deleting a field. The AI can help the end-user do this manually.
- We have checked "Removing Alt Text" for an image field, and this doesn't delete anything: the alt text is still there, and when it's re-enabled the field comes back.
- We want to test changing multiple cardinality: if you have a cardinality of 3, content uses all 3, and you then change it to 1, what happens? (We might make it so for Drupal CMS v1 it simply can't change cardinality, or only allow changing it upwards.)

In the future I think I'd like to allow destructive changes if there is a single piece of content, as it's sometimes helpful when building things to have an example piece of content you don't mind getting destroyed.

🇬🇧United Kingdom yautja_cetanu

Summary of changes:

With Drupal CMS coming out and needing a release version: do we really need all these features for a 1.0 release, or can they come in a 1.1 release? I think we can leave AI Search in Experimental for 1.0. We can release Automators as 1.0 right now and improve on them for 1.1. We have almost completed what is needed for Assistants, apart from bringing the consent gathering in line with Drupal CMS's approach.

🇬🇧United Kingdom yautja_cetanu

There are things I'm finding difficult to understand with the reports:

Mostly UX changes

  • I think there should be a clear place for the "System" prompt vs the actual prompt that is happening.
  • This is especially true for Agents: the specific question vs the system prompt needs to be separate, with the system prompt hidden in a collapsed element. I really need to see the prompt given to the Agent and its response.
  • Can we do anything about the formatting of the responses? It's hard to read what is going on.
  • We need to figure out how we can highlight the specific thing being evaluated, the specific prompt to solve. I'm getting a lot of the history, but I want the specific thing to be evaluated. Maybe highlight the specific message where it was ticked yes or no, and have that open by default?
  • It seems the thing that says "Prompt" is really the "System Prompt", and the thing that is called "Question" is the "Prompt" - it's kind of a user prompt (even if the user is an agent). Maybe "Input Prompt" is good?
  • I think we need to be able to see the most important things first, with everything else in a drop-down. It's the user prompts + response that matter the most.
  • Similarly, configuration such as the model should again be in a drop-down.
  • In the screenshot without agents, it doesn't show the drupal_agent_assistant; why does it sometimes show message history and other times not?
  • I think it might be better to show a UI of the specific user message we're dealing with and its history of agents separately from the general message history, as we can't see any of the agents within historical user messages anyway, so having a drop-down for each thing doesn't make much sense.

May require a refactor of Agents

  • I think "Comments" should be Message history and there should be a consistent method of showing message history for the assistant vs Agent. (This might need a little agent refactor itself). We probably don't need "Task Name" "Task Description"? Unless we want to keep using those features so it works better with MiniKanBan in which case we should make the Assistant come up with a Task Name and Description.
  • I think we should always ask the Agent to respond with something and than also offer an explanation for their response. We should have a consistent format for the "Response Message" vs "Explanation"
  • I think it makes sense that we can't see the agent history for each previous user message. However I think we should at leave a record of some of the agents called so that we could query the agents called by that user message.

Screenshots

Evaluations with no Agents called

Evaluations with sub-agents called

🇬🇧United Kingdom yautja_cetanu

I want to see if it helps push Drupal CMS forwards even if it is in the AI module, and I think the answer is yes, it does, if we make sure the wizard's first step degrades naturally depending on the scenario:

  • There are no providers installed at all.
  • There are providers downloaded but not installed or setup.
  • There are providers downloaded and installed but not set up, (no keys)
  • There are providers fully set up and working

By "work" I don't mean it has to fully help in all those scenarios. For example, it might just point you to the section of the AI module that shows the list of providers and tell you to download them. It just has to not break when you go to it on a site in any of the above states, and do what you might expect.
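
A rough sketch of that detection, assuming the wizard leans on the AI module's provider plugin manager (the branching below is illustrative; verify the method names against the current API):

```php
<?php

// The 'ai.provider' service is the AI module's provider plugin manager.
$manager = \Drupal::service('ai.provider');

// Note: "downloaded but not enabled" providers won't appear here; that
// state would need a separate check of the available module list.
if (empty($manager->getDefinitions())) {
  // No providers installed: point to the AI module's provider list and
  // tell the user to download one.
}
elseif (!$manager->hasProvidersForOperationType('chat', TRUE)) {
  // Providers installed but not set up (e.g. no API key): link to the
  // provider's configuration form.
}
else {
  // At least one provider is fully set up: proceed with the wizard.
}
```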

🇬🇧United Kingdom yautja_cetanu

We've had a proper test of the AI Agents and they are mostly working. Most of the issues were problems with the Drupal UI itself, testers being confused by Drupal, or people getting confused by the goals of the test. We tried to make this one very simple, with specific tasks to do.

In the next one we will try to ask them to be more creative, like creating a recipe page, so they might explore what fields they want instead of us telling them.

One major issue is the level of the preview/review steps. Generally I'm seeing that Drupal developers are happier when the preview steps are more accurate, but those tend to confuse people, because they are then shown a whole bunch of Drupal terms before they get to see what it looks like. The review steps, again, are nice for Drupal developers, as they give you links to all the places where something was created, but for someone with no Drupal experience this was scary, and they were worried that if they didn't write down those links they would be gone for good.

But it's nice that we are getting to a stage where improvements will be about working on the prompts and flow rather than fixing bugs in code.

We've also had developers try them out, and they found it hard to get everything installed, so we will work on that next week, including improved documentation. I think we'll make a video to show people how to install the recipe and get started.

Further write up to follow

🇬🇧United Kingdom yautja_cetanu
  • What do they mean by "re-rank" in this context?
  • Could you help explain the context by giving a specific real-world use-case?
  • How would this be used by the AI module specifically? How would Search API / AI Search or Assistants make use of this, or is it intended for other things?
  • Why use "operation type" as the abstraction here?
  • Is there anything else Cohere can do?
  • Where would this fit within the UI of the things we currently do?

My thinking is that it would be a setting on an index in Search API. You could click "Rerank with AI" and then type in a prompt. I can't quite tell if it's an operation type or something we should create in the vector database abstraction layer.

It seems other VDBs can do this, but can other LLMs do this? Could you decide to use ChatGPT to do the reranking?

Also, I think it might need to be AI_Reranking, as reranking is a common thing for search that doesn't use AI a lot of the time, right? It just uses a variety of algorithms.
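
For context on what "re-rank" means here: a second-pass model scores each candidate document against the query and reorders them. Below is a sketch shaped after Cohere's public rerank endpoint (field names should be checked against their current docs; the variables are placeholders):

```php
<?php

// Candidate documents from a first-pass lexical or vector search.
$candidate_texts = [
  'Advanced CSS/JS Aggregation module overview...',
  'Configuring aggregation in Drupal core...',
];
$api_key = getenv('COHERE_API_KEY');

$response = \Drupal::httpClient()->post('https://api.cohere.com/v1/rerank', [
  'headers' => ['Authorization' => 'Bearer ' . $api_key],
  'json' => [
    'model' => 'rerank-english-v3.0',
    'query' => 'advanced aggregation',
    'documents' => $candidate_texts,
    'top_n' => 2,
  ],
]);
$data = json_decode((string) $response->getBody(), TRUE);

// Each result carries the original index plus a relevance score.
$reranked = [];
foreach ($data['results'] as $hit) {
  $reranked[] = $candidate_texts[$hit['index']];
}
```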

🇬🇧United Kingdom yautja_cetanu

The post-action prompt should likely provide the LLM information about what has actually been performed.

Currently:

user
Can you change the content type called "Hard Cheeses" to be called "Soft Cheeses" instead?
assistant
The following are the results the different actions from the node_content_type_agent action:
The node type <em class="placeholder">Soft Cheeses</em> (<em class="placeholder">page</em>) has been updated.

Provides the answer:

Thank you for your question. I am looking up the answer. - The content type has successfully been changed from "Hard Cheeses" to "Soft Cheeses."

So it infers, from the user's question asking to change it from Hard to Soft and the response that "Soft has been updated", that Hard has been changed to Soft.

But actually, even the LLM doesn't really know that Hard has been changed to Soft. Soft Cheeses could have been updated some other way (a settings change), and maybe it already existed before the user asked for "Hard" to be changed to "Soft". So the LLM has essentially been told to guess what it has done, which may make it more prone to hallucination.
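
A hypothetical sketch of the fix: hand the LLM a structured record of what was actually performed and tell it to summarise only that, rather than letting it infer the change from the user's question (the array shape here is illustrative, not the module's actual format):

```php
<?php

// Deterministic record of what the agent actually did.
$action_log = [
  [
    'action' => 'update',
    'entity_type' => 'node_type',
    'id' => 'page',
    'changed' => ['name' => ['from' => 'Hard Cheeses', 'to' => 'Soft Cheeses']],
  ],
];

$post_action_prompt = "Summarise ONLY the actions listed below for the user. "
  . "Do not claim anything that is not in the list.\n"
  . json_encode($action_log, JSON_PRETTY_PRINT);
```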

🇬🇧United Kingdom yautja_cetanu

- The AI Agents don't have the context provided by default. So if a user says "can you change this page to be called Wine Tours" and we are looking at Events, the agent will only know if the Assistant looks at the context, decides this information is important, and passes it on to the agent. One test showed it did do that. But it does mean that if there is any contextual information important for the agents that the Assistant didn't know mattered, they won't have it. This is probably fine.

🇬🇧United Kingdom yautja_cetanu

The post-action prompt has this text:

"Start the message with the following information:
Thank you for your question. I am looking up the answer.
"

Where does that copy come from? It's not in the assistant message?

🇬🇧United Kingdom yautja_cetanu
  • Evaluations need to store the Agents that are working behind the scenes behind an assistant message history in the details, with all the history, prompts, and back-and-forth that exists there.
  • For each "thing" in the evaluation (like a message) I should be able to click a button and see the actual POST message sent to the LLM (maybe in the logs).
  • We also want to store the "Drupal bit": the logs of the things Drupal has done between the agents working, so we can trace it.

🇬🇧United Kingdom yautja_cetanu

Make it easier to visually see what is "Sent to LLM" vs "Configuration" vs "Response".

Perhaps have headings for "Sent" and "Response".
Have the prompt appear in a text box, and similar for the response. Or maybe a code box?
Have the configuration appear separately.

See the actual text of the thing sent to the LLM unedited.

It would be good to click a button (maybe hidden in a drop-down) to see the exact payload sent to the LLM, formatted exactly as it is, like the raw JSON.

Maybe see the response from the LLM visually better

Currently we see the raw JSON response. Is there any way we can pull out the "JSON fields" into something so we can see what the response is clearly? So if it's writing a message, I can see that. Maybe some syntax highlighting, so that the messages in each JSON field are bold or something?

Have links to other logs in the logs (so: what fired off that message? Where was that message received and what did we do with it? Goes to the assistant).

Some entity references, at least to other logs, so I can follow the chain and know where this log came from.

Filters

It might be good to filter by at least the module that generated the log, like AI Agents, Assistants API, etc.

🇬🇧United Kingdom yautja_cetanu

It currently looks like a warning, as it's red and pops up when I hover over it, making me look at it all the time. Instead, could we make it a button at the top that looks like a sweep? Thinking top right.

🇬🇧United Kingdom yautja_cetanu

It's important that the thread ID contains no PII or information about the user at all, not even pseudo-anonymous info, as we may want to use this for training models in the future.
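
A minimal sketch of that constraint: generate the thread ID as a random UUID so it derives nothing from the user:

```php
<?php

// Random UUID via Drupal core's uuid service: no hash of the uid,
// email, session, or IP ever goes into the thread ID.
$thread_id = \Drupal::service('uuid')->generate();
```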

🇬🇧United Kingdom yautja_cetanu

Will this be useful for doing roll-backs?

🇬🇧United Kingdom yautja_cetanu

What we are currently working on

We've created the Evaluations module and created a report; however, the actual flow of the Agents is not very understandable atm. We're working on providing more information, especially about how the different AI Agents and prompts talk to each other, and on making it something a sitebuilder can understand.

🇬🇧United Kingdom yautja_cetanu

Hi Tony.

Thanks for your thoughts and ideas, and yes, what you've said is very important. Our initial demo on prompts was to show that it COULD do something, but actually one benefit of AI agents is that we can feed best practices into them, exactly as you've mentioned. We can prompt the AI Agents to ask users specific questions or to assume certain things.

For example, if a user wants to categorise a content type, we can prompt the AI to decide whether it should be a List (text) field or a Taxonomy, by looking at the type of field they are asking for and maybe asking the end-user questions. Above you can see discussion about whether "Select Lists" or "Tagging" are best practice.

Our goal right now is the Evaluations module. When you ask the AI Agent to do something, you can then click thumbs up or thumbs down on what it did, which is then stored and reported. If someone has issues, the eval can be exported and someone else can then debug it by importing the history of prompts and responses.

This will also allow end-users who are ambitious sitebuilders (though they'll probably need some knowledge of Drupal) to see what the prompts are and then change them in real time to see if they get a better result. Once we have this (it's almost there; it's just that the reports are a little confusing to follow atm), community members like yourself can try out the AI Agent, click an evaluation, open it up, see the prompts for all the agents involved, and suggest changes.

If you use Claude or OpenAI, they have been trained on publicly available Drupal data, the code, and likely drupal.org, so they have a good idea themselves of these kinds of practices. However, for smaller open-source models we need to put it into our prompts directly.

The prompts for the AI agent are stored:

  • Initially in a YAML file that comes with the AI Agent module.
  • But they can be overridden in the DB for a specific site.
  • Also, there is a place in the AI Agents settings where you can add your own instructions on top of the provided YAML (better for taking updates to the underlying prompt later).

As a result, testers like yourself could then post patches to the AI_Agents module or a Drupal CMS recipe to get these kinds of best practices into the prompt itself.

🇬🇧United Kingdom yautja_cetanu

- We should have more details in the Details. So if we're changing the name from X to Y, it should say that. If it's doing taxonomy terms, it should show the term itself, not just the ID.

🇬🇧United Kingdom yautja_cetanu

Possibly in addition to this:

- Have a simple export that just says "thumbs up / thumbs down" (maybe with which agents it used) that can be integrated with telemetry, so we can monitor in aggregate whether it's going well.
