Account created on 30 October 2008, over 16 years ago

Recent comments

🇩🇪Germany marcus_johansson

Set the tag to ddd2025, so we can track user contributions that happened during Leuven :)

🇩🇪Germany marcus_johansson

Thanks @aspilicious - I added one small comment; could you have a look and fix it, and then we will merge.

🇩🇪Germany marcus_johansson

@vakulrai - we already have this in review, which keeps track of parent and child unique IDs. Maybe we should create a retry ID or something similar as well, so you can track when multiple requests are re-done because of validation errors. This means truly hard errors where writing the response fails as you want it, not cases where it is in an agent loop, the output quality is bad, and a validation agent asks it to retry (that falls under the normal parent/child hierarchy).

Edit: I should also link :) https://www.drupal.org/project/ai/issues/3515879 📌 Add thread id and parent id to AI calls. Active
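The parent/child plus retry tracking described above could be sketched roughly as follows. This is a Python illustration only; the real module is PHP, and every name here (`CallRecord`, `retry`, the field names) is an assumption, not the actual ai module schema:

```python
import uuid
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CallRecord:
    """Hypothetical record for one AI call; field names are illustrative."""
    thread_id: str                  # shared by every call in one conversation
    parent_id: Optional[str]        # the call that spawned this one (agent loop)
    retry_id: Optional[str] = None  # set only when a call is re-run after a hard
                                    # validation/write error, outside the normal
                                    # parent/child hierarchy
    call_id: str = field(default_factory=lambda: str(uuid.uuid4()))

def retry(failed: CallRecord) -> CallRecord:
    """Re-issue a failed call: same thread and parent, retry chain points
    back to the first call that failed."""
    return CallRecord(
        thread_id=failed.thread_id,
        parent_id=failed.parent_id,  # a retry is a sibling, not a child
        retry_id=failed.retry_id or failed.call_id,
    )
```

The point of a separate `retry_id` is that repeated retries all reference the first failed call, so they can be grouped without being mistaken for agent-loop children.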

🇩🇪Germany marcus_johansson

Thank you @ultimike, getting merged.

For preservation and reference, it is looking good in Gin:

and in Claro:

🇩🇪Germany marcus_johansson

I'm currently using Claro and it looks like this:

🇩🇪Germany marcus_johansson

Thank you all, with the minor change this is getting merged.

🇩🇪Germany marcus_johansson

The code looks good, but I agree with Paul here. We should not enable this by default. Site builders have made a deliberate choice about which plugins they enabled here, and the same should be true for this.

As soon as this is removed, I will merge it.

🇩🇪Germany marcus_johansson

Hi everyone, awesome function - code reviewed and merged.

🇩🇪Germany marcus_johansson

It's a little bit complex because of a dumb architecture choice in some provider agents. We assumed the OpenAI provider would be at the forefront of all development, so it works with an opt-out mechanism for capabilities instead of an opt-in.

This means that when 1.0.0 is installed, it will still show that it has models for function calling, which is true in the sense that the model supports it, but since the provider itself doesn't, it will not work. And since it has a graceful fallback, it will run the call but say that it doesn't have any tools (which is true).

And since OpenAI is by far the most used provider, this will be a common problem.

The other issue is that there is no clear connection between a provider being on 1.1.x and it supporting function calling. We can guarantee that for each provider we control, but we do not control all of them.

The option I can think of right off the bat is that we do something similar to the return array in AI core that controls whether an operation type is supported: providers give back something like that, and if that method is missing, we can assume a 1.0.0 version. This is however a breaking change, though with support in the base class it should be fine.
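The opt-in capability method with a missing-method fallback could work roughly like this. Again a Python sketch of the pattern only; the actual module is PHP, and the method name `supported_capabilities` and the capability key are assumptions:

```python
class ProviderBase:
    """Hypothetical base class. Opt-in default: providers must declare
    each capability explicitly, so old providers never claim support."""

    def supported_capabilities(self) -> dict:
        return {"function_calling": False}

class LegacyProvider:
    """Simulates a 1.0.0-era provider that predates the capability method."""

class NewProvider(ProviderBase):
    """A 1.1.x provider that opts in to function calling."""

    def supported_capabilities(self) -> dict:
        return {"function_calling": True}

def supports_function_calling(provider) -> bool:
    # If the method is missing entirely, assume a pre-1.1.x provider
    # that predates the capability API and cannot do function calling.
    method = getattr(provider, "supported_capabilities", None)
    if method is None:
        return False
    return bool(method().get("function_calling", False))
```

With the default living in the base class, providers that inherit from it keep working unchanged, which is why the break should be manageable.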

We can also add hook_requirements checks for the two by far most popular providers (OpenAI and Anthropic). Ollama is also popular, but we can assume its users are more technical people who will ask on Slack or in the issue queue when issues happen.

Neither is an optimal solution, of course.

🇩🇪Germany marcus_johansson

Hi @prashant.c - this is on purpose. Later on we will most likely even filter these down to Orchestration agents.

We will probably phase out the code agents by 2.0.0, unless someone finds a reason to keep them, and there will be config agents for each of those three agents in 1.1.0, shipped as configuration files on install.

🇩🇪Germany marcus_johansson

Oh, yes - good catch. We will merge 1.0.0. Setting it back to RTBC for that.

🇩🇪Germany marcus_johansson

Ah, this is already merged from another issue. But since you found it and wrote the code before that happened, I'll credit you!

🇩🇪Germany marcus_johansson

marcus_johansson made their first commit to this issue’s fork.

🇩🇪Germany marcus_johansson

Just a note for a follow-up issue on this - I think as a second step we should look into creating MCP plugins for this, to expose the prompts you want to expose via MCP. See the specification here: https://modelcontextprotocol.io/docs/concepts/prompts

I've yet to test out the actual prompt part of MCP, but my understanding is that you can provide example prompts, for instance showing how to invoke multiple tools in the right order when using something like Roo Code or Cursor, or give suggested prompts in general to the end user.
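Following the MCP specification linked above, exposing prompts boils down to answering the `prompts/list` request with named prompt definitions. A minimal Python sketch of that shape; the prompt name, description, and any Drupal plugin wiring around it are hypothetical:

```python
# One prompt definition in the shape the MCP spec describes:
# a name, a human-readable description, and typed arguments.
EXAMPLE_PROMPT = {
    "name": "summarize_node",  # hypothetical prompt name
    "description": "Summarize a node, invoking the needed tools in order.",
    "arguments": [
        {
            "name": "nid",
            "description": "ID of the node to summarize",
            "required": True,
        },
    ],
}

def list_prompts() -> dict:
    """Return prompts in the shape of an MCP 'prompts/list' result.
    In Drupal this would presumably be fed by discovered MCP plugins."""
    return {"prompts": [EXAMPLE_PROMPT]}
```

A client like Roo Code or Cursor could then surface `summarize_node` as a suggested prompt and fill in the `nid` argument before sending it to the model.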

🇩🇪Germany marcus_johansson

The issue was that text messages are not always returned when tools are being used.

🇩🇪Germany marcus_johansson

@chris_hall_hu_cheng - it's only valid if we want to backport to the 1.0.x branches. For 1.1.x we would recommend anyone to use agents. See the video here (you can skip to the RAG segment): https://www.youtube.com/watch?v=YHvwYM4IL90&ab_channel=DrupalAIVideos

You should be able to test this today already.

🇩🇪Germany marcus_johansson

Set back to NW; I'll add some tasks here that are somewhat dependent on each other.
