Account created on 3 June 2015, over 10 years ago

Recent comments

🇩🇪Germany a.dmitriiev

The change makes sense from the point of view that all input settings are now moved from the provider object to ChatInput instead.

🇩🇪Germany a.dmitriiev

I can confirm that no model with gpt4 in its name appears anymore. All the others are there, like gpt5, 4o, etc.

🇩🇪Germany a.dmitriiev

It seems that one gpt4 model is still in the list:

🇩🇪Germany a.dmitriiev

It was decided to have the plugin in the custom_field project. Closing as won't fix.

🇩🇪Germany a.dmitriiev

For testing:

1. Install this module.
2. Install the ai_automators module.
3. Install the field_widget_actions module (AI submodule). These first three modules can also be enabled from code, see the sketch after this list.
4. Create a custom field.
5. Create an automator for this field and set the worker to "Field widget actions".
6. On the manage form display of the entity bundle where the field was created, go to the settings of the field widget for the custom field. There will be a "Field widget actions" setting. Choose the "Automator Custom field" plugin and then select your automator in the action settings.
7. On the edit form of the entity there will be a button near the custom field. Clicking this button triggers the automator and populates the custom field properties.
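
A minimal sketch of enabling the modules from steps 1-3 programmatically, e.g. in a test or install script ('this_module' is a placeholder for the module being tested; the other names are the ones listed above):

    // Enable the required modules in code instead of the Extend page.
    \Drupal::service('module_installer')->install([
      'this_module',
      'ai_automators',
      'field_widget_actions',
    ]);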

🇩🇪Germany a.dmitriiev

a.dmitriiev made their first commit to this issue’s fork.

🇩🇪Germany a.dmitriiev

I have also added the token for markdownify-url.

🇩🇪Germany a.dmitriiev

I have added the suggestion from @iambatman to the MR.

🇩🇪Germany a.dmitriiev

a.dmitriiev made their first commit to this issue’s fork.

🇩🇪Germany a.dmitriiev

a.dmitriiev changed the visibility of the branch 3550681-add-field-include to hidden.

🇩🇪Germany a.dmitriiev

I just found out that this recipe doesn't require the node module to be installed. For the field storage to exist for all content types, a dependency on the Node module itself is needed. Maybe this functionality should instead be added to the recipe https://www.drupal.org/project/ai_recipe_llm_optimized_content , which extends the LLM Support recipe; that keeps the latter untouched so it can be a base for all future recipes.

I am moving this feature to another project.

🇩🇪Germany a.dmitriiev

Created an issue for the "Include in llms.txt" functionality here: Add field "Include in llms.txt" for all content types (Needs work).

🇩🇪Germany a.dmitriiev

a.dmitriiev created an issue.

🇩🇪Germany a.dmitriiev

Thank you @b_sharpe! This is definitely the better and more elegant approach! I have tested it and it works.

🇩🇪Germany a.dmitriiev

For testing, make sure that the changes from the related issues are applied to the AI Core module.

🇩🇪Germany a.dmitriiev

This issue, 📌 Make setup field widget action config action support bundle wildcards (RTBC), is also needed to implement this recipe feature.

🇩🇪Germany a.dmitriiev

The related issue Field widget action for automator on custom field (Needs review) is needed to set up the field widget action on a custom field that uses an automator.

🇩🇪Germany a.dmitriiev

a.dmitriiev created an issue.

🇩🇪Germany a.dmitriiev

Found this issue while trying to add Custom Field (module version 4.0.0-rc2): #3550437: Custom field automator compatibility with version 4.0. Provided an MR with a fix.

🇩🇪Germany a.dmitriiev

a.dmitriiev created an issue.

🇩🇪Germany a.dmitriiev

a.dmitriiev made their first commit to this issue’s fork.

🇩🇪Germany a.dmitriiev

a.dmitriiev created an issue.

🇩🇪Germany a.dmitriiev

Merged, thank you!

🇩🇪Germany a.dmitriiev

a.dmitriiev made their first commit to this issue’s fork.

🇩🇪Germany a.dmitriiev

Hi @gxleano, thank you for working on this issue.

Please check my comments. Also, the pipelines for PHPStan and PHPCS are failing, as well as the tests. Please adjust the tests if needed for the new logic.

🇩🇪Germany a.dmitriiev

As AI Core RC1 was released with the config action to configure Field Widget Actions, the "Generate Alt Text" button will now appear automatically on the image field of the "Image" media type.

🇩🇪Germany a.dmitriiev

I found this page: https://community.openai.com/t/image-url-is-only-supported-by-certain-mo... . Some people also have this problem randomly.
For my tests I used gpt-4o and it did not return such an error. By the way, gpt-4o is the default model set by the ai_provider_openai module when you set up the provider.

🇩🇪Germany a.dmitriiev

Some more references for the changes:

  1. Constraints support was introduced in Drupal 10.3; see the change record https://www.drupal.org/node/3428254
  2. The FullyValidatable constraint was introduced in Drupal 10.3; see the change record https://www.drupal.org/node/3404425

🇩🇪Germany a.dmitriiev

The change totally makes sense. RTBC'ed.

🇩🇪Germany a.dmitriiev

a.dmitriiev created an issue.

🇩🇪Germany a.dmitriiev

I have changed the config dependency to an enforced dependency. I have also checked the update path from 1.1.0 to 1.2.0; the enforced dependency is added there with update hooks as well.
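
For illustration, a hypothetical sketch of how such an update hook could add an enforced dependency by editing the raw config (the module and config names are placeholders, not the ones from this issue):

    /**
     * Adds the enforced module dependency to the example config.
     */
    function mymodule_update_10201(): void {
      $config = \Drupal::configFactory()->getEditable('mymodule.example_settings');
      // An enforced dependency makes sure the config is removed when the
      // enforcing module is uninstalled.
      $config->set('dependencies.enforced.module', ['mymodule'])->save();
    }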

🇩🇪Germany a.dmitriiev

a.dmitriiev created an issue.

🇩🇪Germany a.dmitriiev

I have slightly updated the implementation to support configuring multiple components per entity display in one recipe, as it could be the case that two or more components of one entity display need to be set up in a recipe, and it is of course not possible to call the same action on the same config multiple times.

🇩🇪Germany a.dmitriiev

Actually, it seems that there is no update hook that creates these configs in case the ai_content_suggestions module is already installed.

🇩🇪Germany a.dmitriiev

@marcus_johansson your changes in the MR would not actually fix the problem, as the dependency should be enforced if you want the configs to be removed when the module is uninstalled. See this change record for reference: https://www.drupal.org/node/2404447

🇩🇪Germany a.dmitriiev

I have added the update hook to automatically discover the new plugin and also adjusted the config schema.

The plugin itself works as expected, so I am changing the issue status to RTBC.

As a follow-up: I think it makes sense to review all plugins and maybe move some methods to the base class, as a lot of them just repeat themselves.

🇩🇪Germany a.dmitriiev

a.dmitriiev made their first commit to this issue’s fork.

🇩🇪Germany a.dmitriiev

a.dmitriiev created an issue.

🇩🇪Germany a.dmitriiev

@ezeedub Please continue in the MR.

🇩🇪Germany a.dmitriiev

@ezeedub I have created an MR, as it is easier to review and interact with than a patch. I hope you don't mind; I have of course added credit for you in the new credit record system.

🇩🇪Germany a.dmitriiev

I will review this issue asap. This idea is great.

🇩🇪Germany a.dmitriiev

I think I also know what the problem is. When the form is submitted, for some reason there is some "memory" of values, and when it is submitted once again, the old values are still used. Instead of re-indexing, I tried to add a page reload between the two form submits (one that changes Title from "Contextual content" to "Ignore" and the other that triggers the preview).

I have adjusted the tests, and they now also confirm that when "Title" is set to "Ignore" it doesn't appear in the chunk at all: not as an h1 (#) at the beginning and not as "Title: node title" at the end of the chunk.

So we need a decision: should the entity label (node title) always be included in the chunk regardless of the setting for the fields in the index, or should it actually follow the settings and NOT be included if it is set to "Ignore"?

🇩🇪Germany a.dmitriiev

OK, I tried to follow the test logic, and yes, you are right, Scott. It doesn't need re-indexing. But it seems that the settings need to be saved (form submit) before you can submit the preview again, because the config 'ai_search.index.' . $index->id() that is used in the code for chunk preparation doesn't use the values from form_state, so the previously saved settings are used.
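
A minimal sketch of the difference (the 'indexing_options' key is an assumed name, just for illustration):

    // Chunk preparation reads the previously saved configuration ...
    $saved = \Drupal::config('ai_search.index.' . $index->id())->get('indexing_options');
    // ... while the values the user just changed only exist in the form state
    // until the settings form is submitted and saved.
    $unsaved = $form_state->getValue('indexing_options');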

🇩🇪Germany a.dmitriiev

It seems that the test is failing on line 245 and my re-indexing is not executed because of this.

🇩🇪Germany a.dmitriiev

Thank you, Scott. I have actually found another issue while checking the tests: the tests that change the field index settings don't re-index the content. I have added re-indexing between the "Title as contextual content" and "Ignore" options. Let's see, now there should be more failures.

So the problem is actually that if Title is set to "Ignore", it won't be added to the beginning of the chunk as an h1 anymore, and no "Title: node title" will be added to the end of the chunk. Ignore really means ignore, but that is not because of the changes from the MR.

How should we proceed here? I don't want to just rewrite the tests so they do not fail :D

There are a couple of solutions:

1. When the title is added as contextual content, the possible options are:
a) The title is added as an h1 to each chunk and not added at the end of the chunk as 'Title: node title'. This one is confusing, as the word 'Title:' will not be found, and the prompt for the agent can't reference the title via its label but has to reference it via the h1 tag instead.
b) The title is added as 'Title: node title' at the end and no h1 is added.
c) The title is added in both places from a) and b). This is how it is currently implemented; it duplicates the title and potentially wastes valuable tokens.
2. When the title is ignored, the possible options are:
a) The title is not added as an h1 and not added at the end of the chunk as 'Title: node title'. This is the current behavior.
b) The title is still added as an h1 and not added at the end. This one is a bit confusing, because if you want to ignore the title, why would it still be there as an h1?

What would be your suggestion? Which option should apply in each case?

🇩🇪Germany a.dmitriiev

Adding some related pages that explain why some changes in this MR were done:

1. The Drupal\Core\Serialization\Attribute\JsonSchema attribute was added in Drupal 11.2, see https://www.drupal.org/node/3424710
2. Attributes in PHPUnit appeared in version 10 and higher: https://docs.phpunit.de/en/10.5/attributes.html
3. Drupal 11 requires PHPUnit 10, but Drupal 10 is still on PHPUnit 9: https://www.drupal.org/docs/develop/automated-testing/phpunit-in-drupal/...

🇩🇪Germany a.dmitriiev

a.dmitriiev created an issue.

🇩🇪Germany a.dmitriiev

Maybe as a quick workaround the prompts should be moved to the optional folder? Alternatively, an enforced module dependency would be needed for that config, but obviously the prompts can live without the AI Content Suggestions module, so I would not try this approach.

🇩🇪Germany a.dmitriiev

There is already an issue for this: Provide a Local Storage (browser) alternative for storing chatbot history (Active). I think they need to be merged into one, so that the focus and effort are not spread between them.

🇩🇪Germany a.dmitriiev

a.dmitriiev created an issue.

🇩🇪Germany a.dmitriiev

I have tested the MR for this issue together with the MR for the ai_agents module here: https://www.drupal.org/project/ai_agents/issues/3538174 📌 Use streamed chat for agents (Active), and checked the DeepChat bot. The functionality is working; the chatbot has the "Details" section with the tools that were called.

The idea of callbacks looks reasonable to me, as for streaming the agent needs to wait until the complete message is returned.

🇩🇪Germany a.dmitriiev

I have rebased the MR. There is one PHPStan issue though: determineSolvability doesn't have a return statement in case there are callbacks. It seems that it is not needed, but the method definition requires an integer value.

🇩🇪Germany a.dmitriiev

a.dmitriiev made their first commit to this issue’s fork.

🇩🇪Germany a.dmitriiev

@sandeep_k you need to clear the caches, as one more argument was added to the service definition.

🇩🇪Germany a.dmitriiev

The MR is ready to be merged, but I am not quite sure about the way the tokenizer is selected. The logic used to choose the tokenizer is based on comparing model names:

      // Attempt to auto-select based on the embedding engine.
      $matching_model_possibilities = array_keys(array_filter($supported_models, function ($model) use ($embeddings_engine) {
        return str_contains($model, $embeddings_engine);
      }, ARRAY_FILTER_USE_KEY));
      if ($matching_model_possibilities) {
        return reset($matching_model_possibilities);
      }

But even for the OpenAI provider the fallback chat model is used, as there are neither exact nor partial matches between the embedding and chat model names. I have tested with the Gemini provider, and there the names are very different, so this way of choosing the chat model is not very reliable. Maybe it is better to check the default models from the getSetupData method of the provider plugin that is selected for the engine, and just set the default chat model from the same provider?
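
A rough sketch of that alternative, purely as an assumption: the provider manager variable and the 'default_models'/'chat' keys are made up here and would need to match whatever getSetupData actually returns:

    // Hypothetical: prefer the default chat model declared by the provider
    // that is already selected for the embeddings engine.
    $provider = $ai_provider_manager->createInstance($embeddings_provider_id);
    $setup = $provider->getSetupData();
    if (!empty($setup['default_models']['chat'])) {
      return $setup['default_models']['chat'];
    }
    // Otherwise fall back to the name-matching logic above.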

🇩🇪Germany a.dmitriiev

I have rebased the MR. The config schema also needed changes, because the chat_model property was moved to the embeddings_engine_configuration section.

🇩🇪Germany a.dmitriiev

a.dmitriiev made their first commit to this issue’s fork.

🇩🇪Germany a.dmitriiev

a.dmitriiev created an issue.

🇩🇪Germany a.dmitriiev

I have tried the MR functionality locally and found one thing that I guess will be confusing. It seems that the "Automator Worker" is still set globally per field, and when more automator types are added they still use the same "Automator Worker". I think it makes sense to allow each automator type to use a different worker; for example, you could create an automator for field widget actions and at the same time a different automator for a cron job, but on the same field. Or am I overcomplicating things?

Regarding 1.2 vs 2.0, I would of course like to see this in 1.2, as in combination with Field Widget Actions this feature will be most valuable now. Maybe it is still possible to keep the old methods (marked as deprecated, of course) that will trigger the new methods?

🇩🇪Germany a.dmitriiev

The problem is gone now. Finally it is ready to be merged. RTBC'ed. Sorry for the issue ping-pong, but as this is a foundation for the future logging system, it should be as flawless as possible. Thank you @murz for your effort!

🇩🇪Germany a.dmitriiev

It looks good now, but there is now a problem in src/Base/OpenAiBasedProviderClientBase.php on line 356: the setChatTokenUsage method expects the second parameter to be an array, but a StreamedResponse object is passed. Maybe Marcus can check this?

Because of this error there is no way to test the streamed logging.

🇩🇪Germany a.dmitriiev

Now it looks good! RTBC'ed.

There should be no breaking changes, as the new position is only added with a new style that doesn't break existing functionality.

🇩🇪Germany a.dmitriiev

@jurgenhaas regarding the delta, it actually depends on how you want to use it. If the delta is needed to get the button element or the form element from the form array, then 0 will be the wrong value here, as the element will then not be found in the array parents tree; but if it is NULL, you can assume that it is not under any 0, 1, 2, 3, ... element, but sits directly inside the widget parent.
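
To illustrate the difference, a small sketch with made-up parent keys (the field name and 'my_button' are hypothetical, not taken from this issue):

    use Drupal\Component\Utility\NestedArray;

    // With a numeric delta: only works if the button really sits under that delta.
    $with_delta = NestedArray::getValue($form, ['field_example', 'widget', 0, 'my_button']);

    // With delta NULL: the button sits directly inside the widget parent, so the
    // delta has to be left out of the parents for the lookup to succeed.
    $without_delta = NestedArray::getValue($form, ['field_example', 'widget', 'my_button']);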

🇩🇪Germany a.dmitriiev

Hi @thomas.wardin,

The module itself doesn't provide any content templates, but it gives you tools so that you can create your own templates from existing content items. So if you create a page with a structure you like and then want to create more pages like it in the future, you can create a template from it, and the template will contain the items you need: paragraphs, media references, content references, etc.

On the content edit page you should see the "Template" tab; when you click it you will see a form like this:

If you provide a name and submit this form, a template with the structure of this node will appear in the list, and you will be able to create new content items from this template.

Please let me know if you need more information on this.
