Port openai_content to ai module

Created on 12 July 2024, 2 months ago
Updated 1 August 2024, about 2 months ago

Problem/Motivation

The openai_content module should also use the AI connectors provided by the AI module under the hood.
This issue will be used to attempt a port.

Steps to reproduce

Proposed resolution

Remaining tasks

User interface changes

API changes

Data model changes

✨ Feature request
Status

Fixed

Version

1.0

Component

Miscellaneous

Created by

πŸ‡§πŸ‡ͺBelgium wouters_f Leuven


Merge Requests

Comments & Activities

  • Issue created by @wouters_f
  • πŸ‡§πŸ‡ͺBelgium wouters_f Leuven

    I've ported the module, BUT:

    1. I made it more granular:
    you can toggle each of the assistants ON or OFF.

    2. This is done via the new settings screen (a sketch of what such a form could look like is at the end of this comment).

    3. It is using the AI module under the hood, so it is now AI-provider agnostic!

    4. Question for Marcus:
    How do I replace the moderation request? That is the only thing I still need to move from openai to /ai.
    I am not sure how to do that.

    Summary:
    All of the assistants already work through the AI module; only the moderation check still uses the old OpenAI-specific approach.
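
    For illustration, the per-assistant toggles could be checkboxes on a ConfigFormBase settings form, roughly as sketched below. The class name, the 'ai_content.settings' config object and the assistant machine names are placeholders, not the actual code in this issue's branch.

    <?php

    namespace Drupal\ai_content\Form;

    use Drupal\Core\Form\ConfigFormBase;
    use Drupal\Core\Form\FormStateInterface;

    /**
     * Illustrative settings form with one on/off toggle per assistant.
     */
    class AssistantSettingsForm extends ConfigFormBase {

      public function getFormId() {
        return 'ai_content_assistant_settings';
      }

      protected function getEditableConfigNames() {
        return ['ai_content.settings'];
      }

      public function buildForm(array $form, FormStateInterface $form_state) {
        $config = $this->config('ai_content.settings');
        // One checkbox per assistive tool so each can be enabled separately.
        foreach (['adjust_tone', 'summarise', 'suggest_taxonomy', 'moderate'] as $assistant) {
          $form[$assistant] = [
            '#type' => 'checkbox',
            '#title' => $this->t('Enable the @assistant assistant', ['@assistant' => str_replace('_', ' ', $assistant)]),
            '#default_value' => $config->get($assistant) ?? TRUE,
          ];
        }
        return parent::buildForm($form, $form_state);
      }

      public function submitForm(array &$form, FormStateInterface $form_state) {
        $config = $this->config('ai_content.settings');
        foreach (['adjust_tone', 'summarise', 'suggest_taxonomy', 'moderate'] as $assistant) {
          $config->set($assistant, (bool) $form_state->getValue($assistant));
        }
        $config->save();
        parent::submitForm($form, $form_state);
      }

    }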

  • Status changed to Needs review 2 months ago
  • πŸ‡§πŸ‡ͺBelgium wouters_f Leuven
  • πŸ‡§πŸ‡ͺBelgium wouters_f Leuven

    Another question: Do I still need the StringHelper::prepareText if we do normalisation?

  • πŸ‡§πŸ‡ͺBelgium wouters_f Leuven

    OK, so I changed some more things:

    1. You can only send the moderation request to OpenAI.
    Other providers will reject it, so I hard-coded a check that disallows moderation for them (a rough sketch of such a check follows below).

    2. Removed the dependency on StringHelper::prepareText.
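
    For illustration, such a guard could look roughly like this. The 'ai_content.settings' config object and its 'provider' key are placeholders; the moderation() call itself matches the AI module API quoted later in this issue.

    $provider_id = \Drupal::config('ai_content.settings')->get('provider');

    if ($provider_id !== 'openai') {
      // Other providers have no moderation endpoint and would reject the request,
      // so skip moderation entirely and let the editor know why.
      \Drupal::messenger()->addWarning(t('Content moderation is only available with the OpenAI provider.'));
    }
    else {
      $ai_provider = \Drupal::service('ai.provider')->createInstance('openai');
      $response = $ai_provider
        ->moderation($target_field_value, 'text-moderation-latest', ['ai_content'])
        ->getNormalized();
    }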

  • πŸ‡§πŸ‡ͺBelgium wouters_f Leuven

    Marcus sent me this:

    Currently only the OpenAI provider has a moderation endpoint; it works like this:

    $ai_provider = \Drupal::service('ai.provider')->createInstance('openai');
    // The prompt deliberately contains hate speech so that it triggers the moderation endpoint.
    $prompt = 'I fucking hate you, you fucking idiot!';
    // The normalized $response will be a ModerationResponse object.
    $response = $ai_provider->moderation($prompt, 'text-moderation-latest', ["your_module_name"])->getNormalized();
    

    Don't send that message to any other endpoint

    The answer will be a Drupal\ai\OperationType\Moderation\ModerationResponse object: its isFlagged() method returns a boolean indicating whether the moderation endpoint considered the prompt provoking, and its getInformation() method returns an array of details.
    Note that getInformation() is not normalized, so if you want to show that output in a somewhat readable form you will probably need something like the following (or look into trace-debugging tools such as Tracy or Whoops for nicer styling):

    echo '<pre>';
    print_r($response->getInformation()); //or var_dump
    echo '</pre>';
    

    I ended up with this working code:

    // Requires: use Drupal\Component\Utility\Unicode;
    $response = $ai_provider->moderation($target_field_value, 'text-moderation-latest', ["ai_content"])->getNormalized();
    $content = [];
    if ($response->isFlagged()) {
      $categories = $response->getInformation();

      $content['heading'] = [
        '#markup' => '<p>' . t('Violation(s) found for these categories:') . '</p>',
      ];

      $violations = [];
      foreach ($categories as $category => $did_violate) {
        // Only list the categories that were actually flagged.
        if ($did_violate) {
          $violations[] = Unicode::ucfirst($category);
        }
      }
      $content['results'] = [
        '#theme' => 'item_list',
        '#list_type' => 'ul',
        '#items' => $violations,
      ];
    }
    /* and so on */
  • πŸ‡ΊπŸ‡ΈUnited States kevinquillen

    Throwing one thing in here: it would be helpful to change the tone drop-down from a hard-coded list to a taxonomy select, so that users can add several bespoke tone options of their own.
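
    A rough sketch of what a taxonomy-driven tone select could look like; the 'tone_of_voice' vocabulary name is hypothetical.

    // Build the tone select from taxonomy terms instead of a hard-coded list.
    // The 'tone_of_voice' vocabulary is a hypothetical example.
    $terms = \Drupal::entityTypeManager()
      ->getStorage('taxonomy_term')
      ->loadByProperties(['vid' => 'tone_of_voice']);

    $options = [];
    foreach ($terms as $term) {
      $options[$term->id()] = $term->label();
    }

    $form['tone'] = [
      '#type' => 'select',
      '#title' => t('Choose a tone'),
      '#options' => $options,
    ];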

  • πŸ‡ΊπŸ‡ΈUnited States kevinquillen

    OpenAI returns score values with the moderation results; it could be useful to include those somehow. I don't know whether OpenAI has made the scoring very interpretable, but I have always considered the possibility that someone may want to set a violation threshold for what they are willing to accept (e.g. accept scores up to 0.5). It looks like they do not return anything clear yet: the docs say the scores are between 0 and 1, but sometimes you get much larger numbers.
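
    If the scores ever become reliable, a threshold filter could look roughly like this. The 'category_scores' array shape is an assumption based on OpenAI's raw moderation response, not something getInformation() guarantees today.

    // Hypothetical threshold filter over per-category moderation scores.
    $threshold = 0.5;
    $information = $response->getInformation();
    $violations = [];
    foreach ($information['category_scores'] ?? [] as $category => $score) {
      if ($score >= $threshold) {
        // Keep only the categories scoring above what the site accepts.
        $violations[$category] = $score;
      }
    }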

  • πŸ‡§πŸ‡ͺBelgium wouters_f Leuven

    Would be great if someone could test the module and approve it. Thanks!

  • Pipeline finished with Failed
    2 months ago
    Total: 221s
    #225350
  • πŸ‡©πŸ‡ͺGermany Marcus_Johansson

    I did do major rewrites here instead of doing a code review, check: https://git.drupalcode.org/project/ai/-/merge_requests/16

    It does:

    • Move everything to a service, so the module file / procedural code is kept to a minimum.
    • Add the possibility to choose a model for each of the assistive tools.
    • Abstract away the last hard-coded connections to OpenAI, so moderation is provider agnostic like the other tools (see the sketch below).
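
    For reference, a fully provider-agnostic moderation call could look roughly like this; the getDefaultProviderForOperationType() call is my assumption about the AI provider plugin manager's API, so verify it against the merge request.

    // Sketch: resolve whichever provider/model the site has configured for
    // moderation instead of hard-coding OpenAI. Assumes the plugin manager
    // exposes getDefaultProviderForOperationType() returning provider/model ids.
    $manager = \Drupal::service('ai.provider');
    $defaults = $manager->getDefaultProviderForOperationType('moderation');

    if (!empty($defaults['provider_id'])) {
      $provider = $manager->createInstance($defaults['provider_id']);
      $response = $provider
        ->moderation($target_field_value, $defaults['model_id'], ['ai_content'])
        ->getNormalized();
      $flagged = $response->isFlagged();
    }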
  • Issue was unassigned.
  • πŸ‡§πŸ‡ͺBelgium wouters_f Leuven
  • Pipeline finished with Failed
    2 months ago
    Total: 184s
    #225942
  • Status changed to RTBC 2 months ago
  • πŸ‡©πŸ‡ͺGermany Marcus_Johansson

    Tested and rewrote certain parts. Unless someone opposes it, this will be merged tonight (CET).

  • Status changed to Fixed 2 months ago
  • πŸ‡©πŸ‡ͺGermany Marcus_Johansson

    Merged into dev!

  • Automatically closed - issue fixed for 2 weeks with no activity.
