ChatGPT-like UI/UX (and integration with aichat)

Created on 12 March 2023, over 1 year ago
Updated 16 August 2023, 10 months ago

While a bit crude and in need of further consideration and styling, I'm quite pleased with the progress here - it already feels almost equivalent to chat.openai.com, without ALL the bells and whistles. What I did go 'hard' on is its request/response formatting along the way. I implemented Highlight.js (as ChatGPT's website does) to handle syntax highlighting of programmatic code, and encapsulated HTML in pre/code tags as needed. There are always nuances to consider, but it's handling most of them here. I used ChatGPT to help me write and debug it along the way - go figure... :)
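For illustration only (these helper names are not from the patch), the pre/code encapsulation described above boils down to escaping the model's code output and wrapping it so Highlight.js can pick it up:

```javascript
// Sketch of the pre/code encapsulation idea (illustrative; not the patch code).
// Escapes HTML entities in a code snippet and wraps it for Highlight.js.
function escapeHtml(text) {
  return text
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

function wrapCodeBlock(code, language) {
  // Highlight.js auto-detects the language when no class is given.
  const cls = language ? ` class="language-${language}"` : '';
  return `<pre><code${cls}>${escapeHtml(code)}</code></pre>`;
}

console.log(wrapCodeBlock('if (a < b) { return "x"; }', 'js'));
// <pre><code class="language-js">if (a &lt; b) { return &quot;x&quot;; }</code></pre>
```

Escaping before insertion matters because the model's code routinely contains `<` and `>` that would otherwise be parsed as markup.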

Working to get this into a clean patch against the Drupal OpenAI module's DEV branch to contribute soon!


✨ Feature request
Status

Needs work

Version

1.0

Component

OpenAI ChatGPT

Created by

πŸ‡ΊπŸ‡ΈUnited States d0t101101


Comments & Activities

  • Issue created by @d0t101101
  • πŸ‡©πŸ‡°Denmark ressa Copenhagen

    Looks cool @d0t101101, I look forward to trying it out, when you upload the patch :)

  • πŸ‡ΊπŸ‡ΈUnited States d0t101101

@ressa - thanks! Coming your way here soon!

  • πŸ‡±πŸ‡ΉLithuania mindaugasd

    Looks nice.

    Idea:

    • ChatGPT feature could utilize default 'node' and 'comment' modules.
• Bot would respond to a user posting a regular comment. It would be similar to the drupal.org bot (which module does drupal.org already use for its bots?). The default 'comment form' could be used as well.
• This would solve various questions and enable many default Drupal features: data storage, multi-user, multi-conversation, content display, and various flexibility and listing options.
    • This could be implemented as ECA action ( https://www.drupal.org/project/eca β†’ )
  • πŸ‡ΊπŸ‡ΈUnited States d0t101101

@kevinquillen - for your consideration, here is a semi-heavy patch (created against the latest DEV branch) to vastly improve the openai_chatgpt sub-module! I have many ideas in mind for further enhancements, but thought I'd share this early on.

  • πŸ‡ΊπŸ‡ΈUnited States d0t101101

    @mindaugasd - thank you, sir!

I personally wasn't familiar with the ECA module at all, until now. :) After toying with it on my Drupal 10 playground environment, I better see your vision there too. It would be incredibly neat to have ECA's ease of use for modeling any website's workflow be able to hook into OpenAI and create content/comments/etc. on the fly.

  • πŸ‡ΊπŸ‡ΈUnited States d0t101101

Also, FYI here, I did invest some time into getting this new ChatGPT form to animate the typing of the response, to mimic chat.openai.com. Chat.openai.com seems to use its own backend APIs to get a 'stream' of the response as it's being created, making it possible to display results to the user faster, as they're generated. Unless I maybe missed something, I don't believe that's possible with the OpenAI PHP client library in use today.

There are many ways to handle the typewriter-like animation of the text, ideally in pure CSS, but I also checked out various JS libraries. The 'gotcha' I ran into was handling the code blocks in the response - getting these to cleanly animate along with normal text proved to be a challenge, so I opted to just gut this bell and whistle for now in favor of displaying the full response as quickly as possible, with a smooth scroll to the bottom of the page...
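One way the code-block gotcha could be handled (a sketch added for illustration, not the code that was removed from the patch): split the response so fenced code blocks are emitted whole, while prose is animated word by word:

```javascript
// Illustrative sketch: chunk a markdown-ish response for a typewriter effect,
// keeping fenced code blocks intact so they never animate mid-block.
function chunkForTypewriter(text) {
  const fence = '`'.repeat(3); // triple backtick, built to avoid a literal fence
  const parts = text.split(new RegExp('(' + fence + '[\\s\\S]*?' + fence + ')'));
  const chunks = [];
  for (const part of parts) {
    if (part.startsWith(fence)) {
      chunks.push(part); // whole code block = one chunk
    } else {
      for (const word of part.split(/(\s+)/)) {
        if (word) chunks.push(word); // animate prose word by word
      }
    }
  }
  return chunks;
}
```

The UI would then append one chunk per animation tick; a code block arrives in a single tick and can be highlighted immediately.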

  • πŸ‡©πŸ‡°Denmark ressa Copenhagen

    Thanks @d0t101101, I tried but can't apply the patch ... perhaps you can create a patch from inside the openai module, since the paths look different from other patches?

    diff -Naur openai/modules/openai_chatgpt/openai_chatgpt.info.yml openai_patch/modules/openai_chatgpt/openai_chatgpt.info.yml
    

Example from ✨ Add a permission to "Make OpenAI queries" and use this for the CKEditor plugin Fixed:

    diff --git a/modules/openai_ckeditor/openai_ckeditor.routing.yml b/modules/openai_ckeditor/openai_ckeditor.routing.yml
    
  • πŸ‡±πŸ‡ΉLithuania mindaugasd

    @d0t101101 ECA vision is further described in πŸ“Œ [META] Drupal could be great for building AI tools (like ChatGPT) Active

    Despite vision being large, the simpler solutions we can find, the better.

    Having a single ECA action which responds to a regular comment is a good start.

  • πŸ‡ΊπŸ‡ΈUnited States d0t101101

    @ressa - ah ha! I've attached an updated patch created using 'git diff', which should do the trick!

  • Status changed to Needs review over 1 year ago
  • πŸ‡ΊπŸ‡ΈUnited States d0t101101
  • πŸ‡ΊπŸ‡ΈUnited States d0t101101

    Attaching a more recent screenshot of this patch in action!

  • πŸ‡ΊπŸ‡ΈUnited States d0t101101

Regarding #7 above - it seems the openai-php client is working towards making the library capable of handling streamed responses from the OpenAI APIs. Very much looking forward to support for this! Such a feature will vastly improve the UI/UX here, as we can show immediate responses and 'type the response' as it's being generated:

    https://github.com/openai-php/client/issues/80

    https://platform.openai.com/docs/api-reference/completions/create#comple...

  • πŸ‡©πŸ‡°Denmark ressa Copenhagen

Thanks @d0t101101, the new patch from #10 applies, and I could try the interactive prompt. The first result appeared as expected, so that part works great. I tried asking a follow-up question, which it stalled on ... it could be the new code, or it could just be that ChatGPT has an issue right now?

    Anyway, thanks for working on this!

  • πŸ‡±πŸ‡ΉLithuania mindaugasd

GPT can take a long time to respond, or responds with various errors. So there has to be a button to repeat queries.
When a query fails, it could print the user's message and display "Please try again" with a "Repeat" button in place of an answer.
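The retry idea could be sketched like this (illustrative only; `sendQuery` stands in for whatever actually issues the API request):

```javascript
// Illustrative sketch of the "Repeat" idea: retry a failed query a few times
// before surfacing the "Please try again" state to the user.
// sendQuery is an assumed placeholder, not part of the module.
async function queryWithRetries(sendQuery, attempts = 3) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await sendQuery();
    } catch (e) {
      lastError = e; // e.g. a timeout or an API error response
    }
  }
  throw new Error('Please try again: ' + lastError.message);
}
```

A "Repeat" button in the UI would simply call this again with the same prompt.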

  • πŸ‡ΊπŸ‡ΈUnited States d0t101101

@ressa & @mindaugasd - there are some TODOs in the code around the $message variable storage and handling. The way the underlying OpenAI PHP library works is that it passes the previous messages back up with each request to keep the chat context. Without controls in place to truncate the older messages, the payload will inevitably grow too large, resulting in AJAX errors related to using too many tokens in the request. Once that limit is hit, you currently have to reload the page. The good news is I have this mostly fixed and will re-roll a patch when time permits!

Regarding taking 'too long' to respond, the PHP library just recently added the ability to stream the response. I'm waiting on that to be fully supported, which will make it possible to animate the response and give more immediate results to the client. Today, it gets everything back in one go, which can take anywhere between 3 and 15 seconds, depending on ChatGPT's load at the time.
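A minimal sketch of the truncation approach described above (the function name and the rough ~4-characters-per-token estimate are assumptions, not the actual patch):

```javascript
// Illustrative sketch: drop the oldest chat messages until a rough token
// estimate fits the model's context budget, keeping the newest context.
function truncateHistory(messages, maxTokens) {
  // Very rough heuristic: ~4 characters per token (assumption).
  const estimateTokens = (msg) => Math.ceil(msg.content.length / 4);
  const kept = [];
  let total = 0;
  // Walk from newest to oldest, keeping whatever still fits.
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i]);
    if (total + cost > maxTokens) break;
    kept.unshift(messages[i]);
    total += cost;
  }
  return kept;
}
```

The server-side fix would do the equivalent in PHP before passing $messages to the API, so the request never exceeds the token limit.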

  • πŸ‡±πŸ‡ΉLithuania mindaugasd
• I use GPT-4, which has a large context window, so token count is not an issue. The API simply sends error responses.
• And a response can sometimes take a minute (I patched the OpenAI PHP library to allow that - they are fixing the issue to support longer wait times).
• There could be a selector to truncate message length (selectable by the user) to reduce costs, because GPT-4 is >20x more expensive than GPT-3.5. This would also solve the context length issue of GPT-3.5 at the same time.
  • πŸ‡±πŸ‡ΉLithuania mindaugasd

    P.S. I work on completely different code branch, so cannot help directly with a patch (a lot of differences), but I can share what I learn.

  • πŸ‡±πŸ‡ΉLithuania mindaugasd

Maybe helpful - here I posted a transcript with GPT-4, asking it to build Drupal modules: #3336313-6: Use ChatGPT for solving Drupal issues to increase rate of development →
It shows how long the context can be [also, look at the attached document there], and it showcases how good it is right now and where it could improve in the future.

  • πŸ‡ΊπŸ‡ΈUnited States kevinquillen

I'd really prefer that Node and Comment not be used to store data/responses. If anything, a custom entity instead of nodes. I'd want the lowest possible latency in read/write, without impacting the greater system. At the lowest level, a table that stores id/uid/chat_id/messages - where 'messages' is a JSON tuple. There is some of that in the OpenAI logs module.

  • πŸ‡±πŸ‡ΉLithuania mindaugasd

To optimize for performance and retain flexibility, the ECA action could be made pluggable into any entity, including custom entities.
There are use cases where performance is not a top priority, and so node-comments would fit well.
And there are other cases where performance/efficiency is very important.
It depends on the site. Drupal itself is optimized not for performance but for site building, and the ECA action allows you to build.

  • πŸ‡±πŸ‡ΉLithuania mindaugasd

    Not using "ECA action" is ok too. There can be many solutions.

  • πŸ‡±πŸ‡ΉLithuania mindaugasd

    Multimedia input and output

Another benefit of using 'comment' is that GPT-4 will work with image input soon, as demonstrated in the GPT-4 demo videos.
The 'comment' module allows uploading an image easily.
Later, GPT-4 will probably work with PDFs, audio and so on.
'Comment' will allow working with multimedia input and output better, by having the underlying fields infrastructure.
Utilizing default Drupal functionality makes it quicker to develop the features that are coming in the future.

  • πŸ‡ΊπŸ‡ΈUnited States kevinquillen

    What does multimedia have to do with Comment instead of Media? Comment sounds like the wrong data store for anything that is not a comment.

  • πŸ‡±πŸ‡ΉLithuania mindaugasd

Just another benefit. Overall, the benefits of using a 'comment' outweigh the negatives.
But again, if scalability is a priority, then this solution can be made without using default functionality.

  • πŸ‡±πŸ‡ΉLithuania mindaugasd

    Created a separate issue for this: ✨ Creating chatGPT ECA action Active

  • πŸ‡±πŸ‡ΉLithuania mindaugasd

    About code highlighting and markdown
OpenAI uses markdown in its responses.
When asked to sort things into a table, OpenAI responds with an actual table in markdown format.
Code is also sent back in markdown. So we need to integrate markdown.

    Drupal markdown modules:

    Highlighting modules

    Markdown links

  • πŸ‡ΊπŸ‡ΈUnited States d0t101101

@mindaugasd - Agreed! I'm 90% sure the native chat.openai.com UI is using the Highlight.js lib, which is what I implemented here :) Its automatic language detection is pretty darn good on its own, assuming the code is contained, formatted and escaped correctly within the DOM.

I noticed that more recent versions of the chat.openai.com UI also let responses span multiple replies within code blocks - i.e. if the code was too long for the initial/single response, it can continue in a new code block in subsequent responses. Sometimes prompting it with 'continue this code block' helps to keep it going and contained as well. This can break down, though, as markup/code may get truncated along the way, and it reverts to spitting out raw code in the response.

Very much looking forward to getting streaming responses from the underlying PHP library/API, so we can possibly better handle this as it's returned and displayed for each request.

    Food for thought!

  • πŸ‡±πŸ‡ΉLithuania mindaugasd

@d0t101101 I am not sure if we are talking about the same thing.
If you implement markdown, the problems you talk about probably get solved.

This is how to do it:

Install the library:
composer require league/commonmark

Include:
use League\CommonMark\GithubFlavoredMarkdownConverter;

    Code:

        $converter = new GithubFlavoredMarkdownConverter([
          //'html_input' => 'escape',     // optional line if text formats are configured correctly
          'allow_unsafe_links' => false,
          'max_nesting_level' => 10
        ]);

        foreach ($messages as $key => $msg) {

          $role = $msg['role'];
          $role_id = 'chat-role-' . $role;

          if ($role == 'user') {
            $content = check_markup($msg['content'], 'chat_user');
          } else {
            // Cast needed: convert() returns a RenderedContentInterface, not a string.
            $content = check_markup((string) $converter->convert($msg['content']), 'chat_assistant');
          }

          $element["chat-msg-$key"] = [
            '#type' => 'container',
            '#attributes' => ['class' => [$role_id]],
            'markup' => ['#markup' => $content]
          ];
        }
    
  • πŸ‡±πŸ‡ΉLithuania mindaugasd

    @d0t101101 markdown and highlighting are separate things.
    Markdown is about everything, not only code. After markdown is implemented, then highlighting can be implemented on top as a cherry on the cake:)

  • πŸ‡±πŸ‡ΉLithuania mindaugasd

Using check_markup was not the best approach, because not everything works that way.

    Sending improved part of code:

      foreach ($messages as $key => $msg) {

        $role = $msg['role'];
        $role_class = 'chat-role-' . $role;

        // Cast needed: convert() returns a RenderedContentInterface, not a string.
        $content = ($role == 'user') ?
                   $msg['content'] :
                   (string) $converter->convert($msg['content']);

        $format = ($role == 'user') ? 'chat_user' : 'chat_assistant';

        $build['chat-msg-' . $key] = [
          '#type' => 'container',
          '#attributes' => ['class' => [$role_class]],
          'content' => [
            '#type' => 'processed_text',
            '#text' => $content,
            '#format' => $format
          ]
        ];
      }
    
  • πŸ‡ΊπŸ‡ΈUnited States d0t101101

@mindaugasd - thanks for the input and code examples here! I've been incredibly busy lately and am stoked to get this all wrapped up ASAP. It looks like the underlying openai-php/client lib now has some solid examples of how to handle streamed Chat responses too, which could vastly improve the UX:

    https://github.com/openai-php/client/tree/v0.4.1#created-streamed

I'm on the 'waiting list' but am still awaiting actual access to the OpenAI GPT-4 API so I can further develop and debug that as well.

  • πŸ‡ΊπŸ‡ΈUnited States kevinquillen

If you pay for ChatGPT Plus, you can try GPT-4. How are the streamed responses coming along? I've been working on performance improvements for the Devel Generate feature; the main blocker is the varying response time depending on the prompt. I am wondering if a response can be streamed across several batches.

  • πŸ‡ΊπŸ‡ΈUnited States d0t101101

    @kevinquillen:

I have been using ChatGPT Plus for months, but it seems that for the GPT-4 API calls to work, it must also be visible under your OpenAI account's 'Playground' too. And for me, it still isn't! Chat.openai.com shows a Plus paid plan. So, as mentioned, I applied for their waiting list here; will follow up with their support directly to expedite if possible next...

    https://openai.com/waitlist/gpt-4-api

On the development front, after upgrading openai-php/client to the latest v0.4.1 via Composer, restarting Apache and clearing all Drupal caches, the main blocker I've been fighting with is getting my code to successfully call the new createStreamed() method in the vendor lib instead of just create() here:

    https://git.drupalcode.org/project/openai/-/blob/1.0.x/modules/openai_ch...

    Changing this line to createStreamed() results in this error getting logged:

    Uncaught PHP Exception Error: "Call to undefined method OpenAI\\Resources\\Chat::createStreamed()"

Thinking this was related to the OpenAI Drupal module's factory pattern, I've tried to work through it by adding a similar public function createStreamed() method here, but that doesn't seem to have an impact:

    https://git.drupalcode.org/project/openai/-/blob/1.0.x/src/Http/ClientFa...
    https://git.drupalcode.org/project/openai/-/blob/1.0.x/modules/openai_ch...

    Any ideas or suggestions as to where I might need to define createStreamed() for this to work in the Drupal OpenAI modules ChatGptForm.php? After that's addressed, this thread has lots of juicy details to proceed further:

    https://github.com/openai-php/client/pull/84

In the above, my guess is that I'm probably missing something fundamental around the use of $instance, $container and/or Symfony's dependency injection! Forgive my ignorance, sir. :)

  • πŸ‡ΊπŸ‡ΈUnited States kevinquillen

createStreamed is supported as of v0.4 of the openai-php client. The problem I am seeing now is that Drupal's AJAX response does not seem geared for streamed behavior.

    I have tried both of these:

        $stream = $this->client->completions()->createStreamed(
          [
            'model' => $model,
            'prompt' => trim($prompt),
            // Float cast: an (int) cast would truncate values like 0.4 to 0.
            'temperature' => (float) $temperature,
            'max_tokens' => (int) $max_tokens,
          ],
        );

        foreach ($stream as $response) {
          // ?: instead of ??, since trim() never returns NULL, only ''.
          $text = trim($response->choices[0]->text) ?: $this->t('No answer was provided.');
          $response = new AjaxResponse();
          $response->addCommand(new HtmlCommand('#edit-response', $text));
          return $response;
        }
    

    and

        $stream = $this->client->completions()->createStreamed(
          [
            'model' => $model,
            'prompt' => trim($prompt),
            'temperature' => (float) $temperature,
            'max_tokens' => (int) $max_tokens,
          ],
        );

        foreach ($stream as $response) {
          $form['response'] .= trim($response->choices[0]->text) ?: $this->t('No answer was provided.');
          return $form['response'];
        }
    

While the streaming 'worked', it's not allowed to complete because of the return. I am not sure offhand how to return data as it becomes available, in either case, without the request being terminated on the first pass.

  • πŸ‡±πŸ‡ΉLithuania mindaugasd

A few days ago I asked ChatGPT a few questions about how to implement streaming.

    It suggested javascript code:

      var displayStreamedContent = function () {
        var container = $('#streamed-content');
    
        $.ajax({
          url: '/stream',
          method: 'GET',
          cache: false,
          xhrFields: {
            onprogress: function (event) {
              // Append received data to the container
              var response = event.currentTarget.response;
              container.text(response);
            },
          },
        }).fail(function (jqXHR, textStatus) {
          container.text('Error: ' + textStatus);
        });    
      };
    

It also provided the Drupal side of the code (a route, a controller, Guzzle code and new StreamedResponse()).

But I have not explored further, besides asking a few questions about this.

  • πŸ‡±πŸ‡ΉLithuania mindaugasd

And the whole response looks very doable; it looks like it should work, as ChatGPT said.

  • πŸ‡ΊπŸ‡ΈUnited States kevinquillen

Funny, because everything it gave me doesn't work at all, complete with classes that don't exist in any version of Drupal.

  • πŸ‡±πŸ‡ΉLithuania mindaugasd

In principle, the ChatGPT-4 solution looks like a good one.

With

new AjaxResponse(); $response->addCommand()

you can initiate that JavaScript function to stream a response.

And I checked that the StreamedResponse class exists: https://symfony.com/doc/current/components/http_foundation.html#streamin...
I wanted to open it on https://api.drupal.org, but the API site has not been working properly recently.

  • πŸ‡ΊπŸ‡ΈUnited States kevinquillen

    I have tried this a handful of ways and I am not sure it will work.

For one, doing this from a form in Drupal, even with AjaxCommands, can't be streamed. Drupal forms cannot work that way.

    https://drupal.stackexchange.com/questions/315677/using-streamedresponse...

    I have tried several ways to make it read out data as it is available. Either it waits for the entire response before it starts to write, or I just get an error.

    I changed the Completion controller to:

      public function generate(Request $request) {
        $data = json_decode($request->getContent());
    
        $stream = $this->client->completions()->createStreamed(
          [
            'model' => $data->options->model ?? 'text-davinci-003',
            'prompt' => trim($data->prompt),
            'temperature' => floatval($data->options->temperature),
            'max_tokens' => (int) $data->options->max_tokens,
          ]
        );
    
    
        return new StreamedResponse(function () use ($stream) {
          foreach ($stream as $response) {
            echo $response->choices[0]->text;
            ob_flush();
            flush();
          }
        });
      }
    

    the js:

    fetch(drupalSettings.path.baseUrl + 'api/openai-ckeditor/completion', {
      method: 'POST',
      credentials: 'same-origin',
      body: JSON.stringify({'prompt': prompt, 'options': this._config}),
    }).then(async (response) => {
      const reader = response.body.getReader();
      while (true) {
        const {value, done} = await reader.read();
        // Check done before decoding: value is undefined on the final read.
        if (done) break;
        const text = new TextDecoder().decode(value);
        editor.model.insertContent(
          writer.createText(text)
        );
        console.log('Received', text);
      }
    });
    

    I also tried with the response return within the foreach loop, which does nothing.

Ultimately, the fetch waits for the entire response and then writes it all at once. I have not seen any working examples of this so far, and there are several closed issues in the client package repo with no answers. Note that 'StreamResponse' from the API client is not the same as Symfony's StreamedResponse. Drupal controllers can only return a render array or a Response (StreamedResponse) object. I otherwise do not know how to stream responses back, as I have never worked with them.

  • πŸ‡ΊπŸ‡ΈUnited States kevinquillen

    The missing piece is you need a web server that can support streaming a response. I made some notes in here:

    https://www.drupal.org/project/openai/releases/1.0.0-alpha9 β†’

    Without that, it will only return the response when it is finished (no real time writing visuals like the OpenAI site).

But with FormAPI it is never going to do it natively. I got this to work by writing JavaScript that takes over the submit onclick event and works it out from there, with the response being written into a div in the form. FormAPI needs to pass messages back and forth; it cannot do string fragments, not that I've seen anyway. I'm not even 100% sure a custom AjaxCommand and AjaxResponse can do it, for similar reasons.
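For reference, the web-server piece usually comes down to disabling response buffering. A hedged example for an assumed nginx + PHP-FPM stack (the exact directives depend on your setup):

```nginx
# Assumed nginx + PHP-FPM setup; directives vary by stack.
location ~ \.php$ {
    fastcgi_pass unix:/run/php/php-fpm.sock;  # adjust to your socket/port
    fastcgi_buffering off;  # flush FastCGI output as it arrives
    gzip off;               # gzip would otherwise buffer the whole body
}
```

Alternatively, the PHP side can send an `X-Accel-Buffering: no` header, which tells nginx to disable buffering for that one response only.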

  • πŸ‡±πŸ‡ΉLithuania mindaugasd

Created a module for an AI developer assistant:

AI chat user interface
https://www.drupal.org/project/aichat →

"Pluggable to many backends" means that it won't have data storage, only a user interface. So, in theory, many different modules could integrate with it.

  • Issue was unassigned.
  • Status changed to Needs work 10 months ago
  • πŸ‡±πŸ‡ΉLithuania mindaugasd

Chat UI has been released. Maybe it is a good idea to move the UI/UX efforts there?

From the previous patch: highlighting could be done, while for the CSS I'm not sure, because usually it is part of the theme.
Maybe creating a sub-module "Chat UI Styles" could be a solution.

Chat UI can be integrated with the OpenAI module by copying and modifying the "example backend" sub-module; here is the code:
https://git.drupalcode.org/project/aichat/-/tree/1.0.x/modules/aichat_ba...

Chat UI has "Conversation" entities, and conversation messages are saved to the JSON "data" field of the conversation in the example backend.

As written in the module's roadmap, these features could be implemented too with OpenAI integration (which requires writing a new Chat UI API):

    • Implement response streaming.
    • Implement end-user facing chat configuration form.