Enable token support for AI Interpolator Rule Huggingface Text Generation

Created on 12 June 2024
Updated 15 June 2024

Problem/Motivation

I needed to use the awesome "Advanced Mode (Token)" feature in a rule, to be able to use several tokens based on multiple fields. I was using AI Interpolator Rule Huggingface Text Generation, but only had "Base mode" as an option ...

I found ✨ Many Different Fields Interpolation Active, where "Many fields to one field interpolation" is mentioned:

Problem/Motivation

Currently the AI Interpolator can handle the following use cases:

  • One field to one field interpolation
  • Many fields to one field interpolation <<<<<< THIS ONE

Looking at the code, I found that advancedMode, which gates token support, was disabled for the rule. I tried enabling it, and could then use tokens from all fields and write more complex prompts.
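
For context, a minimal sketch of the kind of change involved. The class name, namespace, base class, and the exact shape of advancedMode (method vs. property) are assumptions for illustration, not confirmed from the merge request:

```php
<?php

namespace Drupal\ai_interpolator_huggingface\Plugin\AiInterpolatorFieldRules;

// Hypothetical sketch: names here are assumed for illustration. The real
// rule plugin lives in the Huggingface submodule and extends a base class
// provided by the AI Interpolator module.
class HuggingfaceTextGeneration extends HuggingfaceTextGenerationBase {

  /**
   * Whether the rule supports "Advanced Mode (Token)".
   *
   * The gist of the fix: returning TRUE instead of FALSE exposes the
   * token-based prompt editor in the rule configuration, so tokens from
   * several source fields can be combined in one prompt.
   */
  public function advancedMode(): bool {
    return TRUE;
  }

}
```

With advanced mode enabled, the prompt can then pull in multiple fields via regular Drupal tokens, e.g. [node:title] alongside tokens for body or custom fields.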

Steps to reproduce

Try to use content from multiple fields via tokens with AI Interpolator Rule Huggingface Text Generation; only "Base mode" is offered as an option.

Proposed resolution

Add support for tokens to AI Interpolator Rule Huggingface Text Generation.

... or maybe there are good reasons not to?

Remaining tasks

User interface changes

API changes

Data model changes

📌 Task

Status: Fixed
Version: 1.0
Component: Code
Created by: 🇩🇰 Denmark ressa Copenhagen


Merge Requests

Comments & Activities

  • Issue created by @ressa
  • Merge request !3 Enable token support → (Merged) created by ressa
  • Status changed to Needs review 16 days ago
  • 🇩🇰 Denmark ressa Copenhagen

    Bonus question: Does anyone have experience with getting fairly fast but short answers (max. 500 characters) from HuggingFace, and whether processing time, and thereby answer length, increases after upgrading to a paying account?

    HuggingFace answers after only 5 seconds with a short reply, whereas OpenAI, which I used previously, spent up to 30 seconds and returned very elaborate replies of up to 4000 characters ...

  • 🇩🇪 Germany Marcus_Johansson

    Thanks, I assume you have already tested this yourself, so I'll set it to fixed. It should be visible in the DEV version.

    Regarding your question: do you mean the Huggingface Pro account versus the normal Huggingface account? If that is the case, I wouldn't know; I don't see any difference between them.

    If you mean that you are actually paying for a machine via dedicated inference, then the speed depends on the machine size.

  • Status changed to Fixed 13 days ago
  • 🇩🇰 Denmark ressa Copenhagen

    Thanks for the fast reply @Marcus_Johansson.

    So I guess there is no way to turn up a time parameter or a max-characters setting ... I assumed that a free account gets, for example, 5 seconds of execution time and max. 500 characters, whereas paying customers could in theory set a maximum execution time and amount of text, for example 30 seconds and 5000 characters.

    But since you write "I don't see any difference between them", maybe it's not possible. It's just odd, because locally, Ollama and Llama3 have no problem crunching a prompt for 15-25 seconds and returning ~5000 characters, while the same model on HuggingFace only thinks for ~5 seconds and returns ~500 characters.
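
On the length question: the Hugging Face serverless Inference API for text generation accepts a max_new_tokens parameter that caps how much text the model may produce, so a low default there could explain ~500-character replies regardless of account type. Below is a minimal standalone sketch of setting it, with a placeholder model and token; this is not how the module itself calls the API:

```php
<?php

// Standalone sketch using PHP's curl extension. The model name and the
// "hf_..." token are placeholders; swap in your own.
$payload = json_encode([
  'inputs' => 'Summarize the following article: ...',
  'parameters' => [
    // Caps the generated output; raise it if replies are cut short.
    'max_new_tokens' => 1000,
  ],
]);

$ch = curl_init('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-8B-Instruct');
curl_setopt_array($ch, [
  CURLOPT_RETURNTRANSFER => TRUE,
  CURLOPT_POST => TRUE,
  CURLOPT_POSTFIELDS => $payload,
  CURLOPT_HTTPHEADER => [
    'Authorization: Bearer hf_your_token_here',
    'Content-Type: application/json',
  ],
]);
$response = curl_exec($ch);
curl_close($ch);

// The response is JSON; for text-generation models it typically contains
// the generated text.
print $response;
```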
