- Issue created by @yautja_cetanu
- 🇱🇹Lithuania mindaugasd
Sounds like great idea 🙌🏻, security against LLM hacks is important.
- 🇩🇰Denmark ressa Copenhagen
Thanks for creating this issue @yautja_cetanu, it would be a nice feature to be able to block prompts based on some rules, before passing them on to a LLM.
- 🇧🇪Belgium wouters_f Leuven
Traditional chatbot companies do this by having a layer in front of the communication to GPT.
They have a whitelist/blacklist kind of approach.They first do intent detection (to find the forbidden/allowed events).
According to the output of the intent detection they will act accordingly.I guess this is where the ai_external_moderation module comes into play?
https://git.drupalcode.org/project/ai/-/tree/1.0.x/modules/ai_external_m...