- Issue created by @Vivek Panicker
- 🇬🇧United Kingdom seogow
Flexibility of having the block independent of the View is important for design purposes, so using a Views area plugin (and thus the View's own caching) would not serve UX builders well.
But the point is valid.
I suggest adding fingerprinting of the content and caching the LLM response. That way, if the content fingerprint (i.e. the View's result) remains the same, the LLM call is bypassed and the cached output is used instead.
Would that be satisfactory?
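To make the proposal concrete, here is a minimal sketch of the fingerprint-and-cache idea in Python pseudocode. All names (`cache`, `summarize_view_result`, `call_llm`) are hypothetical illustrations, not the module's real API; in Drupal this would use a cache bin keyed by the hash rather than an in-memory dict.

```python
import hashlib

# Hypothetical in-memory cache; a real Drupal module would use a cache bin.
cache = {}

def summarize_view_result(view_output: str) -> str:
    """Return the LLM output for a View's rendered result, reusing the
    cached response whenever the content fingerprint is unchanged."""
    # Fingerprint the View's result.
    fingerprint = hashlib.sha256(view_output.encode("utf-8")).hexdigest()
    if fingerprint in cache:
        # Content unchanged: bypass the LLM call entirely.
        return cache[fingerprint]
    response = call_llm(view_output)
    cache[fingerprint] = response
    return response

def call_llm(text: str) -> str:
    # Stand-in for the actual (expensive) AI module call.
    return f"summary of {len(text)} chars"
```

The key design point is that the cache key is derived from the View's output itself, so any change to the result invalidates the cached LLM response automatically, with no manual cache-tag bookkeeping.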
- 🇮🇳India Vivek Panicker Kolkata
> Flexibility of having the block independent of the View is important for design purposes, so using a Views area plugin (and thus the View's own caching) would not serve UX builders well.
That's something I didn't think of. In that respect, the current implementation looks good.
Maybe we can later add a Views area plugin so that anyone who did want to use it could.

> I suggest adding fingerprinting of the content and caching the LLM response. That way, if the content fingerprint (i.e. the View's result) remains the same, the LLM call is bypassed and the cached output is used instead. Would that be satisfactory?

That sounds good, if it does not add too much overhead.
- 🇬🇧United Kingdom seogow
Agreed. OK, I will aim to release version 1.0 of this module with this functionality, alongside version 1.1 of the AI module on which it depends.