- Issue created by @AlfTheCat
Hi,
Very exciting to see ECA support in the openai module suite :)
On a first try, I ran into an issue where processing long prompts/outputs (rewriting texts, for instance) results in an incomplete response being returned. I understand the module does not address token limits itself, but I wonder if it could work around them via a chunking mechanism: split long input into pieces that fit within the limit, process each piece, and join the partial results.
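To illustrate the idea, here is a minimal sketch of such a chunking mechanism, in Python rather than the module's PHP. The names (`chunk_text`, `process_long_text`, `max_words`) are hypothetical, and it uses word count as a rough stand-in for token count; a real implementation would use a proper tokenizer and handle sentence boundaries.

```python
def chunk_text(text, max_words=500):
    """Split text into chunks of at most max_words words.

    Word count is only a rough proxy for token count; a real
    implementation would use the model's actual tokenizer.
    """
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]


def process_long_text(text, rewrite_fn, max_words=500):
    """Apply rewrite_fn (e.g. an API completion call) to each chunk
    and join the partial results back into one output."""
    return " ".join(rewrite_fn(chunk)
                    for chunk in chunk_text(text, max_words))
```

For example, `process_long_text(long_text, call_openai)` would run `call_openai` once per chunk instead of once on the whole text, so no single request exceeds the token limit.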
I hope this is useful to others as well and could be added to the module in a future release.
Status: Active
Version: 1.0
Component: Code