I’m not sure what you mean by filtering results. Can you give an example?
Not all models can reply with code alone, without extra quoting and explanation. So to use the reply in a code file, we need to filter out just the code part before inserting it into the code buffer.
Thank you. I’ll see if I can switch to your package as a backend.
Ah, got it. That is indeed a problem I’d like to solve. If you look at the OpenAI integration, I already have code to solve it there, but how to extend it to everything else has been an open question that I’ll eventually have to figure out. The interfaces involved are also not clear. Any insight you’ve come up with is likely to be helpful, so please don’t hesitate to share.
My solution is to ask the LLM to return the result in some fixed format, such as a markdown code block, and then process the model output line by line, checking whether each line matches the prefix/suffix patterns that drive a simple parser state machine. All data before and including the prefix, and after and including the suffix, is dropped. Line-by-line processing is the key here; see the sketch below.
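For illustration, here is a minimal Python sketch of that state machine. The function name `extract_code` and the default ``` fences are my own choices for the example, not code from either package:

```python
def extract_code(reply: str,
                 prefix: str = "```",
                 suffix: str = "```") -> str:
    """Keep only the code part of an LLM reply.

    Everything up to and including the line matching `prefix` is dropped,
    as is everything from the line matching `suffix` onward.
    """
    inside = False          # state of the simple parser state machine
    kept: list[str] = []
    for line in reply.splitlines():
        if not inside:
            # Drop lines until the opening fence appears (it may carry a
            # language tag, e.g. ```python), then switch state.
            if line.strip().startswith(prefix):
                inside = True
        else:
            # The closing fence ends the code block; drop it and the rest.
            if line.strip() == suffix:
                break
            kept.append(line)
    return "\n".join(kept)


if __name__ == "__main__":
    reply = (
        "Sure! Here is the function you asked for:\n"
        "```python\n"
        "def greet(name):\n"
        "    return f'Hello, {name}!'\n"
        "```\n"
        "Let me know if you need anything else."
    )
    print(extract_code(reply))  # prints only the two code lines
```

Because the state only changes on complete lines, the same loop works on streamed output: buffer partial chunks until a newline arrives, then feed whole lines to the parser.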