Override standard model for certain questions #1618
johnjosephhorton started this conversation in Ideas
Replies: 1 comment
-
@johnjosephhorton I think here is the injection point. Regarding the complications:
-
Basic Idea
Often, we want to use a particular model to answer questions, but a different model to evaluate that model's responses. We could break this up into two jobs, but it might be more useful to be able to specify that, for a certain question, a different model should be used (one could imagine a similar notion for Agents).
E.g., suppose I wanted several models to write a haiku but then have one model evaluate them.
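A hypothetical sketch of the interface (the names `overrides` and `models_for` are illustrative, not part of the current EDSL API): a dictionary maps question names to the model(s) that should answer them, with everything else falling back to a default.

```python
# Hypothetical sketch: per-question model override via a plain dict.
# None of these names are the real EDSL API.

DEFAULT_MODEL = "gpt-4o"  # assumed default; any model name would do

# Several models write the haiku; one model evaluates the results.
overrides = {
    "write_haiku": ["claude-3-opus", "gemini-pro", "gpt-4o"],
    "evaluate_haiku": ["gpt-4o"],
}

def models_for(question_name, overrides, default=DEFAULT_MODEL):
    """Return the list of models that should answer this question."""
    return overrides.get(question_name, [default])

print(models_for("write_haiku", overrides))     # the three writer models
print(models_for("evaluate_haiku", overrides))  # the single evaluator
print(models_for("other_question", overrides))  # falls back to the default
```

The point of the dict shape is that unlisted questions keep today's behavior, so the override is purely additive.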
How to implement
We'd need to modify the "Jobs" object to take an override dictionary (e.g., mapping question names to models) and pass it down to the Invigilator at the right points.
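A minimal sketch of that pass-down, using stand-in Jobs/Invigilator classes (the real EDSL internals differ; the constructor signatures here are invented for illustration):

```python
# Stand-in classes sketching how a Jobs object could thread an
# override dict down to per-question Invigilators.

class Invigilator:
    def __init__(self, question_name, model):
        self.question_name = question_name
        self.model = model  # the model actually used for this question

class Jobs:
    def __init__(self, question_names, default_model, model_overrides=None):
        self.question_names = question_names
        self.default_model = default_model
        self.model_overrides = model_overrides or {}

    def make_invigilators(self):
        # The override dict is consulted once per question, at the point
        # where the Invigilator is created.
        return [
            Invigilator(q, self.model_overrides.get(q, self.default_model))
            for q in self.question_names
        ]

jobs = Jobs(
    ["write_haiku", "evaluate_haiku"],
    default_model="gpt-4o",
    model_overrides={"evaluate_haiku": "claude-3-opus"},
)
for inv in jobs.make_invigilators():
    print(inv.question_name, "->", inv.model)
```

Resolving the override at Invigilator-creation time keeps the rest of the execution path unaware that anything changed, which is presumably what makes this the natural injection point.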
Some complications
Results object
The
results.select('model.*')
call will be wrong: it will suggest there was one model for every question when that's not the case. One possibility is that this field becomes a dictionary with keys of the question names and values of the models actually used.
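A sketch of that dictionary shape (the row layout and field names here are hypothetical, not the real Results schema): the single 'model' value becomes a per-question mapping, so reporting stays truthful when models differ by question.

```python
# Hypothetical result row: 'model' is a dict keyed by question name
# instead of a single value, so per-question overrides are visible.
row = {
    "answer": {
        "write_haiku": "An old silent pond...",
        "evaluate_haiku": "8/10",
    },
    "model": {
        "write_haiku": "gemini-pro",
        "evaluate_haiku": "gpt-4o",
    },
}

def model_used(row, question_name):
    """Look up which model actually answered a given question."""
    return row["model"][question_name]

print(model_used(row, "write_haiku"))     # gemini-pro
print(model_used(row, "evaluate_haiku"))  # gpt-4o
```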
Token bucket
The token bucket is defined at a higher level than this, so it might require fairly substantial re-factoring.
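One direction the refactoring could take (a sketch, not the current implementation) is to key buckets by model name, creating them lazily as overrides introduce new models, so each model's rate limit is tracked independently:

```python
import time

class TokenBucket:
    """Minimal token bucket, for illustration only."""
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def consume(self, n):
        # Refill based on elapsed time, then try to spend n tokens.
        now = time.monotonic()
        self.tokens = min(
            self.capacity,
            self.tokens + (now - self.last) * self.refill_per_sec,
        )
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

# Buckets keyed by model, created on first use; the capacity and
# refill numbers are placeholders, not real provider limits.
buckets = {}

def bucket_for(model, capacity=1000, refill_per_sec=100):
    if model not in buckets:
        buckets[model] = TokenBucket(capacity, refill_per_sec)
    return buckets[model]

print(bucket_for("gpt-4o").consume(10))  # True
```

The substantial part of the refactor is presumably not the bucket itself but moving its ownership from the job level down to wherever the per-question model is known.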
Cost calculation
I don't recall the code organization well enough to know whether this would adversely affect the calculation of costs. I could imagine it implicitly assumes that the specified model was the one actually used.
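If the cost code is driven by a record of what actually ran, the single-model assumption disappears. A sketch (the per-token prices here are made up, and real prices vary by provider and date):

```python
# Hypothetical per-1K-token prices; placeholders, not real pricing.
PRICE_PER_1K_TOKENS = {"gpt-4o": 0.005, "claude-3-opus": 0.015}

def job_cost(usage):
    """usage: list of (question_name, model, tokens) actually executed.

    Cost is computed from the model each question really used,
    not from a single model assumed for the whole job.
    """
    return sum(
        tokens / 1000 * PRICE_PER_1K_TOKENS[model]
        for _, model, tokens in usage
    )

usage = [
    ("write_haiku", "claude-3-opus", 200),
    ("evaluate_haiku", "gpt-4o", 400),
]
print(round(job_cost(usage), 4))  # 0.005
```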