Factuality Evaluator failing #28
Tried this code snippet (the snippet itself was not captured in this excerpt):

I see this error:

Score(name='Factuality', score=0, metadata={}, error=KeyError('usage'))
Factuality score: 0

Any suggestions on how I can debug this?
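The reporter's exact snippet isn't shown in this thread. A minimal sketch of a typical Factuality call that prints this kind of output, assuming the standard autoevals usage (the strings below are hypothetical placeholders), would look roughly like this:

```python
# Minimal sketch, assuming the standard autoevals Factuality API.
# The output/expected/input strings are hypothetical placeholders.
from autoevals.llm import Factuality

evaluator = Factuality()
result = evaluator(
    output="People's Republic of China",
    expected="China",
    input="Which country has the highest population?",
)

# When the underlying OpenAI call fails, the exception is captured on the
# Score object (see the error= field in the output above) rather than raised.
print(result)
print(f"Factuality score: {result.score}")
```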
Comments

Hmm, this error seems to imply that the result from OpenAI did not include the "usage" key. We have some logic around this that we're in the middle of reworking in #27, and I suspect, especially if you're not using braintrust, that this will resolve the issue. In the meantime, could you share the versions of autoevals, openai, and python you're using?
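For context, a KeyError('usage') of this shape typically means a response dict was indexed for a "usage" entry it didn't contain. A small sketch of that failure mode (not the actual autoevals internals) looks like:

```python
# Sketch of the failure mode described above, not the actual autoevals code.
# If the API response has no "usage" entry, direct indexing raises KeyError.
response = {"choices": [{"message": {"content": "A"}}]}  # hypothetical response missing "usage"

try:
    total_tokens = response["usage"]["total_tokens"]  # raises KeyError('usage')
except KeyError:
    # A defensive alternative: response.get("usage", {}).get("total_tokens")
    total_tokens = None

print(total_tokens)
```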
autoevals 0.0.30, python 3.10
It would also be useful if you printed the stack trace on errors.
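One way to surface that stack trace is a sketch like the following, assuming the Score object's error field holds the caught exception (as the repr above suggests) and that its traceback is still attached:

```python
# Sketch: print the traceback of the exception captured on the Score object.
# Assumes result.error holds the caught exception and its __traceback__ is attached.
import traceback

from autoevals.llm import Factuality

result = Factuality()(
    output="People's Republic of China",  # hypothetical placeholder values
    expected="China",
    input="Which country has the highest population?",
)
if result.error is not None:
    traceback.print_exception(type(result.error), result.error, result.error.__traceback__)
```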
Interesting. Do you mind patching in #27 or re-testing after we land that change? I was not able to repro the error, but I suspect the response you're getting from OpenAI (perhaps related to your key?) is missing the "usage" field.
Yes, I can re-test once you've landed #27. Should I leave this issue open till then?
Just published 0.0.31. Please leave it open!
Works now.