Commit

Use LLM-as-a-Judge header
ankrgyl committed Jul 24, 2024
1 parent 4c3aef9 · commit a8fc1bd
Showing 1 changed file with 2 additions and 2 deletions.
README.md: 2 additions & 2 deletions
@@ -4,9 +4,9 @@ Autoevals is a tool to quickly and easily evaluate AI model outputs.
 
 It bundles together a variety of automatic evaluation methods including:
 
+- LLM-as-a-Judge
 - Heuristic (e.g. Levenshtein distance)
 - Statistical (e.g. BLEU)
-- Model-based (using LLMs)
 
 Autoevals is developed by the team at [Braintrust](https://braintrust.dev/).
 
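The "Heuristic (e.g. Levenshtein distance)" bucket above can be made concrete with a short sketch. This is an illustrative Python implementation of Levenshtein-based scoring, assuming the scorer normalizes edit distance into a 0..1 similarity; it is not Autoevals' actual code, and the function names are invented for this example.

```python
# Illustrative sketch of a Levenshtein-based heuristic scorer.
# NOT Autoevals' implementation; it only shows the idea of turning
# edit distance into a 0..1 similarity score.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

def levenshtein_score(output: str, expected: str) -> float:
    """Normalize edit distance into a similarity in [0, 1]."""
    longest = max(len(output), len(expected))
    if longest == 0:
        return 1.0  # two empty strings are identical
    return 1.0 - levenshtein(output, expected) / longest

print(levenshtein_score("kitten", "sitting"))  # 3 edits over 7 chars
```

A score of 1.0 means the strings match exactly; unrelated strings trend toward 0.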
@@ -150,7 +150,7 @@ npx braintrust run example.eval.js
 
 ## Supported Evaluation Methods
 
-### Model-Based Classification
+### LLM-as-a-Judge
 
 - Battle
 - ClosedQA
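The LLM-as-a-Judge scorers listed in this hunk (Battle, ClosedQA, and others) share a common shape: build a grading prompt, ask a judge model for a verdict, and map the verdict to a numeric score. A minimal sketch of that shape, with a pluggable `complete` callable standing in for a real model call; the prompt text and names here are invented for illustration, not Autoevals' API.

```python
# Illustrative sketch of an LLM-as-a-Judge scorer, not Autoevals' API.
# `complete` is a hypothetical stand-in for any chat-completion call.

JUDGE_PROMPT = """You are grading an AI answer.

Question: {input}
Expected answer: {expected}
Submitted answer: {output}

Reply with a single letter:
A if the submission is fully correct,
B if it is partially correct,
C if it is incorrect."""

GRADE_TO_SCORE = {"A": 1.0, "B": 0.5, "C": 0.0}

def judge(input: str, output: str, expected: str, complete) -> float:
    """Ask a judge model for a letter grade and map it to a score."""
    prompt = JUDGE_PROMPT.format(input=input, expected=expected, output=output)
    reply = complete(prompt).strip().upper()
    return GRADE_TO_SCORE.get(reply[:1], 0.0)

# A stub "model" that always answers A, just to show the call shape:
score = judge("2+2?", "4", "4", complete=lambda prompt: "A")
print(score)  # 1.0
```

In practice the letter-grade rubric and the choice-to-score mapping are what distinguish one judge scorer from another; the surrounding plumbing stays the same.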
