Commit: Readme fixes (#75)

danielericlee authored Jul 8, 2024
1 parent 2f89ba3 commit 25deacc
Showing 1 changed file with 3 additions and 3 deletions.
README.md (6 changes: 3 additions & 3 deletions)

@@ -8,7 +8,7 @@ It bundles together a variety of automatic evaluation methods including:
 - Statistical (e.g. BLEU)
 - Model-based (using LLMs)
 
-Autoevals is developed by the team at [BrainTrust](https://braintrustdata.com/).
+Autoevals is developed by the team at [Braintrust](https://braintrust.dev/).
 
 Autoevals uses model-graded evaluation for a variety of subjective tasks including fact checking,
 safety, and more. Many of these evaluations are adapted from OpenAI's excellent [evals](https://github.com/openai/evals)
@@ -78,7 +78,7 @@ import { Factuality } from "autoevals";
 
 ## Using Braintrust with Autoevals
 
-Once you grade an output using Autoevals, it's convenient to use [BrainTrust](https://www.braintrustdata.com/docs/libs/python) to log and compare your evaluation results.
+Once you grade an output using Autoevals, it's convenient to use [Braintrust](https://www.braintrust.dev/docs/libs/python) to log and compare your evaluation results.
 
 ### Python
 
@@ -340,4 +340,4 @@ There is nothing particularly novel about the evaluation methods in this library
 
 ## Documentation
 
-The full docs are available [here](https://www.braintrustdata.com/docs/autoevals/overview).
+The full docs are available [here](https://www.braintrust.dev/docs/reference/autoevals).
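For context, the second hunk's header shows `import { Factuality } from "autoevals";`, the model-graded factuality scorer the README describes. Below is a minimal sketch of how that scorer might be invoked; the example values and the `{ output, expected, input }` call shape are assumptions, not something this commit changes.

```typescript
import { Factuality } from "autoevals";

(async () => {
  // Hypothetical example values; not taken from this commit.
  const input = "Which country has the highest population?";
  const output = "People's Republic of China";
  const expected = "China";

  // Assumed call shape: the scorer receives the model output, the reference
  // answer, and the original input, and resolves to a result with a score in [0, 1].
  const result = await Factuality({ output, expected, input });
  console.log(`Factuality score: ${result.score}`);
})();
```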
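The "Using Braintrust with Autoevals" section touched by this commit says it is convenient to log and compare evaluation results in Braintrust. A hedged sketch of that wiring follows, assuming Braintrust exposes an `Eval` entry point that takes a project name plus `data`, `task`, and `scores` options (none of which appear in this diff).

```typescript
import { Eval } from "braintrust";
import { Factuality } from "autoevals";

// Assumed Eval options: a dataset, a task that produces model output for each
// input, and a list of scorers applied to (input, output, expected) triples.
Eval("My Eval Project", {
  data: () => [
    { input: "Which country has the highest population?", expected: "China" },
  ],
  task: async (input: string) => {
    // A real task would call a model; a constant stands in for it here.
    return "People's Republic of China";
  },
  scores: [Factuality],
});
```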
