Quick quickstart updates (#586)
Main changes: 
- more background on langsmith
- added up front info about what the tutorial covers
tanushree-sharma authored Dec 12, 2024
2 parents 06f8b6c + 187b3f1 commit 79c9d07
Showing 2 changed files with 26 additions and 12 deletions.
37 changes: 26 additions & 11 deletions docs/index.mdx
@@ -28,7 +28,23 @@ import { RegionalUrl } from "@site/src/components/RegionalUrls";

**LangSmith** is a platform for building production-grade LLM applications.
It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence.
LangChain's open source frameworks [langchain](https://python.langchain.com) and [langgraph](https://langchain-ai.github.io/langgraph/) work seamlessly with LangSmith but are not necessary - LangSmith works on its own!
With LangSmith you can:

- **Trace LLM Applications**: Gain visibility into LLM calls and other parts of your application's logic.
- **Evaluate Performance**: Compare results across models, prompts, and architectures to identify what works best.
- **Improve Prompts**: Quickly refine prompts to achieve more accurate and reliable results.

:::tip LangSmith + LangChain OSS

LangSmith integrates seamlessly with LangChain's open source frameworks [`langchain`](https://python.langchain.com) and [`langgraph`](https://langchain-ai.github.io/langgraph/), with no extra instrumentation needed.

If you're already using either of these, see the how-to guide for [setting up LangSmith with LangChain](./observability/how_to_guides/tracing/trace_with_langchain) or [setting up LangSmith with LangGraph](https://docs.smith.langchain.com/observability/how_to_guides/tracing/trace_with_langgraph).

:::

LangSmith is a **standalone platform** that can be used on its own, no matter how you're creating your LLM applications.

In this tutorial, we'll walk you through logging your first trace in LangSmith using the LangSmith SDK and running an evaluation to measure the performance of your application. This example uses the OpenAI API, but you can use any provider of your choice.

## 1. Install LangSmith

@@ -60,12 +76,6 @@ To create an API key head to the <RegionalUrl text='Settings page' suffix='/sett

## 4. Log your first trace

:::tip LangSmith + LangChain OSS
You don't need to use the LangSmith SDK directly if your application is built on [LangChain](https://python.langchain.com)/[LangGraph](https://langchain-ai.github.io/langgraph/) (in either Python or JS).

See the how-to guide for tracing with LangChain [here](./observability/how_to_guides/tracing/trace_with_langchain).
:::

We provide multiple ways to log traces to LangSmith. Below, we'll highlight
how to use `traceable()`. See more on the [Annotate code for tracing](./observability/how_to_guides/tracing/annotate_code) page.

@@ -87,12 +97,17 @@ how to use `traceable()`. See more on the [Annotate code for tracing](./observab
groupId="client-language"
/>

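The tab contents are collapsed in this diff. For reference, a minimal Python sketch of logging a trace with `traceable()` might look like the following. It assumes the `langsmith` and `openai` packages are installed and that `LANGCHAIN_TRACING_V2`, `LANGCHAIN_API_KEY`, and `OPENAI_API_KEY` are set; the model name and prompt are placeholders.

```python
from openai import OpenAI
from langsmith import traceable
from langsmith.wrappers import wrap_openai

# Wrapping the OpenAI client records each LLM call as a child run of the trace.
client = wrap_openai(OpenAI())

@traceable  # Logs this function call (and any nested traced calls) to LangSmith
def rag_pipeline(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

rag_pipeline("What does LangSmith help you do?")
```

With the environment variables from the earlier steps set, running this should log a trace without any further setup.
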
- View a [sample output trace](https://smith.langchain.com/public/b37ca9b1-60cd-4a2a-817e-3c4e4443fdc0/r).
- Learn more about tracing in the observability [tutorials](./observability/tutorials), [conceptual guide](./observability/concepts) and [how-to guides](./observability/how_to_guides/index.md).
Learn more about tracing in the observability [tutorials](./observability/tutorials), [conceptual guide](./observability/concepts) and [how-to guides](./observability/how_to_guides/index.md).

## 5. View your trace

By default, traces are logged to the project named `default`. Running the code above should produce a trace like this [sample output trace](https://smith.langchain.com/public/b37ca9b1-60cd-4a2a-817e-3c4e4443fdc0/r).

## 6. Run your first evaluation

## 5. Run your first evaluation
[Evaluations](./evaluation/concepts) help assess application performance by testing it against a given set of inputs. Evaluations require a system to test, data to serve as test cases, and evaluators to grade the results.

Evaluation requires a system to test, data to serve as test cases, and optionally evaluators to grade the results. Here we use a built-in accuracy evaluator.
Here we run an evaluation against a sample dataset, using a simple custom evaluator that checks whether the actual output exactly matches our gold-standard output.

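The evaluation code itself sits in the collapsed CodeTabs below. As a rough Python sketch of the flow described above (the dataset name, example data, and `exact_match` evaluator are illustrative, and the target function is a stand-in for your real application):

```python
from langsmith import Client
from langsmith.evaluation import evaluate

client = Client()

# A tiny sample dataset: one question paired with a gold-standard answer.
dataset = client.create_dataset("sample-qa-dataset")  # illustrative name
client.create_examples(
    inputs=[{"question": "What does LangSmith help you do?"}],
    outputs=[{"answer": "Trace, evaluate, and monitor LLM applications."}],
    dataset_id=dataset.id,
)

def target(inputs: dict) -> dict:
    # Stand-in for the system under test; in the tutorial this would call your LLM pipeline.
    return {"answer": "Trace, evaluate, and monitor LLM applications."}

def exact_match(run, example):
    # Score 1 only when the actual output exactly matches the gold-standard output.
    return {
        "key": "exact_match",
        "score": run.outputs["answer"] == example.outputs["answer"],
    }

evaluate(
    target,
    data=dataset.name,
    evaluators=[exact_match],
    experiment_prefix="sample-exact-match",
)
```

A custom evaluator here is just a function that receives the run and the reference example and returns a score; the exact-match check is the simplest possible case.
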
<CodeTabs
tabs={[
1 change: 0 additions & 1 deletion src/components/QuickStart.js
@@ -161,7 +161,6 @@ export function ConfigureSDKEnvironmentCodeTabs({}) {
ShellBlock(`export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
# The below examples use the OpenAI API, though it's not necessary in general
export OPENAI_API_KEY=<your-openai-api-key>`),
]}
groupId="client-language"
