Quick quickstart updates #586

Merged · 4 commits · Dec 12, 2024
37 changes: 26 additions & 11 deletions docs/index.mdx
@@ -28,7 +28,23 @@ import { RegionalUrl } from "@site/src/components/RegionalUrls";

**LangSmith** is a platform for building production-grade LLM applications.
It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence.
LangChain's open source frameworks [langchain](https://python.langchain.com) and [langgraph](https://langchain-ai.github.io/langgraph/) work seamlessly with LangSmith but are not necessary - LangSmith works on its own!
With LangSmith you can:

- **Trace LLM Applications**: Gain visibility into LLM calls and other parts of your application's logic.
- **Evaluate Performance**: Compare results across models, prompts, and architectures to identify what works best.
- **Improve Prompts**: Quickly refine prompts to achieve more accurate and reliable results.

:::tip LangSmith + LangChain OSS

LangSmith integrates seamlessly with LangChain's open source frameworks [`langchain`](https://python.langchain.com) and [`langgraph`](https://langchain-ai.github.io/langgraph/), with no extra instrumentation needed.

If you're already using either of these, see the how-to guide for [setting up LangSmith with LangChain](./observability/how_to_guides/tracing/trace_with_langchain) or [setting up LangSmith with LangGraph](https://docs.smith.langchain.com/observability/how_to_guides/tracing/trace_with_langgraph).

:::

LangSmith is a **standalone platform** that can be used on its own no matter how you're creating your LLM applications.

In this tutorial, we'll walk you through logging your first trace in LangSmith using the LangSmith SDK and running an evaluation to measure the performance of your application. This example uses the OpenAI API, but you can use the provider of your choice.

## 1. Install LangSmith

@@ -60,12 +76,6 @@ To create an API key head to the <RegionalUrl text='Settings page' suffix='/sett

## 4. Log your first trace

:::tip LangSmith + LangChain OSS
You don't need to use the LangSmith SDK directly if your application is built on [LangChain](https://python.langchain.com)/[LangGraph](https://langchain-ai.github.io/langgraph/) (either Python or JS).

See the how-to guide for tracing with LangChain [here](./observability/how_to_guides/tracing/trace_with_langchain).
:::

We provide multiple ways to log traces to LangSmith. Below, we'll highlight
how to use `traceable()`. See more on the [Annotate code for tracing](./observability/how_to_guides/tracing/annotate_code) page.
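
To make this concrete, here is a minimal Python sketch of tracing with `traceable`, wrapping the OpenAI client so the LLM call is captured as a child run. The model name, prompt, and `rag_pipeline` function are illustrative assumptions rather than the exact code shown in the tabs below.

```python
import openai
from langsmith import traceable
from langsmith.wrappers import wrap_openai

# Wrapping the OpenAI client records the LLM call as a child run of the trace.
client = wrap_openai(openai.Client())

@traceable  # Marks this function as a traced run in LangSmith.
def rag_pipeline(question: str) -> str:
    # Placeholder context; a real app would retrieve relevant documents here.
    context = "During this morning's meeting, we solved all world conflict."
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap in your provider/model of choice
        messages=[
            {"role": "system", "content": f"Answer the question using only this context: {context}"},
            {"role": "user", "content": question},
        ],
    )
    return result.choices[0].message.content

print(rag_pipeline("Can you summarize this morning's meeting?"))
```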

@@ -87,12 +97,17 @@ how to use `traceable()`. See more on the [Annotate code for tracing](./observab
groupId="client-language"
/>

- View a [sample output trace](https://smith.langchain.com/public/b37ca9b1-60cd-4a2a-817e-3c4e4443fdc0/r).
- Learn more about tracing in the observability [tutorials](./observability/tutorials), [conceptual guide](./observability/concepts) and [how-to guides](./observability/how_to_guides/index.md).
Learn more about tracing in the observability [tutorials](./observability/tutorials), [conceptual guide](./observability/concepts) and [how-to guides](./observability/how_to_guides/index.md).

## 5. View your trace

By default, the trace will be logged to the project with the name `default`. You should see the following [sample output trace](https://smith.langchain.com/public/b37ca9b1-60cd-4a2a-817e-3c4e4443fdc0/r) logged using the above code.

## 6. Run your first evaluation

## 5. Run your first evaluation
[Evaluations](./evaluation/concepts) help assess application performance by testing the application against a given set of inputs. Evaluations require a system to test, data to serve as test cases, and evaluators to grade the results.

Evaluation requires a system to test, data to serve as test cases, and optionally evaluators to grade the results. Here we use a built-in accuracy evaluator.
Here we are running an evaluation against a sample dataset using a simple custom evaluator that checks if the real output exactly matches our gold-standard output.
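
As a rough sketch of what this looks like with the Python SDK: the dataset name, the example question and answer, and the `rag_pipeline` target below are illustrative assumptions, and the exact-match evaluator mirrors the one described above.

```python
from langsmith import Client
from langsmith.evaluation import evaluate
from langsmith.schemas import Example, Run

client = Client()

# 1. A tiny dataset of gold-standard input/output pairs (names and values are made up).
dataset = client.create_dataset("quickstart-sample-dataset")
client.create_examples(
    inputs=[{"question": "Can you summarize this morning's meeting?"}],
    outputs=[{"answer": "All world conflict was solved in this morning's meeting."}],
    dataset_id=dataset.id,
)

# 2. The system under test: maps dataset inputs to application outputs.
def target(inputs: dict) -> dict:
    return {"answer": rag_pipeline(inputs["question"])}  # rag_pipeline from the earlier sketch

# 3. A simple custom evaluator: exact match against the reference output.
def exact_match(run: Run, example: Example) -> dict:
    return {
        "key": "exact_match",
        "score": run.outputs["answer"] == example.outputs["answer"],
    }

# 4. Run the evaluation; results appear as an experiment in the LangSmith UI.
evaluate(
    target,
    data=dataset.name,
    evaluators=[exact_match],
    experiment_prefix="quickstart-eval",
)
```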

<CodeTabs
tabs={[
1 change: 0 additions & 1 deletion src/components/QuickStart.js
@@ -154,14 +154,13 @@
);
}

export function ConfigureSDKEnvironmentCodeTabs({}) {

Check warning on line 157 in src/components/QuickStart.js (GitHub Actions / Check linting): Unexpected empty object pattern
return (
<CodeTabs
tabs={[
ShellBlock(`export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>

# The below examples use the OpenAI API, though it's not necessary in general
export OPENAI_API_KEY=<your-openai-api-key>`),
]}
groupId="client-language"
@@ -169,7 +168,7 @@
);
}

export function ConfigureEnvironmentCodeTabs({}) {

Check warning on line 171 in src/components/QuickStart.js (GitHub Actions / Check linting): Unexpected empty object pattern
return (
<CodeTabs
tabs={[
@@ -184,7 +183,7 @@
);
}

export function LangChainQuickStartCodeTabs({}) {

Check warning on line 186 in src/components/QuickStart.js (GitHub Actions / Check linting): Unexpected empty object pattern
const simpleTSBlock = `import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
@@ -202,7 +201,7 @@
const context = "During this morning's meeting, we solved all world conflict."
await chain.invoke({ question: question, context: context });`;

const alternativeTSBlock = `import { Client } from "langsmith";

Check warning on line 204 in src/components/QuickStart.js (GitHub Actions / Check linting): 'alternativeTSBlock' is assigned a value but never used
import { LangChainTracer } from "langchain/callbacks";

const client = new Client({
@@ -316,7 +315,7 @@
print(tok, end="")
# See an example run at: https://smith.langchain.com/public/3e853ad8-77ce-404d-ad4c-05726851ad0f/r`);

export function TraceableQuickStartCodeBlock({}) {

Check warning on line 318 in src/components/QuickStart.js (GitHub Actions / Check linting): Unexpected empty object pattern
return (
<CodeBlock
className={TraceableQuickStart.value}
@@ -327,7 +326,7 @@
);
}

export function TraceableThreadingCodeBlock({}) {

Check warning on line 329 in src/components/QuickStart.js (GitHub Actions / Check linting): Unexpected empty object pattern
return (
<CodeBlock
className={TraceableQuickStart.value}
@@ -401,7 +400,7 @@
);
}

export function RunTreeQuickStartCodeTabs({}) {

Check warning on line 403 in src/components/QuickStart.js (GitHub Actions / Check linting): Unexpected empty object pattern
return (
<CodeTabs
tabs={[