From 9bc6b34854bfcb692219dfb3ede5b4415c0195cf Mon Sep 17 00:00:00 2001
From: Tanushree Sharma
Date: Wed, 11 Dec 2024 15:32:10 -0800
Subject: [PATCH 1/4] quickstart updates

---
 docs/index.mdx               | 37 +++++++++++++++++++++++++-----------
 src/components/QuickStart.js |  1 -
 2 files changed, 26 insertions(+), 12 deletions(-)

diff --git a/docs/index.mdx b/docs/index.mdx
index bb870143..af80fc8a 100644
--- a/docs/index.mdx
+++ b/docs/index.mdx
@@ -28,7 +28,22 @@ import { RegionalUrl } from "@site/src/components/RegionalUrls";
 
 **LangSmith** is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence.
 
-LangChain's open source frameworks [langchain](https://python.langchain.com) and [langgraph](https://langchain-ai.github.io/langgraph/) work seemlessly with LangSmith but are not necessary - LangSmith works on its own!
+With LangSmith you can:
+- **Trace LLM Calls**: Gain visibility into LLM calls, and other parts of your application's logic.
+- **Evaluate Performance**: Compare results across models and prompts to identify what works best.
+- **Improve Prompts**: Quickly refine prompts to achieve more accurate and reliable results.
+
+:::tip LangSmith + LangChain OSS
+
+LangSmith integrates seamlessly with LangChain's open source frameworks [LangChain](https://python.langchain.com) and [LangGraph](https://langchain-ai.github.io/langgraph/), with no extra instrumentation needed.
+
+If you're already using either of these, see the how-to guide for [setting up LangSmith with LangChain](./observability/how_to_guides/tracing/trace_with_langchain) or [setting up LangSmith with LangGraph](https://docs.smith.langchain.com/observability/how_to_guides/tracing/trace_with_langgraph).
+
+:::
+
+LangSmith is a **standalone platform** that can be used on its own no matter how you're creating your LLM applications.
+
+In this tutorial, we'll walk you through logging your first trace in LangSmith using the LangSmith SDK and running an evaluation to measure the performance of your application. This example uses the OpenAI API; however, you can use the provider of your choice.
 
 ## 1. Install LangSmith
 
@@ -60,12 +75,6 @@ To create an API key head to the
-- View a [sample output trace](https://smith.langchain.com/public/b37ca9b1-60cd-4a2a-817e-3c4e4443fdc0/r).
-- Learn more about tracing in the observability [tutorials](./observability/tutorials), [conceptual guide](./observability/concepts) and [how-to guides](./observability/how_to_guides/index.md).
+Learn more about tracing in the observability [tutorials](./observability/tutorials), [conceptual guide](./observability/concepts) and [how-to guides](./observability/how_to_guides/index.md).
+
+## 5. View your trace
+
+By default, the trace will be logged to the project with the name `default`. You should see the following [sample output trace](https://smith.langchain.com/public/b37ca9b1-60cd-4a2a-817e-3c4e4443fdc0/r) logged using the above code.
+
+
+## 6. Run your first evaluation
-## 5. Run your first evaluation
+[Evaluations](./evaluation/concepts#evaluators) help assess application performance by testing it against a given set of inputs. Evaluations require a system to test, data to serve as test cases, and optionally evaluators to grade the results.
-Evaluation requires a system to test, data to serve as test cases, and optionally evaluators to grade the results. Here we use a built-in accuracy evaluator.
+Here we are running an evaluation against a sample dataset using a built-in accuracy evaluator.
-# The below examples use the OpenAI API, though it's not necessary in general
 export OPENAI_API_KEY=`),
 ]}
 groupId="client-language"

From 33f95c0db964a89e6b9ab620f36bd399f198e2b8 Mon Sep 17 00:00:00 2001
From: Tanushree <87711021+tanushree-sharma@users.noreply.github.com>
Date: Thu, 12 Dec 2024 10:59:34 -0800
Subject: [PATCH 2/4] Apply suggestions from code review

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
---
 docs/index.mdx | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/index.mdx b/docs/index.mdx
index af80fc8a..181a9381 100644
--- a/docs/index.mdx
+++ b/docs/index.mdx
@@ -29,8 +29,8 @@ import { RegionalUrl } from "@site/src/components/RegionalUrls";
 **LangSmith** is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence.
 
 With LangSmith you can:
-- **Trace LLM Calls**: Gain visibility into LLM calls, and other parts of your application's logic.
-- **Evaluate Performance**: Compare results across models and prompts to identify what works best.
+- **Trace LLM Applications**: Gain visibility into LLM calls and other parts of your application's logic.
+- **Evaluate Performance**: Compare results across models, prompts, and architectures to identify what works best.
 - **Improve Prompts**: Quickly refine prompts to achieve more accurate and reliable results.
 
 :::tip LangSmith + LangChain OSS
@@ -105,9 +105,9 @@ By default, the trace will be logged to the project with the name `default`. You
 
 ## 6. Run your first evaluation
 
-[Evaluations](./evaluation/concepts#evaluators) help assess application performance by testing it against a given set of inputs. Evaluations require a system to test, data to serve as test cases, and optionally evaluators to grade the results.
+[Evaluations](./evaluation/concepts) help assess application performance by testing the application against a given set of inputs. Evaluations require a system to test, data to serve as test cases, and evaluators to grade the results.
 
-Here we are running an evaluation against a sample dataset using a built-in accuracy evaluator.
+Here we are running an evaluation against a sample dataset using a simple custom evaluator that checks if the real output exactly matches our gold-standard output.
 
 <CodeTabs
   tabs={[
     PythonBlock(`from langsmith import Client

Date: Thu, 12 Dec 2024 13:38:57 -0800
Subject: [PATCH 3/4] Apply suggestions from code review

---
 docs/index.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/index.mdx b/docs/index.mdx
index 181a9381..c30e03bc 100644
--- a/docs/index.mdx
+++ b/docs/index.mdx
@@ -35,7 +35,7 @@ With LangSmith you can:
 
 :::tip LangSmith + LangChain OSS
 
-LangSmith integrates seamlessly with LangChain's open source frameworks [LangChain](https://python.langchain.com) and [LangGraph](https://langchain-ai.github.io/langgraph/), with no extra instrumentation needed.
+LangSmith integrates seamlessly with LangChain's open source frameworks [`langchain`](https://python.langchain.com) and [`langgraph`](https://langchain-ai.github.io/langgraph/), with no extra instrumentation needed. 
 If you're already using either of these, see the how-to guide for [setting up LangSmith with LangChain](./observability/how_to_guides/tracing/trace_with_langchain) or [setting up LangSmith with LangGraph](https://docs.smith.langchain.com/observability/how_to_guides/tracing/trace_with_langgraph).

From 187b3f11a6c09e9cbfa74f664a3acbeabb0ec797 Mon Sep 17 00:00:00 2001
From: Tanushree Sharma
Date: Thu, 12 Dec 2024 14:25:34 -0800
Subject: [PATCH 4/4] linting fix

---
 docs/index.mdx | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/docs/index.mdx b/docs/index.mdx
index c30e03bc..bee4681e 100644
--- a/docs/index.mdx
+++ b/docs/index.mdx
@@ -29,21 +29,22 @@ import { RegionalUrl } from "@site/src/components/RegionalUrls";
 **LangSmith** is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence.
 
 With LangSmith you can:
+
 - **Trace LLM Applications**: Gain visibility into LLM calls and other parts of your application's logic.
 - **Evaluate Performance**: Compare results across models, prompts, and architectures to identify what works best.
 - **Improve Prompts**: Quickly refine prompts to achieve more accurate and reliable results.
 
 :::tip LangSmith + LangChain OSS
 
-LangSmith integrates seamlessly with LangChain's open source frameworks [`langchain`](https://python.langchain.com) and [`langgraph`](https://langchain-ai.github.io/langgraph/), with no extra instrumentation needed. 
+LangSmith integrates seamlessly with LangChain's open source frameworks [`langchain`](https://python.langchain.com) and [`langgraph`](https://langchain-ai.github.io/langgraph/), with no extra instrumentation needed.
 
-If you're already using either of these, see the how-to guide for [setting up LangSmith with LangChain](./observability/how_to_guides/tracing/trace_with_langchain) or [setting up LangSmith with LangGraph](https://docs.smith.langchain.com/observability/how_to_guides/tracing/trace_with_langgraph). 
+If you're already using either of these, see the how-to guide for [setting up LangSmith with LangChain](./observability/how_to_guides/tracing/trace_with_langchain) or [setting up LangSmith with LangGraph](https://docs.smith.langchain.com/observability/how_to_guides/tracing/trace_with_langgraph).
 
 :::
 
-LangSmith is a **standalone platform** that can be used on its own no matter how you're creating your LLM applications. 
+LangSmith is a **standalone platform** that can be used on its own no matter how you're creating your LLM applications.
 
-In this tutorial, we'll walk you through logging your first trace in LangSmith using the LangSmith SDK and running an evaluation to measure the performance of your application. This example uses the OpenAI API; however, you can use the provider of your choice. 
+In this tutorial, we'll walk you through logging your first trace in LangSmith using the LangSmith SDK and running an evaluation to measure the performance of your application. This example uses the OpenAI API; however, you can use the provider of your choice.
 
 ## 1. Install LangSmith
@@ -100,12 +101,11 @@ Learn more about tracing in the observability [tutorials](./observability/tutori
 
 ## 5. View your trace
 
-By default, the trace will be logged to the project with the name `default`. You should see the following [sample output trace](https://smith.langchain.com/public/b37ca9b1-60cd-4a2a-817e-3c4e4443fdc0/r) logged using the above code. 
-
+By default, the trace will be logged to the project with the name `default`. You should see the following [sample output trace](https://smith.langchain.com/public/b37ca9b1-60cd-4a2a-817e-3c4e4443fdc0/r) logged using the above code.
 
 ## 6. Run your first evaluation
 
-[Evaluations](./evaluation/concepts) help assess application performance by testing the application against a given set of inputs. Evaluations require a system to test, data to serve as test cases, and evaluators to grade the results. 
+[Evaluations](./evaluation/concepts) help assess application performance by testing the application against a given set of inputs. Evaluations require a system to test, data to serve as test cases, and evaluators to grade the results.
 
 Here we are running an evaluation against a sample dataset using a simple custom evaluator that checks if the real output exactly matches our gold-standard output.
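A note for readers: the `PythonBlock` bodies referenced by the `<CodeTabs>` components are not visible in the patches above. Below is a minimal sketch of the flow the added docs describe (logging a trace, then running an exact-match evaluation), assuming the LangSmith Python SDK's `wrap_openai`, `traceable`, and `evaluate` helpers; the model name, dataset name, and example values are illustrative and not taken from the PR.

```python
# Sketch of the quickstart flow described above (not the PR's own snippet).
# Assumes `pip install langsmith openai` and that LANGCHAIN_TRACING_V2,
# LANGCHAIN_API_KEY, and OPENAI_API_KEY are set in the environment.
from openai import OpenAI

from langsmith import Client, traceable
from langsmith.evaluation import evaluate
from langsmith.wrappers import wrap_openai

# --- 3/4. Log your first trace ---
# Wrapping the OpenAI client records every LLM call; @traceable records the
# surrounding function. Traces land in the `default` project unless configured.
openai_client = wrap_openai(OpenAI())

@traceable
def pipeline(user_input: str) -> str:
    result = openai_client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": user_input}],
    )
    return result.choices[0].message.content

pipeline("Hello, world!")

# --- 6. Run your first evaluation ---
client = Client()

# Data to serve as test cases (hypothetical sample values).
dataset = client.create_dataset(
    "Sample Dataset", description="A sample dataset for the quickstart."
)
client.create_examples(
    inputs=[{"postfix": "to LangSmith"}, {"postfix": "to Evaluations in LangSmith"}],
    outputs=[{"output": "Welcome to LangSmith"}, {"output": "Welcome to Evaluations in LangSmith"}],
    dataset_id=dataset.id,
)

# A simple custom evaluator: the real output must exactly match the
# gold-standard output stored on the example.
def exact_match(run, example):
    return {"score": run.outputs["output"] == example.outputs["output"]}

# Run the system under test over the dataset and grade each result.
evaluate(
    lambda inputs: {"output": "Welcome " + inputs["postfix"]},  # system to test
    data=dataset.name,
    evaluators=[exact_match],
    experiment_prefix="sample-experiment",
)
```

If the SDK surface matches, each example receives a boolean `exact_match` feedback score in the resulting experiment, viewable in the LangSmith UI; swapping the lambda for a real pipeline evaluates that pipeline instead.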