Concepts
The pace of AI application development is often rate-limited by the lack of high-quality evaluations, because developers face a paradox of choice. How should you engineer your prompt? Which LLM best balances accuracy, latency, and cost? High-quality evaluations can help you answer these types of questions rapidly and with confidence.
LangSmith allows you to build high-quality evaluations for your AI application. This conceptual guide will give you the foundations to get started. First, let's introduce the core components of LangSmith evaluation:
- Dataset: These are the inputs to your application used for conducting evaluations.
- Evaluator: An evaluator is a function responsible for scoring your AI application based on the provided dataset.
Datasets
Datasets are the cornerstone of the LangSmith evaluation workflow. They consist of examples that provide inputs and, optionally, expected reference outputs for assessing your AI application. Each example in a dataset is a data point with an `inputs` dictionary, an optional `outputs` dictionary, and an optional `metadata` dictionary. The `outputs` dictionary, if present, contains information for your evaluator(s) that remains hidden from the function being evaluated. It may include a `reference` key (an expected output, label, or additional context), example-specific `criteria`, information the function `must_include` or `must_exclude` for that particular example, or any other information used to check your system's outputs.
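As a minimal sketch, here is what creating such an example looks like with the LangSmith Python SDK. The dataset name, input/output keys, and example content are illustrative assumptions, not a required schema:

```python
from langsmith import Client

client = Client()

# Create a dataset and add one example with an `inputs` dict, an `outputs` dict
# holding a `reference` answer for evaluators, and a `metadata` dict.
dataset = client.create_dataset("qa-eval-examples", description="Hand-curated QA examples")
client.create_examples(
    inputs=[{"question": "What is LangSmith used for?"}],
    outputs=[{"reference": "LangSmith is a platform for tracing and evaluating LLM applications."}],
    metadata=[{"source": "manual-curation"}],
    dataset_id=dataset.id,
)
```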
Creating datasets
There are various ways to build datasets for evaluation, including:
Manually curated examples
This is how we typically recommend people get started when creating datasets. From building your application, you probably have some idea of what types of inputs your application should handle and what "good" responses look like. You probably also want to cover a few common edge cases or tricky situations you can imagine. Even 10-20 high-quality, manually curated examples can go a long way.
Historical logs
Once you have an application in production, you start getting valuable information: how are users actually using it? These real-world interactions are worth capturing and storing in datasets, so you can test against them as you iterate on your application.
If your application is going well, you will likely get a lot of usage! How can you determine which datapoints are valuable to add? There are a few heuristics you can follow. If possible, try to collect end-user feedback. You can then see which datapoints received negative feedback. That is super valuable! These are spots where your application did not perform well, and you should add them to your dataset to test against in the future. You can also use other heuristics to identify "interesting" datapoints - for example, runs that took a long time to complete could be interesting to look at and add to a dataset.
Synthetic data
Once you have a few examples, you can try to artificially generate more. It's generally advised to have a few good hand-crafted examples first, as the synthetic data will often resemble them in some way. This can be a useful way to get a lot of datapoints quickly.
To learn more about creating datasets in LangSmith, see our LangSmith Evaluation series:
- See our video on Manually curated datasets.
- See our videos on Datasets from traces.
Types of datasets
LangSmith offers three distinct dataset types:
- `kv` (Key-Value) Dataset:
  - "Inputs" and "outputs" are represented as arbitrary key-value pairs.
  - The `kv` dataset is the most versatile and default type, suitable for a wide range of evaluation scenarios.
  - This dataset type is ideal for evaluating chains and agents that require multiple inputs or generate multiple outputs.
- `llm` (Language Model) Dataset:
  - The `llm` dataset is designed for evaluating "completion"-style language models.
  - The "inputs" dictionary contains a single "input" key mapped to the prompt string.
  - The "outputs" dictionary contains a single "output" key mapped to the corresponding response string.
  - This dataset type simplifies evaluation for LLMs by providing a standardized format for inputs and outputs.
- `chat` Dataset:
  - The `chat` dataset is designed for evaluating LLMs with structured "chat" messages as inputs and outputs.
  - The "inputs" dictionary contains a single "input" key mapped to a list of serialized chat messages.
  - The "outputs" dictionary contains a single "output" key mapped to a list of serialized chat messages.
  - This dataset type is useful for evaluating conversational AI systems or chatbots.
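To make the three shapes concrete, here is a rough sketch of what a single example might look like in each dataset type. The field contents are illustrative, and the exact serialization format of chat messages may differ from the role/content dicts shown here:

```python
# kv: arbitrary key-value pairs for inputs and outputs
kv_example = {
    "inputs": {"question": "What is LangSmith?", "context": "LangSmith docs excerpt"},
    "outputs": {"answer": "LangSmith is a platform for tracing and evaluating LLM apps."},
}

# llm: a single prompt string in, a single completion string out
llm_example = {
    "inputs": {"input": "Complete this sentence: The capital of France is"},
    "outputs": {"output": " Paris."},
}

# chat: lists of serialized chat messages in and out
chat_example = {
    "inputs": {"input": [{"role": "user", "content": "Hi, who are you?"}]},
    "outputs": {"output": [{"role": "assistant", "content": "I'm an AI assistant."}]},
}
```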
Partitioning datasets
When setting up your evaluation, you may want to partition your dataset into different splits. This can help save cost. For example, you might use a smaller split for many rapid iterations and a larger split for your final evaluation. In addition, splits can be important for the interpretability of your experiments. For example, if you have a RAG application, you may want your dataset splits to focus on different types of questions (e.g., factual, opinion, etc.) and to evaluate your application on each split separately.
To learn more about creating dataset splits in LangSmith:
- See our video on dataset splits in the LangSmith Evaluation series.
- See our documentation here.
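As a rough sketch, you can run an experiment against just one split by listing only those examples. The dataset and split names are assumptions, the target and evaluator are trivial placeholders, and the `splits` argument requires a reasonably recent SDK version:

```python
from langsmith import Client
from langsmith.evaluation import evaluate

client = Client()

def my_app(inputs: dict) -> dict:
    # Hypothetical target function; replace with your real application
    return {"output": "stub answer to " + inputs["question"]}

def non_empty(run, example) -> dict:
    # Trivial placeholder evaluator: did the app return anything?
    return {"key": "non_empty", "score": int(bool((run.outputs or {}).get("output")))}

# Evaluate only the small, fast-iteration "dev" split of the dataset.
dev_examples = client.list_examples(dataset_name="qa-eval-examples", splits=["dev"])
results = evaluate(my_app, data=dev_examples, evaluators=[non_empty])
```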
Evaluators
Evaluators are functions in LangSmith that score how well your application performs on a particular example. Evaluators receive these inputs:
- Example: The example from your Dataset.
- Root_run: The output and intermediate steps from running the inputs through the application.
The evaluator returns an `EvaluationResult` (or a similarly structured dictionary), which consists of:

- Key: The name of the metric being evaluated.
- Score: The value of the metric for this example.
- Comment: The reasoning or additional string information justifying the score.
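A minimal custom evaluator that returns this structure might look like the following sketch. The output key ("answer") and the length threshold are assumptions about your application:

```python
from langsmith.schemas import Example, Run

def concise_enough(run: Run, example: Example) -> dict:
    """Toy evaluator: scores whether the application's answer stays under 500 characters."""
    answer = (run.outputs or {}).get("answer", "")
    score = int(len(answer) < 500)
    return {
        "key": "concise_enough",                              # name of the metric
        "score": score,                                       # value of the metric for this example
        "comment": f"Answer length: {len(answer)} characters.",  # reasoning behind the score
    }
```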
There are a few approaches and types of scoring functions that can be used in LangSmith evaluation.
Human
Human evaluation is often a great starting point for evaluation. LangSmith makes it easy to review your LLM application outputs as well as the traces (all intermediate steps).
See our video on using LangSmith to capture human feedback for prompt engineering.
Heuristic
Heuristic evaluators are hard-coded functions that perform computations to determine a score. To use them, you typically need a set of rules that can be easily encoded into a function. They can be reference-free (e.g., check the output for an empty string or valid JSON), or they can compare the task output to a reference (e.g., check if the output matches the reference exactly).
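Here is a sketch of both flavors. The output and reference keys are assumptions about your schema:

```python
import json

def output_is_valid_json(run, example) -> dict:
    """Reference-free heuristic: does the application's output parse as JSON?"""
    text = (run.outputs or {}).get("output", "")
    try:
        json.loads(text)
        score = 1
    except json.JSONDecodeError:
        score = 0
    return {"key": "valid_json", "score": score}

def exact_match(run, example) -> dict:
    """Reference-based heuristic: does the output match the reference exactly?"""
    predicted = (run.outputs or {}).get("output")
    expected = (example.outputs or {}).get("reference")
    return {"key": "exact_match", "score": int(predicted == expected)}
```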
For some tasks, like code generation, custom heuristic evaluators (e.g., importing and executing the generated code) are often extremely useful and superior to other evaluations (e.g., LLM-as-judge, discussed below).
- Watch the Custom evaluator video in our LangSmith Evaluation series for a comprehensive overview.
- Read our documentation on custom evaluators.
- See our blog on using custom evaluators for code generation.
LLM-as-judge
LLM-as-judge evaluators use LLMs to score system output. To use them, you typically encode the grading rules or criteria in the LLM prompt. They can be reference-free (e.g., check if the system output contains offensive content or adheres to specific criteria), or they can compare the task output to a reference (e.g., check if the output is factually accurate relative to the reference).
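As a sketch of the pattern, the grader below asks an LLM to compare the answer to the reference. The model name, prompt wording, and the "answer"/"reference" keys are all assumptions you would adapt to your application:

```python
from openai import OpenAI  # any chat-capable LLM client works; model choice is an assumption

oai = OpenAI()

GRADER_PROMPT = (
    "You are grading an answer for factual accuracy against a reference.\n"
    "Reference: {reference}\n"
    "Answer: {answer}\n"
    "Respond with a single digit: 1 if the answer is consistent with the reference, else 0."
)

def factual_accuracy(run, example) -> dict:
    """Sketch of an LLM-as-judge evaluator that grades against a reference answer."""
    answer = (run.outputs or {}).get("answer", "")
    reference = (example.outputs or {}).get("reference", "")
    response = oai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": GRADER_PROMPT.format(reference=reference, answer=answer)}],
    )
    grade = response.choices[0].message.content.strip()
    return {"key": "factual_accuracy", "score": int(grade == "1"), "comment": grade}
```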
Check out our video on LLM-as-judge evaluators in our LangSmith Evaluation series.
With LLM-as-judge evaluators, it is important to carefully review the resulting scores and tune the grader prompt if needed. Often a process of trial-and-error is required to get LLM-as-judge evaluator prompts to produce reliable scores.
See documentation on our workflow to audit and manually correct evaluator scores here.
Pairwise
Pairwise evaluators pick the better of two task outputs based upon some criteria. This can use a heuristic ("which response is longer"), an LLM (with a specific pairwise prompt), or a human (asked to manually annotate examples).
When should you use pairwise evaluation? Pairwise evaluation is helpful when it is difficult to directly score an LLM output, but easier to compare two outputs. This can be the case for tasks like summarization - it may be hard to give a summary a perfect score on a scale of 1-10, but easier to tell if it's better than a baseline.
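A toy heuristic pairwise comparator might look like the sketch below. The "summary" key and the return shape are illustrative assumptions, not a fixed LangSmith schema:

```python
def prefer_more_concise(outputs_a: dict, outputs_b: dict) -> dict:
    """Toy pairwise heuristic: prefer the shorter of two candidate summaries."""
    len_a = len(outputs_a.get("summary", ""))
    len_b = len(outputs_b.get("summary", ""))
    if len_a == len_b:
        return {"key": "prefer_concise", "scores": {"a": 0.5, "b": 0.5}}
    winner = "a" if len_a < len_b else "b"
    return {"key": "prefer_concise", "scores": {"a": float(winner == "a"), "b": float(winner == "b")}}
```

In practice you would more often use an LLM with a pairwise prompt as the comparator, but the structure (two candidate outputs in, a preference out) is the same.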
Applying evaluations
We can visualize the above ideas collectively in the diagram below. To review, datasets are composed of examples that can be curated from a variety of sources, such as historical logs or user-curated examples. Evaluators are functions that score how well your application performs on each example in your dataset. Evaluators can use different scoring functions, such as human, heuristic, LLM-as-judge, or pairwise. And if the dataset contains reference outputs, then the evaluator can compare the application output to the reference.
Each time we run an evaluation, we are conducting an experiment. An experiment is a single execution of all the example inputs in your dataset through your task. Typically, we will run multiple experiments on a given dataset, testing different tweaks to our task (e.g., different prompts or LLMs). In LangSmith, you can easily view all the experiments associated with your dataset and track your application's performance over time. Additionally, you can compare multiple experiments in a comparison view.
In the Datasets section above, we discussed a few ways to build datasets (e.g., from historical logs or manual curation). One common way to use these datasets is offline evaluation, which is usually conducted prior to deployment of your LLM application. Below we'll discuss a few common paradigms for offline evaluation.
Unit Tests
Unit tests are used in software development to verify the correctness of individual system components. In the context of LLM applications, unit tests are often lightweight assertions on LLM inputs or outputs (e.g., type or schema checks). They are typically triggered by any change to your application as quick checks of basic functionality, which means they usually rely on heuristic evaluators.
You generally expect unit tests to always pass (this is not strictly true for LLM applications, but more so than for other types of evaluation flows). These types of tests are nice to run as part of CI, but when doing so it is useful to set up a cache (or something similar) for LLM calls, because those can quickly rack up cost.
To learn more about unit tests with LangSmith, check out our unit testing video.
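As a sketch, a lightweight unit test can assert on output structure with plain pytest. The application entry point `generate_answer` and its JSON output schema are hypothetical:

```python
import json
import pytest

from my_app import generate_answer  # hypothetical application entry point

def test_answer_is_valid_json():
    """Lightweight schema check: the app should return parseable JSON with an 'answer' key."""
    raw = generate_answer("What is LangSmith?")
    parsed = json.loads(raw)  # fails the test if the output is not valid JSON
    assert "answer" in parsed

def test_empty_input_is_rejected():
    """Basic input-handling assertion."""
    with pytest.raises(ValueError):
        generate_answer("")
```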
Regression Testing
Regression testing is often used to measure performance across versions of your application over time. Regression tests ensure that new app versions do not regress on examples that your current version handles correctly. In practice, they help you assess how much better or worse your new version is relative to the baseline. They are often triggered when you are making app updates that are expected to influence the user experience, and they are also commonly run when evaluating new or different models.
LangSmith's comparison view has native support for regression testing, allowing you to quickly see examples that have changed relative to the baseline (with regressions on specific examples shown in red and improvements in green):
Back-testing
Back-testing is an approach that combines dataset creation (discussed above) with evaluation. If you have a collection of production logs, you can turn them into a dataset. Then, you can re-run those production examples with newer application versions. This allows you to assess performance on past and realistic user inputs.
This is commonly used to evaluate new model versions. Anthropic dropped a new model? No problem! Grab the 1000 most recent runs through your application and pass them through the new model. Then compare those results to what actually happened in production.
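A rough sketch of that workflow with the LangSmith SDK is shown below. The project and dataset names are assumptions, and the exact filtering parameters for `list_runs` may differ across SDK versions:

```python
from itertools import islice
from langsmith import Client

client = Client()

# Pull recent top-level production runs (project name is an assumption).
recent_runs = list(islice(client.list_runs(project_name="my-app-prod", is_root=True), 1000))

# Turn them into a dataset: inputs become the examples, and the production
# outputs are stored so you can compare the new model's results against them.
dataset = client.create_dataset("backtest-recent-runs")
client.create_examples(
    inputs=[run.inputs for run in recent_runs],
    outputs=[run.outputs for run in recent_runs],
    dataset_id=dataset.id,
)
```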
See our video on Back-testing to learn about this workflow.
Pairwise-testing
It can be easier for a human (or an LLM grader) to determine that "A is better than B" than to assign an individual score to either A or B. This helps explain why some have observed that pairwise evaluations can be a more stable scoring approach than assigning individual scores to each experiment, particularly when working with LLM-as-judge evaluators.
Online Evaluation
Whereas offline evaluation focuses on pre-deployment testing, online evaluation allows you to evaluate an application in production. This can be useful for applying guardrails to LLM inputs or outputs, such as checks for correctness and toxicity. Online evaluation can also work hand-in-hand with offline evaluation: for example, an online evaluator can classify input questions into a set of categories that are later used to curate a dataset for offline evaluation.
Experiment Configurations
LangSmith evaluations are kicked off using a single function, `evaluate`, which takes in a dataset, an evaluator, and various optional configurations, some of which we discuss below.
See documentation on using `evaluate` here.
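A minimal sketch of kicking off an experiment is shown below. The dataset name is assumed to exist in your workspace, and the target function and evaluator are placeholders with the same shape as the earlier sketches:

```python
from langsmith.evaluation import evaluate

def my_app(inputs: dict) -> dict:
    """Hypothetical target: takes an example's inputs, returns the application's outputs."""
    return {"answer": "Stub answer for: " + inputs["question"]}

def exact_match(run, example) -> dict:
    """Simple reference-based evaluator (output/reference keys are assumptions)."""
    predicted = (run.outputs or {}).get("answer")
    expected = (example.outputs or {}).get("reference")
    return {"key": "exact_match", "score": int(predicted == expected)}

results = evaluate(
    my_app,
    data="my-eval-dataset",            # dataset name, ID, or an iterable of examples
    evaluators=[exact_match],
    experiment_prefix="baseline-prompt",  # helps identify this experiment in the UI
)
```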
Repetitions
One of the most common questions when evaluating AI applications is: how can I build confidence in the result of an experiment? This is particularly relevant for LLM applications (e.g., agents), which can exhibit considerable run-to-run variability. Repetitions involve running the same evaluation multiple times and aggregating the results to smooth out run-to-run variability and examine the reproducibility of the AI application's performance. The LangSmith `evaluate` function allows you to easily set the number of repetitions, and the UI aggregates (takes the mean of) the replicate experiments for you.
- See the video on Repetitions in our LangSmith Evaluation series.
- See our documentation on Repetitions.
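As a brief sketch (reusing the hypothetical `my_app` and `exact_match` from above), repetitions are configured with a single argument:

```python
results = evaluate(
    my_app,
    data="my-eval-dataset",
    evaluators=[exact_match],
    num_repetitions=5,  # run every example 5 times; the UI reports the mean score
)
```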
Evaluating Specific LLM Applications
Below, we will discuss evaluation of a few specific, popular LLM applications.
Agents
LLM-powered autonomous agents combine three components: (1) tool calling, (2) memory, and (3) planning. Agents use tool calling with planning (often via prompting) and memory (often short-term message history) to generate responses. Tool calling allows a model to respond to a given prompt by generating two things: (1) a tool to invoke and (2) the input arguments required.
Below is a tool-calling agent in LangGraph. The `assistant` node is an LLM that determines whether to invoke a tool based upon the input. The tool condition checks whether a tool was selected by the `assistant` node and, if so, routes to the `tool` node. The `tool` node executes the tool and returns the output as a tool message to the `assistant` node. This loop continues as long as the `assistant` node selects a tool. If no tool is selected, then the agent directly returns the LLM response.
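A minimal sketch of such a graph is shown below, assuming the `langgraph`, `langchain-core`, and `langchain-openai` packages; the `get_weather` tool and the model choice are hypothetical:

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city (stubbed for illustration)."""
    return f"It is sunny in {city}."

llm = ChatOpenAI(model="gpt-4o-mini").bind_tools([get_weather])

def assistant(state: MessagesState):
    # The assistant node: the LLM decides whether to call a tool or answer directly.
    return {"messages": [llm.invoke(state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("assistant", assistant)
builder.add_node("tools", ToolNode([get_weather]))
builder.add_edge(START, "assistant")
builder.add_conditional_edges("assistant", tools_condition)  # route to "tools" or end
builder.add_edge("tools", "assistant")  # tool output loops back to the assistant
graph = builder.compile()
```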
This sets up three general types of agent evaluations that users are often interested in:
- Final Response: Evaluate the agent's final response.
- Single step: Evaluate any agent step in isolation (e.g., whether it selects the appropriate tool).
- Trajectory: Evaluate whether the agent took the expected path (e.g., of tool calls) to arrive at the final answer.
Below we will cover what these are, the components (inputs, outputs, evaluators) needed for each one, and when you should consider each. Note that you will likely want to do multiple (if not all!) of these types of evaluations - they are not mutually exclusive!