Manage prompts programmatically

You can use the LangSmith Python and TypeScript SDKs to manage prompts programmatically.

note

This functionality previously lived in the langchainhub package, which is now deprecated. All functionality going forward lives in the langsmith package.

Install packages

In Python, you can use the LangSmith SDK directly (recommended, full functionality) or access prompts through the LangChain package (limited to pushing and pulling prompts).

In TypeScript, you must use the LangChain npm package to pull prompts (it also supports pushing). For all other functionality, use the LangSmith npm package.

pip install -U langsmith 
# version >= 0.1.99
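
In TypeScript, install langsmith, plus langchain if you need to pull prompts:

npm install -S langsmith
# to pull prompts, also install:
npm install -S langchain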

Configure environment variables

If you already have LANGCHAIN_API_KEY set to your current workspace's API key from LangSmith, you can skip this step.

Otherwise, get an API key for your workspace by navigating to Settings > API Keys > Create API Key in LangSmith.

Set your environment variable.

export LANGCHAIN_API_KEY="lsv2_..."
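
Alternatively, you can set the key from within Python before creating the client, for example:

import os

os.environ["LANGCHAIN_API_KEY"] = "lsv2_..."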
Terminology

What we refer to as "prompts" used to be called "repos", so any references to "repo" in the code are referring to a prompt.

Push a prompt

To create a new prompt or update an existing prompt, use the push_prompt method.

from langsmith import Client
from langchain_core.prompts import ChatPromptTemplate

client = Client()

prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
url = client.push_prompt("joke-generator", object=prompt)
# url is a link to the prompt in the UI
print(url)
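
push_prompt also accepts optional metadata. As a sketch, assuming a recent SDK version that supports the description, tags, and is_public keyword arguments:

url = client.push_prompt(
    "joke-generator",
    object=prompt,
    description="Generates a short joke about a given topic",  # shown in the UI
    tags=["jokes"],
    is_public=False,  # keep the prompt private to your workspace
)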

You can also push a prompt as a RunnableSequence of a prompt and a model. This is useful for storing the model configuration you want to use with the prompt. The model provider must be supported by the LangSmith playground (see the list of supported providers in the playground settings).

from langsmith import Client
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

client = Client()
model = ChatOpenAI(model="gpt-4o-mini")

prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
chain = prompt | model

client.push_prompt("joke-generator-with-model", object=chain)

Pull a prompt

To pull a prompt, use the pull_prompt method, which returns the prompt as a LangChain PromptTemplate.

To pull a private prompt you do not need to specify the owner handle (though you can, if you have one set).

from langsmith import Client
from langchain_openai import ChatOpenAI

client = Client()

prompt = client.pull_prompt("joke-generator")
model = ChatOpenAI(model="gpt-4o-mini")

chain = prompt | model
chain.invoke({"topic": "cats"})

Similar to pushing a prompt, you can also pull a prompt as a RunnableSequence of a prompt and a model. Just specify include_model when pulling the prompt. If the stored prompt includes a model, it will be returned as a RunnableSequence. Make sure you have the proper environment variables set for the model you are using.

from langsmith import Client

client = Client()
chain = client.pull_prompt("joke-generator-with-model", include_model=True)
chain.invoke({"topic": "cats"})
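
For example, if the stored chain uses ChatOpenAI as above, the OpenAI key must be available:

export OPENAI_API_KEY="<your-openai-api-key>"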

When pulling a prompt, you can also specify a commit hash or prompt tag to pull a specific version of the prompt.

prompt = client.pull_prompt("joke-generator:12344e88")
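
The same syntax works with a tag; for example, assuming you have tagged a commit "prod":

prompt = client.pull_prompt("joke-generator:prod")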

To pull a public prompt from the LangChain Hub, you need to specify the handle of the prompt's author.

prompt = client.pull_prompt("efriis/my-first-prompt")

Use a prompt without LangChain

If you want to store your prompts in LangSmith but use them directly with a model provider's API, you can use our conversion methods. These convert your prompt into the payload required for the OpenAI or Anthropic API.

These conversion methods rely on logic from within LangChain integration packages, so you will need to install the appropriate integration package as a dependency in addition to the provider's official SDK. Here are some examples:

OpenAI

pip install -U langchain_openai

from openai import OpenAI

from langsmith.client import Client, convert_prompt_to_openai_format

# langsmith client
client = Client()

# openai client
oai_client = OpenAI()

# pull prompt and invoke to populate the variables
prompt = client.pull_prompt("joke-generator")
prompt_value = prompt.invoke({"topic": "cats"})

openai_payload = convert_prompt_to_openai_format(prompt_value)
openai_response = oai_client.chat.completions.create(**openai_payload)
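
The result is a standard OpenAI chat completion object, so you can read the reply as usual:

print(openai_response.choices[0].message.content)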

Anthropic

pip install -U langchain_anthropic

from anthropic import Anthropic

from langsmith.client import Client, convert_prompt_to_anthropic_format

# langsmith client
client = Client()

# anthropic client
anthropic_client = Anthropic()

# pull prompt and invoke to populate the variables
prompt = client.pull_prompt("joke-generator")
prompt_value = prompt.invoke({"topic": "cats"})

anthropic_payload = convert_prompt_to_anthropic_format(prompt_value)
anthropic_response = anthropic_client.messages.create(**anthropic_payload)
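
Likewise for Anthropic, assuming a plain text response, the reply text is in the first content block:

print(anthropic_response.content[0].text)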

List, delete, and like prompts

You can also list, delete, and like/unlike prompts using the list_prompts, delete_prompt, like_prompt, and unlike_prompt methods. See the LangSmith SDK client documentation for full details on these methods.

# List all prompts in my workspace
prompts = client.list_prompts()
# List my private prompts that include "joke"
prompts = client.list_prompts(query="joke", is_public=False)
# Delete a prompt
client.delete_prompt("joke-generator")
# Like a prompt
client.like_prompt("efriis/my-first-prompt")
# Unlike a prompt
client.unlike_prompt("efriis/my-first-prompt")
Important Note for JavaScript Users

For pulling prompts, we recommend using the langchain/hub package, as it handles prompt deserialization automatically. You can also use the _pullPrompt method of the langsmith package directly, but you will then need to manually deserialize the prompt using LangChain's load method.

All other methods in the LangSmith SDK can be used directly.

