ChatMistralAI
This will help you get started with ChatMistralAI chat models. For detailed documentation of all ChatMistralAI features and configurations, head to the API reference.
Overview
Integration details
Class | Package | Local | Serializable | PY support | Package downloads | Package latest |
---|---|---|---|---|---|---|
ChatMistralAI | @langchain/mistralai | ❌ | ✅ | ✅ | | |
Model features
Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Token usage | Logprobs |
---|---|---|---|---|---|---|---|---|
✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ |
Setup
To access ChatMistralAI models you'll need to create a Mistral AI account, get an API key, and install the @langchain/mistralai integration package.
Credentials
Head to https://console.mistral.ai to sign up for Mistral AI and generate an API key. Once you've done this, set the MISTRAL_API_KEY environment variable:
export MISTRAL_API_KEY="your-api-key"
If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
# export LANGCHAIN_TRACING_V2="true"
# export LANGCHAIN_API_KEY="your-api-key"
Installation
The LangChain ChatMistralAI integration lives in the
@langchain/mistralai
package:
- npm
- yarn
- pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
Instantiation
Now we can instantiate our model object and generate chat completions:
import { ChatMistralAI } from "@langchain/mistralai";
const llm = new ChatMistralAI({
model: "mistral-small",
temperature: 0,
maxTokens: undefined,
maxRetries: 2,
// other params...
});
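If you prefer not to rely on the MISTRAL_API_KEY environment variable, you can also pass the key directly when constructing the model. This is a minimal sketch; it assumes the apiKey constructor field, and the placeholder key is illustrative:
import { ChatMistralAI } from "@langchain/mistralai";
// Pass the key explicitly instead of reading it from the environment
const llmWithExplicitKey = new ChatMistralAI({
  model: "mistral-small",
  apiKey: "your-api-key", // illustrative placeholder
});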
Invocation
When sending chat messages to Mistral, there are a few requirements to follow (see the multi-turn example further below):
- The first message cannot be an assistant (ai) message.
- Messages must alternate between user and assistant (ai) messages.
- Messages cannot end with an assistant (ai) or system message.
const aiMsg = await llm.invoke([
[
"system",
"You are a helpful assistant that translates English to French. Translate the user sentence.",
],
["human", "I love programming."],
]);
aiMsg;
AIMessage {
"content": "Sure, I'd be happy to help you translate that sentence into French! The English sentence \"I love programming\" translates to \"J'aime programmer\" in French. Let me know if you have any other questions or need further assistance!",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 52,
"promptTokens": 32,
"totalTokens": 84
},
"finish_reason": "stop"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 32,
"output_tokens": 52,
"total_tokens": 84
}
}
console.log(aiMsg.content);
Sure, I'd be happy to help you translate that sentence into French! The English sentence "I love programming" translates to "J'aime programmer" in French. Let me know if you have any other questions or need further assistance!
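Because messages must alternate between user and assistant (ai) roles and cannot end with an assistant or system message, a follow-up turn can be sent by appending the previous AI response and a new human message. This is a minimal sketch reusing the llm and aiMsg objects from above; the follow-up sentence is illustrative:
const followUpMsg = await llm.invoke([
  [
    "system",
    "You are a helpful assistant that translates English to French. Translate the user sentence.",
  ],
  ["human", "I love programming."],
  // Reuse the assistant's earlier reply so roles keep alternating
  aiMsg,
  // End on a human message, as the ordering rules require
  ["human", "I love building chatbots."],
]);
console.log(followUpMsg.content);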
Chaining
We can chain our model with a prompt template like so:
import { ChatPromptTemplate } from "@langchain/core/prompts";
const prompt = ChatPromptTemplate.fromMessages([
[
"system",
"You are a helpful assistant that translates {input_language} to {output_language}.",
],
["human", "{input}"],
]);
const chain = prompt.pipe(llm);
await chain.invoke({
input_language: "English",
output_language: "German",
input: "I love programming.",
});
AIMessage {
"content": "Ich liebe Programmierung. (German translation)",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 12,
"promptTokens": 26,
"totalTokens": 38
},
"finish_reason": "stop"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 26,
"output_tokens": 12,
"total_tokens": 38
}
}
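If you want the chain to return a plain string rather than an AIMessage, you can also pipe in an output parser. A minimal sketch using StringOutputParser from @langchain/core:
import { StringOutputParser } from "@langchain/core/output_parsers";
// Parse the AIMessage content into a plain string
const stringChain = prompt.pipe(llm).pipe(new StringOutputParser());
await stringChain.invoke({
  input_language: "English",
  output_language: "German",
  input: "I love programming.",
});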
Tool calling
Mistral's API now supports tool calling and JSON mode! The examples below demonstrate how to use them, along with how to use the withStructuredOutput method to easily compose structured output LLM calls.
import { ChatMistralAI } from "@langchain/mistralai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";
import { tool } from "@langchain/core/tools";
const calculatorSchema = z.object({
operation: z
.enum(["add", "subtract", "multiply", "divide"])
.describe("The type of operation to execute."),
number1: z.number().describe("The first number to operate on."),
number2: z.number().describe("The second number to operate on."),
});
const calculatorTool = tool(
(input) => {
return JSON.stringify(input);
},
{
name: "calculator",
description: "A simple calculator tool",
schema: calculatorSchema,
}
);
// Bind the tool to the model
const modelWithTool = new ChatMistralAI({
model: "mistral-large-latest",
}).bind({
tools: [calculatorTool],
});
const calcToolPrompt = ChatPromptTemplate.fromMessages([
[
"system",
"You are a helpful assistant who always needs to use a calculator.",
],
["human", "{input}"],
]);
// Chain your prompt, model, and output parser together
const chainWithCalcTool = calcToolPrompt.pipe(modelWithTool);
const calcToolRes = await chainWithCalcTool.invoke({
input: "What is 2 + 2?",
});
console.log(calcToolRes.tool_calls);
[
{
name: 'calculator',
args: { operation: 'add', number1: 2, number2: 2 },
type: 'tool_call',
id: 'Tn8X3UCSP'
}
]
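From here you could execute the tool with the returned call. In recent versions of @langchain/core, invoking a tool with a tool call object returns a ToolMessage that can be passed back to the model; this is a minimal sketch and assumes a version that supports this:
// Run the calculator tool on the model's first tool call
const toolMessage = await calculatorTool.invoke(calcToolRes.tool_calls[0]);
// The ToolMessage content is the stringified input returned by the tool
console.log(toolMessage.content);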
.withStructuredOutput({ ... })
Using the .withStructuredOutput
method, you can easily make the LLM
return structured output, given only a Zod or JSON schema:
The Mistral tool calling API requires descriptions for each tool field. If descriptions are not supplied, the API will error.
import { ChatMistralAI } from "@langchain/mistralai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";
const calculatorSchemaForWSO = z
.object({
operation: z
.enum(["add", "subtract", "multiply", "divide"])
.describe("The type of operation to execute."),
number1: z.number().describe("The first number to operate on."),
number2: z.number().describe("The second number to operate on."),
})
.describe("A simple calculator tool");
const llmForWSO = new ChatMistralAI({
model: "mistral-large-latest",
});
// Pass the schema and tool name to the withStructuredOutput method
const modelWithStructuredOutput = llmForWSO.withStructuredOutput(
calculatorSchemaForWSO,
{
name: "calculator",
}
);
const promptForWSO = ChatPromptTemplate.fromMessages([
[
"system",
"You are a helpful assistant who always needs to use a calculator.",
],
["human", "{input}"],
]);
// Chain your prompt and model together
const chainWSO = promptForWSO.pipe(modelWithStructuredOutput);
const responseWSO = await chainWSO.invoke({
input: "What is 2 + 2?",
});
console.log(responseWSO);
{ operation: 'add', number1: 2, number2: 2 }
You can supply a "name" field to give the LLM additional context about what you are trying to generate. You can also pass "includeRaw" to get the raw message back from the model.
const includeRawModel = llmForWSO.withStructuredOutput(calculatorSchemaForWSO, {
name: "calculator",
includeRaw: true,
});
const includeRawChain = promptForWSO.pipe(includeRawModel);
const includeRawResponse = await includeRawChain.invoke({
input: "What is 2 + 2?",
});
console.dir(includeRawResponse, { depth: null });
{
raw: AIMessage {
lc_serializable: true,
lc_kwargs: {
content: '',
tool_calls: [
{
name: 'calculator',
args: { operation: 'add', number1: 2, number2: 2 },
type: 'tool_call',
id: 'w48T6Nc3d'
}
],
invalid_tool_calls: [],
additional_kwargs: {
tool_calls: [
{
id: 'w48T6Nc3d',
function: {
name: 'calculator',
arguments: '{"operation": "add", "number1": 2, "number2": 2}'
},
type: 'function'
}
]
},
usage_metadata: { input_tokens: 205, output_tokens: 34, total_tokens: 239 },
response_metadata: {}
},
lc_namespace: [ 'langchain_core', 'messages' ],
content: '',
name: undefined,
additional_kwargs: {
tool_calls: [
{
id: 'w48T6Nc3d',
function: {
name: 'calculator',
arguments: '{"operation": "add", "number1": 2, "number2": 2}'
},
type: 'function'
}
]
},
response_metadata: {
tokenUsage: { completionTokens: 34, promptTokens: 205, totalTokens: 239 },
finish_reason: 'tool_calls'
},
id: undefined,
tool_calls: [
{
name: 'calculator',
args: { operation: 'add', number1: 2, number2: 2 },
type: 'tool_call',
id: 'w48T6Nc3d'
}
],
invalid_tool_calls: [],
usage_metadata: { input_tokens: 205, output_tokens: 34, total_tokens: 239 }
},
parsed: { operation: 'add', number1: 2, number2: 2 }
}
Using JSON schema:
import { ChatMistralAI } from "@langchain/mistralai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
const calculatorJsonSchema = {
type: "object",
properties: {
operation: {
type: "string",
enum: ["add", "subtract", "multiply", "divide"],
description: "The type of operation to execute.",
},
number1: { type: "number", description: "The first number to operate on." },
number2: {
type: "number",
description: "The second number to operate on.",
},
},
required: ["operation", "number1", "number2"],
description: "A simple calculator tool",
};
const llmForJsonSchema = new ChatMistralAI({
model: "mistral-large-latest",
});
// Pass the schema and tool name to the withStructuredOutput method
const modelWithJsonSchemaTool =
llmForJsonSchema.withStructuredOutput(calculatorJsonSchema);
const promptForJsonSchema = ChatPromptTemplate.fromMessages([
[
"system",
"You are a helpful assistant who always needs to use a calculator.",
],
["human", "{input}"],
]);
// Chain your prompt and model together
const chainWithJsonSchema = promptForJsonSchema.pipe(modelWithJsonSchemaTool);
const responseFromJsonSchema = await chainWithJsonSchema.invoke({
input: "What is 2 + 2?",
});
console.log(responseFromJsonSchema);
{ operation: 'add', number1: 2, number2: 2 }
Tool calling agent
The larger Mistral models not only support tool calling, but can also be used in the Tool Calling agent. Here's an example:
import { z } from "zod";
import { ChatMistralAI } from "@langchain/mistralai";
import { tool } from "@langchain/core/tools";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
const llmForAgent = new ChatMistralAI({
temperature: 0,
model: "mistral-large-latest",
});
// Prompt template must have "input" and "agent_scratchpad" input variables
const agentPrompt = ChatPromptTemplate.fromMessages([
["system", "You are a helpful assistant"],
["placeholder", "{chat_history}"],
["human", "{input}"],
["placeholder", "{agent_scratchpad}"],
]);
// Mocked tool
const currentWeatherToolForAgent = tool(async () => "28 °C", {
name: "get_current_weather",
description: "Get the current weather in a given location",
schema: z.object({
location: z.string().describe("The city and state, e.g. San Francisco, CA"),
}),
});
const agent = createToolCallingAgent({
llm: llmForAgent,
tools: [currentWeatherToolForAgent],
prompt: agentPrompt,
});
const agentExecutor = new AgentExecutor({
agent,
tools: [currentWeatherToolForAgent],
});
const agentInput = "What's the weather like in Paris?";
const agentRes = await agentExecutor.invoke({ input: agentInput });
console.log(agentRes.output);
It's 28 °C in Paris.
API reference
For detailed documentation of all ChatMistralAI features and configurations head to the API reference: https://api.js.langchain.com/classes/langchain_mistralai.ChatMistralAI.html