A comprehensive collection of Agent Skills for context engineering, multi-agent architectures, and production agent systems.
examples/interleaved-thinking/docs/interleavedthinking.md
# M2.1 Tool Use & Interleaved Thinking

> MiniMax-M2.1 is an Agentic Model with exceptional Tool Use capabilities.

M2.1 natively supports Interleaved Thinking, enabling it to reason between each round of tool interactions. Before every Tool Use, the model reflects on the current environment and the tool outputs to decide its next action.

<img src="https://filecdn.minimax.chat/public/4f4b43c1-f0a5-416a-8770-1a4f80feeb1e.png" />

This ability allows M2.1 to excel at long-horizon and complex tasks, achieving state-of-the-art (SOTA) results on benchmarks such as SWE-bench, BrowseComp, and xBench, which test both coding and agentic reasoning performance.

In the following examples, we illustrate best practices for Tool Use and Interleaved Thinking with M2.1. The key principle is to return the model's full response on every turn, especially the internal reasoning fields (e.g., `thinking` or `reasoning_details`).

## Parameters

### Request Parameters

* `tools`: Defines the list of callable functions, including function names, descriptions, and parameter schemas

### Response Parameters

Key fields in Tool Use responses:

* `thinking`/`reasoning_details`: The model's thinking/reasoning process
* `text`/`content`: The text content output by the model
* `tool_calls`: Contains information about the functions the model has decided to invoke
  * `function.name`: The name of the function being called
  * `function.arguments`: Function call parameters (as a JSON string)
  * `id`: Unique identifier for the tool call

## Important Note

In multi-turn function call conversations, the complete model response (i.e., the assistant message) must be appended to the conversation history to maintain the continuity of the reasoning chain.

**OpenAI SDK:**

* Append the full `response_message` object (including the `tool_calls` field) to the message history
* When using MiniMax-M2.1, the `content` field contains `<think>` tags, which are automatically preserved
* In the Interleaved Thinking Compatible Format (enabled with the extra parameter `reasoning_split=True`), the model's thinking content is separated into the `reasoning_details` field. This content must also be added to the message history.

**Anthropic SDK:**

* Append the full `response.content` list to the message history (it includes all content blocks: thinking/text/tool_use)

See the examples below for implementation details.
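The rule can be illustrated with plain dictionaries before looking at the full SDK examples. This sketch calls no API; the message shapes are illustrative only, modeled on the OpenAI-compatible `reasoning_split=True` response format:

```python
def append_assistant_turn(messages, assistant_message):
    """Append the COMPLETE assistant message - including reasoning
    fields and tool_calls - never a stripped-down copy."""
    # Dropping reasoning_details/thinking here would break the
    # model's chain of thought on the next request.
    messages.append(assistant_message)
    return messages

history = [{"role": "user", "content": "How's the weather in San Francisco?"}]
# Illustrative shape of an assistant turn with reasoning_split=True:
turn = {
    "role": "assistant",
    "content": "",
    "reasoning_details": [{"type": "reasoning.text", "text": "..."}],
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "get_weather",
                     "arguments": "{\"location\": \"San Francisco, US\"}"},
    }],
}
append_assistant_turn(history, turn)
assert "reasoning_details" in history[-1] and "tool_calls" in history[-1]
```

The anti-pattern to avoid is `messages.append({"role": "assistant", "content": turn["content"]})`, which silently discards both the reasoning and the tool calls.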
## Examples

### Anthropic SDK

#### Configure Environment Variables

For international users, use `https://api.minimax.io/anthropic`; for users in China, use `https://api.minimaxi.com/anthropic`.

```bash theme={null}
export ANTHROPIC_BASE_URL=https://api.minimax.io/anthropic
export ANTHROPIC_API_KEY=${YOUR_API_KEY}
```

#### Example

```python theme={null}
import anthropic
import json

# Initialize client
client = anthropic.Anthropic()

# Define tool: weather query
tools = [
    {
        "name": "get_weather",
        "description": "Get weather of a location, the user should supply a location first.",
        "input_schema": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, US",
                }
            },
            "required": ["location"],
        },
    }
]


def send_messages(messages):
    params = {
        "model": "MiniMax-M2.1",
        "max_tokens": 4096,
        "messages": messages,
        "tools": tools,
    }
    response = client.messages.create(**params)
    return response


def process_response(response):
    thinking_blocks = []
    text_blocks = []
    tool_use_blocks = []

    # Iterate through all content blocks
    for block in response.content:
        if block.type == "thinking":
            thinking_blocks.append(block)
            print(f"💭 Thinking>\n{block.thinking}\n")
        elif block.type == "text":
            text_blocks.append(block)
            print(f"💬 Model>\t{block.text}")
        elif block.type == "tool_use":
            tool_use_blocks.append(block)
            print(f"🔧 Tool>\t{block.name}({json.dumps(block.input, ensure_ascii=False)})")

    return thinking_blocks, text_blocks, tool_use_blocks


# 1. User query
messages = [{"role": "user", "content": "How's the weather in San Francisco?"}]
print(f"\n👤 User>\t {messages[0]['content']}")

# 2. Model returns first response (may include tool calls)
response = send_messages(messages)
thinking_blocks, text_blocks, tool_use_blocks = process_response(response)

# 3. If tool calls exist, execute tools and continue the conversation
if tool_use_blocks:
    # ⚠️ Critical: append the assistant's complete response to the message history.
    # response.content is a list of all blocks: [thinking block, text block, tool_use block]
    # It must be preserved in full, otherwise the subsequent conversation loses context.
    messages.append({
        "role": "assistant",
        "content": response.content,
    })

    # Execute the tool and return the result (simulating a weather API call)
    print(f"\n🔨 Executing tool: {tool_use_blocks[0].name}")
    tool_result = "24℃, sunny"
    print(f"📊 Tool result: {tool_result}")

    # Add the tool execution result
    messages.append({
        "role": "user",
        "content": [
            {
                "type": "tool_result",
                "tool_use_id": tool_use_blocks[0].id,
                "content": tool_result,
            }
        ],
    })

    # 4. Get final response
    final_response = send_messages(messages)
    process_response(final_response)
```

**Output:**

```nushell theme={null}
👤 User> How's the weather in San Francisco?
💭 Thinking>
Okay, so the user is asking about the weather in San Francisco. This is a straightforward request that requires me to get current weather information for a specific location.

Looking at my available tools, I see I have a `get_weather` function that can provide weather information for a location. This is exactly what I need to answer the user's question. The function requires a "location" parameter, which should be a string containing the city and potentially the state/country.

In this case, the user has clearly specified San Francisco as the location. San Francisco is a major city in US, so I don't need to include the country code - just "San Francisco" should be sufficient for the weather service to identify the correct location.

The required parameters for the get_weather tool are:
- location: "San Francisco"

I should call this tool to retrieve the current weather information for San Francisco before I can provide a meaningful answer to the user. Once I get the weather data back from the tool, I'll be able to share details like temperature, conditions (sunny, cloudy, rainy, etc.), and possibly other relevant information like humidity or wind speed.

So I'll make a tool call to get_weather with the location parameter set to "San Francisco". After I receive the response from this tool call, I'll be able to provide the user with the current weather information they requested.

🔧 Tool> get_weather({"location": "San Francisco"})

🔨 Executing tool: get_weather
📊 Tool result: 24℃, sunny
💭 Thinking>
I've just called the get_weather tool to check the current conditions in San Francisco as the user requested. Let me analyze what information I received back.
The tool returned a simple response: "24℃, sunny". This is quite straightforward - it tells me the current temperature is 24 degrees Celsius and the weather conditions are sunny. This is exactly the basic weather information the user was asking about.

The temperature is given in Celsius (24℃), which is appropriate since US uses the metric system. I could convert this to Fahrenheit for users who might be more familiar with that scale, but since the user didn't specify a preference, I'll stick with the metric measurement as provided by the tool.

For the weather condition, I received "sunny" which indicates clear skies and good visibility. This is useful information that tells the user they can expect good weather if they're planning to be outside.

I don't have additional details like humidity, wind speed, or UV index from the tool response. If the user wants more detailed information, they could ask a follow-up question, and I might need to provide general advice about sunny weather conditions or suggest checking a more detailed weather service.

Now I need to formulate a clear, concise response to the user that directly answers their question about the weather in San Francisco. I'll keep it simple and factual, stating the temperature and conditions clearly. I should also add a friendly closing to invite further questions if needed.

The most straightforward way to present this information is to state the temperature first, followed by the conditions, and then add a friendly note inviting the user to ask for more information if they want it.

💬 Model> The current weather in San Francisco is 24℃ and sunny.
```

**Response Body**

```json theme={null}
{
  "id": "05566b15ee32962663694a2772193ac7",
  "type": "message",
  "role": "assistant",
  "model": "MiniMax-M2.1",
  "content": [
    {
      "thinking": "Let me think about this request. The user is asking about the weather in San Francisco. This is a straightforward request that requires current weather information.\n\nTo provide accurate weather information, I need to use the appropriate tool. Looking at the tools available to me, I see there's a \"get_weather\" tool that seems perfect for this task. This tool requires a location parameter, which should include both the city and state/region.\n\nThe user has specified \"San Francisco\" as the location, but they haven't included the state. For the US, it's common practice to include the state when specifying a city, especially for well-known cities like San Francisco that exist in multiple states (though there's really only one San Francisco that's famous).\n\nAccording to the tool description, I need to provide the location in the format \"San Francisco, US\" - with the city, comma, and the country code for the United States. This follows the standard format specified in the tool's parameter description: \"The city and state, e.g. San Francisco, US\".\n\nSo I need to call the get_weather tool with the location parameter set to \"San Francisco, US\". This will retrieve the current weather information for San Francisco, which I can then share with the user.\n\nI'll format my response using the required XML tags for tool calls, providing the tool name \"get_weather\" and the arguments as a JSON object with the location parameter set to \"San Francisco, US\".",
      "signature": "cfa12f9d651953943c7a33278051b61f586e2eae016258ad6b824836778406bd",
      "type": "thinking"
    },
    {
      "type": "tool_use",
      "id": "call_function_3679004591_1",
      "name": "get_weather",
      "input": {
        "location": "San Francisco, US"
      }
    }
  ],
  "usage": {
    "input_tokens": 222,
    "output_tokens": 321
  },
  "stop_reason": "tool_use",
  "base_resp": {
    "status_code": 0,
    "status_msg": ""
  }
}
```
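The example above covers a single tool round. For long-horizon tasks the same pattern generalizes to a loop: keep appending the complete `response.content` (thinking blocks included) and the tool results until the model stops requesting tools. The sketch below assumes an Anthropic `client` and `tools` as defined above; `execute_tool` is a hypothetical dispatcher you would implement for your own tools:

```python
def run_agent(client, tools, messages, execute_tool, max_rounds=10):
    """Loop the single tool round from the example above until the
    model stops calling tools or max_rounds is reached.
    `execute_tool(name, input) -> str` is a hypothetical helper."""
    response = None
    for _ in range(max_rounds):
        response = client.messages.create(
            model="MiniMax-M2.1",
            max_tokens=4096,
            messages=messages,
            tools=tools,
        )
        # Preserve the FULL assistant turn, thinking blocks included.
        messages.append({"role": "assistant", "content": response.content})

        tool_uses = [b for b in response.content if b.type == "tool_use"]
        if not tool_uses:
            return response  # final answer, no more tools requested

        # Return every requested tool result in a single user turn.
        messages.append({
            "role": "user",
            "content": [
                {"type": "tool_result", "tool_use_id": b.id,
                 "content": execute_tool(b.name, b.input)}
                for b in tool_uses
            ],
        })
    return response
```

Because every round re-sends the accumulated history, the model's earlier reasoning stays visible to it, which is what Interleaved Thinking relies on.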
"function",263"function": {264"name": "get_weather",265"description": "Get weather of a location, the user should supply a location first.",266"parameters": {267"type": "object",268"properties": {269"location": {270"type": "string",271"description": "The city and state, e.g. San Francisco, US",272}273},274"required": ["location"],275},276},277},278]279280281def send_messages(messages):282"""Send messages and return response"""283response = client.chat.completions.create(284model="MiniMax-M2.1",285messages=messages,286tools=tools,287# Set reasoning_split=True to separate thinking content into reasoning_details field288extra_body={"reasoning_split": True},289)290return response.choices[0].message291292293# 1. User query294messages = [{"role": "user", "content": "How's the weather in San Francisco?"}]295print(f"👤 User>\t {messages[0]['content']}")296297# 2. Model returns tool call298response_message = send_messages(messages)299300if response_message.tool_calls:301tool_call = response_message.tool_calls[0]302function_args = json.loads(tool_call.function.arguments)303print(f"💭 Thinking>\t {response_message.reasoning_details[0]['text']}")304print(f"💬 Model>\t {response_message.content}")305print(f"🔧 Tool>\t {tool_call.function.name}({function_args['location']})")306307# 3. Execute tool and return result308messages.append(response_message)309messages.append(310{311"role": "tool",312"tool_call_id": tool_call.id,313"content": "24℃, sunny", # In real applications, call actual weather API here314}315)316317# 4. Get final response318final_message = send_messages(messages)319print(320f"💭 Thinking>\t {final_message.model_dump()['reasoning_details'][0]['text']}"321)322print(f"💬 Model>\t {final_message.content}")323else:324print(f"💬 Model>\t {response_message.content}")325```326327**Output:**328329```330👤 User> How's the weather in San Francisco?331💭 Thinking> Alright, the user is asking about the weather in San Francisco. 
This is a straightforward question that requires real-time information about current weather conditions.332333Looking at the available tools, I see I have access to a "get_weather" tool that's specifically designed for this purpose. The tool requires a "location" parameter, which should be in the format of city and state, like "San Francisco, CA".334335The user has clearly specified they want weather information for "San Francisco" in their question. However, they didn't include the state (California), which is recommended for the tool parameter. While "San Francisco" alone might be sufficient since it's a well-known city, for accuracy and to follow the parameter format, I should include the state as well.336337Since I need to use the tool to get the current weather information, I'll need to call the "get_weather" tool with "San Francisco, CA" as the location parameter. This will provide the user with the most accurate and up-to-date weather information for their query.338339I'll format my response using the required tool_calls XML tags and include the tool name and arguments in the specified JSON format.340💬 Model>341342🔧 Tool> get_weather(San Francisco, US)343💭 Thinking> Okay, I've received the user's question about the weather in San Francisco, and I've used the get_weather tool to retrieve the current conditions.344345The tool has returned a simple response: "24℃, sunny". This gives me two pieces of information - the temperature is 24 degrees Celsius, and the weather condition is sunny. That's quite straightforward and matches what I would expect for San Francisco on a nice day.346347Now I need to present this information to the user in a clear, concise way. Since the response from the tool was quite brief, I'll keep my answer similarly concise. I'll directly state the temperature and weather condition that the tool provided.348349I should make sure to mention that this information is current, so the user understands they're getting up-to-date conditions. 
I don't need to provide additional details like humidity, wind speed, or forecast since the user only asked about the current weather.350351The temperature is given in Celsius (24℃), which is the standard metric unit, so I'll leave it as is rather than converting to Fahrenheit, though I could mention the conversion if the user seems to be more familiar with Fahrenheit.352353Since this is a simple informational query, I don't need to ask follow-up questions or suggest activities based on the weather. I'll just provide the requested information clearly and directly.354355My response will be a single sentence stating the current temperature and weather conditions in San Francisco, which directly answers the user's question.356💬 Model> The weather in San Francisco is currently sunny with a temperature of 24℃.357```358359**Response Body**360361```json theme={null}362{363"id": "05566b8d51ded3a3016d6cc100685cad",364"choices": [365{366"finish_reason": "tool_calls",367"index": 0,368"message": {369"content": "\n",370"role": "assistant",371"name": "MiniMax AI",372"tool_calls": [373{374"id": "call_function_2831178524_1",375"type": "function",376"function": {377"name": "get_weather",378"arguments": "{\"location\": \"San Francisco, US\"}"379},380"index": 0381}382],383"audio_content": "",384"reasoning_details": [385{386"type": "reasoning.text",387"id": "reasoning-text-1",388"format": "MiniMax-response-v1",389"index": 0,390"text": "Let me think about this request. The user is asking about the weather in San Francisco. This is a straightforward request where they want to know current weather conditions in a specific location.\n\nLooking at the tools available to me, I have access to a \"get_weather\" tool that can retrieve weather information for a location. The tool requires a location parameter in the format of \"city, state\" or \"city, country\". 
In this case, the user has specified \"San Francisco\" which is a city in the United States.\n\nTo properly use the tool, I need to format the location parameter correctly. The tool description mentions examples like \"San Francisco, US\" which follows the format of city, country code. However, since the user just mentioned \"San Francisco\" without specifying the state, and San Francisco is a well-known city that is specifically in California, I could use \"San Francisco, CA\" as the parameter value instead.\n\nActually, \"San Francisco, US\" would also work since the user is asking about the famous San Francisco in the United States, and there aren't other well-known cities with the same name that would cause confusion. The US country code is explicit and clear.\n\nBoth \"San Francisco, CA\" and \"San Francisco, US\" would be valid inputs for the tool. I'll go with \"San Francisco, US\" since it follows the exact format shown in the tool description example and is unambiguous.\n\nSo I'll need to call the get_weather tool with the location parameter set to \"San Francisco, US\". This will retrieve the current weather information for San Francisco, which I can then present to the user."391}392]393}394}395],396"created": 1762080909,397"model": "MiniMax-M2.1",398"object": "chat.completion",399"usage": {400"total_tokens": 560,401"total_characters": 0,402"prompt_tokens": 203,403"completion_tokens": 357404},405"input_sensitive": false,406"output_sensitive": false,407"input_sensitive_type": 0,408"output_sensitive_type": 0,409"output_sensitive_int": 0,410"base_resp": {411"status_code": 0,412"status_msg": ""413}414}415```416417#### OpenAI Native Format418419Since the OpenAI ChatCompletion API native format does not natively support thinking return and pass-back, the model's thinking is injected into the `content` field in the form of `<think>reasoning_content</think>`. Developers can manually parse it for display purposes. 
What `extra_body={"reasoning_split": False}` does:

* Embeds thinking in content: the model's reasoning is wrapped in `<think>` tags within the `content` field
* Requires manual parsing: you need to parse the `<think>` tags yourself if you want to display the reasoning separately

<Note>
Important Reminder: If you choose to use the native format, do not modify the `content` field in the message history. You must preserve the model's thinking content completely, i.e., `<think>reasoning_content</think>`. This is essential for Interleaved Thinking to work effectively and for the model to achieve optimal performance!
</Note>
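For display, the `<think>` block can be split out with a small helper. This is an illustrative sketch, not an official utility; use it only for rendering, and keep the original, unmodified `content` string in the message history:

```python
import re

def split_think(content: str) -> tuple[str, str]:
    """Split '<think>...</think>' reasoning from the visible reply.
    For display only - never write the stripped text back into the
    message history."""
    match = re.search(r"<think>(.*?)</think>", content, flags=re.DOTALL)
    if not match:
        return "", content.strip()
    reasoning = match.group(1).strip()
    visible = (content[:match.start()] + content[match.end():]).strip()
    return reasoning, visible

reasoning, reply = split_think("<think>Check the tool result.</think>\nIt is sunny.")
# reasoning == "Check the tool result.", reply == "It is sunny."
```

The non-greedy `(.*?)` with `re.DOTALL` keeps the match to the first `<think>...</think>` pair even when the reasoning spans multiple lines.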
```python theme={null}
from openai import OpenAI
import json

# Initialize client
client = OpenAI(
    api_key="<api-key>",
    base_url="https://api.minimax.io/v1",
)

# Define tool: weather query
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather of a location, the user should supply a location first.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, US",
                    }
                },
                "required": ["location"],
            },
        },
    },
]


def send_messages(messages):
    """Send messages and return response"""
    response = client.chat.completions.create(
        model="MiniMax-M2.1",
        messages=messages,
        tools=tools,
        # Set reasoning_split=False to keep thinking content in <think> tags within the content field
        extra_body={"reasoning_split": False},
    )
    return response.choices[0].message


# 1. User query
messages = [{"role": "user", "content": "How's the weather in San Francisco?"}]
print(f"👤 User>\t {messages[0]['content']}")

# 2. Model returns tool call
response_message = send_messages(messages)

if response_message.tool_calls:
    tool_call = response_message.tool_calls[0]
    function_args = json.loads(tool_call.function.arguments)
    print(f"💬 Model>\t {response_message.content}")
    print(f"🔧 Tool>\t {tool_call.function.name}({function_args['location']})")

    # 3. Execute tool and return result
    messages.append(response_message)
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": "24℃, sunny",  # In production, call an actual weather API here
    })

    # 4. Get final response
    final_message = send_messages(messages)
    print(f"💬 Model>\t {final_message.content}")
else:
    print(f"💬 Model>\t {response_message.content}")
```

**Output:**

```nushell theme={null}
👤 User> How's the weather in San Francisco?
💬 Model> <think>
Alright, the user is asking about the weather in San Francisco. This is a straightforward request that I can handle using the tools provided to me.

I see that I have access to a tool called "get_weather" which can provide weather information for a location. Looking at the parameters, it requires a "location" parameter which should be a string in the format of "city and state, e.g. San Francisco, US".

In this case, the user has already specified the location as "San Francisco", which is a major city in California, US. I need to format this properly for the tool call. Following the example format in the tool description, I should format it as "San Francisco, US".

The user didn't specify any other parameters or requirements, so a simple weather query should be sufficient. I don't need to ask for clarification since they've provided a clear location.

Let me prepare the tool call to get the weather information for San Francisco. I'll use the "get_weather" tool with the location parameter set to "San Francisco, US". This should return the current weather conditions for San Francisco, which is what the user is asking about.

Once I get the weather information back from the tool, I'll be able to provide the user with details about the current weather in San Francisco, such as temperature, conditions (sunny, cloudy, rainy, etc.), and possibly other relevant information like humidity or wind speed if that data is available.

So I'll proceed with making the tool call to get_weather with the location parameter.
</think>

🔧 Tool> get_weather(San Francisco, US)
💬 Model> <think>
Let me analyze what's happening in this conversation. The user asked about the weather in San Francisco, and I needed to provide them with this information.

Looking at the tools available to me, I have access to a "get_weather" tool that can retrieve weather information for a specific location. I used this tool and called it with the argument "location": "San Francisco, US" as specified in the tool's parameters.

The tool has now returned a response with the weather information for San Francisco. The response is quite concise - it simply states "24℃, sunny". This gives me two pieces of information:
1. The temperature is 24 degrees Celsius
2. The weather condition is sunny

This is exactly what the user wanted to know - how's the weather in San Francisco. The information is clear and straightforward.

Now I need to format this information in a clear, natural way for the user. Since the tool returned the temperature in Celsius, I'll use that unit rather than converting to Fahrenheit (though 24°C is about 75°F if the user happens to think in those terms).

I should keep my response concise since the weather information itself is simple. I don't need to add any caveats or additional explanations since the weather report is straightforward. I won't include any details about wind, humidity, or other meteorological data since the tool didn't provide that information.

So my response will simply state the current temperature and that it's sunny in San Francisco, which directly answers the user's question.
</think>

The weather in San Francisco is currently sunny with a temperature of 24℃.
```

**Response Body**

```json theme={null}
{
  "id": "055b7928a143b2d21ad6b2bab2c8f8b2",
  "choices": [
    {
      "finish_reason": "tool_calls",
      "index": 0,
      "message": {
        "content": "<think>\nAlright, the user is asking about the weather in San Francisco. This is a straightforward request that I can handle using the tools provided to me.\n\nI see that I have access to a tool called \"get_weather\" which can provide weather information for a location. Looking at the parameters, it requires a \"location\" parameter which should be a string in the format of \"city and state, e.g. San Francisco, US\".\n\nIn this case, the user has already specified the location as \"San Francisco\", which is a major city in California, US. I need to format this properly for the tool call. Following the example format in the tool description, I should format it as \"San Francisco, US\".\n\nThe user didn't specify any other parameters or requirements, so a simple weather query should be sufficient. I don't need to ask for clarification since they've provided a clear location.\n\nLet me prepare the tool call to get the weather information for San Francisco. I'll use the \"get_weather\" tool with the location parameter set to \"San Francisco, US\". This should return the current weather conditions for San Francisco, which is what the user is asking about.\n\nOnce I get the weather information back from the tool, I'll be able to provide the user with details about the current weather in San Francisco, such as temperature, conditions (sunny, cloudy, rainy, etc.), and possibly other relevant information like humidity or wind speed if that data is available.\n\nSo I'll proceed with making the tool call to get_weather with the location parameter.\n</think>\n\n\n",
        "role": "assistant",
        "name": "MiniMax AI",
        "tool_calls": [
          {
            "id": "call_function_1202729600_1",
            "type": "function",
            "function": {
              "name": "get_weather",
              "arguments": "{\"location\": \"San Francisco, US\"}"
            },
            "index": 0
          }
        ],
        "audio_content": ""
      }
    }
  ],
  "created": 1762412072,
  "model": "MiniMax-M2.1",
  "object": "chat.completion",
  "usage": {
    "total_tokens": 560,
    "total_characters": 0,
    "prompt_tokens": 222,
    "completion_tokens": 338
  },
  "input_sensitive": false,
  "output_sensitive": false,
  "input_sensitive_type": 0,
  "output_sensitive_type": 0,
  "output_sensitive_int": 0,
  "base_resp": {
    "status_code": 0,
    "status_msg": ""
  }
}
```

## Recommended Reading

<Columns cols={2}>
  <Card title="M2.1 for AI Coding Tools" icon="book-open" href="/guides/text-ai-coding-tools" arrow="true" cta="Click here">
    MiniMax-M2.1 excels at code understanding, dialogue, and reasoning.
  </Card>

  <Card title="Text Generation" icon="book-open" arrow="true" href="/guides/text-generation" cta="Click here">
    Supports text generation via the compatible Anthropic API and OpenAI API.
  </Card>

  <Card title="Compatible Anthropic API (Recommended)" icon="book-open" href="/api-reference/text-anthropic-api" arrow="true" cta="Click here">
    Use the Anthropic SDK with MiniMax models
  </Card>

  <Card title="Compatible OpenAI API" icon="book-open" href="/api-reference/text-openai-api" arrow="true" cta="Click here">
    Use the OpenAI SDK with MiniMax models
  </Card>
</Columns>

---

> To find navigation and other pages in this documentation, fetch the llms.txt file at: https://platform.minimax.io/docs/llms.txt