Tool Use
Tool use enables LLMs to request calls to external functions and APIs through the /v1/chat/completions endpoint, via LM Studio's REST API (or via any OpenAI client). This expands their capabilities far beyond text output.
🔔 Tool use requires LM Studio 0.3.6 or newer, get it here
Quick Start

1. Start LM Studio as a server

To use LM Studio programmatically from your own code, run LM Studio as a local server.

You can start the server from the "Developer" tab in LM Studio, or via the lms CLI:
lms server start
To install lms, run:

npx lmstudio install-cli
This will allow you to interact with LM Studio via an OpenAI-like REST API. For an intro to LM Studio's OpenAI-like API, see Running LM Studio as a server.
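As a quick sanity check that the server is reachable, you can list the models the server exposes. A minimal sketch using the OpenAI Python client, assuming the server is on the default port 1234:

# List models to confirm the local server is up (assumes default port 1234)
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
print([m.id for m in client.models.list().data])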
2. Load a model

You can load a model from the "Chat" or "Developer" tab in LM Studio, or via the lms CLI:
lms load
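For example, assuming the Qwen model used throughout this page has already been downloaded, you can load it directly by key (the exact model key may differ on your machine):

lms load qwen2.5-7b-instruct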
3. Copy, paste, and run an example!

See the curl and Python examples later on this page.
Tool Use

What really is "tool use"?

Tool use describes:
- LLMs outputting text that requests functions to be called (LLMs cannot directly execute code)
- Your code executing those functions
- Your code feeding the results back to the LLM
High-level flow

┌──────────────────────────┐
│  SETUP: LLM + Tool list  │
└──────────┬───────────────┘
           ▼
┌──────────────────────────┐
│      Get user input      │◄────────────┐
└──────────┬───────────────┘             │
           ▼                             │
┌──────────────────────────┐             │
│ LLM prompted w/messages  │             │
└──────────┬───────────────┘             │
           ▼                             │
      Needs tools?                       │
       │        │                        │
      Yes       No                       │
       │        │                        │
       ▼        └───────────┐            │
┌─────────────┐             │            │
│Tool Response│             │            │
└──────┬──────┘             │            │
       ▼                    │            │
┌─────────────┐             │            │
│Execute tools│             │            │
└──────┬──────┘             │            │
       ▼                    ▼            │
┌─────────────┐       ┌───────────┐      │
│ Add results │       │  Normal   │──────┘
│ to messages │       │ response  │
└──────┬──────┘       └─────┬─────┘
       │                    ▲
       └────────────────────┘
In-depth flow

LM Studio supports tool use through the /v1/chat/completions endpoint when function definitions are given in the tools parameter of the request body. Tools are specified as an array of function definitions that describe their parameters and usage.
It follows the same format as OpenAI's Function Calling API, and is expected to work via the OpenAI client SDKs.
We will use lmstudio-community/Qwen2.5-7B-Instruct-GGUF as the model in this example flow.
1. You provide the LLM with a list of tools. These are the tools that the model can request calls to. For example:
// the list of tools is model-agnostic
[
  {
    "type": "function",
    "function": {
      "name": "get_delivery_date",
      "description": "Get the delivery date for a customer's order",
      "parameters": {
        "type": "object",
        "properties": {
          "order_id": {
            "type": "string"
          }
        },
        "required": ["order_id"]
      }
    }
  }
]
This list will be injected into the model's system prompt, according to the model's chat template. For Qwen2.5-Instruct, that looks like:
<|im_start|>system
You are Qwen, created by Alibaba Cloud. You are a helpful assistant.

# Tools

You may call one or more functions to assist with the user query.

You are provided with function signatures within <tools></tools> XML tags:
<tools>
{"type": "function", "function": {"name": "get_delivery_date", "description": "Get the delivery date for a customer's order", "parameters": {"type": "object", "properties": {"order_id": {"type": "string"}}, "required": ["order_id"]}}}
</tools>

For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call><|im_end|>
Important: the model can only request calls to these tools, because LLMs cannot directly call functions, APIs, or any other tools. They can only output text, which can then be parsed to call the functions programmatically.
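Purely for illustration, here is a sketch of what such parsing could look like for the Qwen-style <tool_call> format shown above. LM Studio performs this parsing for you; the text variable below is a hypothetical raw model output:

import json
import re

# Hypothetical raw model output in the Qwen2.5-Instruct tool call format
text = '<tool_call>\n{"name": "get_delivery_date", "arguments": {"order_id": "123"}}\n</tool_call>'

# Extract the JSON payload between the <tool_call> tags
match = re.search(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", text, re.DOTALL)
if match:
    call = json.loads(match.group(1))
    print(call["name"], call["arguments"])  # get_delivery_date {'order_id': '123'}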
2. When prompted, the LLM can then decide to call one or more tools, or respond normally:
User: Get me the delivery date for order 123
Model: <tool_call>
{"name": "get_delivery_date", "arguments": {"order_id": "123"}}
</tool_call>
User: Hi
Model: Hello! How can I assist you today?
3. LM Studio parses the text output from the model into an OpenAI-compliant chat.completion response object.
- If the model was given access to tools, LM Studio will attempt to parse the tool calls into the response.choices[0].message.tool_calls field of the chat.completion response object.
- If LM Studio cannot parse any correctly formatted tool calls, it will simply return the response in the standard response.choices[0].message.content field.
- Note: smaller models and models that were not trained for tool use may output improperly formatted tool calls, resulting in LM Studio being unable to parse them into the tool_calls field. This is useful for troubleshooting when you do not receive tool_calls as expected. An example of an improperly formatted Qwen2.5-Instruct tool call:

<tool_call>
["name": "get_delivery_date", function: "date"]
</tool_call>

Note that the brackets are incorrect, and the call does not follow the name, argument format.
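Because parsing can fail silently in this way, it is good practice for client code to check both fields before assuming a tool call exists. A minimal sketch, given a chat.completion response object like the ones in the examples below:

# Fall back gracefully when no (parseable) tool call is present
message = response.choices[0].message
if message.tool_calls:
    for tool_call in message.tool_calls:
        print("Tool requested:", tool_call.function.name, tool_call.function.arguments)
else:
    # Improperly formatted tool calls (and ordinary replies) show up here as text
    print("Assistant:", message.content)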
4. Your code parses the chat.completion response to check for tool calls from the model, then calls the appropriate tools with the parameters specified by the model. Your code then:
1. Adds the tool call request and the result of the tool call to the messages array
2. Sends the updated conversation back to the model
# pseudocode, see examples for copy-paste snippets
if response.has_tool_calls:
    for each tool_call:
        # Extract function name & args
        function_to_call = tool_call.name  # e.g. "get_delivery_date"
        args = tool_call.arguments  # e.g. {"order_id": "123"}

        # Execute the function
        result = execute_function(function_to_call, args)

        # Add result to conversation
        add_to_messages([
            ASSISTANT_TOOL_CALL_MESSAGE,  # The request to use the tool
            TOOL_RESULT_MESSAGE           # The tool's response
        ])
else:
    # Normal response without tools
    add_to_messages(response.content)
5. The LLM is then prompted again with the updated messages array, but without access to tools. This is because:
- The tool results are already present in the messages array
- We want the model to use those results to produce a final response for the user, not to request more tool calls
# Example messages
messages = [
    {"role": "user", "content": "When will order 123 be delivered?"},
    {"role": "assistant", "function_call": {
        "name": "get_delivery_date",
        "arguments": {"order_id": "123"}
    }},
    {"role": "tool", "content": "2024-03-15"},
]

response = client.chat.completions.create(
    model="lmstudio-community/qwen2.5-7b-instruct",
    messages=messages
)
After this call, the response.choices[0].message.content field may look something like:
Your order #123 will be delivered on March 15th, 2024
The loop then continues back at step 2 of the flow.
Note: this is the "strict" flow for tool use; you can of course experiment with variations of this flow to best fit your use case.
Supported Models

Through LM Studio, all models support at least some degree of tool use. However, there are currently two levels of support that may impact the quality of the experience: Native and Default.

Models with Native tool use support will have a hammer badge in the app, and generally perform better in tool use scenarios.
Native tool use support

"Native" tool use support means that both:
- The model has a chat template that supports tool use (usually this means the model has been trained for tool use), which is what is used to format the tools array into the system prompt and to tell the model how to format tool calls
- LM Studio supports that model's tool use format, which is required for tool calls to be parsed properly into the chat.completion object

Models that currently have native tool use support in LM Studio (subject to change):
- GGUF: lmstudio-community/Qwen2.5-7B-Instruct-GGUF (4.68 GB)
- MLX: mlx-community/Qwen2.5-7B-Instruct-4bit (4.30 GB)
- GGUF: lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF (4.92 GB)
- MLX: mlx-community/Meta-Llama-3.1-8B-Instruct-8bit (8.54 GB)
- GGUF: bartowski/Ministral-8B-Instruct-2410-GGUF (4.67 GB)
- MLX: mlx-community/Ministral-8B-Instruct-2410-4bit (4.67 GB)

Default tool use support

"Default" tool use support means that either:
- The model's chat template does not support tool use (usually this means the model was not trained for tool use), or
- LM Studio does not currently support that model's tool use format
Under the hood, default tool use works by:
- Giving the model a custom system prompt and a default tool call format to use
- Converting tool role messages to the user role, so that chat templates without a tool role are also compatible
- Converting assistant role tool_calls into the default tool call format

Results will vary by model.
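For illustration only (the exact strings are internal to LM Studio and subject to change), the conversions described above mean that a history your code sends in OpenAI style, sketched here in Python:

# What your code sends (OpenAI-style tool messages)
messages = [
    {"role": "assistant", "tool_calls": [{"id": "1", "type": "function",
        "function": {"name": "get_delivery_date", "arguments": '{"order_id": "123"}'}}]},
    {"role": "tool", "content": '{"delivery_date": "2024-03-15"}', "tool_call_id": "1"},
]

might be presented to a model without a tool role roughly as:

# Roughly what a default-support model sees after conversion (illustrative only)
converted = [
    {"role": "assistant", "content": '[TOOL_REQUEST]{"name": "get_delivery_date", "arguments": {"order_id": "123"}}[END_TOOL_REQUEST]'},
    {"role": "user", "content": '{"delivery_date": "2024-03-15"}'},
]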
You can see the default format in action by running lms log stream in your terminal, then sending a chat completion request with tools to a model that does not have native tool use support. The default format is subject to change.
→ % lms log stream
Streaming logs from LM Studio

timestamp: 11/13/2024, 9:35:15 AM
type: llm.prediction.input
modelIdentifier: gemma-2-2b-it
modelPath: lmstudio-community/gemma-2-2b-it-GGUF/gemma-2-2b-it-Q4_K_M.gguf
input: "<start_of_turn>system
You are a tool-calling AI. You can request calls to available tools with this EXACT format:
[TOOL_REQUEST]{"name": "tool_name", "arguments": {"param1": "value1"}}[END_TOOL_REQUEST]

AVAILABLE TOOLS:
{
  "type": "toolArray",
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_delivery_date",
        "description": "Get the delivery date for a customer's order",
        "parameters": {
          "type": "object",
          "properties": {
            "order_id": {
              "type": "string"
            }
          },
          "required": [
            "order_id"
          ]
        }
      }
    }
  ]
}

RULES:
- Only use tools from AVAILABLE TOOLS
- Include all required arguments
- Use one [TOOL_REQUEST] block per tool
- Never use [TOOL_RESULT]
- If you decide to call one or more tools, there should be no other text in your message

Examples:
"Check Paris weather"
[TOOL_REQUEST]{"name": "get_weather", "arguments": {"location": "Paris"}}[END_TOOL_REQUEST]

"Send email to John about meeting and open browser"
[TOOL_REQUEST]{"name": "send_email", "arguments": {"to": "John", "subject": "meeting"}}[END_TOOL_REQUEST]
[TOOL_REQUEST]{"name": "open_browser", "arguments": {}}[END_TOOL_REQUEST]

Respond conversationally if no matching tools exist.<end_of_turn>
<start_of_turn>user
Get me delivery date for order 123<end_of_turn>
<start_of_turn>model
"
If the model follows this format exactly when calling tools, i.e.:
[TOOL_REQUEST]{"name": "get_delivery_date", "arguments": {"order_id": "123"}}[END_TOOL_REQUEST]
then LM Studio will be able to parse those tool calls into the chat.completions object, just as it does for natively supported models.
All models without native tool use support have default tool use support.
Example using curl

This example demonstrates a model requesting a tool call using the curl utility.

To run this example on Mac or Linux, use any terminal. On Windows, use Git Bash.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "lmstudio-community/qwen2.5-7b-instruct",
    "messages": [{"role": "user", "content": "What dell products do you have under $50 in electronics?"}],
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "search_products",
          "description": "Search the product catalog by various criteria. Use this whenever a customer asks about product availability, pricing, or specifications.",
          "parameters": {
            "type": "object",
            "properties": {
              "query": {
                "type": "string",
                "description": "Search terms or product name"
              },
              "category": {
                "type": "string",
                "description": "Product category to filter by",
                "enum": ["electronics", "clothing", "home", "outdoor"]
              },
              "max_price": {
                "type": "number",
                "description": "Maximum price in dollars"
              }
            },
            "required": ["query"],
            "additionalProperties": false
          }
        }
      }
    ]
  }'
All parameters recognized by /v1/chat/completions will be honored, and the array of available tools should be provided in the tools field.
If the model decides that the user message would be best fulfilled with a tool call, an array of tool call request objects will be provided in the response field choices[0].message.tool_calls.
The finish_reason field of the top-level response object will also be populated with "tool_calls".
An example response to the above curl request looks like:
{ "id": "chatcmpl-gb1t1uqzefudice8ntxd9i", "object": "chat.completion", "created": 1730913210, "model": "lmstudio-community/qwen2.5-7b-instruct", "choices": [ { "index": 0, "logprobs": null, "finish_reason": "tool_calls", "message": { "role": "assistant", "tool_calls": [ { "id": "365174485", "type": "function", "function": { "name": "search_products", "arguments": "{\"query\":\"dell\",\"category\":\"electronics\",\"max_price\":50}" } } ] } } ], "usage": { "prompt_tokens": 263, "completion_tokens": 34, "total_tokens": 297 }, "system_fingerprint": "lmstudio-community/qwen2.5-7b-instruct" }
In plain English, the above response can be thought of as the model saying:

"Please call the search_products function, with arguments:
- 'dell' for the query parameter
- 'electronics' for the category parameter
- '50' for the max_price parameter

and give me back the results."
The tool_calls field will need to be parsed to call actual functions/APIs. The below examples demonstrate how.
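One detail worth noting first: the function.arguments value is a JSON-encoded string (note the escaped quotes in the response above), not an object, so it must be decoded before use. A minimal sketch in Python, assuming a response object like the one above:

import json

# function.arguments is a JSON string; decode it into a dict before calling anything
tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)
print(args["query"], args["category"], args["max_price"])  # dell electronics 50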
Example using python

Tool use shines when paired with a programming language like Python, where you can implement the functions specified in the tools field and programmatically call them when the model requests them.
Single-turn example

Below is a simple single-turn (the model is only called once) example of enabling a model to call a function named say_hello that prints a hello greeting to the console:
single-turn-example.py
from openai import OpenAI

# Connect to LM Studio
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Define a simple function
def say_hello(name: str) -> str:
    print(f"Hello, {name}!")

# Tell the AI about our function
tools = [
    {
        "type": "function",
        "function": {
            "name": "say_hello",
            "description": "Says hello to someone",
            "parameters": {
                "type": "object",
                "properties": {
                    "name": {
                        "type": "string",
                        "description": "The person's name"
                    }
                },
                "required": ["name"]
            }
        }
    }
]

# Ask the AI to use our function
response = client.chat.completions.create(
    model="lmstudio-community/qwen2.5-7b-instruct",
    messages=[{"role": "user", "content": "Can you say hello to Bob the Builder?"}],
    tools=tools
)

# Get the name the AI wants to use a tool to say hello to
# (Assumes the AI has requested a tool call and that tool call is say_hello)
tool_call = response.choices[0].message.tool_calls[0]
name = eval(tool_call.function.arguments)["name"]

# Actually call the say_hello function
say_hello(name)  # Prints: Hello, Bob the Builder!
Running this script from the console should produce output like:
→ % python single-turn-example.py
Hello, Bob the Builder!
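A side note on the script above: eval works here because function.arguments is plain JSON, but json.loads is the safer, equivalent choice for untrusted model output:

import json

# Safer alternative to eval() for decoding the arguments string
name = json.loads(tool_call.function.arguments)["name"]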
Play around with the name in

messages=[{"role": "user", "content": "Can you say hello to Bob the Builder?"}]

to see the model call the say_hello function with different names.
Multi-turn example

Now for a slightly more complex example.

In this example, we will:
1. Enable the model to call a get_delivery_date function
2. Hand the result of calling that function back to the model, so that it can fulfill the user's request in plain text

multi-turn-example.py
from datetime import datetime, timedelta
import json
import random
from openai import OpenAI

# Point to the local server
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
model = "lmstudio-community/qwen2.5-7b-instruct"


def get_delivery_date(order_id: str) -> datetime:
    # Generate a random delivery date between today and 14 days from now
    # in a real-world scenario, this function would query a database or API
    today = datetime.now()
    random_days = random.randint(1, 14)
    delivery_date = today + timedelta(days=random_days)
    print(
        f"\nget_delivery_date function returns delivery date:\n\n{delivery_date}",
        flush=True,
    )
    return delivery_date


tools = [
    {
        "type": "function",
        "function": {
            "name": "get_delivery_date",
            "description": "Get the delivery date for a customer's order. Call this whenever you need to know the delivery date, for example when a customer asks 'Where is my package'",
            "parameters": {
                "type": "object",
                "properties": {
                    "order_id": {
                        "type": "string",
                        "description": "The customer's order ID.",
                    },
                },
                "required": ["order_id"],
                "additionalProperties": False,
            },
        },
    }
]

messages = [
    {
        "role": "system",
        "content": "You are a helpful customer support assistant. Use the supplied tools to assist the user.",
    },
    {
        "role": "user",
        "content": "Give me the delivery date and time for order number 1017",
    },
]

# LM Studio
response = client.chat.completions.create(
    model=model,
    messages=messages,
    tools=tools,
)

print("\nModel response requesting tool call:\n", flush=True)
print(response, flush=True)

# Extract the arguments for get_delivery_date
# Note this code assumes we have already determined that the model generated a function call.
tool_call = response.choices[0].message.tool_calls[0]
arguments = json.loads(tool_call.function.arguments)

order_id = arguments.get("order_id")

# Call the get_delivery_date function with the extracted order_id
delivery_date = get_delivery_date(order_id)

assistant_tool_call_request_message = {
    "role": "assistant",
    "tool_calls": [
        {
            "id": response.choices[0].message.tool_calls[0].id,
            "type": response.choices[0].message.tool_calls[0].type,
            "function": response.choices[0].message.tool_calls[0].function,
        }
    ],
}

# Create a message containing the result of the function call
function_call_result_message = {
    "role": "tool",
    "content": json.dumps(
        {
            "order_id": order_id,
            "delivery_date": delivery_date.strftime("%Y-%m-%d %H:%M:%S"),
        }
    ),
    "tool_call_id": response.choices[0].message.tool_calls[0].id,
}

# Prepare the chat completion call payload
completion_messages_payload = [
    messages[0],
    messages[1],
    assistant_tool_call_request_message,
    function_call_result_message,
]

# Call the OpenAI API's chat completions endpoint to send the tool call result back to the model
# LM Studio
response = client.chat.completions.create(
    model=model,
    messages=completion_messages_payload,
)

print("\nFinal model response with knowledge of the tool call result:\n", flush=True)
print(response.choices[0].message.content, flush=True)
Running this script from the console should produce output like:
→ % python multi-turn-example.py

Model response requesting tool call:

ChatCompletion(id='chatcmpl-wwpstqqu94go4hvclqnpwn', choices=[Choice(finish_reason='tool_calls', index=0, logprobs=None, message=ChatCompletionMessage(content=None, refusal=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='377278620', function=Function(arguments='{"order_id":"1017"}', name='get_delivery_date'), type='function')]))], created=1730916196, model='lmstudio-community/qwen2.5-7b-instruct', object='chat.completion', service_tier=None, system_fingerprint='lmstudio-community/qwen2.5-7b-instruct', usage=CompletionUsage(completion_tokens=24, prompt_tokens=223, total_tokens=247, completion_tokens_details=None, prompt_tokens_details=None))

get_delivery_date function returns delivery date:

2024-11-19 13:03:17.773298

Final model response with knowledge of the tool call result:

Your order number 1017 is scheduled for delivery on November 19, 2024, at 13:03 PM.
Advanced agent example

Building on the principles above, we can combine LM Studio models with locally defined functions to create an "agent": a system that pairs a language model with custom functions to understand requests and perform actions beyond basic text generation.
The agent in the example below can:
1. Open safe URLs in your default browser
2. Check the current time
3. Analyze directories in your file system

agent-chat-example.py
import json
from urllib.parse import urlparse
import webbrowser
from datetime import datetime
import os
from openai import OpenAI

# Point to the local server
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
model = "lmstudio-community/qwen2.5-7b-instruct"


def is_valid_url(url: str) -> bool:
    try:
        result = urlparse(url)
        return bool(result.netloc)  # Returns True if there's a valid network location
    except Exception:
        return False


def open_safe_url(url: str) -> dict:
    # List of allowed domains (expand as needed)
    SAFE_DOMAINS = {
        "lmstudio.ai",
        "github.com",
        "google.com",
        "wikipedia.org",
        "weather.com",
        "stackoverflow.com",
        "python.org",
        "docs.python.org",
    }

    try:
        # Add http:// if no scheme is present
        if not url.startswith(('http://', 'https://')):
            url = 'http://' + url

        # Validate URL format
        if not is_valid_url(url):
            return {"status": "error", "message": f"Invalid URL format: {url}"}

        # Parse the URL and check domain
        parsed_url = urlparse(url)
        domain = parsed_url.netloc.lower()
        base_domain = ".".join(domain.split(".")[-2:])

        if base_domain in SAFE_DOMAINS:
            webbrowser.open(url)
            return {"status": "success", "message": f"Opened {url} in browser"}
        else:
            return {
                "status": "error",
                "message": f"Domain {domain} not in allowed list",
            }
    except Exception as e:
        return {"status": "error", "message": str(e)}


def get_current_time() -> dict:
    """Get the current system time with timezone information"""
    try:
        current_time = datetime.now()
        timezone = datetime.now().astimezone().tzinfo
        formatted_time = current_time.strftime("%Y-%m-%d %H:%M:%S %Z")
        return {
            "status": "success",
            "time": formatted_time,
            "timezone": str(timezone),
            "timestamp": current_time.timestamp(),
        }
    except Exception as e:
        return {"status": "error", "message": str(e)}


def analyze_directory(path: str = ".") -> dict:
    """Count and categorize files in a directory"""
    try:
        stats = {
            "total_files": 0,
            "total_dirs": 0,
            "file_types": {},
            "total_size_bytes": 0,
        }

        for entry in os.scandir(path):
            if entry.is_file():
                stats["total_files"] += 1
                ext = os.path.splitext(entry.name)[1].lower() or "no_extension"
                stats["file_types"][ext] = stats["file_types"].get(ext, 0) + 1
                stats["total_size_bytes"] += entry.stat().st_size
            elif entry.is_dir():
                stats["total_dirs"] += 1
                # Add size of directory contents
                for root, _, files in os.walk(entry.path):
                    for file in files:
                        try:
                            stats["total_size_bytes"] += os.path.getsize(os.path.join(root, file))
                        except (OSError, FileNotFoundError):
                            continue

        return {"status": "success", "stats": stats, "path": os.path.abspath(path)}
    except Exception as e:
        return {"status": "error", "message": str(e)}


tools = [
    {
        "type": "function",
        "function": {
            "name": "open_safe_url",
            "description": "Open a URL in the browser if it's deemed safe",
            "parameters": {
                "type": "object",
                "properties": {
                    "url": {
                        "type": "string",
                        "description": "The URL to open",
                    },
                },
                "required": ["url"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Get the current system time with timezone information",
            "parameters": {
                "type": "object",
                "properties": {},
                "required": [],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "analyze_directory",
            "description": "Analyze the contents of a directory, counting files and folders",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {
                        "type": "string",
                        "description": "The directory path to analyze. Defaults to current directory if not specified.",
                    },
                },
                "required": [],
            },
        },
    },
]


def process_tool_calls(response, messages):
    """Process multiple tool calls and return the final response and updated messages"""
    # Get all tool calls from the response
    tool_calls = response.choices[0].message.tool_calls

    # Create the assistant message with tool calls
    assistant_tool_call_message = {
        "role": "assistant",
        "tool_calls": [
            {
                "id": tool_call.id,
                "type": tool_call.type,
                "function": tool_call.function,
            }
            for tool_call in tool_calls
        ],
    }

    # Add the assistant's tool call message to the history
    messages.append(assistant_tool_call_message)

    # Process each tool call and collect results
    tool_results = []
    for tool_call in tool_calls:
        # For functions with no arguments, use empty dict
        arguments = (
            json.loads(tool_call.function.arguments)
            if tool_call.function.arguments.strip()
            else {}
        )

        # Determine which function to call based on the tool call name
        if tool_call.function.name == "open_safe_url":
            result = open_safe_url(arguments["url"])
        elif tool_call.function.name == "get_current_time":
            result = get_current_time()
        elif tool_call.function.name == "analyze_directory":
            path = arguments.get("path", ".")
            result = analyze_directory(path)
        else:
            # llm tried to call a function that doesn't exist, skip
            continue

        # Add the result message
        tool_result_message = {
            "role": "tool",
            "content": json.dumps(result),
            "tool_call_id": tool_call.id,
        }
        tool_results.append(tool_result_message)
        messages.append(tool_result_message)

    # Get the final response
    final_response = client.chat.completions.create(
        model=model,
        messages=messages,
    )

    return final_response


def chat():
    messages = [
        {
            "role": "system",
            "content": "You are a helpful assistant that can open safe web links, tell the current time, and analyze directory contents. Use these capabilities whenever they might be helpful.",
        }
    ]

    print(
        "Assistant: Hello! I can help you open safe web links, tell you the current time, and analyze directory contents. What would you like me to do?"
    )
    print("(Type 'quit' to exit)")

    while True:
        # Get user input
        user_input = input("\nYou: ").strip()

        # Check for quit command
        if user_input.lower() == "quit":
            print("Assistant: Goodbye!")
            break

        # Add user message to conversation
        messages.append({"role": "user", "content": user_input})

        try:
            # Get initial response
            response = client.chat.completions.create(
                model=model,
                messages=messages,
                tools=tools,
            )

            # Check if the response includes tool calls
            if response.choices[0].message.tool_calls:
                # Process all tool calls and get final response
                final_response = process_tool_calls(response, messages)

                print("\nAssistant:", final_response.choices[0].message.content)

                # Add assistant's final response to messages
                messages.append(
                    {
                        "role": "assistant",
                        "content": final_response.choices[0].message.content,
                    }
                )
            else:
                # If no tool call, just print the response
                print("\nAssistant:", response.choices[0].message.content)

                # Add assistant's response to messages
                messages.append(
                    {
                        "role": "assistant",
                        "content": response.choices[0].message.content,
                    }
                )
        except Exception as e:
            print(f"\nAn error occurred: {str(e)}")
            exit(1)


if __name__ == "__main__":
    chat()
Running this script from the console lets you chat with the agent:
→ % python agent-example.py
Assistant: Hello! I can help you open safe web links, tell you the current time, and analyze directory contents. What would you like me to do?
(Type 'quit' to exit)

You: What time is it?

Assistant: The current time is 14:11:40 (EST) as of November 6, 2024.

You: What time is it now?

Assistant: The current time is 14:13:59 (EST) as of November 6, 2024.

You: Open lmstudio.ai

Assistant: The link to lmstudio.ai has been opened in your default web browser.

You: What's in my current directory?

Assistant: Your current directory at `/Users/matt/project` contains a total of 14 files and 8 directories. Here's the breakdown:

- Files without an extension: 3
- `.mjs` files: 2
- `.ts` (TypeScript) files: 3
- Markdown (`md`) file: 1
- JSON files: 4
- TOML file: 1

The total size of these items is 1,566,990,604 bytes.

You: Thank you!

Assistant: You're welcome! If you have any other questions or need further assistance, feel free to ask.

You:
Streaming

When streaming through /v1/chat/completions (stream=true), tool calls are sent in chunks. Function names and arguments are sent in pieces via chunk.choices[0].delta.tool_calls.function.name and chunk.choices[0].delta.tool_calls.function.arguments.
For example, to call get_current_weather(location="San Francisco"), the streamed ChoiceDeltaToolCall in each chunk.choices[0].delta.tool_calls[0] object will look like:
ChoiceDeltaToolCall(index=0, id='814890118', function=ChoiceDeltaToolCallFunction(arguments='', name='get_current_weather'), type='function')
ChoiceDeltaToolCall(index=0, id=None, function=ChoiceDeltaToolCallFunction(arguments='{"', name=None), type=None)
ChoiceDeltaToolCall(index=0, id=None, function=ChoiceDeltaToolCallFunction(arguments='location', name=None), type=None)
ChoiceDeltaToolCall(index=0, id=None, function=ChoiceDeltaToolCallFunction(arguments='":"', name=None), type=None)
ChoiceDeltaToolCall(index=0, id=None, function=ChoiceDeltaToolCallFunction(arguments='San Francisco', name=None), type=None)
ChoiceDeltaToolCall(index=0, id=None, function=ChoiceDeltaToolCallFunction(arguments='"}', name=None), type=None)
These chunks must be accumulated throughout the stream to form the complete function signature for execution.
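A minimal sketch of that accumulation, assuming the OpenAI Python client and a stream object from a chat.completions.create(..., stream=True) call (the full chatbot below does the same thing with more bookkeeping):

# Accumulate streamed tool-call fragments into complete calls
tool_calls = []
for chunk in stream:
    delta = chunk.choices[0].delta
    if not delta.tool_calls:
        continue
    for tc in delta.tool_calls:
        # tc.index tells us which in-progress tool call this fragment belongs to
        while len(tool_calls) <= tc.index:
            tool_calls.append({"id": "", "name": "", "arguments": ""})
        tool_calls[tc.index]["id"] += tc.id or ""
        tool_calls[tc.index]["name"] += tc.function.name or ""
        tool_calls[tc.index]["arguments"] += tc.function.arguments or ""
# After the stream ends, each entry holds a complete name and arguments string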
The example below shows how to build a simple tool-enhanced chatbot using the /v1/chat/completions streaming endpoint (stream=true).
tool-streaming-chatbot.py

from openai import OpenAI
import time

client = OpenAI(base_url="http://127.0.0.1:1234/v1", api_key="lm-studio")
MODEL = "lmstudio-community/qwen2.5-7b-instruct"

TIME_TOOL = {
    "type": "function",
    "function": {
        "name": "get_current_time",
        "description": "Get the current time, only if asked",
        "parameters": {"type": "object", "properties": {}},
    },
}

def get_current_time():
    return {"time": time.strftime("%H:%M:%S")}

def process_stream(stream, add_assistant_label=True):
    """Handle streaming responses from the API"""
    collected_text = ""
    tool_calls = []
    first_chunk = True

    for chunk in stream:
        delta = chunk.choices[0].delta

        # Handle regular text output
        if delta.content:
            if first_chunk:
                print()
                if add_assistant_label:
                    print("Assistant:", end=" ", flush=True)
                first_chunk = False
            print(delta.content, end="", flush=True)
            collected_text += delta.content

        # Handle tool calls
        elif delta.tool_calls:
            for tc in delta.tool_calls:
                if len(tool_calls) <= tc.index:
                    tool_calls.append({
                        "id": "",
                        "type": "function",
                        "function": {"name": "", "arguments": ""}
                    })
                tool_calls[tc.index] = {
                    "id": (tool_calls[tc.index]["id"] + (tc.id or "")),
                    "type": "function",
                    "function": {
                        "name": (tool_calls[tc.index]["function"]["name"] + (tc.function.name or "")),
                        "arguments": (tool_calls[tc.index]["function"]["arguments"] + (tc.function.arguments or ""))
                    }
                }
    return collected_text, tool_calls

def chat_loop():
    messages = []
    print("Assistant: Hi! I am an AI agent empowered with the ability to tell the current time (Type 'quit' to exit)")

    while True:
        user_input = input("\nYou: ").strip()
        if user_input.lower() == "quit":
            break

        messages.append({"role": "user", "content": user_input})

        # Get initial response
        response_text, tool_calls = process_stream(
            client.chat.completions.create(
                model=MODEL,
                messages=messages,
                tools=[TIME_TOOL],
                stream=True,
                temperature=0.2
            )
        )

        if not tool_calls:
            print()

        text_in_first_response = len(response_text) > 0
        if text_in_first_response:
            messages.append({"role": "assistant", "content": response_text})

        # Handle tool calls if any
        if tool_calls:
            tool_name = tool_calls[0]["function"]["name"]
            print()
            if not text_in_first_response:
                print("Assistant:", end=" ", flush=True)
            print(f"**Calling Tool: {tool_name}**")
            messages.append({"role": "assistant", "tool_calls": tool_calls})

            # Execute tool calls
            for tool_call in tool_calls:
                if tool_call["function"]["name"] == "get_current_time":
                    result = get_current_time()
                    messages.append({
                        "role": "tool",
                        "content": str(result),
                        "tool_call_id": tool_call["id"]
                    })

            # Get final response after tool execution
            final_response, _ = process_stream(
                client.chat.completions.create(
                    model=MODEL,
                    messages=messages,
                    stream=True
                ),
                add_assistant_label=False
            )

            if final_response:
                print()
                messages.append({"role": "assistant", "content": final_response})

if __name__ == "__main__":
    chat_loop()
You can chat with the bot by running this script from the console:
→ % python tool-streaming-chatbot.py
Assistant: Hi! I am an AI agent empowered with the ability to tell the current time (Type 'quit' to exit)

You: Tell me a joke, then tell me the current time

Assistant: Sure! Here's a light joke for you: Why don't scientists trust atoms? Because they make up everything.

Now, let me get the current time for you.

**Calling Tool: get_current_time**

The current time is 18:49:31. Enjoy your day!

You:
Community

Chat with other LM Studio users, and discuss LLMs, hardware, and more, on the LM Studio Discord server.