Constructor callbacks in LangChain
Contents
- Code
- Code Explanation
- Code Structure
- Code Functionality
- Similar Example
Code
```python
from typing import Any, Dict, List

from langchain_openai import ChatOpenAI
from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.messages import BaseMessage
from langchain_core.outputs import LLMResult
from langchain_core.prompts import ChatPromptTemplate


class LoggingHandler(BaseCallbackHandler):
    def on_chat_model_start(
        self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs
    ) -> None:
        print("Chat model started")

    def on_llm_end(self, response: LLMResult, **kwargs) -> None:
        print(f"Chat model ended, response: {response}")

    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs
    ) -> None:
        chain_name = serialized.get("name") if serialized else "Unknown"
        print(f"Chain {chain_name} started")

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs) -> None:
        print(f"Chain ended, outputs: {outputs}")


callbacks = [LoggingHandler()]
llm = ChatOpenAI(
    temperature=0,
    model="GLM-4-Flash-250414",
    openai_api_key="your api key",
    openai_api_base="https://open.bigmodel.cn/api/paas/v4/",
)
prompt = ChatPromptTemplate.from_template("What is 1 + {number}?")
chain = prompt | llm
chain.invoke({"number": "2"}, config={"callbacks": callbacks})
```
Output:

```
Chain Unknown started
Chain ChatPromptTemplate started
Chain ended, outputs: messages=[HumanMessage(content='What is 1 + 2?', additional_kwargs={}, response_metadata={})]
Chat model started
Chat model ended, response: generations=[[ChatGeneration(text='1 + 2 = 3', generation_info={'finish_reason': 'stop', 'logprobs': None}, message=AIMessage(content='1 + 2 = 3', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 15, 'total_tokens': 25, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'GLM-4-Flash-250414', 'system_fingerprint': None, 'id': '20250504114226d2d879c98ff0426b', 'finish_reason': 'stop', 'logprobs': None}, id='run-611d18e4-0d44-486e-b135-95ce31f092de-0', usage_metadata={'input_tokens': 15, 'output_tokens': 10, 'total_tokens': 25, 'input_token_details': {}, 'output_token_details': {}}))]] llm_output={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 15, 'total_tokens': 25, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'GLM-4-Flash-250414', 'system_fingerprint': None, 'id': '20250504114226d2d879c98ff0426b'} run=None type='LLMResult'
Chain ended, outputs: content='1 + 2 = 3' additional_kwargs={'refusal': None} response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 15, 'total_tokens': 25, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'GLM-4-Flash-250414', 'system_fingerprint': None, 'id': '20250504114226d2d879c98ff0426b', 'finish_reason': 'stop', 'logprobs': None} id='run-611d18e4-0d44-486e-b135-95ce31f092de-0' usage_metadata={'input_tokens': 15, 'output_tokens': 10, 'total_tokens': 25, 'input_token_details': {}, 'output_token_details': {}}
AIMessage(content='1 + 2 = 3', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 15, 'total_tokens': 25, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'GLM-4-Flash-250414', 'system_fingerprint': None, 'id': '20250504114226d2d879c98ff0426b', 'finish_reason': 'stop', 'logprobs': None}, id='run-611d18e4-0d44-486e-b135-95ce31f092de-0', usage_metadata={'input_tokens': 15, 'output_tokens': 10, 'total_tokens': 25, 'input_token_details': {}, 'output_token_details': {}})
```
Code Explanation
Code Structure
- Imports: langchain_openai, langchain_core.callbacks, langchain_core.messages, langchain_core.outputs, and langchain_core.prompts supply the OpenAI-compatible chat model, the callback machinery, the message and output types, and the prompt templates.
- The LoggingHandler class inherits from BaseCallbackHandler and overrides hooks for different lifecycle stages:
  - on_chat_model_start: prints a message when the chat model starts.
  - on_llm_end: prints the response when the chat model finishes.
  - on_chain_start: prints the chain's name when a chain starts.
  - on_chain_end: prints the outputs when a chain finishes.
- Callback setup: a LoggingHandler instance is created and placed in the callbacks list.
- ChatOpenAI instance: created with the temperature, model name, API key, and base URL.
- ChatPromptTemplate: a chat prompt built from the template "What is 1 + {number}?".
- Chain creation and invocation: the pipe operator | composes the prompt and the chat model into a chain, and the chain's invoke method is called with the input {"number": "2"} and the callbacks passed in its config.
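The lifecycle hooks described above are driven by a callback manager that fans each event out to every registered handler. The sketch below shows that dispatch pattern in plain Python; the CallbackManager, BaseHandler, and run_chain names are illustrative stand-ins, not LangChain's actual internals.

```python
class BaseHandler:
    """Handlers override only the hooks they care about; the rest are no-ops."""
    def on_chain_start(self, name, inputs):
        pass

    def on_chain_end(self, outputs):
        pass


class PrintHandler(BaseHandler):
    """Records a log line for each lifecycle event it receives."""
    def __init__(self):
        self.events = []

    def on_chain_start(self, name, inputs):
        self.events.append(f"Chain {name} started")

    def on_chain_end(self, outputs):
        self.events.append(f"Chain ended, outputs: {outputs}")


class CallbackManager:
    """Fans each lifecycle event out to every registered handler."""
    def __init__(self, handlers):
        self.handlers = handlers

    def emit(self, event, *args):
        for handler in self.handlers:
            getattr(handler, event)(*args)


def run_chain(inputs, callbacks):
    manager = CallbackManager(callbacks)
    manager.emit("on_chain_start", "demo", inputs)
    outputs = {"answer": "1 + 2 = 3"}   # stand-in for the real chain's work
    manager.emit("on_chain_end", outputs)
    return outputs


handler = PrintHandler()
run_chain({"number": "2"}, [handler])
print(handler.events[0])  # Chain demo started
```

Because the manager iterates over all handlers, several independent handlers (logging, timing, tracing) can observe the same run without knowing about each other.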
Code Functionality
The main purpose of this code is to build a simple chat chain with the langchain library and use the callback mechanism to record when the model and the chain start and finish. The LoggingHandler class logs lifecycle events for both the model and the chain. By combining ChatPromptTemplate and ChatOpenAI, the code poses a simple arithmetic question and retrieves the answer.
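Note the distinction the title points at: besides passing callbacks at request time through config (as above), LangChain also lets you attach callbacks in a component's constructor, where they are scoped to that one object rather than propagated to every step of a run. The toy model below illustrates that scoping rule in plain Python; Component and Chain are illustrative names, not LangChain classes.

```python
class Component:
    """A runnable-like object: constructor callbacks are scoped to it alone."""
    def __init__(self, name, callbacks=None):
        self.name = name
        self.constructor_callbacks = callbacks or []

    def invoke(self, value, config_callbacks=None):
        # Constructor callbacks fire only for this component;
        # config callbacks are merged in for the current run.
        for cb in self.constructor_callbacks + (config_callbacks or []):
            cb(f"{self.name} invoked")
        return value


class Chain:
    """Composes components; only config callbacks reach every child."""
    def __init__(self, steps):
        self.steps = steps

    def invoke(self, value, config_callbacks=None):
        for step in self.steps:
            value = step.invoke(value, config_callbacks)
        return value


log = []
prompt = Component("prompt")                     # no local callbacks
llm = Component("llm", callbacks=[log.append])   # constructor-scoped callback
chain = Chain([prompt, llm])

chain.invoke("What is 1 + 2?", config_callbacks=[log.append])
# The config callback fired for both steps; the constructor callback
# fired only for llm, so "llm invoked" appears twice in the log.
```

This mirrors the trade-off in LangChain's docs: constructor callbacks suit per-component concerns, while config callbacks suit run-wide tracing.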
Similar Example
```python
from typing import Any, Dict, List
import time

from langchain_openai import ChatOpenAI
from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.messages import BaseMessage
from langchain_core.outputs import LLMResult
from langchain_core.prompts import ChatPromptTemplate


class StatsHandler(BaseCallbackHandler):
    def __init__(self):
        self.call_count = 0
        self.total_time = 0

    def on_chain_start(self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs) -> None:
        # Note: a nested chain start overwrites start_time, so the outer
        # chain's duration is measured from the most recent start event.
        self.start_time = time.time()
        print(f"Chain started with inputs: {inputs}")

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs) -> None:
        end_time = time.time()
        duration = end_time - self.start_time
        self.call_count += 1
        self.total_time += duration
        average_time = self.total_time / self.call_count
        print(f"Chain ended with outputs: {outputs}")
        print(f"Call count: {self.call_count}, Average response time: {average_time:.2f} seconds")


callbacks = [StatsHandler()]
llm = ChatOpenAI(
    temperature=0,
    model="GLM-4-Flash-250414",
    openai_api_key="your api key",
    openai_api_base="https://open.bigmodel.cn/api/paas/v4/",
)
prompt = ChatPromptTemplate.from_template("What is 1 + {number}?")
chain = prompt | llm
chain.invoke({"number": "2"}, config={"callbacks": callbacks})
```
Output:

```
Chain started with inputs: {'number': '2'}
Chain started with inputs: {'number': '2'}
Chain ended with outputs: messages=[HumanMessage(content='What is 1 + 2?', additional_kwargs={}, response_metadata={})]
Call count: 1, Average response time: 0.00 seconds
Chain ended with outputs: content='1 + 2 = 3' additional_kwargs={'refusal': None} response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 15, 'total_tokens': 25, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'GLM-4-Flash-250414', 'system_fingerprint': None, 'id': '2025050411475196802f6ee9134ac8', 'finish_reason': 'stop', 'logprobs': None} id='run-7c41ec92-dbc1-4fbc-8dd8-708379db745f-0' usage_metadata={'input_tokens': 15, 'output_tokens': 10, 'total_tokens': 25, 'input_token_details': {}, 'output_token_details': {}}
Call count: 2, Average response time: 0.25 seconds
AIMessage(content='1 + 2 = 3', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 15, 'total_tokens': 25, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'GLM-4-Flash-250414', 'system_fingerprint': None, 'id': '2025050411475196802f6ee9134ac8', 'finish_reason': 'stop', 'logprobs': None}, id='run-7c41ec92-dbc1-4fbc-8dd8-708379db745f-0', usage_metadata={'input_tokens': 15, 'output_tokens': 10, 'total_tokens': 25, 'input_token_details': {}, 'output_token_details': {}})
```
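The output shows why the single start_time field is fragile: on_chain_start fires once for the outer chain and again for the inner ChatPromptTemplate, so the second start overwrites the first and the first measured duration collapses to 0.00 seconds. LangChain passes a unique run_id keyword to every callback hook, so a sturdier handler can key its start times by run. The sketch below shows the idea with the dispatch simulated in plain Python (simplified hook signatures, not LangChain's full ones).

```python
import time
import uuid


class PerRunStatsHandler:
    """Tracks one start time per run_id so nested chains don't clobber each other."""
    def __init__(self):
        self.start_times = {}
        self.durations = []

    def on_chain_start(self, run_id, **kwargs):
        self.start_times[run_id] = time.time()

    def on_chain_end(self, run_id, **kwargs):
        started = self.start_times.pop(run_id)
        self.durations.append(time.time() - started)


# Simulate a nested run: the outer chain starts, an inner chain
# starts and ends inside it, then the outer chain ends.
handler = PerRunStatsHandler()
outer, inner = uuid.uuid4(), uuid.uuid4()

handler.on_chain_start(run_id=outer)
handler.on_chain_start(run_id=inner)
time.sleep(0.05)                      # pretend the inner chain does work
handler.on_chain_end(run_id=inner)
time.sleep(0.05)                      # pretend the model call takes time
handler.on_chain_end(run_id=outer)

inner_t, outer_t = handler.durations
print(f"inner: {inner_t:.2f}s, outer: {outer_t:.2f}s")
```

Here the outer duration correctly covers the whole run instead of being reset by the inner chain's start.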