
Implementing constructor callbacks in LangChain

Contents

    • Code
    • Code Explanation
      • Code Structure
      • Code Functionality
    • Similar Example

Code

from typing import Any, Dict, List

from langchain_openai import ChatOpenAI
from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.messages import BaseMessage
from langchain_core.outputs import LLMResult
from langchain_core.prompts import ChatPromptTemplate


class LoggingHandler(BaseCallbackHandler):
    def on_chat_model_start(
        self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs
    ) -> None:
        print("Chat model started")

    def on_llm_end(self, response: LLMResult, **kwargs) -> None:
        print(f"Chat model ended, response: {response}")

    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs
    ) -> None:
        chain_name = serialized.get("name") if serialized else "Unknown"
        print(f"Chain {chain_name} started")

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs) -> None:
        print(f"Chain ended, outputs: {outputs}")


callbacks = [LoggingHandler()]
llm = ChatOpenAI(
    temperature=0,
    model="GLM-4-Flash-250414",
    openai_api_key="your api key",
    openai_api_base="https://open.bigmodel.cn/api/paas/v4/",
)
prompt = ChatPromptTemplate.from_template("What is 1 + {number}?")
chain = prompt | llm
chain.invoke({"number": "2"}, config={"callbacks": callbacks})
Chain Unknown started
Chain ChatPromptTemplate started
Chain ended, outputs: messages=[HumanMessage(content='What is 1 + 2?', additional_kwargs={}, response_metadata={})]
Chat model started
Chat model ended, response: generations=[[ChatGeneration(text='1 + 2 = 3', generation_info={'finish_reason': 'stop', 'logprobs': None}, message=AIMessage(content='1 + 2 = 3', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 15, 'total_tokens': 25, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'GLM-4-Flash-250414 ', 'system_fingerprint': None, 'id': '20250504114226d2d879c98ff0426b', 'finish_reason': 'stop', 'logprobs': None}, id='run-611d18e4-0d44-486e-b135-95ce31f092de-0', usage_metadata={'input_tokens': 15, 'output_tokens': 10, 'total_tokens': 25, 'input_token_details': {}, 'output_token_details': {}}))]] llm_output={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 15, 'total_tokens': 25, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'GLM-4-Flash-250414 ', 'system_fingerprint': None, 'id': '20250504114226d2d879c98ff0426b'} run=None type='LLMResult'
Chain ended, outputs: content='1 + 2 = 3' additional_kwargs={'refusal': None} response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 15, 'total_tokens': 25, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'GLM-4-Flash-250414 ', 'system_fingerprint': None, 'id': '20250504114226d2d879c98ff0426b', 'finish_reason': 'stop', 'logprobs': None} id='run-611d18e4-0d44-486e-b135-95ce31f092de-0' usage_metadata={'input_tokens': 15, 'output_tokens': 10, 'total_tokens': 25, 'input_token_details': {}, 'output_token_details': {}}
AIMessage(content='1 + 2 = 3', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 15, 'total_tokens': 25, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'GLM-4-Flash-250414 ', 'system_fingerprint': None, 'id': '20250504114226d2d879c98ff0426b', 'finish_reason': 'stop', 'logprobs': None}, id='run-611d18e4-0d44-486e-b135-95ce31f092de-0', usage_metadata={'input_tokens': 15, 'output_tokens': 10, 'total_tokens': 25, 'input_token_details': {}, 'output_token_details': {}})

Code Explanation

Code Structure

  1. Module imports:

    • langchain_openai, langchain_core.callbacks, langchain_core.messages, langchain_core.outputs, and langchain_core.prompts provide the OpenAI-compatible chat model and the callback machinery.
  2. The LoggingHandler class:

    • Inherits from BaseCallbackHandler to handle callbacks at the different stages.
    • on_chat_model_start: prints a message when the chat model starts.
    • on_llm_end: prints the response when the chat model finishes.
    • on_chain_start: prints the chain's name when a chain starts.
    • on_chain_end: prints the outputs when a chain finishes.
  3. Callback setup:

    • Creates a LoggingHandler instance and puts it in a callbacks list.
  4. The ChatOpenAI instance:

    • Creates a ChatOpenAI instance, setting the temperature, model, API key, and base URL.
  5. ChatPromptTemplate:

    • Builds a chat prompt from the template "What is 1 + {number}?".
  6. Chain creation and invocation:

    • The pipe operator | connects the prompt and the chat model into a chain.
    • The chain's invoke method is called with {"number": "2"} and the callback configuration.

Code Functionality

The main purpose of this code is to build a simple chat chain with the langchain library and log the model's start and end events through the callback mechanism. The LoggingHandler class records lifecycle events for both the model and the chain. By combining ChatPromptTemplate and ChatOpenAI, the code poses a simple arithmetic question and retrieves its answer.

Similar Example

from typing import Any, Dict, List
import time

from langchain_openai import ChatOpenAI
from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.messages import BaseMessage
from langchain_core.outputs import LLMResult
from langchain_core.prompts import ChatPromptTemplate


class StatsHandler(BaseCallbackHandler):
    def __init__(self):
        self.call_count = 0
        self.total_time = 0.0

    def on_chain_start(self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs) -> None:
        # Note: nested runs (the outer sequence and the inner prompt) each call this,
        # so start_time is overwritten by the most recent start.
        self.start_time = time.time()
        print(f"Chain started with inputs: {inputs}")

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs) -> None:
        end_time = time.time()
        duration = end_time - self.start_time
        self.call_count += 1
        self.total_time += duration
        average_time = self.total_time / self.call_count
        print(f"Chain ended with outputs: {outputs}")
        print(f"Call count: {self.call_count}, Average response time: {average_time:.2f} seconds")


callbacks = [StatsHandler()]
llm = ChatOpenAI(
    temperature=0,
    model="GLM-4-Flash-250414",
    openai_api_key="your api key",
    openai_api_base="https://open.bigmodel.cn/api/paas/v4/",
)
prompt = ChatPromptTemplate.from_template("What is 1 + {number}?")
chain = prompt | llm
chain.invoke({"number": "2"}, config={"callbacks": callbacks})
Chain started with inputs: {'number': '2'}
Chain started with inputs: {'number': '2'}
Chain ended with outputs: messages=[HumanMessage(content='What is 1 + 2?', additional_kwargs={}, response_metadata={})]
Call count: 1, Average response time: 0.00 seconds
Chain ended with outputs: content='1 + 2 = 3' additional_kwargs={'refusal': None} response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 15, 'total_tokens': 25, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'GLM-4-Flash-250414', 'system_fingerprint': None, 'id': '2025050411475196802f6ee9134ac8', 'finish_reason': 'stop', 'logprobs': None} id='run-7c41ec92-dbc1-4fbc-8dd8-708379db745f-0' usage_metadata={'input_tokens': 15, 'output_tokens': 10, 'total_tokens': 25, 'input_token_details': {}, 'output_token_details': {}}
Call count: 2, Average response time: 0.25 seconds
AIMessage(content='1 + 2 = 3', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 15, 'total_tokens': 25, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'GLM-4-Flash-250414', 'system_fingerprint': None, 'id': '2025050411475196802f6ee9134ac8', 'finish_reason': 'stop', 'logprobs': None}, id='run-7c41ec92-dbc1-4fbc-8dd8-708379db745f-0', usage_metadata={'input_tokens': 15, 'output_tokens': 10, 'total_tokens': 25, 'input_token_details': {}, 'output_token_details': {}})
