
Building LLM Applications with LangChain: Chains

This is the fourth part of Andrew Ng's tutorial series on building large language model applications with LangChain. It covers chains in LangChain: the concepts behind several chain types, their use cases, and hands-on examples.

1. The concept of a chain: a chain typically combines a large language model (LLM) with a prompt to perform a sequence of operations on text or other data, and it can handle multiple inputs at once. In LangChain, a chain is a series of components connected together to accomplish a specific goal. A chain in a chatbot application might use an LLM to understand user input, a memory component to store past interactions, and a decision component to produce a relevant response.
LangChain's chain module is one of the framework's core components for building conversational and task-oriented applications; it is mainly responsible for flow control and data passing. Key aspects of the chain module:
Flow control: chains are LangChain's core flow-control units; they string components and steps together and define the application's execution logic.
Data passing: chains carry context and data along, so different modules can share information.
Composition and nesting: chains can be nested and composed to build complex flows such as sequential execution, conditional branching, and loops.
Reusability: chains can be defined as reusable modules and reused across application scenarios.
Flexibility: LangChain supports many chain types, such as simple chains, index chains, and conversation chains, to meet different needs.
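The flow-control and data-passing ideas above can be sketched without any LangChain dependency. The `Step` class below is a hypothetical illustration (not a LangChain API) of a chain step that reads named inputs from a shared dict and writes its output back, so later steps can see earlier results:

```python
# A minimal sketch of the "chain step" idea: each step reads named
# inputs from a shared dict and writes a named output back into it.
class Step:
    def __init__(self, fn, input_keys, output_key):
        self.fn = fn                  # the work this step performs
        self.input_keys = input_keys  # which keys it reads
        self.output_key = output_key  # which key it writes

    def run(self, data):
        args = [data[k] for k in self.input_keys]
        data[self.output_key] = self.fn(*args)
        return data

# Two steps sharing data through the dict, like chained components.
upper = Step(str.upper, ["text"], "upper_text")
count = Step(len, ["upper_text"], "length")

data = {"text": "hello"}
count.run(upper.run(data))
print(data)  # {'text': 'hello', 'upper_text': 'HELLO', 'length': 5}
```

The shared dict plays the role of the chain's context: `count` never talks to `upper` directly, it only reads the key `upper` wrote.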

Creating and composing chains:

Single chain: a chain wrapping one specific capability, such as text preprocessing or model inference.
Custom chain: using the built-in base chain classes, developers can customize a chain's inputs, outputs, and processing logic.
Sequential composition: multiple chains run in order, with each chain's output becoming the next chain's input.
Parallel composition: several chains run at the same time, and their outputs are merged or used selectively.
Nested chains: one chain calls another chain internally, enabling more complex flow control.
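Sequential and nested composition can be illustrated with plain functions. This is a conceptual sketch, not LangChain code: a "chain" here is just a function from input to output, a sequential composition threads one output into the next input, and a nested chain calls another chain inside itself.

```python
# Conceptual sketch of chain composition (not a LangChain API).
def sequential(*chains):
    """Run chains in order; each output feeds the next input."""
    def combined(x):
        for chain in chains:
            x = chain(x)
        return x
    return combined

# Two toy "chains" standing in for LLM calls.
name_company = lambda product: f"{product} Co."
describe = lambda company: f"{company} sells quality goods."

pipeline = sequential(name_company, describe)
print(pipeline("Sheets"))  # Sheets Co. sells quality goods.

# Nesting: a chain may invoke another chain inside its own logic.
shout_pipeline = lambda product: pipeline(product).upper()
print(shout_pipeline("Sheets"))  # SHEETS CO. SELLS QUALITY GOODS.
```

This mirrors what SimpleSequentialChain does later in this article, with `LLMChain` objects in place of the toy lambdas.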

Core chain types:

LLMChain: a chain that interacts directly with an LLM, used for generating and understanding natural language
SimpleSequentialChain: a simple sequential chain that executes a series of steps in order
SequentialChain: a sequential chain that can contain multiple steps, each of which may itself be a chain
RouterChain: makes routing decisions, choosing which chain to run based on the input
TransformChain: a data-processing chain that transforms or preprocesses the input data
By composing and nesting these chains, LangChain can implement complex natural-language applications with a high degree of extensibility and maintainability.
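The routing idea behind RouterChain can also be sketched in a few lines of plain Python (hypothetical helper names, no LangChain): a router inspects the input, picks a named destination chain, and falls back to a default when nothing matches.

```python
# Conceptual sketch of RouterChain-style dispatch (not LangChain code).
def make_router(destinations, default):
    """destinations: {name: (predicate, handler)}; default: fallback handler."""
    def route(question):
        for name, (matches, handler) in destinations.items():
            if matches(question):
                return name, handler(question)
        return "DEFAULT", default(question)
    return route

route = make_router(
    {
        "math": (lambda q: any(ch.isdigit() for ch in q),
                 lambda q: "math answer"),
        "physics": (lambda q: "radiation" in q,
                    lambda q: "physics answer"),
    },
    default=lambda q: "general answer",
)

print(route("what is 2 + 2"))                  # ('math', 'math answer')
print(route("What is black body radiation?"))  # ('physics', 'physics answer')
print(route("Why do cells contain DNA?"))      # ('DEFAULT', 'general answer')
```

In the real RouterChain (section 4 below), the hard-coded predicates are replaced by an LLM that reads the candidate descriptions and decides where to send the input.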

2. LLMChain: the most basic yet powerful chain type. Import the chat model, the chat prompt template, and LLMChain; initialize the language model and the prompt, then combine the two into a chain. In the example below, the chain takes a product description as input and, when run, returns a matching company name.
3. Sequential Chains

  • SimpleSequentialChain: runs a series of chains in order, where each sub-chain takes exactly one input and returns exactly one output. For example: first generate a company name from a product, then generate a description from that company name; the output of the first chain becomes the input of the second.
import warnings
warnings.filterwarnings('ignore')
import os
from langchain.chat_models import ChatOpenAI      # model
from langchain.prompts import ChatPromptTemplate  # prompt
from langchain.chains import LLMChain             # chain

llm = ChatOpenAI(
    openai_api_base="https://dashscope.aliyuncs.com/compatible-mode/v1",
    openai_api_key=os.getenv("DASHSCOPE_API_KEY"),
    model_name="qwen-plus",  # model name
    temperature=0.9)

import pandas as pd
df = pd.read_csv('Data.csv')
df.head()

prompt = ChatPromptTemplate.from_template(
    "What is the best name to describe "
    "a company that makes {product}?")
chain = LLMChain(llm=llm, prompt=prompt)
product = "Queen Size Sheet Set"
chain.run(product)     # older call style
chain.invoke(product)  # invoke() is the current call style
from langchain.chains import SimpleSequentialChain
llm = ChatOpenAI(
    openai_api_base="https://dashscope.aliyuncs.com/compatible-mode/v1",
    openai_api_key=os.getenv("DASHSCOPE_API_KEY"),
    model_name="qwen-plus",  # model name
    temperature=0.9)

# prompt template 1
first_prompt = ChatPromptTemplate.from_template(
    "What is the best name to describe "
    "a company that makes {product}?")
# Chain 1
chain_one = LLMChain(llm=llm, prompt=first_prompt)

# prompt template 2
second_prompt = ChatPromptTemplate.from_template(
    "Write a 20 words description for the following "
    "company:{company_name}")
# chain 2
chain_two = LLMChain(llm=llm, prompt=second_prompt)

overall_simple_chain = SimpleSequentialChain(chains=[chain_one, chain_two], verbose=True)
overall_simple_chain.invoke(product)

  • SequentialChain: handles chains with multiple inputs and outputs. Build several chains, e.g. translate a review, summarize it, detect its language, and write a response based on the summary and the language; the input and output keys of the chains must match exactly. Given a review as input, the overall chain returns the English translation, a one-sentence summary, the detected language, and a follow-up response written in the original language.

from langchain.chains import SequentialChain
llm = ChatOpenAI(
    openai_api_base="https://dashscope.aliyuncs.com/compatible-mode/v1",
    openai_api_key=os.getenv("DASHSCOPE_API_KEY"),
    model_name="qwen-plus",  # model name
    temperature=0.9)

# prompt template 1: translate to english
first_prompt = ChatPromptTemplate.from_template(
    "Translate the following review to english:"
    "\n\n{Review}")
# chain 1: input= Review and output= English_Review
chain_one = LLMChain(llm=llm, prompt=first_prompt, output_key="English_Review")

# prompt template 2: one-sentence summary
second_prompt = ChatPromptTemplate.from_template(
    "Can you summarize the following review in 1 sentence:"
    "\n\n{English_Review}")
# chain 2: input= English_Review and output= summary
chain_two = LLMChain(llm=llm, prompt=second_prompt, output_key="summary")

# prompt template 3: detect the review's language
third_prompt = ChatPromptTemplate.from_template(
    "What language is the following review:\n\n{Review}")
# chain 3: input= Review and output= language
chain_three = LLMChain(llm=llm, prompt=third_prompt, output_key="language")

# prompt template 4: follow up message
fourth_prompt = ChatPromptTemplate.from_template(
    "Write a follow up response to the following "
    "summary in the specified language:"
    "\n\nSummary: {summary}\n\nLanguage: {language}")
# chain 4: input= summary, language and output= followup_message
chain_four = LLMChain(llm=llm, prompt=fourth_prompt, output_key="followup_message")

# overall_chain: input= Review
# and output= English_Review, summary, language, followup_message
overall_chain = SequentialChain(
    chains=[chain_one, chain_two, chain_three, chain_four],
    input_variables=["Review"],
    output_variables=["English_Review", "summary", "language", "followup_message"],
    verbose=True)
review = df.Review[5]
overall_chain(review)

4. Router Chain (MultiPromptChain): routes an input to a specific sub-chain based on its content. Define prompt templates for the different subjects; import MultiPromptChain, LLMRouterChain, and RouterOutputParser; create the destination chains and a default chain, define the router template, build the router chain, and combine everything into the overall chain. When a question belongs to one of the subjects (physics, math, etc.), it is routed to the corresponding sub-chain; when it matches none of them, it falls through to the default chain.

physics_template = """You are a very smart physics professor. \
You are great at answering questions about physics in a concise \
and easy to understand manner. \
When you don't know the answer to a question you admit \
that you don't know.

Here is a question:
{input}"""

math_template = """You are a very good mathematician. \
You are great at answering math questions. \
You are so good because you are able to break down \
hard problems into their component parts, \
answer the component parts, and then put them together \
to answer the broader question.

Here is a question:
{input}"""

history_template = """You are a very good historian. \
You have an excellent knowledge of and understanding of people, \
events and contexts from a range of historical periods. \
You have the ability to think, reflect, debate, discuss and \
evaluate the past. You have a respect for historical evidence \
and the ability to make use of it to support your explanations \
and judgements.

Here is a question:
{input}"""

computerscience_template = """You are a successful computer scientist. \
You have a passion for creativity, collaboration, \
forward-thinking, confidence, strong problem-solving capabilities, \
understanding of theories and algorithms, and excellent communication \
skills. You are great at answering coding questions. \
You are so good because you know how to solve a problem by \
describing the solution in imperative steps \
that a machine can easily interpret and you know how to \
choose a solution that has a good balance between \
time complexity and space complexity.

Here is a question:
{input}"""

prompt_infos = [
    {"name": "physics", "description": "Good for answering questions about physics", "prompt_template": physics_template},
    {"name": "math", "description": "Good for answering math questions", "prompt_template": math_template},
    {"name": "History", "description": "Good for answering history questions", "prompt_template": history_template},
    {"name": "computer science", "description": "Good for answering computer science questions", "prompt_template": computerscience_template},
]

from langchain.chains.router import MultiPromptChain
from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.prompts import PromptTemplate

llm = ChatOpenAI(
    openai_api_base="https://dashscope.aliyuncs.com/compatible-mode/v1",
    openai_api_key=os.getenv("DASHSCOPE_API_KEY"),
    model_name="qwen-plus",  # model name
    temperature=0.9)

# build one destination chain per subject
destination_chains = {}
for p_info in prompt_infos:
    name = p_info["name"]
    prompt_template = p_info["prompt_template"]
    prompt = ChatPromptTemplate.from_template(template=prompt_template)
    chain = LLMChain(llm=llm, prompt=prompt)
    destination_chains[name] = chain

destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)

# fallback chain for questions that match no subject
default_prompt = ChatPromptTemplate.from_template("{input}")
default_chain = LLMChain(llm=llm, prompt=default_prompt)

MULTI_PROMPT_ROUTER_TEMPLATE = """Given a raw text input to a \
language model select the model prompt best suited for the input. \
You will be given the names of the available prompts and a \
description of what the prompt is best suited for. \
You may also revise the original input if you think that revising \
it will ultimately lead to a better response from the language model.

<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
```json
{{{{
    "destination": string \ name of the prompt to use or "DEFAULT"
    "next_inputs": string \ a potentially modified version of the original input
}}}}
```

REMEMBER: "destination" MUST be one of the candidate prompt \
names specified below OR it can be "DEFAULT" if the input is not \
well suited for any of the candidate prompts.
REMEMBER: "next_inputs" can just be the original input \
if you don't think any modifications are needed.

<< CANDIDATE PROMPTS >>
{destinations}

<< INPUT >>
{{input}}

<< OUTPUT (remember to wrap the output with ```json (output)```)>>"""

router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations_str)
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)
router_chain = LLMRouterChain.from_llm(llm, router_prompt)

chain = MultiPromptChain(
    router_chain=router_chain,
    destination_chains=destination_chains,
    default_chain=default_chain,
    verbose=True)

chain.invoke("What is black body radiation?")
chain.invoke("what is 2 + 2")
chain.invoke("Why does every cell in our body contain DNA?")
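The router prompt asks the model to wrap its decision in a ```json code block, which RouterOutputParser then turns into a destination name and a (possibly revised) input. The parser's actual implementation differs, but the extraction step can be sketched with the standard library alone:

```python
import json
import re

def parse_router_output(text):
    """Pull the JSON object out of a fenced json block and load it."""
    match = re.search(r"```json\s*(\{.*?\})\s*```", text, re.DOTALL)
    if match is None:
        raise ValueError("no json block found in model output")
    return json.loads(match.group(1))

# A fabricated model reply; the fence markers are built with chr(96)
# so they do not collide with this article's own formatting.
fence = chr(96) * 3
reply = (fence + 'json\n'
         '{"destination": "physics", '
         '"next_inputs": "What is black body radiation?"}\n' + fence)

decision = parse_router_output(reply)
print(decision["destination"])  # physics
```

If the model fails to produce a valid block, the real parser raises an error as well, which is why the router template repeats the formatting instructions twice.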