5. Advanced RAG with LangGraph (Adaptive RAG)

1. Data preparation

from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import Chroma

urls = [
    "https://lilianweng.github.io/posts/2023-06-23-agent/",
    "https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/",
    "https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/",
]

docs = [WebBaseLoader(url).load() for url in urls]
docs_list = [item for sublist in docs for item in sublist]

text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=250, chunk_overlap=0
)
doc_splits = text_splitter.split_documents(docs_list)

from langchain_community.embeddings import ZhipuAIEmbeddings
embed = ZhipuAIEmbeddings(
    model="Embedding-3",
    api_key="your api key",
)

# Add to vectorDB, embedding in small batches to stay within the embedding API's per-request limits
vectorstore = Chroma(
    collection_name="rag-chroma",
    embedding_function=embed,
    persist_directory="./chroma_db",
)
batch_size = 10
for i in range(0, len(doc_splits), batch_size):
    # Python slicing clamps to the end of the list, so no bounds check is needed
    vectorstore.add_documents(doc_splits[i : i + batch_size])

retriever = vectorstore.as_retriever()
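
As a quick smoke test (the query here is just an illustrative example), you can confirm the retriever returns chunks from the indexed posts:

sample_docs = retriever.invoke("What is task decomposition?")
print(len(sample_docs))                   # 4 by default for a Chroma retriever
print(sample_docs[0].page_content[:200])  # start of the top-ranked chunk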

2. The question_router LLM

### Router

from typing import Literal

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

from pydantic import BaseModel, Field
# Data model
class RouteQuery(BaseModel):
    """Route a user query to the most relevant datasource."""

    datasource: Literal["vectorstore", "web_search"] = Field(
        ...,
        description="Given a user question choose to route it to web search or a vectorstore.",
    )


from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    temperature=0,
    model="GLM-4-plus",
    openai_api_key="your api key",
    openai_api_base="https://open.bigmodel.cn/api/paas/v4/"
)
structured_llm_router = llm.with_structured_output(RouteQuery)

# Prompt
system = """You are an expert at routing a user question to a vectorstore or web search.
The vectorstore contains documents related to agents, prompt engineering, and adversarial attacks.
Use the vectorstore for questions on these topics. Otherwise, use web-search."""
route_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        ("human", "{question}"),
    ]
)

question_router = route_prompt | structured_llm_router
print(
    question_router.invoke(
        {"question": "Who will the Bears draft first in the NFL draft?"}
    )
)
print(question_router.invoke({"question": "What are the types of agent memory?"}))
datasource='web_search'
datasource='vectorstore'

3. The Retrieval Grader LLM

# Data model
class GradeDocuments(BaseModel):
    """Binary score for relevance check on retrieved documents."""

    binary_score: str = Field(
        description="Documents are relevant to the question, 'yes' or 'no'"
    )


from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    temperature=0,
    model="GLM-4-plus",
    openai_api_key="your api key",
    openai_api_base="https://open.bigmodel.cn/api/paas/v4/"
)
structured_llm_grader = llm.with_structured_output(GradeDocuments)

# Prompt
system = """You are a grader assessing relevance of a retrieved document to a user question. \n 
    If the document contains keyword(s) or semantic meaning related to the user question, grade it as relevant. \n
    It does not need to be a stringent test. The goal is to filter out erroneous retrievals. \n
    Give a binary score 'yes' or 'no' score to indicate whether the document is relevant to the question."""
grade_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        ("human", "Retrieved document: \n\n {document} \n\n User question: {question}"),
    ]
)

retrieval_grader = grade_prompt | structured_llm_grader
question = "agent memory"
docs = retriever.invoke(question)
doc_txt = docs[1].page_content
print(retrieval_grader.invoke({"question": question, "document": doc_txt}))
binary_score='yes'

4. The Generate LLM

### Generate

from langchain import hub
from langchain_core.output_parsers import StrOutputParser

# Prompt
prompt = hub.pull("rlm/rag-prompt")
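
hub.pull fetches this prompt from the LangChain Hub, and it warns if no LangSmith API key is set. If you prefer to stay offline, a local stand-in is easy to write (a sketch that approximates, but may not exactly match, the wording of rlm/rag-prompt):

from langchain_core.prompts import ChatPromptTemplate

# Approximate local replacement for hub.pull("rlm/rag-prompt")
prompt = ChatPromptTemplate.from_template(
    "You are an assistant for question-answering tasks. "
    "Use the following pieces of retrieved context to answer the question. "
    "If you don't know the answer, just say that you don't know. "
    "Use three sentences maximum and keep the answer concise.\n"
    "Question: {question}\nContext: {context}\nAnswer:"
)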


from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    temperature=0,
    model="GLM-4-plus",
    openai_api_key="your api key",
    openai_api_base="https://open.bigmodel.cn/api/paas/v4/"
)

# Post-processing
def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)


# Chain
rag_chain = prompt | llm | StrOutputParser()

# Run
generation = rag_chain.invoke({"context": docs, "question": question})
print(generation)


In a LLM-powered autonomous agent system, memory is a crucial component. It includes various types of memory and utilizes techniques like Maximum Inner Product Search (MIPS) for efficient information retrieval. This memory system complements the LLM, which acts as the agent's brain, enabling the agent to perform complex tasks effectively.

5. Hallucination Grader

### Hallucination Grader


# Data model
class GradeHallucinations(BaseModel):
    """Binary score for hallucination present in generation answer."""

    binary_score: str = Field(
        description="Answer is grounded in the facts, 'yes' or 'no'"
    )


from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    temperature=0,
    model="GLM-4-plus",
    openai_api_key="your api key",
    openai_api_base="https://open.bigmodel.cn/api/paas/v4/"
)
structured_llm_grader = llm.with_structured_output(GradeHallucinations)

# Prompt
system = """You are a grader assessing whether an LLM generation is grounded in / supported by a set of retrieved facts. \n 
     Give a binary score 'yes' or 'no'. 'Yes' means that the answer is grounded in / supported by the set of facts."""
hallucination_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        ("human", "Set of facts: \n\n {documents} \n\n LLM generation: {generation}"),
    ]
)

hallucination_grader = hallucination_prompt | structured_llm_grader
hallucination_grader.invoke({"documents": docs, "generation": generation})
GradeHallucinations(binary_score='yes')

6. The Answer Grader LLM

# Data model
class GradeAnswer(BaseModel):
    """Binary score to assess answer addresses question."""

    binary_score: str = Field(
        description="Answer addresses the question, 'yes' or 'no'"
    )


from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    temperature=0,
    model="GLM-4-plus",
    openai_api_key="your api key",
    openai_api_base="https://open.bigmodel.cn/api/paas/v4/"
)
structured_llm_grader = llm.with_structured_output(GradeAnswer)

# Prompt
system = """You are a grader assessing whether an answer addresses / resolves a question \n 
     Give a binary score 'yes' or 'no'. 'Yes' means that the answer resolves the question."""
answer_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        ("human", "User question: \n\n {question} \n\n LLM generation: {generation}"),
    ]
)

answer_grader = answer_prompt | structured_llm_grader
answer_grader.invoke({"question": question, "generation": generation})
GradeAnswer(binary_score='yes')

7. The Question Re-writer LLM

# LLM
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    temperature=0,
    model="GLM-4-plus",
    openai_api_key="your api key",
    openai_api_base="https://open.bigmodel.cn/api/paas/v4/"
)

# Prompt
system = """You a question re-writer that converts an input question to a better version that is optimized \n 
     for vectorstore retrieval. Look at the input and try to reason about the underlying semantic intent / meaning."""
re_write_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        (
            "human",
            "Here is the initial question: \n\n {question} \n Formulate an improved question.",
        ),
    ]
)

question_rewriter = re_write_prompt | llm | StrOutputParser()
question_rewriter.invoke({"question": question})
'To optimize the initial question "agent memory" for vectorstore retrieval, we need to clarify the intent and provide more context. The term "agent memory" could refer to various concepts, such as memory in AI agents, memory management in software agents, or even human agents\' memory in certain contexts. \n\nImproved Question: "What are the key principles and mechanisms involved in memory management for AI agents?"\n\nThis version is more specific and provides clear context, making it easier for a vectorstore retrieval system to identify relevant information. It assumes the intent is to understand how memory is handled in the context of artificial intelligence agents. If the intent is different, please provide more context for further refinement.'

8. The Websearch tool

### Search
import os
from langchain_community.tools.tavily_search import TavilySearchResults
os.environ["TAVILY_API_KEY"] = "your api key"
web_search_tool = TavilySearchResults(k=3)
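
A quick sanity check (the query is illustrative): the tool returns a list of result dicts, and it is each dict's "content" field that the web_search node below joins into a single Document:

results = web_search_tool.invoke({"query": "2024 NFL draft first overall pick"})
print(len(results))                 # number of search results returned
print(results[0]["content"][:200])  # snippet text consumed by the web_search node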

9. The State data structure in the graph

from typing import List

from typing_extensions import TypedDict


class GraphState(TypedDict):
    """
    Represents the state of our graph.

    Attributes:
        question: question
        generation: LLM generation
        documents: list of documents
    """

    question: str
    generation: str
    documents: List[str]

10. The node functions in the graph

from langchain.schema import Document


def retrieve(state):
    """
    Retrieve documents

    Args:
        state (dict): The current graph state

    Returns:
        state (dict): New key added to state, documents, that contains retrieved documents
    """
    print("---RETRIEVE---")
    question = state["question"]

    # Retrieval
    documents = retriever.invoke(question)
    return {"documents": documents, "question": question}


def generate(state):
    """
    Generate answer

    Args:
        state (dict): The current graph state

    Returns:
        state (dict): New key added to state, generation, that contains LLM generation
    """
    print("---GENERATE---")
    question = state["question"]
    documents = state["documents"]

    # RAG generation
    generation = rag_chain.invoke({"context": documents, "question": question})
    return {"documents": documents, "question": question, "generation": generation}


def grade_documents(state):
    """
    Determines whether the retrieved documents are relevant to the question.

    Args:
        state (dict): The current graph state

    Returns:
        state (dict): Updates documents key with only filtered relevant documents
    """

    print("---CHECK DOCUMENT RELEVANCE TO QUESTION---")
    question = state["question"]
    documents = state["documents"]

    # Score each doc
    filtered_docs = []
    for d in documents:
        score = retrieval_grader.invoke(
            {"question": question, "document": d.page_content}
        )
        grade = score.binary_score
        if grade == "yes":
            print("---GRADE: DOCUMENT RELEVANT---")
            filtered_docs.append(d)
        else:
            print("---GRADE: DOCUMENT NOT RELEVANT---")
            continue
    return {"documents": filtered_docs, "question": question}


def transform_query(state):
    """
    Transform the query to produce a better question.

    Args:
        state (dict): The current graph state

    Returns:
        state (dict): Updates question key with a re-phrased question
    """

    print("---TRANSFORM QUERY---")
    question = state["question"]
    documents = state["documents"]

    # Re-write question
    better_question = question_rewriter.invoke({"question": question})
    return {"documents": documents, "question": better_question}


def web_search(state):
    """
    Web search based on the re-phrased question.

    Args:
        state (dict): The current graph state

    Returns:
        state (dict): Updates documents key with appended web results
    """

    print("---WEB SEARCH---")
    question = state["question"]

    # Web search
    docs = web_search_tool.invoke({"query": question})
    web_results = "\n".join([d["content"] for d in docs])
    web_results = Document(page_content=web_results)

    return {"documents": web_results, "question": question}


### Edges ###


def route_question(state):
    """
    Route question to web search or RAG.

    Args:
        state (dict): The current graph state

    Returns:
        str: Next node to call
    """

    print("---ROUTE QUESTION---")
    question = state["question"]
    source = question_router.invoke({"question": question})
    if source.datasource == "web_search":
        print("---ROUTE QUESTION TO WEB SEARCH---")
        return "web_search"
    elif source.datasource == "vectorstore":
        print("---ROUTE QUESTION TO RAG---")
        return "vectorstore"


def decide_to_generate(state):
    """
    Determines whether to generate an answer, or re-generate a question.

    Args:
        state (dict): The current graph state

    Returns:
        str: Binary decision for next node to call
    """

    print("---ASSESS GRADED DOCUMENTS---")
    state["question"]
    filtered_documents = state["documents"]

    if not filtered_documents:
        # All documents have been filtered check_relevance
        # We will re-generate a new query
        print(
            "---DECISION: ALL DOCUMENTS ARE NOT RELEVANT TO QUESTION, TRANSFORM QUERY---"
        )
        return "transform_query"
    else:
        # We have relevant documents, so generate answer
        print("---DECISION: GENERATE---")
        return "generate"


def grade_generation_v_documents_and_question(state):
    """
    Determines whether the generation is grounded in the document and answers question.

    Args:
        state (dict): The current graph state

    Returns:
        str: Decision for next node to call
    """

    print("---CHECK HALLUCINATIONS---")
    question = state["question"]
    documents = state["documents"]
    generation = state["generation"]

    score = hallucination_grader.invoke(
        {"documents": documents, "generation": generation}
    )
    grade = score.binary_score

    # Check hallucination
    if grade == "yes":
        print("---DECISION: GENERATION IS GROUNDED IN DOCUMENTS---")
        # Check question-answering
        print("---GRADE GENERATION vs QUESTION---")
        score = answer_grader.invoke({"question": question, "generation": generation})
        grade = score.binary_score
        if grade == "yes":
            print("---DECISION: GENERATION ADDRESSES QUESTION---")
            return "useful"
        else:
            print("---DECISION: GENERATION DOES NOT ADDRESS QUESTION---")
            return "not useful"
    else:
        pprint("---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---")
        return "not supported"

11. The edges in the graph

from langgraph.graph import END, StateGraph, START

workflow = StateGraph(GraphState)

# Define the nodes
workflow.add_node("web_search", web_search)  # web search
workflow.add_node("retrieve", retrieve)  # retrieve
workflow.add_node("grade_documents", grade_documents)  # grade documents
workflow.add_node("generate", generate)  # generatae
workflow.add_node("transform_query", transform_query)  # transform_query

# Build graph
workflow.add_conditional_edges(
    START,
    route_question,
    {
        "web_search": "web_search",
        "vectorstore": "retrieve",
    },
)
workflow.add_edge("web_search", "generate")
workflow.add_edge("retrieve", "grade_documents")
workflow.add_conditional_edges(
    "grade_documents",
    decide_to_generate,
    {
        "transform_query": "transform_query",
        "generate": "generate",
    },
)
workflow.add_edge("transform_query", "retrieve")
workflow.add_conditional_edges(
    "generate",
    grade_generation_v_documents_and_question,
    {
        "not supported": "generate",
        "useful": END,
        "not useful": "transform_query",
    },
)

# Compile
app = workflow.compile()
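
app is now a compiled runnable. Besides stream (used in the examples below), you can call invoke to get only the final state; a minimal sketch with an illustrative question:

final_state = app.invoke({"question": "What is prompt engineering?"})
print(final_state["generation"])  # final answer after routing, grading, and generation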

12. Graph visualization

from IPython.display import Image, display

try:
    display(Image(app.get_graph(xray=True).draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass

(Figure: the compiled Adaptive RAG workflow rendered as a Mermaid diagram.)

13. Run results for different examples

First example

from pprint import pprint

# Run
inputs = {
    "question": "What player at the Bears expected to draft first in the 2024 NFL draft?"
}
for output in app.stream(inputs):
    for key, value in output.items():
        # Node
        pprint(f"Node '{key}':")
        # Optional: print full state at each node
        # pprint.pprint(value["keys"], indent=2, width=80, depth=None)
    pprint("\n---\n")

# Final generation
pprint(value["generation"])

Output:

---ROUTE QUESTION---
---ROUTE QUESTION TO WEB SEARCH---
---WEB SEARCH---
"Node 'web_search':"
'\n---\n'
---GENERATE---
---CHECK HALLUCINATIONS---
---DECISION: GENERATION IS GROUNDED IN DOCUMENTS---
---GRADE GENERATION vs QUESTION---
---DECISION: GENERATION ADDRESSES QUESTION---
"Node 'generate':"
'\n---\n'
('The Chicago Bears were expected to draft Caleb Williams first in the 2024 '
 'NFL Draft. Williams, a quarterback from USC, was widely considered the top '
 'prospect after winning the Heisman Trophy in 2022. The Bears indeed selected '
 'him with the No. 1 overall pick.')

Second example

# Run
inputs = {"question": "What is the agent memory?"}
for output in app.stream(inputs):
    for key, value in output.items():
        # Node
        pprint(f"Node '{key}':")
        # Optional: print full state at each node
        # pprint.pprint(value["keys"], indent=2, width=80, depth=None)
    pprint("\n---\n")

# Final generation
pprint(value["generation"])

Output:

---ROUTE QUESTION---
---ROUTE QUESTION TO RAG---
---RETRIEVE---
"Node 'retrieve':"
'\n---\n'
---CHECK DOCUMENT RELEVANCE TO QUESTION---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---ASSESS GRADED DOCUMENTS---
---DECISION: GENERATE---
"Node 'grade_documents':"
'\n---\n'
---GENERATE---
---CHECK HALLUCINATIONS---
---DECISION: GENERATION IS GROUNDED IN DOCUMENTS---
---GRADE GENERATION vs QUESTION---
---DECISION: GENERATION ADDRESSES QUESTION---
"Node 'generate':"
'\n---\n'
('The agent memory in a LLM-powered autonomous agent system is a key component '
 "that complements the LLM, which functions as the agent's brain. It includes "
 'various types of memory and utilizes techniques like Maximum Inner Product '
 'Search (MIPS) to enhance its functionality. This memory system aids the '
 'agent in retaining and retrieving information crucial for decision-making '
 'and task execution.')

The question used in the official tutorial is:

inputs = {"question": "What are the types of agent memory?"}

With Zhipu's model, however, this question can get the graph stuck in an infinite loop; see this article for the specific fix.
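
Independent of that fix, a minimal guard (my own sketch, not necessarily the article's approach) is to cap LangGraph's recursion limit so a repeatedly rejected generation raises an error instead of cycling forever:

from langgraph.errors import GraphRecursionError

try:
    result = app.invoke(
        {"question": "What are the types of agent memory?"},
        {"recursion_limit": 10},  # default is 25; lower it to fail fast
    )
    print(result["generation"])
except GraphRecursionError:
    print("Hit the recursion limit: the graders kept rejecting the generation.")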

Official LangGraph tutorial: https://langchain-ai.github.io/langgraph/tutorials/rag/langgraph_adaptive_rag/#use-graph

If you have any questions, feel free to ask in the comments.
