LangChain Expression Language Cookbook (Part 1)

Table of Contents

Introduction to LangChain Expression Language

A Tour of the Cookbook Examples

Prompt + LLM: the bread and butter

RAG: retrieve over your own data

Multiple chains: composing and merging chains

Querying a SQL DB: writing SQL from the user's question

Agents: finally, the long-awaited Agent example


Introduction to LangChain Expression Language

LangChain Expression Language, LCEL for short, is essentially new syntax the LangChain framework introduced for composing prompt + LLM chains, so that developers can build applications on large language models with less code. Go straight to the official docs: LangChain Expression Language (LCEL) | 🦜️🔗 Langchain.

The examples in this post come mainly from the official Cookbook (Cookbook | 🦜️🔗 Langchain). A cookbook is, of course, required reading for anyone who can't cook before making a meal; but this one goes beyond building LLM applications with LCEL and really enumerates the main ways to use LangChain itself. I've worked through all the code with my own understanding; if anything looks wrong, feel free to comment. To keep it light, I've split the Cookbook examples into three posts, LangChain Expression Language Cookbook (Parts 1/2/3). Let's dig in.

Also, if you run the code and get an error asking for an api key, add the api key parameter where the model is defined and loaded, and remember to apply for a key first.

✅ For the GPT series you need the openai_api_key parameter; apply here: https://platform.openai.com/api-keys

✅ For Anthropic, i.e. the recently released Claude series, you need the anthropic_api_key parameter; apply in the Anthropic console: https://console.anthropic.com/

After you Create new secret key, remember to store the key somewhere else, because it won't be shown to you again.
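
If you'd rather pass the key explicitly than rely on an environment variable, both chat model classes accept it as a constructor argument. A minimal sketch (the key strings below are placeholders):

from langchain_openai import ChatOpenAI
from langchain_community.chat_models import ChatAnthropic

gpt_model = ChatOpenAI(openai_api_key="sk-...")  # your OpenAI key
claude_model = ChatAnthropic(anthropic_api_key="sk-ant-...")  # your Anthropic key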

A Tour of the Cookbook Examples

Prompt + LLM: the bread and butter

The most basic case, naturally, is wiring a Prompt and an LLM together. LangChain sketches the overall flow as roughly prompt → model → output parser (the original post showed this as a diagram).

Straight to the code. First, PromptTemplate + LLM:

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("tell me a joke about {foo}")
# You generally need an OpenAI api key here; apply for one first.
model = ChatOpenAI()
# Define the chain
chain = prompt | model
# Then just invoke your chain
chain.invoke({"foo": "bears"})
# The result:
# AIMessage(content="Why don't bears wear shoes?\n\nBecause they have bear feet!", additional_kwargs={}, example=False)

# A few extra tricks you can pull
# Use a stop sequence to truncate the LLM output before the first stop token
chain = prompt | model.bind(stop=["\n"])
chain.invoke({"foo": "bears"})
# AIMessage(content='Why did the bear never wear shoes?', additional_kwargs={}, example=False)

You can also attach function-calling information to the model via functions:

functions = [
    {
        "name": "joke",
        "description": "A joke",
        "parameters": {
            "type": "object",
            "properties": {
                "setup": {"type": "string", "description": "The setup for the joke"},
                "punchline": {
                    "type": "string",
                    "description": "The punchline for the joke",
                },
            },
            "required": ["setup", "punchline"],
        },
    }
]
chain = prompt | model.bind(function_call={"name": "joke"}, functions=functions)
chain.invoke({"foo": "bears"}, config={})
# The result:
# AIMessage(content='', additional_kwargs={'function_call': {'name': 'joke', 'arguments': '{\n  "setup": "Why don\'t bears wear shoes?",\n  "punchline": "Because they have bear feet!"\n}'}}, example=False)

The second example adds an output parser:

from langchain_core.output_parsers import StrOutputParser

chain = prompt | model | StrOutputParser()
chain.invoke({"foo": "bears"})
# The result is now a plain string
# "Why don't bears wear shoes?\n\nBecause they have bear feet!"

# Constrain the output via functions and parse it with JsonOutputFunctionsParser
from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser

chain = (
    prompt
    | model.bind(function_call={"name": "joke"}, functions=functions)
    | JsonOutputFunctionsParser()
)
chain.invoke({"foo": "bears"})
# The parsed dict comes back directly
# {'setup': "Why don't bears like fast food?",
#  'punchline': "Because they can't catch it!"}

# You can also pull out just one key by using JsonKeyOutputFunctionsParser with key_name
from langchain.output_parsers.openai_functions import JsonKeyOutputFunctionsParser

chain = (
    prompt
    | model.bind(function_call={"name": "joke"}, functions=functions)
    | JsonKeyOutputFunctionsParser(key_name="setup")
)
chain.invoke({"foo": "bears"})
# The result:
# "Why don't bears wear shoes?"

Simplifying the input format further:

from langchain_core.runnables import RunnableParallel, RunnablePassthrough

map_ = RunnableParallel(foo=RunnablePassthrough())
chain = (
    map_
    | prompt
    | model.bind(function_call={"name": "joke"}, functions=functions)
    | JsonKeyOutputFunctionsParser(key_name="setup")
)
chain.invoke("bears")
# Returns
# "Why don't bears wear shoes?"

# Or pass the input mapping inline in the chain
chain = (
    {"foo": RunnablePassthrough()}
    | prompt
    | model.bind(function_call={"name": "joke"}, functions=functions)
    | JsonKeyOutputFunctionsParser(key_name="setup")
)
chain.invoke("bears")
# Returns
# "Why don't bears like fast food?"

RAG: retrieve over your own data

RAG is short for retrieval-augmented generation: when building the chain, the first link retrieves results from the user's own data and feeds them to the LLM as context. The user's data has to be stored first; the example code uses OpenAI embeddings together with Facebook's FAISS for similarity search:

# Store the user's data as embeddings
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

vectorstore = FAISS.from_texts(
    ["harrison worked at kensho"], embedding=OpenAIEmbeddings()
)
# Initialize a retriever over the stored embeddings
retriever = vectorstore.as_retriever()

template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)

model = ChatOpenAI()

# The first link of the chain: the input mapping
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)
chain.invoke("where did harrison work?")
# Returns
# 'Harrison worked at Kensho.'


# Or you can wire the retriever into the chain yourself
from operator import itemgetter
template = """Answer the question based only on the following context:
{context}

Question: {question}

Answer in the following language: {language}
"""
prompt = ChatPromptTemplate.from_template(template)

chain = (
    {
        "context": itemgetter("question") | retriever,
        "question": itemgetter("question"),
        "language": itemgetter("language"),
    }
    | prompt
    | model
    | StrOutputParser()
)
chain.invoke({"question": "where did harrison work", "language": "italian"})
# Returns
# 'Harrison ha lavorato a Kensho.'

Retrieval can also run over the user's past conversation history. The following example is fairly long, so let's take it apart piece by piece. First, the chain is:

conversational_qa_chain = _inputs | _context | ANSWER_PROMPT | ChatOpenAI()

Here, _inputs is itself the output of a chain: using CONDENSE_QUESTION_PROMPT, it rewrites the follow-up question, given the chat history, into a standalone question.

_context is the output of another chain: it retrieves documents for the standalone_question and combines them with _combine_documents.

Finally, ANSWER_PROMPT produces the final answer to the question.

from langchain.prompts.prompt import PromptTemplate
from langchain_core.messages import AIMessage, HumanMessage, get_buffer_string
from langchain_core.prompts import format_document

_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)

template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
ANSWER_PROMPT = ChatPromptTemplate.from_template(template)


DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template(template="{page_content}")


def _combine_documents(
    docs, document_prompt=DEFAULT_DOCUMENT_PROMPT, document_separator="\n\n"
):
    doc_strings = [format_document(doc, document_prompt) for doc in docs]
    return document_separator.join(doc_strings)

# The chain's input
_inputs = RunnableParallel(
    standalone_question=RunnablePassthrough.assign(
        chat_history=lambda x: get_buffer_string(x["chat_history"])
    )
    | CONDENSE_QUESTION_PROMPT
    | ChatOpenAI(temperature=0)
    | StrOutputParser(),
)
_context = {
    "context": itemgetter("standalone_question") | retriever | _combine_documents,
    "question": lambda x: x["standalone_question"],
}
conversational_qa_chain = _inputs | _context | ANSWER_PROMPT | ChatOpenAI()


conversational_qa_chain.invoke(
    {
        "question": "where did harrison work?",
        "chat_history": [],
    }
)
# Returns
# AIMessage(content='Harrison was employed at Kensho.')

# After loading a human message and an ai message as history
conversational_qa_chain.invoke(
    {
        "question": "where did he work?",
        "chat_history": [
            HumanMessage(content="Who wrote this notebook?"),
            AIMessage(content="Harrison"),
        ],
    }
)

# Returns
# AIMessage(content='Harrison worked at Kensho.')

On top of that, you can store previous questions and answers as a kind of memory and load them back in:

from langchain.memory import ConversationBufferMemory
from langchain_core.runnables import RunnableLambda

memory = ConversationBufferMemory(
    return_messages=True, output_key="answer", input_key="question"
)
# First we add a step to load memory
# This adds a "memory" key to the input object
loaded_memory = RunnablePassthrough.assign(
    chat_history=RunnableLambda(memory.load_memory_variables) | itemgetter("history"),
)
# Now we calculate the standalone question
standalone_question = {
    "standalone_question": {
        "question": lambda x: x["question"],
        "chat_history": lambda x: get_buffer_string(x["chat_history"]),
    }
    | CONDENSE_QUESTION_PROMPT
    | ChatOpenAI(temperature=0)
    | StrOutputParser(),
}
# Now we retrieve the documents
retrieved_documents = {
    "docs": itemgetter("standalone_question") | retriever,
    "question": lambda x: x["standalone_question"],
}
# Now we construct the inputs for the final prompt
final_inputs = {
    "context": lambda x: _combine_documents(x["docs"]),
    "question": itemgetter("question"),
}
# And finally, we do the part that returns the answers
answer = {
    "answer": final_inputs | ANSWER_PROMPT | ChatOpenAI(),
    "docs": itemgetter("docs"),
}
# And now we put it all together!
final_chain = loaded_memory | standalone_question | retrieved_documents | answer

# Let's give it a try
inputs = {"question": "where did harrison work?"}
result = final_chain.invoke(inputs)
# Printing result gives:
# {'answer': AIMessage(content='Harrison was employed at Kensho.'),
#  'docs': [Document(page_content='harrison worked at kensho')]}

# Also, the memory has to be saved manually
# Note that the memory does not save automatically
# This will be improved in the future
# For now you need to save it yourself
memory.save_context(inputs, {"answer": result["answer"].content})
memory.load_memory_variables({})
# See what got loaded:
# {'history': [HumanMessage(content='where did harrison work?'),
#  AIMessage(content='Harrison was employed at Kensho.')]}

# inputs = {"question": "but where did he really work?"}
result = final_chain.invoke(inputs)
# The result again includes the matching docs
# {'answer': AIMessage(content='Harrison actually worked at Kensho.'),
#  'docs': [Document(page_content='harrison worked at kensho')]}

Multiple chains: composing and merging chains

This is the kind of example where LCEL's strengths really show: one simple line of code lets you embed one chain inside another. Take the following:

from operator import itemgetter

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt1 = ChatPromptTemplate.from_template("what is the city {person} is from?")
prompt2 = ChatPromptTemplate.from_template(
    "what country is the city {city} in? respond in {language}"
)

model = ChatOpenAI()

chain1 = prompt1 | model | StrOutputParser()

# Note that chain2's first step takes its city input from chain1
chain2 = (
    {"city": chain1, "language": itemgetter("language")}
    | prompt2
    | model
    | StrOutputParser()
)

chain2.invoke({"person": "obama", "language": "spanish"})
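# Returns a Spanish-language string (exact wording varies from run to run),
# along the lines of: Obama's city is Honolulu, which is in the United States.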

Finally, let's look at a more complex chain-of-chains. In this example each chain acts a bit like an agent, and together they stage a debate: arguments_for takes the pro side, arguments_against the con side, planner frames the topic, and final_responder synthesizes all the ideas. Here's the code:

planner = (
    ChatPromptTemplate.from_template("Generate an argument about: {input}")
    | ChatOpenAI()
    | StrOutputParser()
    | {"base_response": RunnablePassthrough()}
)

arguments_for = (
    ChatPromptTemplate.from_template(
        "List the pros or positive aspects of {base_response}"
    )
    | ChatOpenAI()
    | StrOutputParser()
)
arguments_against = (
    ChatPromptTemplate.from_template(
        "List the cons or negative aspects of {base_response}"
    )
    | ChatOpenAI()
    | StrOutputParser()
)

final_responder = (
    ChatPromptTemplate.from_messages(
        [
            ("ai", "{original_response}"),
            ("human", "Pros:\n{results_1}\n\nCons:\n{results_2}"),
            ("system", "Generate a final response given the critique"),
        ]
    )
    | ChatOpenAI()
    | StrOutputParser()
)

chain = (
    planner
    | {
        "results_1": arguments_for,
        "results_2": arguments_against,
        "original_response": itemgetter("base_response"),
    }
    | final_responder
)
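
To run the whole debate, invoke the chain with a topic (the topic string here is just an example):

chain.invoke({"input": "scrum"})
# Returns the final synthesized response as a single string.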

Starting to feel like the real deal, right?

Querying a SQL DB: writing SQL from the user's question

I've long known that today's large language models can write code and SQL, but this was the first time it clicked: so they can be wired up to query a database directly? Let's look at the example from the LangChain docs:

from langchain_core.prompts import ChatPromptTemplate

# First, define a prompt that writes a SQL query answering the user's question, given the table schema
template = """Based on the table schema below, write a SQL query that would answer the user's question:
{schema}

Question: {question}
SQL Query:"""
prompt = ChatPromptTemplate.from_template(template)

# Define the database to query
from langchain_community.utilities import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///./Chinook.db")

# Fetch the database's table info (the schema)
def get_schema(_):
    return db.get_table_info()
# Run a query against the database (the full chain below inlines this as a lambda)
def run_query(query):
    return db.run(query)

from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

model = ChatOpenAI()

# Define the chain: fill in the schema, render the prompt, stop generation before the SQLResult marker, and parse to a string
sql_response = (
    RunnablePassthrough.assign(schema=get_schema)
    | prompt
    | model.bind(stop=["\nSQLResult:"])
    | StrOutputParser()
)
sql_response.invoke({"question": "How many employees are there?"})
# This step gives us the query itself
# 'SELECT COUNT(*) FROM Employee'
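# You could also execute the generated query by hand with the helper defined
# above (assuming the Chinook sample database is in place):
# run_query("SELECT COUNT(*) FROM Employee")  # returns something like "[(8,)]"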

# This template takes the schema, the question, the generated SQL query, and the
# query's response, and writes a natural-language answer to the user's question
template = """Based on the table schema below, question, sql query, and sql response, write a natural language response:
{schema}

Question: {question}
SQL Query: {query}
SQL Response: {response}"""
prompt_response = ChatPromptTemplate.from_template(template)

full_chain = (
    RunnablePassthrough.assign(query=sql_response).assign(
        schema=get_schema,
        response=lambda x: db.run(x["query"]),
    )
    | prompt_response
    | model
)

full_chain.invoke({"question": "How many employees are there?"})
# The result is a natural-language answer:
# AIMessage(content='There are 8 employees.', additional_kwargs={}, example=False)

Doesn't it feel like something you could put to work in industrial Q&A scenarios, say querying an inventory database or other data stores?

Agents: finally, the long-awaited Agent example

After reading through the code: ah, so an agent in LangChain is driven by tools we define one by one. This section's example builds an agent that answers weather questions:

from langchain import hub
from langchain.agents import AgentExecutor, tool
from langchain.agents.output_parsers import XMLAgentOutputParser
from langchain_community.chat_models import ChatAnthropic

# First, of course, define our model
model = ChatAnthropic(model="claude-2")

# Our custom tool here is called search; the example stub just returns "32 degrees"
@tool
def search(query: str) -> str:
    """Search things about current events."""
    return "32 degrees"

tool_list = [search]
# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/xml-agent-convo")

# Serialize each intermediate step's action.tool and action.tool_input, plus the observation, and feed them back to the model
# Logic for going from intermediate steps to a string to pass into model
# This is pretty tied to the prompt
def convert_intermediate_steps(intermediate_steps):
    log = ""
    for action, observation in intermediate_steps:
        log += (
            f"<tool>{action.tool}</tool><tool_input>{action.tool_input}"
            f"</tool_input><observation>{observation}</observation>"
        )
    return log


# Logic for converting tools to string to go in prompt
def convert_tools(tools):
    return "\n".join([f"{tool.name}: {tool.description}" for tool in tools])

agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: convert_intermediate_steps(
            x["intermediate_steps"]
        ),
    }
    | prompt.partial(tools=convert_tools(tool_list))
    | model.bind(stop=["</tool_input>", "</final_answer>"])
    | XMLAgentOutputParser()
)
agent_executor = AgentExecutor(agent=agent, tools=tool_list, verbose=True)
agent_executor.invoke({"input": "whats the weather in New york?"})

Running it prints the agent's verbose trace and returns something like {'input': 'whats the weather in New york?', 'output': 'The weather in New York is 32 degrees'} (the original post showed the trace as a screenshot).

Let's first look at what the prompt actually contains by printing it with print(agent.get_prompts()):

[ChatPromptTemplate(input_variables=['agent_scratchpad', 'input'], partial_variables={'chat_history': '', 'tools': 'search: search(query: str) -> str - Search things about current events.'}, messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['agent_scratchpad', 'chat_history', 'input', 'tools'], template="You are a helpful assistant. Help the user answer any questions.\n\nYou have access to the following tools:\n\n{tools}\n\nIn order to use a tool, you can use <tool></tool> and <tool_input></tool_input> tags. You will then get back a response in the form <observation></observation>\nFor example, if you have a tool called 'search' that could run a google search, in order to search for the weather in SF you would respond:\n\n<tool>search</tool><tool_input>weather in SF</tool_input>\n<observation>64 degrees</observation>\n\nWhen you are done, respond with a final answer between <final_answer></final_answer>. For example:\n\n<final_answer>The weather in SF is 64 degrees</final_answer>\n\nBegin!\n\nPrevious Conversation:\n{chat_history}\n\nQuestion: {input}\n{agent_scratchpad}"))])]


# The prompt template reads like this
"""You are a helpful assistant. Help the user answer any questions.

You have access to the following tools:

{tools}

In order to use a tool, you can use <tool></tool> and <tool_input></tool_input> tags. You will then get back a response in the form <observation></observation>
For example, if you have a tool called 'search' that could run a google search, in order to search for the weather in SF you would respond:

<tool>search</tool><tool_input>weather in SF</tool_input>
<observation>64 degrees</observation>

When you are done, respond with a final answer between <final_answer></final_answer>. For example:

<final_answer>The weather in SF is 64 degrees</final_answer>

Begin!

Previous Conversation:
{chat_history}

Question: {input}
{agent_scratchpad}"""

Then I went back and looked at the action and observation inside intermediate_steps.

Printing action gives:
tool='search' tool_input='weather in New york' log=' <tool>search</tool><tool_input>weather in New york'

Printing observation gives:
32 degrees

You get a feel for it here: the tool produces a result, and the model organizes the key information into the final answer. Note that only one Tool is used. I later defined an extra tool and added it to tool_list, but the returned result didn't change. My guess is that's because the prompt already describes the search tool and shows it being used to look up the weather in SF:

For example, if you have a tool called 'search' that could run a google search, in order to search for the weather in SF you would respond:

<tool>search</tool><tool_input>weather in SF</tool_input>
<observation>64 degrees</observation>
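
For reference, adding a second tool looks roughly like this (the get_time tool and its return value are invented for illustration):

@tool
def get_time(query: str) -> str:
    """Look up the current time somewhere."""
    return "9:00 AM"

tool_list = [search, get_time]
# For weather questions the agent still calls search, since the prompt's
# worked example explicitly demonstrates the search tool.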

I'll post an update when I come across more Agent examples!
