Deep Learning Series 78: RAG with LangChain's API

LangChain is cumbersome to use: in the time it takes to read through the API docs you could have written the code yourself. But some open-source APIs now expose LangChain-style interfaces, so it is still worth knowing. Reference: the official docs at https://www.langchain.com.cn/docs/how_to/

1. LLM and LangServe example

Using the OpenAI interface as an example, there are three steps: define the model, call its invoke method, and parse the output. A so-called chain is just a pipeline of components that implement invoke:

from fastapi import FastAPI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI
from langserve import add_routes

# 1. Create prompt template
prompt_template = ChatPromptTemplate.from_messages([
    ('system', "Translate the following into {language}:"),
    ('user', '{text}')
])

# 2. Create model
model = ChatOpenAI()

# 3. Create parser
parser = StrOutputParser()

# 4. Create chain
chain = prompt_template | model | parser


# 5. App definition
app = FastAPI(
  title="LangChain Server",
  version="1.0",
  description="A simple API server using LangChain's Runnable interfaces",
)

# 6. Adding chain route
add_routes(
    app,
    chain,
    path="/chain",
)

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="localhost", port=8000)

The service can be called with plain requests, or through LangServe's RemoteRunnable interface:

from langserve import RemoteRunnable
remote_chain = RemoteRunnable("http://localhost:8000/chain/")
remote_chain.invoke({"language": "italian", "text": "hi"})
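For the plain-requests route, LangServe exposes the chain over HTTP at POST /chain/invoke, with the chain's input wrapped under an "input" key. A stdlib-only sketch (the server above must actually be running for the commented-out call to succeed):

```python
import json
import urllib.request

# Build the request LangServe expects: POST /chain/invoke with the chain
# input under the "input" key.
url = "http://localhost:8000/chain/invoke"
payload = {"input": {"language": "italian", "text": "hi"}}
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# With the server running, send it and read the chain's output:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["output"])
```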

To wire in your own LLM, implement the _call and _llm_type methods. Below is an example. There is no need for message classes to assemble the prompt: it can be built directly inside the custom LLM, and even the parser could be folded in as well.

import json
import requests
from langchain_core.output_parsers import StrOutputParser
from langchain_core.language_models.llms import LLM

requests.packages.urllib3.disable_warnings()

class Qwen(LLM):
    def _call(self, prompt: str, stop=None):
        headers = {'accept': 'application/json', 'Content-Type': 'application/json'}
        data = json.dumps({
            "messages": [{'role': 'system', 'content': 'Translate the following into Chinese:'},
                         {'role': 'user', 'content': prompt}],
            "model": 'Qwen/Qwen2.5-72B-Instruct', "temperature": 0, "max_tokens": 1024})
        res = requests.post('https://localhost/v1/chat/completions',
                            headers=headers, data=data, verify=False).json()
        return res['choices'][0]['message']['content']

    @property
    def _llm_type(self) -> str:
        return "Qwen"

chain = Qwen() | StrOutputParser()
chain.invoke("hi")

2. Document loaders

A document loader returns Document objects:

from langchain_community.document_loaders import UnstructuredMarkdownLoader
data = UnstructuredMarkdownLoader(file_path,mode='elements').load()
content = data[0].page_content

from langchain_community.document_loaders import PyPDFLoader
data = PyPDFLoader(file_path).load()  # lazy_load() returns a generator, which cannot be indexed
content = data[0].page_content

An example of a custom document loader:

from langchain_core.document_loaders import BaseLoader
from langchain_core.documents import Document

class CustomDocumentLoader(BaseLoader):
    def __init__(self, file_path: str):
        self.file_path = file_path

    def lazy_load(self):
        with open(self.file_path, encoding="utf-8") as f:
            for line_number, line in enumerate(f):
                yield Document(page_content=line,
                               metadata={"line_number": line_number, "source": self.file_path})

d = CustomDocumentLoader('data/biology/contents/m44386.md')
for di in d.lazy_load():
    print(di)

Data loading can also be implemented through the Blob interface:

from typing import Iterator
from langchain_core.document_loaders import BaseBlobParser, Blob
from langchain_core.documents import Document

class MyParser(BaseBlobParser):
    """A simple parser that creates a document from each line."""
    def lazy_parse(self, blob: Blob) -> Iterator[Document]:
        """Parse a blob into a document line by line."""
        line_number = 0
        with blob.as_bytes_io() as f:
            for line in f:
                line_number += 1
                yield Document(page_content=line,
                               metadata={"line_number": line_number, "source": blob.source})

parser = MyParser()
blob = Blob(data=b"some data from memory\nmeow")
list(parser.lazy_parse(blob))

Blob.from_path("./meow.txt") reads a file in as a Blob.

3. Splitters

  • CharacterTextSplitter: the simplest option, splitting by character count
  • RecursiveCharacterTextSplitter: splits on a list of characters in order, recursing until the chunks are small enough
  • HTML/Markdown splitters that split by header or by section
  • SpacyTextSplitter: uses spaCy, a model that splits on sentence boundaries; NLTK works similarly. Models can be downloaded from https://github.com/explosion/spacy-models/releases, or simply pip installed
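The recursive strategy can be sketched as: split on the first separator, then recurse with the remaining separators on any piece still larger than chunk_size. An illustration only, not LangChain's actual code (which also merges small pieces and applies overlap):

```python
def recursive_split(text, separators, chunk_size):
    """Split on the first separator; recurse with the remaining
    separators on any piece that is still above chunk_size."""
    if len(text) <= chunk_size or not separators:
        return [text]
    head, *rest = separators
    out = []
    for piece in text.split(head):
        if len(piece) > chunk_size:
            out.extend(recursive_split(piece, rest, chunk_size))
        elif piece:
            out.append(piece)
    return out

print(recursive_split("aa bb\n\ncc dd", ["\n\n", " "], 4))  # → ['aa', 'bb', 'cc', 'dd']
```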
from langchain.text_splitter import CharacterTextSplitter
from langchain.document_loaders import TextLoader
text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=0)
loader = TextLoader("./sidamingzhu.txt", encoding="utf-8")
documents = loader.load()
docs = text_splitter.split_documents(documents)


from langchain_text_splitters import SpacyTextSplitter
from langchain_text_splitters import RecursiveCharacterTextSplitter
with open("data/biology/contents/m44386.md") as f:
    state_of_the_union = f.read()

text_splitter = RecursiveCharacterTextSplitter(chunk_size=100,chunk_overlap=20)
texts = text_splitter.create_documents([state_of_the_union])
print(texts[10])
print(texts[11])

from langchain_text_splitters import HTMLHeaderTextSplitter
html_string = """
<!DOCTYPE html>
<html>
<body>
    <div>
        <h1>Foo</h1>
        <p>Some intro text about Foo.</p>
        <div>
            <h2>Bar main section</h2>
            <p>Some intro text about Bar.</p>
            <h3>Bar subsection 1</h3>
            <p>Some text about the first subtopic of Bar.</p>
            <h3>Bar subsection 2</h3>
            <p>Some text about the second subtopic of Bar.</p>
        </div>
        <div>
            <h2>Baz</h2>
            <p>Some text about Baz</p>
        </div>
        <br>
        <p>Some concluding text about Foo</p>
    </div>
</body>
</html>
"""
headers_to_split_on = [
    ("h1", "Header 1"),
    ("h2", "Header 2"),
    ("h3", "Header 3"),
]
html_splitter = HTMLHeaderTextSplitter(headers_to_split_on)
html_header_splits = html_splitter.split_text(html_string)
html_header_splits

from langchain_text_splitters import MarkdownHeaderTextSplitter
markdown_document = "# Foo\n\n    ## Bar\n\nHi this is Jim\n\nHi this is Joe\n\n ### Boo \n\n Hi this is Lance \n\n ## Baz\n\n Hi this is Molly"
headers_to_split_on = [
    ("#", "Header 1"),
    ("##", "Header 2"),
    ("###", "Header 3"),
]
markdown_splitter = MarkdownHeaderTextSplitter(headers_to_split_on)
md_header_splits = markdown_splitter.split_text(markdown_document)
md_header_splits

from langchain_text_splitters import SpacyTextSplitter
SpacyTextSplitter(pipeline="zh_core_web_sm")

A custom splitter needs to implement the following interface (shown here in TypeScript):

interface TextSplitter {
  chunkSize: number;
  chunkOverlap: number;
  createDocuments(
    texts: string[],
    metadatas?: Record<string, any>[],
    chunkHeaderOptions: TextSplitterChunkHeaderOptions = {}
  ): Promise<Document[]>;

  splitDocuments(
    documents: Document[],
    chunkHeaderOptions: TextSplitterChunkHeaderOptions = {}
  ): Promise<Document[]>;
}
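The chunkSize/chunkOverlap contract above boils down to a sliding window: each chunk starts chunkSize - chunkOverlap characters after the previous one. A minimal illustrative helper, not the library's actual implementation (which also respects separators):

```python
def split_with_overlap(text, chunk_size, chunk_overlap):
    """Fixed-size windows that step forward by chunk_size - chunk_overlap,
    so consecutive chunks share chunk_overlap characters."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

print(split_with_overlap("abcdefgh", 4, 2))  # → ['abcd', 'cdef', 'efgh', 'gh']
```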

4. Embeddings

A VectorStore is a document store whose contents have been vectorized with an embedding model.
Two ways of getting an embedding model are shown here. The first loads a local model directly into memory:

from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore
embedding_function = SentenceTransformerEmbeddings(model_name="embedding/")

The second is a custom embedding class:

from typing import List
from langchain_core.embeddings import Embeddings

class ParrotLinkEmbeddings(Embeddings):
    def __init__(self, model: str):
        self.model = model

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        """Embed search docs."""
        return [[0.5, 0.6, 0.7] for _ in texts]

    def embed_query(self, text: str) -> List[float]:
        """Embed query text."""
        return self.embed_documents([text])[0]

An embedding model exposes the embed_documents and embed_query methods:

embeddings = embeddings_model.embed_documents(
    [
        "Hi there!",
        "Oh, hello!",
        "What's your name?",
        "My friends call me World",
        "Hello World!"
    ]
)
len(embeddings), len(embeddings[0])

embedded_query = embeddings_model.embed_query("What was the name mentioned in the conversation?")
embedded_query[:5]
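Downstream, a vector store ranks documents by the similarity between the query embedding and each document embedding, most commonly cosine similarity. A minimal sketch of that comparison:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # → 1.0 (identical direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # → 0.0 (orthogonal)
```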

5. VectorStore and retriever

5.1 Common vector stores

Three vector stores are covered here. The basic steps are: 1. build the store with from_documents; 2. search with similarity_search or similarity_search_by_vector.

vector_store = InMemoryVectorStore.from_documents(pages, embedding_function)
docs = vector_store.similarity_search(" Humans have inhabited this planet for how long?", k=2)
for doc in docs:
    print(f'Page {doc.metadata["page_number"]}: {doc.page_content[:300]}\n')

from langchain_community.vectorstores import Chroma
from langchain_community.vectorstores.utils import filter_complex_metadata
pages = filter_complex_metadata(pages)
db = Chroma.from_documents(pages, embedding_function)
db.similarity_search(" Humans have inhabited this planet for how long?", k=2)

from langchain_community.vectorstores import FAISS
db = FAISS.from_documents(pages, embedding_function)
db.similarity_search(" Humans have inhabited this planet for how long?", k=2)

5.2 Converting to a retriever

A vector store can be used directly as a retriever, which gives it an invoke method:

retriever = vectorstore.as_retriever()
retriever = vectorstore.as_retriever(
    search_type="similarity_score_threshold", search_kwargs={"score_threshold": 0.5}
)
retriever = vectorstore.as_retriever(search_kwargs={"k": 1})
docs = retriever.invoke("what did the president say about ketanji brown jackson?")

5.3 MultiQueryRetriever

MultiQueryRetriever deserves a special mention: built with MultiQueryRetriever.from_llm, it rewrites a question into several similar questions and retrieves with all of them:

import logging
from langchain.retrievers.multi_query import MultiQueryRetriever

logging.basicConfig()
logging.getLogger("langchain.retrievers.multi_query").setLevel(logging.INFO)

retriever_from_llm = MultiQueryRetriever.from_llm(retriever=db.as_retriever(), llm=llm)
question = "你是谁?"
unique_docs = retriever_from_llm.invoke(question)
len(unique_docs)

5.4 Custom retrievers

To customize the query-generation step, implement a parser and a prompt template:

from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_core.output_parsers import BaseOutputParser
from langchain_core.prompts import PromptTemplate

class LineListOutputParser(BaseOutputParser):
    def parse(self, text: str):
        lines = text.strip().split("\n")
        return list(filter(None, lines))

output_parser = LineListOutputParser()
QUERY_PROMPT = PromptTemplate(
    input_variables=["question"],
    template="""You are an AI language model assistant. Your task is to generate five 
    different versions of the given user question to retrieve relevant documents from a vector 
    database. By generating multiple perspectives on the user question, your goal is to help
    the user overcome some of the limitations of the distance-based similarity search. 
    Provide these alternative questions separated by newlines.
    Original question: {question}""",
)
llm_chain = QUERY_PROMPT | llm | output_parser
retriever = MultiQueryRetriever(
    retriever=db.as_retriever(), llm_chain=llm_chain, parser_key="lines"
)  # "lines" is the key (attribute name) of the parsed output
retriever.invoke("What does the course say about regression?")

A related pattern is MultiVectorRetriever, which indexes small child chunks but stores and returns the full parent documents:

import uuid
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import InMemoryByteStore

# The storage layer for the parent documents
store = InMemoryByteStore()
id_key = "doc_id"
# The retriever (empty to start)
retriever = MultiVectorRetriever(
    vectorstore=vectorstore,
    byte_store=store,
    id_key=id_key,
)

doc_ids = [str(uuid.uuid4()) for _ in docs]
child_text_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
sub_docs = []
for i, doc in enumerate(docs):
    _id = doc_ids[i]
    _sub_docs = child_text_splitter.split_documents([doc])
    for _doc in _sub_docs:
        _doc.metadata[id_key] = _id
    sub_docs.extend(_sub_docs)
retriever.vectorstore.add_documents(sub_docs)
retriever.docstore.mset(list(zip(doc_ids, docs)))
retriever.vectorstore.similarity_search("justice breyer")[0]

A fully custom retriever subclasses BaseRetriever and implements _get_relevant_documents:

from typing import List
from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever

class ToyRetriever(BaseRetriever):
    documents: List[Document]
    """List of documents to retrieve from."""
    k: int
    """Number of top results to return"""

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        matching_documents = []
        for document in self.documents:
            if len(matching_documents) > self.k:
                return matching_documents

            if query.lower() in document.page_content.lower():
                matching_documents.append(document)
        return matching_documents

retriever = ToyRetriever(documents=documents, k=3)
retriever.invoke("that")
await retriever.ainvoke("that")  # in an async context
retriever.batch(["dog", "cat"])
async for event in retriever.astream_events("bar", version="v1"):
    print(event)

5.5 Structured queries over metadata

Below is an example of structured querying over metadata, using SelfQueryRetriever:

from langchain_chroma import Chroma
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"year": 1995, "genre": "animated"},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={
            "year": 1979,
            "director": "Andrei Tarkovsky",
            "genre": "thriller",
            "rating": 9.9,
        },
    ),
]
vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings())

from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import ChatOpenAI

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie. One of ['science fiction', 'comedy', 'drama', 'thriller', 'romance', 'action', 'animated']",
        type="string",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"
llm = ChatOpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
)

retriever.invoke("I want to watch a movie rated higher than 8.5")

The number of documents fetched can be capped by passing enable_limit=True.

5.6 BM25 retrieval

BM25 is a classical retrieval algorithm based on term frequency and inverse document frequency (TF-IDF), well suited to keyword matching. A BM25 retriever is created with BM25Retriever.from_texts:

from langchain.retrievers import EnsembleRetriever
from langchain_community.retrievers import BM25Retriever
from langchain_community.vectorstores import FAISS
# First set of documents, used by the BM25 retriever
doc_list_1 = [
    "这是一个测试句子",
    "温格高赢得了2023环法冠军",
    "波士顿马拉松是历史悠久的一项比赛",
    "何杰即将出战巴黎奥运会的马拉松项目",
    "珍宝将不再赞助温格高所在的车队",
]

# Second set of documents, used by the FAISS retriever
doc_list_2 = [
    "波加查擅长陡坡进攻,而温格高则更擅长长坡",
    "温格高的最大摄氧量居然有97!",
    "北京奥运会在2008年8月8日开幕",
    "基普乔格是东京马拉松的金牌得主",
]
bm25_retriever = BM25Retriever.from_texts(
    doc_list_1, metadatas=[{"source": 1}] * len(doc_list_1)
)
bm25_retriever.k = 2  # number of documents the BM25 retriever returns
faiss_vectorstore = FAISS.from_texts(
    doc_list_2, embedding_function, metadatas=[{"source": 2}] * len(doc_list_2)
)
faiss_retriever = faiss_vectorstore.as_retriever(search_kwargs={"k": 2})
ensemble_retriever = EnsembleRetriever(
    retrievers=[bm25_retriever, faiss_retriever], weights=[0.5, 0.5]
)
docs = ensemble_retriever.invoke("温格高")
print(docs)

page_contents = [doc.page_content for doc in docs]
print(page_contents)
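How does EnsembleRetriever merge the two ranked lists? It uses weighted Reciprocal Rank Fusion: a document's score is the weighted sum of 1/(k + rank) over the lists it appears in, so documents returned by multiple retrievers rise to the top. A simplified sketch (k=60 is the conventional RRF default, not necessarily LangChain's exact constant):

```python
def weighted_rrf(rankings, weights, k=60):
    """rankings: one ordered list of doc ids per retriever.
    Each doc scores weight / (k + rank), summed over the lists it appears in."""
    scores = {}
    for ranking, weight in zip(rankings, weights):
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + weight / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# "b" appears in both lists, so it outranks docs seen by only one retriever
print(weighted_rrf([["a", "b"], ["b", "c"]], [0.5, 0.5]))  # → ['b', 'a', 'c']
```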

6. Compressing results

A contextual compression retriever passes the query to a base retriever, takes the initial documents, and hands them to a document compressor, which shortens the list by trimming document contents or dropping documents entirely.

from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor
compressor = LLMChainExtractor.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=retriever)
compressed_docs = compression_retriever.invoke("Humans have inhabited this planet for how long?")

An embeddings filter can be added on top: it embeds both the documents and the query, and returns only documents whose embeddings are sufficiently similar to the query:

from langchain.retrievers.document_compressors import EmbeddingsFilter
embeddings_filter = EmbeddingsFilter(embeddings=embedding_function, similarity_threshold=0.6)
compression_retriever = ContextualCompressionRetriever(base_compressor=embeddings_filter, base_retriever=retriever)
compression_retriever.invoke("Humans have inhabited this planet for how long?")

With a DocumentCompressorPipeline, several compressors can also be chained in sequence:

from langchain.retrievers.document_compressors import DocumentCompressorPipeline
from langchain_community.document_transformers import EmbeddingsRedundantFilter
from langchain_text_splitters import CharacterTextSplitter

splitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=0, separator=". ")
redundant_filter = EmbeddingsRedundantFilter(embeddings=embeddings)
relevant_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)
pipeline_compressor = DocumentCompressorPipeline(transformers=[splitter, redundant_filter, relevant_filter])
compression_retriever = ContextualCompressionRetriever(base_compressor=pipeline_compressor, base_retriever=retriever)

compressed_docs = compression_retriever.invoke("What did the president say about Ketanji Jackson Brown")

