LangChain 18: Monitoring and Evaluating an Agent with LangSmith, and Creating the Matching Dataset

LangChain series:

  1. LangChain: generating names for animals
  2. LangChain 2: modular prompt templates and a Streamlit site for generating animal names
  3. LangChain 3: using an Agent to access Wikipedia and llm-math to compute the average age of a dog
  4. LangChain 4: storing and searching YouTube video transcripts with the Faiss vector database (indexes for information retrieval)
  5. LangChain 5: an internal Q&A system for 易速鲜花
  6. LangChain 6: generating promotional copy from images with HuggingFace image-caption models
  7. LangChain 7: text model TextLangChain and chat model ChatLangChain
  8. LangChain 8: Model I/O: prompt input, model invocation, output parsing
  9. LangChain 9: Model I/O: ChatPromptTemplate and few-shot prompts (FewShotPrompt)
  10. LangChain 10: Chain of Thought: think step by step
  11. LangChain 11: implementing the Tree of Thoughts in LangChain's Chain
  12. LangChain 12: calling HuggingFace models Llama2 and Google Flan-T5
  13. LangChain 13: Output Parsers and the auto-fixing parser
  14. LangChain 14: SequentialChain for linking components
  15. LangChain 15: Router Chain for routing questions by user intent
  16. LangChain 16: remembering conversation history with Memory
  17. LangChain 17: LangSmith for debugging, testing, evaluating, and monitoring chains and agents built on any LLM framework


1. Evaluating the Agent

Beyond logging runs, LangSmith also lets you test and evaluate your LLM applications.

In this section, you will use LangSmith to create a benchmark dataset and run AI-assisted evaluators on an agent. You will do so in a few steps:

  1. Create a dataset
  2. Initialize a new agent to benchmark
  3. Configure evaluators to grade the agent's outputs
  4. Run the agent over the dataset and evaluate the results

1.1. Create a LangSmith dataset

Below, we use the LangSmith client to create a dataset from the lists of input questions and reference labels. You will use this data later to measure the performance of a new agent. A dataset is a collection of examples: nothing more than input-output pairs that you can use as test cases for your application.

For more information on datasets, including how to create them from CSV files or other sources, or how to create them directly in the platform, see the LangSmith documentation.
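As one illustration, here is a minimal sketch of loading the same kind of dataset from a CSV file with the LangSmith client. The file name and column names are hypothetical, and it assumes `client` is the LangSmith Client initialized in section 2; check the client docs for the exact signature:

# Hypothetical CSV upload: each row becomes one input/output example.
csv_dataset = client.upload_csv(
    csv_file="qa.csv",            # assumed local file with a header row
    input_keys=["question"],      # columns treated as inputs
    output_keys=["answer"],       # columns treated as reference outputs
    name=f"agent-qa-csv-{unique_id}",
    description="QA examples loaded from a CSV file.",
)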

# `client` and `unique_id` are initialized in the full script in section 2.
inputs = [
    "What is LangChain?",
    "What's LangSmith?",
    "When was Llama-v2 released?",
    "What is the langsmith cookbook?",
    "When did langchain first announce the hub?",
]
outputs = [
    "LangChain is an open-source framework for building applications using large language models. It is also the name of the company building LangSmith.",
    "LangSmith is a unified platform for debugging, testing, and monitoring language model applications and agents powered by LangChain",
    "July 18, 2023",
    "The langsmith cookbook is a github repository containing detailed examples of how to use LangSmith to debug, evaluate, and monitor large language model-powered applications.",
    "September 5, 2023",
]
dataset_name = f"agent-qa-{unique_id}"

dataset = client.create_dataset(
    dataset_name,
    description="An example dataset of questions over the LangSmith documentation.",
)

for query, answer in zip(inputs, outputs):
    client.create_example(
        inputs={"input": query}, outputs={"output": answer}, dataset_id=dataset.id
    )


1.2. Initialize a new agent to benchmark

LangSmith lets you evaluate any LLM, chain, agent, or even a custom function. Conversational agents are stateful (they have memory); to make sure this state is not shared between dataset runs, we pass in a chain_factory (also known as a constructor) function that initializes a fresh agent for each call.

In this case, we will test an agent that uses OpenAI's function-calling endpoint.

from langchain import hub
from langchain.agents import AgentExecutor, AgentType, initialize_agent, load_tools
from langchain.agents.format_scratchpad import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.chat_models import ChatOpenAI
from langchain.tools.render import format_tool_to_openai_function


# Since chains can be stateful (e.g. they can have memory), we provide
# a way to initialize a new chain for each row in the dataset. This is done
# by passing in a factory function that returns a new chain for each row.
def agent_factory(prompt):
    # `llm` (ChatOpenAI) and `tools` (DuckDuckGo) are defined in section 2 below.
    llm_with_tools = llm.bind(
        functions=[format_tool_to_openai_function(t) for t in tools]
    )
    runnable_agent = (
        {
            "input": lambda x: x["input"],
            "agent_scratchpad": lambda x: format_to_openai_function_messages(
                x["intermediate_steps"]
            ),
        }
        | prompt
        | llm_with_tools
        | OpenAIFunctionsAgentOutputParser()
    )
    return AgentExecutor(agent=runnable_agent, tools=tools, handle_parsing_errors=True)
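As a quick smoke test of the factory, you can build one executor and run a single query through it. A sketch, assuming `prompt` has already been pulled from the hub as in section 1.4, and `llm` and `tools` are defined as in section 2:

# Build one fresh executor and run a single query through it.
agent_executor = agent_factory(prompt)
print(agent_executor.invoke({"input": "What is LangSmith?"}))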

1.3. Configure evaluation

Manually comparing chain results in the UI is effective, but it can be time-consuming. It helps to use automated metrics and AI-assisted feedback to evaluate your component's performance.

Below, we will create some pre-implemented run evaluators that do the following:

  • Compare the result against the ground-truth label.
  • Measure semantic (dis)similarity using embedding distance.
  • Evaluate "aspects" of the agent's response in a reference-free manner using custom criteria.

For more discussion on how to choose the appropriate evaluators and how to create your own custom evaluators, see the LangSmith documentation.
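Before wiring the evaluators into a config, it can help to see what one of them computes in isolation. A standalone sketch of the embedding-distance evaluator (assuming an OpenAI API key is set, since it defaults to OpenAI embeddings):

from langchain.evaluation import load_evaluator

# Cosine distance by default: a lower score means more semantically similar.
distance_evaluator = load_evaluator("embedding_distance")
result = distance_evaluator.evaluate_strings(
    prediction="LangSmith is a platform for testing LLM apps.",
    reference="LangSmith is a unified platform for debugging, testing, and monitoring LLM applications.",
)
print(result)  # e.g. {'score': 0.05}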

from langchain.evaluation import EvaluatorType
from langchain.smith import RunEvalConfig

evaluation_config = RunEvalConfig(
    # Evaluators can either be an evaluator type (e.g., "qa", "criteria", "embedding_distance", etc.) or a configuration for that evaluator
    evaluators=[
        # Measures whether a QA response is "Correct", based on a reference answer
        # You can also select via the raw string "qa"
        EvaluatorType.QA,
        # Measure the embedding distance between the output and the reference answer
        # Equivalent to: RunEvalConfig.EmbeddingDistance(embeddings=OpenAIEmbeddings())
        EvaluatorType.EMBEDDING_DISTANCE,
        # Grade whether the output satisfies the stated criteria.
        # You can select a default one such as "helpfulness" or provide your own.
        RunEvalConfig.LabeledCriteria("helpfulness"),
        # The LabeledScoreString evaluator outputs a score on a scale from 1-10.
        # You can use default criteria or write your own rubric
        RunEvalConfig.LabeledScoreString(
            {
                "accuracy": """
Score 1: The answer is completely unrelated to the reference.
Score 3: The answer has minor relevance but does not align with the reference.
Score 5: The answer has moderate relevance but contains inaccuracies.
Score 7: The answer aligns with the reference but has minor errors or omissions.
Score 10: The answer is completely accurate and aligns perfectly with the reference."""
            },
            normalize_by=10,
        ),
    ],
    # You can add custom StringEvaluator or RunEvaluator objects here as well, which will automatically be
    # applied to each prediction. Check out the docs for examples.
    custom_evaluators=[],
)
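If the built-in evaluators don't cover your needs, custom_evaluators accepts your own StringEvaluator subclasses. A minimal sketch (the class name, criterion, and scoring rule here are hypothetical, not part of the original walkthrough):

from typing import Any, Optional
from langchain.evaluation import StringEvaluator


class MustMentionEvaluator(StringEvaluator):
    """Scores 1 if the prediction mentions a required phrase, else 0."""

    def __init__(self, phrase: str = "LangSmith"):
        self.phrase = phrase

    @property
    def requires_reference(self) -> bool:
        return False  # reference-free evaluator

    @property
    def evaluation_name(self) -> str:
        return "must_mention"

    def _evaluate_strings(
        self,
        *,
        prediction: str,
        reference: Optional[str] = None,
        input: Optional[str] = None,
        **kwargs: Any,
    ) -> dict:
        # Simple substring check on the agent's output.
        return {"score": int(self.phrase.lower() in prediction.lower())}

# Usage: RunEvalConfig(..., custom_evaluators=[MustMentionEvaluator()])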

1.4. Run the agent and the evaluators

Use the run_on_dataset (or asynchronous arun_on_dataset) function to evaluate your model. This will:

  1. Fetch example rows from the specified dataset.
  2. Run your agent (or any custom function) on each example.
  3. Apply the evaluators to the resulting run traces and the corresponding reference examples to generate automated feedback.

The results will be visible in the LangSmith app.

from langchain import hub

# We will test this version of the prompt
prompt = hub.pull("wfh/langsmith-agent-prompt:798e7324")
import functools

from langchain.smith import (
    arun_on_dataset,
    run_on_dataset,
)

chain_results = run_on_dataset(
    dataset_name=dataset_name,
    llm_or_chain_factory=functools.partial(agent_factory, prompt=prompt),
    evaluation=evaluation_config,
    verbose=True,
    client=client,
    project_name=f"runnable-agent-test-5d466cbc-{unique_id}",
    tags=[
        "testing-notebook",
        "prompt:5d466cbc",
    ],  # Optional, adds a tag to the resulting chain runs
)

# Sometimes, the agent will error due to parsing issues, incompatible tool inputs, etc.
# These are logged as warnings here and captured as errors in the tracing UI.
    View the evaluation results for project 'runnable-agent-test-5d466cbc-bf2162aa' at:
    https://smith.langchain.com/o/ebbaf2eb-769b-4505-aca2-d11de10372a4/projects/p/0c3d22fa-f8b0-4608-b086-2187c18361a5
    [>                                                 ] 0/5

    Chain failed for example 54b4fce8-4492-409d-94af-708f51698b39 with inputs {'input': 'Who trained Llama-v2?'}
    Error Type: TypeError, Message: DuckDuckGoSearchResults._run() got an unexpected keyword argument 'arg1'


    [------------------------------------------------->] 5/5
     Eval quantiles:
                                   0.25       0.5      0.75      mean      mode
    embedding_cosine_distance  0.086614  0.118841  0.183672  0.151444  0.050158
    correctness                0.000000  0.500000  1.000000  0.500000  0.000000
    score_string:accuracy      0.775000  1.000000  1.000000  0.775000  1.000000
    helpfulness                0.750000  1.000000  1.000000  0.750000  1.000000

1.5. Review the test results

You can review the test result traces by clicking the URL in the output above, or by navigating to the "agent-qa-{unique_id}" dataset on LangSmith's "Datasets & Testing" page.


2. Running the full code

The code in Agents/chat_agents_search_evaluate.py uses the LangChain and LangSmith libraries to build a question-answering agent that relies on a large language model and tools (such as DuckDuckGo search) to answer questions, then evaluates and tests it. The annotated code follows:

# Module for interacting with OpenAI language models.
from langchain.llms import OpenAI

# Module for creating and managing prompt templates.
from langchain.prompts import PromptTemplate

# Module for building chains on top of large language models.
from langchain.chains import LLMChain

# Library for loading environment variables from a .env file.
from dotenv import load_dotenv

# Class for creating and managing OpenAI chat model instances.
from langchain.chat_models import ChatOpenAI

# Load the environment variables defined in the .env file.
load_dotenv()

# Set environment variables: a unique project ID plus the LangChain API settings.
import os
from uuid import uuid4
unique_id = uuid4().hex[0:8]
os.environ["LANGCHAIN_PROJECT"] = f"Tracing Walkthrough - {unique_id}"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = "ls__xxxx"  # replace with your API key

# Initialize the LangSmith client.
from langsmith import Client
client = Client()

# Import the remaining LangChain modules and tools.
from langchain import hub
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.tools import DuckDuckGoSearchResults
from langchain.tools.render import format_tool_to_openai_function

# Create the ChatOpenAI instance.
llm = ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0)

# Define the tool list.
tools = [DuckDuckGoSearchResults(name="duck_duck_go")]

# Define the list of input questions.
inputs = [
    "What is LangChain?",
    "What's LangSmith?",
    "When was Llama-v2 released?",
    "What is the langsmith cookbook?",
    "When did langchain first announce the hub?",
]

# Create the dataset.
outputs = [
    "LangChain is an open-source framework for building applications using large language models. It is also the name of the company building LangSmith.",
    "LangSmith is a unified platform for debugging, testing, and monitoring language model applications and agents powered by LangChain",
    "July 18, 2023",
    "The langsmith cookbook is a github repository containing detailed examples of how to use LangSmith to debug, evaluate, and monitor large language model-powered applications.",
    "September 5, 2023",
]
dataset_name = f"agent-qa-{unique_id}"
dataset = client.create_dataset(
    dataset_name,
    description="An example dataset of questions over the LangSmith documentation.",
)

# Create a dataset example for each question.
for query, answer in zip(inputs, outputs):
    client.create_example(
        inputs={"input": query}, outputs={"output": answer}, dataset_id=dataset.id
    )

# Import the LangChain and LangSmith evaluation modules.
from langchain.evaluation import EvaluatorType
from langchain.smith import RunEvalConfig
from langchain import hub
from langchain.agents import AgentExecutor, AgentType, initialize_agent, load_tools
from langchain.agents.format_scratchpad import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.chat_models import ChatOpenAI
from langchain.tools.render import format_tool_to_openai_function
from langchain.smith import arun_on_dataset, run_on_dataset

# Since chains can be stateful (e.g. they can have memory), we provide
# a way to initialize a new chain for each row in the dataset. This is done
# by passing in a factory function that returns a new chain for each row.
def agent_factory(prompt):
    llm_with_tools = llm.bind(
        functions=[format_tool_to_openai_function(t) for t in tools]
    )
    runnable_agent = (
        {
            "input": lambda x: x["input"],
            "agent_scratchpad": lambda x: format_to_openai_function_messages(
                x["intermediate_steps"]
            ),
        }
        | prompt
        | llm_with_tools
        | OpenAIFunctionsAgentOutputParser()
    )
    return AgentExecutor(agent=runnable_agent, tools=tools, handle_parsing_errors=True)
# Configure the evaluation.
from langchain.evaluation import EvaluatorType
from langchain.smith import RunEvalConfig

evaluation_config = RunEvalConfig(
    # Evaluators can either be an evaluator type (e.g., "qa", "criteria", "embedding_distance", etc.) or a configuration for that evaluator
    evaluators=[
        # Measures whether a QA response is "Correct", based on a reference answer
        # You can also select via the raw string "qa"
        EvaluatorType.QA,
        # Measure the embedding distance between the output and the reference answer
        # Equivalent to: RunEvalConfig.EmbeddingDistance(embeddings=OpenAIEmbeddings())
        EvaluatorType.EMBEDDING_DISTANCE,
        # Grade whether the output satisfies the stated criteria.
        # You can select a default one such as "helpfulness" or provide your own.
        RunEvalConfig.LabeledCriteria("helpfulness"),
        # The LabeledScoreString evaluator outputs a score on a scale from 1-10.
        # You can use default criteria or write your own rubric
        RunEvalConfig.LabeledScoreString(
            {
                "accuracy": """
Score 1: The answer is completely unrelated to the reference.
Score 3: The answer has minor relevance but does not align with the reference.
Score 5: The answer has moderate relevance but contains inaccuracies.
Score 7: The answer aligns with the reference but has minor errors or omissions.
Score 10: The answer is completely accurate and aligns perfectly with the reference."""
            },
            normalize_by=10,
        ),
    ],
    # You can add custom StringEvaluator or RunEvaluator objects here as well, which will automatically be
    # applied to each prediction. Check out the docs for examples.
    custom_evaluators=[],
)

from langchain import hub
# Pull the prompt from the LangChain Hub (pinned to version 798e7324).
prompt = hub.pull("wfh/langsmith-agent-prompt:798e7324")
print(prompt)

import functools
# Imports for running the agent (defined by agent_factory above) over the dataset.
from langchain import hub
from langchain.agents import AgentExecutor, AgentType, initialize_agent, load_tools
from langchain.agents.format_scratchpad import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.chat_models import ChatOpenAI
from langchain.tools.render import format_tool_to_openai_function

from langchain.smith import (
    arun_on_dataset,
    run_on_dataset,
)

chain_results = run_on_dataset(
    dataset_name=dataset_name,
    llm_or_chain_factory=functools.partial(agent_factory, prompt=prompt),
    evaluation=evaluation_config,
    verbose=True,
    client=client,
    project_name=f"runnable-agent-test-5d466cbc-{unique_id}",
    tags=[
        "testing-notebook",
        "prompt:5d466cbc",
    ],  # Optional, adds a tag to the resulting chain runs
)

# Print the chain run results.
print(chain_results)

Output:

It turns out access to OpenAI is heavily rate-limited here and needs working around (see the mitigation sketch after the output below).

$ python Agents/chat_agents_search_evaluate.py
input_variables=['agent_scratchpad', 'input'] input_types={'agent_scratchpad': typing.List[typing.Union[langchain.schema.messages.AIMessage, langchain.schema.messages.HumanMessage, langchain.schema.messages.ChatMessage, langchain.schema.messages.SystemMessage, langchain.schema.messages.FunctionMessage, langchain.schema.messages.ToolMessage]]} messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='You are an expert senior software engineer. You are responsible for answering questions about LangChain. Use functions to consult the documentation before answering.')), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}')), MessagesPlaceholder(variable_name='agent_scratchpad')]
View the evaluation results for project 'runnable-agent-test-5d466cbc-3c42290b' at:
https://smith.langchain.com/o/1441af63-d5a4-549b-893f-4f8d06c24390/projects/p/e685bf56-6fe9-4af3-af5c-3f7b6d674dcd?eval=true

View all tests for Dataset agent-qa-3c42290b at:
https://smith.langchain.com/o/1441af63-d5a4-549b-893f-4f8d06c24390/datasets/b4423fd6-5d73-4029-a9f2-a4d4afbd27dd

[--------->                                        ] 1/5Chain failed for example b2bbbc7e-41f7-409f-a566-4afc01f9f1a5 with inputs {'input': 'When did langchain first announce the hub?'}
Error Type: RateLimitError, Message: Rate limit reached for gpt-3.5-turbo-16k in organization org-jkd8QtrppR9UcAr9C841gy2b on requests per min (RPM): Limit 3, Used 3, Requested 1. Please try again in 20s. Visit https://platform.openai.com/account/rate-limits to learn more. You can increase your rate limit by adding a payment method to your account at https://platform.openai.com/account/billing.

[------------------->                              ] 2/5Chain failed for example 6bb422a3-27f4-4d59-a87d-1287e0820597 with inputs {'input': 'What is the langsmith cookbook?'}
Error Type: RateLimitError, Message: Rate limit reached for gpt-3.5-turbo-16k in organization 

[--------------------------------------->          ] 4/5Chain failed for example a4bc6b3e-b757-425e-91e5-76f46ee27ade with inputs {'input': 'When was Llama-v2 released?'}
Error Type: RateLimitError, Message: Rate limit reached for gpt-3.5-turbo-16k in organization 

[------------------------------------------------->] 5/5Chain failed for example 41dd48e4-6193-4e1d-afe2-d0e5f678f280 with inputs {'input': "What's LangSmith?"}
Error Type: RateLimitError, Message: Rate limit reached for gpt-3.5-turbo-16k in organization 

 Eval quantiles:
                                0.25        0.5       0.75       mean       mode
execution_time             15.619272  15.619272  15.619272  15.619272  15.619272
correctness                      NaN        NaN        NaN        NaN        NaN
score_string:accuracy            NaN        NaN        NaN        NaN        NaN
helpfulness                      NaN        NaN        NaN        NaN        NaN
embedding_cosine_distance   0.092627   0.092627   0.092627   0.092627   0.092627

{'project_name': 'runnable-agent-test-5d466cbc-3c42290b', 'results': {'b2bbbc7e-41f7-409f-a566-4afc01f9f1a5': {'output': {'Error': "RateLimitError"}, 'input': {'input': 'When did langchain first announce the hub?'}, 'feedback': [], 'execution_time': 15.619272, 'reference': {'output': 'September 5, 2023'}}, '6bb422a3-27f4-4d59-a87d-1287e0820597': {'output': {'Error': "RateLimitError"}, 'input': {'input': 'What is the langsmith cookbook?'}, 'feedback': [], 'execution_time': 15.619272, 'reference': {'output': 'The langsmith cookbook is a github repository containing detailed examples of how to use LangSmith to debug, evaluate, and monitor large language model-powered applications.'}}, 'a4bc6b3e-b757-425e-91e5-76f46ee27ade': {'output': {'Error': "RateLimitError"}, 'input': {'input': 'When was Llama-v2 released?'}, 'feedback': [], 'execution_time': 15.619272, 'reference': {'output': 'July 18, 2023'}}, '41dd48e4-6193-4e1d-afe2-d0e5f678f280': {'output': {'Error': "RateLimitError"}, 'input': {'input': "What's LangSmith?"}, 'feedback': [], 'execution_time': 15.619272, 'reference': {'output': 'LangSmith is a unified platform for debugging, testing, and monitoring language model applications and agents powered by LangChain'}}, 'e2912900-dc5c-4b2b-bae7-2867ef761edd': {'output': {'input': 'What is LangChain?', 'output': 'LangChain is a blockchain-based platform that aims to bridge the language barrier by providing translation and interpretation services. It utilizes smart contracts and a decentralized network of translators to facilitate secure and efficient language translation. LangChain aims to revolutionize the language industry by providing a transparent and reliable platform for language services.'}, 'input': {'input': 'What is LangChain?'}, 'feedback': [EvaluationResult(key='correctness', score=None, value=None, comment='Error evaluating run 6b9e0ca1-3bd4-4d15-8816-3c34ca4b4f04: The model `gpt-4` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.', correction=None, evaluator_info={}, source_run_id=None, target_run_id=None), EvaluationResult(key='score_string:accuracy', score=None, value=None, comment='Error evaluating run 6b9e0ca1-3bd4-4d15-8816-3c34ca4b4f04: The model `gpt-4` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.', correction=None, evaluator_info={}, source_run_id=None, target_run_id=None), EvaluationResult(key='helpfulness', score=None, value=None, comment='Error evaluating run 6b9e0ca1-3bd4-4d15-8816-3c34ca4b4f04: The model `gpt-4` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.', correction=None, evaluator_info={}, source_run_id=None, target_run_id=None), EvaluationResult(key='embedding_cosine_distance', score=0.09262746580850112, value=None, comment=None, correction=None, evaluator_info={'__run': RunInfo(run_id=UUID('2e07f133-983b-452d-a5be-e2323ac3bd42'))}, source_run_id=None, target_run_id=None)], 'execution_time': 15.619272, 'reference': {'output': 'LangChain is an open-source framework for building applications using large language models. It is also the name of the company building LangSmith.'}}}}
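The RateLimitError failures above come from the free tier's 3-requests-per-minute cap, and the correctness/criteria evaluators failed because they default to gpt-4, which this account cannot access. A sketch of two mitigations, reusing the objects from the script above; I believe concurrency_level and eval_llm are the relevant knobs, but verify them against the langchain.smith docs:

# Retry transient OpenAI errors with backoff instead of failing the run.
llm = ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0, max_retries=6)

# Point the AI-assisted evaluators at a model this account can access.
evaluation_config = RunEvalConfig(
    evaluators=[EvaluatorType.QA, EvaluatorType.EMBEDDING_DISTANCE],
    eval_llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0),
)

chain_results = run_on_dataset(
    dataset_name=dataset_name,
    llm_or_chain_factory=functools.partial(agent_factory, prompt=prompt),
    evaluation=evaluation_config,
    client=client,
    concurrency_level=1,  # serialize runs to stay under the RPM limit
    project_name=f"runnable-agent-test-retry-{unique_id}",
)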

Code
https://github.com/zgpeace/pets-name-langchain/tree/develop

References

  • https://python.langchain.com/docs/langsmith/walkthrough
  • https://docs.smith.langchain.com/
