[Iceberg Analysis] Spark Integration with Iceberg Metrics Reporting

Table of Contents

  • Spark Integration with Iceberg Metrics Reporting
    • Iceberg provides two types of metrics and two types of metrics reporters
      • ScanReport
      • CommitReport
    • LoggingMetricsReporter
    • RESTMetricsReporter
    • Verification example
      • Environment setup
      • Result notes

Iceberg provides two types of metrics and two types of metrics reporters

ScanReport

Contains metrics collected while planning the scan of a given table. Besides some general information about the table (such as the snapshot id or the table name), it includes the following metrics:

  • total scan planning duration
  • number of data / delete files included in the result
  • number of data / delete manifests scanned / skipped
  • number of data / delete files scanned / skipped
  • number of equality / positional delete files scanned

CommitReport

Carries metrics collected after committing changes to a table (i.e., after producing a snapshot). Besides some general information about the table (such as the snapshot id or the table name), it includes the following metrics:

  • total duration
  • number of attempts required for the commit to succeed
  • number of data / delete files added / removed
  • number of equality / positional delete files added / removed
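
Both report types are delivered to an implementation of the org.apache.iceberg.metrics.MetricsReporter interface, which exposes a single report(...) callback. The following is a minimal sketch, not part of the original article: the class name PrintingMetricsReporter is hypothetical, and it only distinguishes the two report types and prints them.

import org.apache.iceberg.metrics.CommitReport;
import org.apache.iceberg.metrics.MetricsReport;
import org.apache.iceberg.metrics.MetricsReporter;
import org.apache.iceberg.metrics.ScanReport;

/**
 * Minimal custom reporter sketch: receives both ScanReport and CommitReport
 * through the single report(...) callback and simply prints them.
 */
public class PrintingMetricsReporter implements MetricsReporter {
    @Override
    public void report(MetricsReport report) {
        if (report instanceof ScanReport) {
            System.out.println("Scan report: " + report);
        } else if (report instanceof CommitReport) {
            System.out.println("Commit report: " + report);
        } else {
            System.out.println("Other report: " + report);
        }
    }
}

A class like this could be registered in place of LoggingMetricsReporter, for example through the metrics.reporters table property used later in this post.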

LoggingMetricsReporter

The logging metrics reporter writes metrics reports to the log file.
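
In the verification code below, LoggingMetricsReporter is registered through the metrics.reporters table property. As an assumed alternative not shown in the original article, Iceberg also has a catalog-level property, metrics-reporter-impl, for plugging in a reporter when the catalog is created; the sketch below reuses the catalog name local from the example further down, and the property name should be verified against your Iceberg version.

import org.apache.spark.sql.SparkSession;

public class CatalogLevelReporterExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .master("local")
                .appName("Catalog-level metrics reporter example")
                .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
                .config("spark.sql.catalog.local.type", "hadoop")
                .config("spark.sql.catalog.local.warehouse", "iceberg_warehouse")
                // Assumed catalog property for registering a reporter implementation
                .config("spark.sql.catalog.local.metrics-reporter-impl",
                        "org.apache.iceberg.metrics.LoggingMetricsReporter")
                .getOrCreate();
        spark.close();
    }
}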

RESTMetricsReporter

The REST metrics reporter sends metrics reports to a REST service.

This reporter can only be used when the table is accessed through a RESTCatalog.
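
Since the RESTMetricsReporter is wired up automatically when a REST catalog is used, the only thing to configure in Spark is the catalog itself. The sketch below is an assumption-laden example, not part of the original article: the catalog name rest, the endpoint http://localhost:8181, and the table rest.iceberg_db.table2 are placeholders for your own REST catalog service.

import org.apache.spark.sql.SparkSession;

public class RestCatalogMetricsExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .master("local")
                .appName("Iceberg REST catalog example")
                .config("spark.sql.extensions",
                        "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
                .config("spark.sql.catalog.rest", "org.apache.iceberg.spark.SparkCatalog")
                .config("spark.sql.catalog.rest.type", "rest")                  // REST catalog
                .config("spark.sql.catalog.rest.uri", "http://localhost:8181")  // placeholder URI
                .getOrCreate();

        // Scans and commits against tables in this catalog report their metrics
        // back to the REST service rather than only to the local log.
        spark.sql("SELECT * FROM rest.iceberg_db.table2").show();
        spark.close();
    }
}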

Verification example


Environment setup

Configuration for the iceberg-demo project (pom.xml):

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.donny.demo</groupId>
    <artifactId>iceberg-demo</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>iceberg-demo</name>
    <url>http://maven.apache.org</url>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <spark.version>3.4.2</spark.version>
        <iceberg.version>1.6.1</iceberg.version>
        <parquet.version>1.13.1</parquet.version>
        <avro.version>1.11.3</avro.version>
        <parquet.hadoop.bundle.version>1.8.1</parquet.hadoop.bundle.version>
    </properties>

    <dependencies>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.12</artifactId>
            <version>${spark.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.apache.avro</groupId>
                    <artifactId>avro</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.12</artifactId>
            <version>${spark.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.apache.parquet</groupId>
                    <artifactId>parquet-column</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.apache.parquet</groupId>
                    <artifactId>parquet-hadoop-bundle</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.apache.parquet</groupId>
                    <artifactId>parquet-hadoop</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

        <dependency>
            <groupId>org.apache.iceberg</groupId>
            <artifactId>iceberg-core</artifactId>
            <version>${iceberg.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.iceberg</groupId>
            <artifactId>iceberg-spark-3.4_2.12</artifactId>
            <version>${iceberg.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.iceberg</groupId>
            <artifactId>iceberg-spark-extensions-3.4_2.12</artifactId>
            <version>${iceberg.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.antlr</groupId>
                    <artifactId>antlr4</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.antlr</groupId>
                    <artifactId>antlr4-runtime</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.parquet</groupId>
            <artifactId>parquet-column</artifactId>
            <version>${parquet.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.parquet</groupId>
            <artifactId>parquet-hadoop</artifactId>
            <version>${parquet.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.parquet</groupId>
            <artifactId>parquet-hadoop-bundle</artifactId>
            <version>${parquet.hadoop.bundle.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.avro</groupId>
            <artifactId>avro</artifactId>
            <version>${avro.version}</version>
        </dependency>

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>3.8.1</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>

Override the log configuration file log4j2.properties so that the metrics logs are written to a dedicated metrics log file. Spark's default log configuration ships inside the spark-core jar as org.apache.spark.log4j2-defaults.properties.

# Set everything to be logged to the console
rootLogger.level = info
rootLogger.appenderRef.stdout.ref = console
logger.icebergMetric.appenderRef.file.ref = RollingFile
logger.icebergMetric.appenderRef.stdout.ref = console

appender.console.type = Console
appender.console.name = console
appender.console.target = SYSTEM_ERR
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss.SSS} %p %c{1}: %m%n%ex

appender.CUSTOM.type = RollingFile
appender.CUSTOM.name = RollingFile
appender.CUSTOM.fileName = logs/iceberg_metrics.log
appender.CUSTOM.filePattern = logs/iceberg_metrics.%d{yyyy-MM-dd}-%i.log.gz
appender.CUSTOM.layout.type = PatternLayout
appender.CUSTOM.layout.pattern = %d{yyyy-MM-dd HH:mm:ss.SSS} %-5p %c{1}:%L - %m%n
appender.CUSTOM.strategy.type = DefaultRolloverStrategy
appender.CUSTOM.strategy.delete.type = Delete
appender.CUSTOM.strategy.delete.basePath = logs
appender.CUSTOM.strategy.delete.0.type = IfFileName
appender.CUSTOM.strategy.delete.0.regex = iceberg_metrics.*.log.gz
appender.CUSTOM.strategy.delete.1.type = IfLastModified
appender.CUSTOM.strategy.delete.1.age = P15D
appender.CUSTOM.policy.type = TimeBasedTriggeringPolicy

# Settings to quiet third party logs that are too verbose
logger.jetty.name = org.sparkproject.jetty
logger.jetty.level = warn
logger.jetty2.name = org.sparkproject.jetty.util.component.AbstractLifeCycle
logger.jetty2.level = error
logger.repl1.name = org.apache.spark.repl.SparkIMain$exprTyper
logger.repl1.level = info
logger.repl2.name = org.apache.spark.repl.SparkILoop$SparkILoopInterpreter
logger.repl2.level = info

# Set the default spark-shell log level to WARN. When running the spark-shell, the
# log level for this class is used to overwrite the root logger's log level, so that
# the user can have different defaults for the shell and regular Spark apps.
logger.repl.name = org.apache.spark.repl.Main
logger.repl.level = warn

# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs
# in SparkSQL with Hive support
logger.metastore.name = org.apache.hadoop.hive.metastore.RetryingHMSHandler
logger.metastore.level = fatal
logger.hive_functionregistry.name = org.apache.hadoop.hive.ql.exec.FunctionRegistry
logger.hive_functionregistry.level = error

# Parquet related logging
logger.parquet.name = org.apache.parquet.CorruptStatistics
logger.parquet.level = error
logger.parquet2.name = parquet.CorruptStatistics
logger.parquet2.level = error

# Custom logger for your application
logger.icebergMetric.name = org.apache.iceberg.metrics.LoggingMetricsReporter
logger.icebergMetric.level = Info
logger.icebergMetric.additivity = false

The Java main class. The key step is configuring a metrics reporter on the table; only then are metrics emitted.

package com.donny.demo;

import org.apache.iceberg.FileScanTask;
import org.apache.iceberg.TableScan;
import org.apache.iceberg.io.CloseableIterable;
import org.apache.iceberg.metrics.LoggingMetricsReporter;
import org.apache.iceberg.spark.Spark3Util;
import org.apache.spark.sql.AnalysisException;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import java.io.IOException;


/**
 * @author 1792998761@qq.com
 * @version 1.0
 */
public class IcebergSparkDemo {

    public static void main(String[] args) throws AnalysisException, IOException, InterruptedException {
        SparkSession spark = SparkSession
                .builder()
                .master("local")
                .appName("Iceberg spark example")
                .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
                .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
                .config("spark.sql.catalog.local.type", "hadoop") //指定catalog 类型
                .config("spark.sql.catalog.local.warehouse", "iceberg_warehouse")
                .getOrCreate();

        spark.sql("CREATE TABLE local.iceberg_db.table2( id bigint, data string, ts timestamp) USING iceberg PARTITIONED BY (day(ts))");
        spark.sql("INSERT INTO local.iceberg_db.table2 VALUES (1, 'a', cast(1727601585 as timestamp)),(2, 'b', cast(1724923185 as timestamp)),(3, 'c', cast(1724919585 as timestamp))");
        spark.sql("INSERT INTO local.iceberg_db.table2 VALUES (4, 'd', cast(1727605185 as timestamp)),(5, 'e', cast(1725963585 as timestamp)),(6, 'f', cast(1726827585 as timestamp))");
        spark.sql("DELETE FROM local.iceberg_db.table2  where id in (2)");

        org.apache.iceberg.Table table = Spark3Util.loadIcebergTable(spark, "local.iceberg_db.table2");
        spark.sql("INSERT INTO local.iceberg_db.table2 VALUES (4, 'd', cast(1724750385 as timestamp)),(5, 'e', cast(1724663985 as timestamp)),(6, 'f', cast(1727342385 as timestamp))");
        spark.sql("INSERT INTO local.iceberg_db.table2 VALUES (7, 'h', cast(1727601585 as timestamp)),(8, 'i', cast(1724923185 as timestamp)),(9, 'j', cast(1724836785 as timestamp))");
        spark.sql("INSERT INTO local.iceberg_db.table2 VALUES (10, 'k', cast(1727601585 as timestamp)),(11, 'l', cast(1724923185 as timestamp)),(12, 'm', cast(1724836785 as timestamp))");
        // Configure the table's metrics reporter
        table.updateProperties()
                .set("metrics.reporters", LoggingMetricsReporter.class.getName())
                .commit();
        // Explicitly scan the table so that a ScanReport is produced
        TableScan tableScan = table.newScan();
        // planFiles() performs scan planning; in this example, closing the iterable
        // without iterating is enough to emit a ScanReport (see the output below)
        try (CloseableIterable<FileScanTask> fileScanTasks = tableScan.planFiles()) {
        }

        spark.sql("INSERT INTO local.iceberg_db.table2 VALUES (30, 't', cast(1727605185 as timestamp)),(31, 'y', cast(1725963585 as timestamp)),(32, 'i', cast(1726827585 as timestamp))");

        Dataset<Row> result = spark.sql("SELECT * FROM local.iceberg_db.table2 where ts >= '2024-09-20'");
        result.show();
        spark.close();
    }
}

Result notes

During verification so far, the following ScanReport was only produced when the table scan was invoked explicitly (actively emitted metrics):

2024-10-07 09:38:11.903 INFO  LoggingMetricsReporter:38 - Received metrics report: ScanReport{
tableName=local.iceberg_db.table2,
snapshotId=3288641599702333945,
filter=true,
schemaId=0,
projectedFieldIds=[1, 2, 3],
projectedFieldNames=[id, data, ts],
scanMetrics=ScanMetricsResult{
	totalPlanningDuration=TimerResult{timeUnit=NANOSECONDS, totalDuration=PT0.294853952S, count=1}, 
	resultDataFiles=CounterResult{unit=COUNT, value=0},
    resultDeleteFiles=CounterResult{unit=COUNT, value=0},
    totalDataManifests=CounterResult{unit=COUNT, value=6},
    totalDeleteManifests=CounterResult{unit=COUNT, value=0},
    scannedDataManifests=CounterResult{unit=COUNT, value=0},
    skippedDataManifests=CounterResult{unit=COUNT, value=0},
    totalFileSizeInBytes=CounterResult{unit=BYTES, value=0},
    totalDeleteFileSizeInBytes=CounterResult{unit=BYTES, value=0},
    skippedDataFiles=CounterResult{unit=COUNT, value=0},
    skippedDeleteFiles=CounterResult{unit=COUNT, value=0},
    scannedDeleteManifests=CounterResult{unit=COUNT, value=0},
    skippedDeleteManifests=CounterResult{unit=COUNT, value=0},
    indexedDeleteFiles=CounterResult{unit=COUNT, value=0},
    equalityDeleteFiles=CounterResult{unit=COUNT, value=0},
    positionalDeleteFiles=CounterResult{unit=COUNT, value=0}},
metadata={
	engine-version=3.4.2, 
	iceberg-version=Apache Iceberg 1.6.1 (commit 8e9d59d299be42b0bca9461457cd1e95dbaad086), 
	app-id=local-1728265088818, 
	engine-name=spark}}

ScanReport triggered by the DELETE statement (passively emitted metrics):

2024-10-07 11:15:54.708 INFO  LoggingMetricsReporter:38 - Received metrics report: ScanReport{
tableName=local.iceberg_db.table2,
snapshotId=7181960343136679052,
filter=ref(name="id") == "(1-digit-int)",
schemaId=0,
projectedFieldIds=[1, 2, 3],
projectedFieldNames=[id, data, ts],
scanMetrics=ScanMetricsResult{
	totalPlanningDuration=TimerResult{timeUnit=NANOSECONDS, totalDuration=PT0.098792497S, count=1},
	resultDataFiles=CounterResult{unit=COUNT, value=1},
    resultDeleteFiles=CounterResult{unit=COUNT, value=0},
    totalDataManifests=CounterResult{unit=COUNT, value=2},
    totalDeleteManifests=CounterResult{unit=COUNT, value=0},
    scannedDataManifests=CounterResult{unit=COUNT, value=2},
    skippedDataManifests=CounterResult{unit=COUNT, value=0},
    totalFileSizeInBytes=CounterResult{unit=BYTES, value=898},
    totalDeleteFileSizeInBytes=CounterResult{unit=BYTES, value=0},
    skippedDataFiles=CounterResult{unit=COUNT, value=4},
    skippedDeleteFiles=CounterResult{unit=COUNT, value=0},
    scannedDeleteManifests=CounterResult{unit=COUNT, value=0},
    skippedDeleteManifests=CounterResult{unit=COUNT, value=0},
    indexedDeleteFiles=CounterResult{unit=COUNT, value=0},
    equalityDeleteFiles=CounterResult{unit=COUNT, value=0},
    positionalDeleteFiles=CounterResult{unit=COUNT, value=0}}, 
metadata={
	engine-version=3.4.2, 
	iceberg-version=Apache Iceberg 1.6.1 (commit 8e9d59d299be42b0bca9461457cd1e95dbaad086), 
	app-id=local-1728270940331, 
	engine-name=spark}}

CommitReport triggered by an INSERT statement (passively emitted metrics):

2024-10-06 15:48:47 INFO  LoggingMetricsReporter:38 - Received metrics report: 
CommitReport{
	tableName=local.iceberg_db.table2, 
	snapshotId=3288641599702333945, 
	sequenceNumber=6, 
	operation=append, 
	commitMetrics=CommitMetricsResult{
		totalDuration=TimerResult{timeUnit=NANOSECONDS, totalDuration=PT0.430784537S, count=1}, 
		attempts=CounterResult{unit=COUNT, value=1}, 
		addedDataFiles=CounterResult{unit=COUNT, value=3}, 
		removedDataFiles=null, 
		totalDataFiles=CounterResult{unit=COUNT, value=14}, 
		addedDeleteFiles=null,
        addedEqualityDeleteFiles=null,
        addedPositionalDeleteFiles=null,
        removedDeleteFiles=null,
        removedEqualityDeleteFiles=null,
        removedPositionalDeleteFiles=null,
        totalDeleteFiles=CounterResult{unit=COUNT, value=0}, addedRecords=CounterResult{unit=COUNT, value=3}, 
        removedRecords=null, 
        totalRecords=CounterResult{unit=COUNT, value=14}, 
        addedFilesSizeInBytes=CounterResult{unit=BYTES, value=2646}, 
        removedFilesSizeInBytes=null, 
        totalFilesSizeInBytes=CounterResult{unit=BYTES, value=12376}, 
        addedPositionalDeletes=null,
        removedPositionalDeletes=null, 
        totalPositionalDeletes=CounterResult{unit=COUNT, value=0}, 
        addedEqualityDeletes=null, 
        removedEqualityDeletes=null, 
        totalEqualityDeletes=CounterResult{unit=COUNT, value=0}}, 
    metadata={
        engine-version=3.4.2, 
        app-id=local-1728200916879, 
        engine-name=spark, 
        iceberg-version=Apache Iceberg 1.6.1 (commit 8e9d59d299be42b0bca9461457cd1e95dbaad086)}}
