Flink Learning Series, Part 13 -- Advanced FlinkSQL

Event Time

Test data:

{"username":"zs","price":20,"event_time":"2023-07-17 10:10:10"}
{"username":"zs","price":15,"event_time":"2023-07-17 10:10:30"}
{"username":"zs","price":20,"event_time":"2023-07-17 10:10:40"}
{"username":"zs","price":20,"event_time":"2023-07-17 10:11:03"}
{"username":"zs","price":20,"event_time":"2023-07-17 10:11:04"}
{"username":"zs","price":20,"event_time":"2023-07-17 10:12:04"}
{"username":"zs","price":20,"event_time":"2023-07-17 11:12:04"}
{"username":"zs","price":20,"event_time":"2023-07-17 11:12:04"}
{"username":"zs","price":20,"event_time":"2023-07-17 12:12:04"}
{"username":"zs","price":20,"event_time":"2023-07-18 12:12:04"}

Requirement: every minute, compute each user's total spending amount and number of purchases within that minute.

This calls for a tumbling window. For orientation, here are the shapes of the three window TVFs this article uses (a sketch of the Flink 1.13+ windowing TVF signatures):
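TUMBLE(TABLE t, DESCRIPTOR(time_attr), size)          -- fixed, non-overlapping windows
HOP(TABLE t, DESCRIPTOR(time_attr), slide, size)      -- overlapping windows, emitted every 'slide'
CUMULATE(TABLE t, DESCRIPTOR(time_attr), step, size)  -- growing windows that reset every 'size'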

First, the table DDL:

CREATE TABLE table1 (
  `username` string,
  `price` int,
  `event_time` TIMESTAMP(3),
  watermark for event_time as event_time - interval '3' second
) WITH (
  'connector' = 'kafka',
  'topic' = 'topic1',
  'properties.bootstrap.servers' = 'bigdata01:9092',
  'properties.group.id' = 'g1',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json'
);

Then the query:
select
  window_start,
  window_end,
  username,
  count(1) zongNum,
  sum(price) totalMoney
from table(TUMBLE(TABLE table1, DESCRIPTOR(event_time), INTERVAL '60' second))
group by window_start, window_end, username;

A common error worth sharing:

Exception in thread "main" org.apache.flink.table.api.ValidationException: SQL validation failed. The window function TUMBLE(TABLE table_name, DESCRIPTOR(timecol), datetime interval) requires the timecol is a time attribute type, but is VARCHAR(2147483647).
	at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:156)
	at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:107)

This means the column passed to the window function is not a time attribute: it was parsed as VARCHAR. The column must be a TIMESTAMP(3) time attribute, and since event time is used, a watermark must be declared on it; otherwise the query fails.
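If the field arrives as a plain string instead, one fix is to keep the raw column and derive the time attribute from it. A minimal sketch (the computed column name rowtime is my own choice; everything else matches the table above):

CREATE TABLE table1 (
  `username` STRING,
  `price` INT,
  `event_time` STRING,
  `rowtime` AS TO_TIMESTAMP(`event_time`, 'yyyy-MM-dd HH:mm:ss'),
  WATERMARK FOR `rowtime` AS `rowtime` - INTERVAL '3' SECOND
) WITH (
  'connector' = 'kafka',
  'topic' = 'topic1',
  'properties.bootstrap.servers' = 'bigdata01:9092',
  'properties.group.id' = 'g1',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json'
);

The window TVF would then use DESCRIPTOR(rowtime) instead of DESCRIPTOR(event_time).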

Requirement: using a tumbling window over event time, compute each user's total spend every 1 minute.

package com.bigdata.day08;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

/**
 * @description: event-time tumbling window (TUMBLE) demo
 * @program:FlinkDemo
 * @author: 闫哥
 * @create:2023-11-28 14:12:28
 **/
public class _03EventTimeGunDongWindowDemo {

    public static void main(String[] args) throws Exception {

        //1. env - set up the environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        StreamTableEnvironment tenv = StreamTableEnvironment.create(env);

        //2. create the table
        tenv.executeSql("CREATE TABLE table1 (\n" +
                        "  `username` String,\n" +
                        "  `price` int,\n" +
                        "  `event_time` TIMESTAMP(3),\n" +
                        "   watermark for event_time as event_time - interval '3' second\n" +
                        ") WITH (\n" +
                        "  'connector' = 'kafka',\n" +
                        "  'topic' = 'topic1',\n" +
                        "  'properties.bootstrap.servers' = 'bigdata01:9092',\n" +
                        "  'properties.group.id' = 'testGroup1',\n" +
                        "  'scan.startup.mode' = 'group-offsets',\n" +
                        "  'format' = 'json'\n" +
                        ")");
        //3. run the windowed aggregation

        tenv.executeSql("select \n" +
                        "   window_start,\n" +
                        "   window_end,\n" +
                        "   username,\n" +
                        "   count(1) zongNum,\n" +
                        "   sum(price) totalMoney \n" +
                        "   from table(TUMBLE(TABLE table1, DESCRIPTOR(event_time), INTERVAL '60' second))\n" +
                        "group by window_start,window_end,username").print();
        //4. sink - print() above already acts as the sink

        //5. execute - executeSql(...).print() submits and runs the job itself,
        //   so a separate env.execute() call is not needed here
    }
}

The results (given the test data above):
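A sketch of what to expect (not a verbatim console capture): each 1-minute window fires once the watermark, event_time - 3s, passes its end, so for user zs the fired windows are roughly:

[10:10, 10:11): zongNum=3, totalMoney=55
[10:11, 10:12): zongNum=2, totalMoney=40
[10:12, 10:13): zongNum=1, totalMoney=20
[11:12, 11:13): zongNum=2, totalMoney=40
[12:12, 12:13): zongNum=1, totalMoney=20

The 2023-07-18 12:12:04 record opens one more window, which only fires after even later data pushes the watermark past its end.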

Now test a sliding (hop) window: every 10 seconds, aggregate the previous 1 minute of data:

package com.bigdata.day08;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

/**
 * @description: event-time sliding window (HOP) demo
 * @program:FlinkDemo
 * @author: 闫哥
 * @create:2023-11-28 14:12:28
 **/
public class _03EventTimeHuaDongWindowDemo {

    public static void main(String[] args) throws Exception {

        //1. env - set up the environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        StreamTableEnvironment tenv = StreamTableEnvironment.create(env);

        //2. create the table
        tenv.executeSql("CREATE TABLE table1 (\n" +
                "  `username` String,\n" +
                "  `price` int,\n" +
                "  `event_time` TIMESTAMP(3),\n" +
                "   watermark for event_time as event_time - interval '3' second\n" +
                ") WITH (\n" +
                "  'connector' = 'kafka',\n" +
                "  'topic' = 'topic1',\n" +
                "  'properties.bootstrap.servers' = 'bigdata01:9092',\n" +
                "  'properties.group.id' = 'testGroup1',\n" +
                "  'scan.startup.mode' = 'group-offsets',\n" +
                "  'format' = 'json'\n" +
                ")");
        //3. run the windowed aggregation

        tenv.executeSql("select \n" +
                "   window_start,\n" +
                "   window_end,\n" +
                "   username,\n" +
                "   count(1) zongNum,\n" +
                "   sum(price) totalMoney \n" +
                "   from table(HOP(TABLE table1, DESCRIPTOR(event_time), INTERVAL '10' second,INTERVAL '60' second))\n" +
                "group by window_start,window_end,username").print();
        //4. sink - print() above already acts as the sink

        //5. execute - executeSql(...).print() submits and runs the job itself,
        //   so a separate env.execute() call is not needed here
    }
}

The results are as shown in the figure; note that with a 60-second window sliding every 10 seconds, each record lands in 6 overlapping windows. Next, the cumulate window: within a 1-day cycle, emit the accumulated result every hour:

package com.bigdata.day08;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

/**
 * @description: event-time cumulate window (CUMULATE) demo
 * @program:FlinkDemo
 * @author: 闫哥
 * @create:2023-11-28 14:12:28
 **/
public class _03EventTimeLeiJiWindowDemo {

    public static void main(String[] args) throws Exception {

        //1. env - set up the environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        StreamTableEnvironment tenv = StreamTableEnvironment.create(env);

        //2. create the table
        tenv.executeSql("CREATE TABLE table1 (\n" +
                "  `username` String,\n" +
                "  `price` int,\n" +
                "  `event_time` TIMESTAMP(3),\n" +
                "   watermark for event_time as event_time - interval '3' second\n" +
                ") WITH (\n" +
                "  'connector' = 'kafka',\n" +
                "  'topic' = 'topic1',\n" +
                "  'properties.bootstrap.servers' = 'bigdata01:9092',\n" +
                "  'properties.group.id' = 'testGroup1',\n" +
                "  'scan.startup.mode' = 'group-offsets',\n" +
                "  'format' = 'json'\n" +
                ")");
        //3. run the windowed aggregation

        tenv.executeSql("select \n" +
                "   window_start,\n" +
                "   window_end,\n" +
                "   username,\n" +
                "   count(1) zongNum,\n" +
                "   sum(price) totalMoney \n" +
                "   from table(CUMULATE(TABLE table1, DESCRIPTOR(event_time), INTERVAL '1' hours,INTERVAL '1' days))\n" +
                "group by window_start,window_end,username").print();
        //4. sink - print() above already acts as the sink

        //5. execute - executeSql(...).print() submits and runs the job itself,
        //   so a separate env.execute() call is not needed here
    }
}

The cumulate window demo output:
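For orientation (my reading of the CUMULATE semantics above, not captured output): with a step of 1 hour and a max size of 1 day, all windows in one cycle share the same start and grow by one hour at a time, e.g.

[2023-07-17 00:00, 2023-07-17 01:00)
[2023-07-17 00:00, 2023-07-17 02:00)
...
[2023-07-17 00:00, 2023-07-18 00:00)

so each hourly firing reports the running total accumulated since the start of the day.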

Processing Time

Test data:

{"username":"zs","price":20}
{"username":"lisi","price":15}
{"username":"lisi","price":20}
{"username":"zs","price":20}
{"username":"zs","price":20}
{"username":"zs","price":20}
{"username":"zs","price":20}
/**
 * Tumbling window size: 1 minute (processing time, so no watermark delay applies)
 *
 * {"username":"zs","price":20}
 * {"username":"lisi","price":15}
 * {"username":"lisi","price":20}
 * {"username":"zs","price":20}
 * {"username":"zs","price":20}
 * {"username":"zs","price":20}
 * {"username":"zs","price":20}
 *
 */
package com.bigdata.day08;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

/**
 * @description: processing-time tumbling window demo
 * @program:FlinkDemo
 * @author: 闫哥
 * @create:2023-11-28 14:12:28
 **/
public class _04ProcessingTimeGunDongWindowDemo {

    public static void main(String[] args) throws Exception {

        //1. env - set up the environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        StreamTableEnvironment tenv = StreamTableEnvironment.create(env);

        //2. create the table
        tenv.executeSql("CREATE TABLE table1 (\n" +
                "  `username` String,\n" +
                "  `price` int,\n" +
                "  `event_time` as proctime()\n" +
                ") WITH (\n" +
                "  'connector' = 'kafka',\n" +
                "  'topic' = 'topic1',\n" +
                "  'properties.bootstrap.servers' = 'bigdata01:9092',\n" +
                "  'properties.group.id' = 'testGroup1',\n" +
                "  'scan.startup.mode' = 'group-offsets',\n" +
                "  'format' = 'json'\n" +
                ")");
        //3. run the windowed aggregation

        tenv.executeSql("select \n" +
                "   window_start,\n" +
                "   window_end,\n" +
                "   username,\n" +
                "   count(1) zongNum,\n" +
                "   sum(price) totalMoney \n" +
                "   from table(TUMBLE(TABLE table1, DESCRIPTOR(event_time), INTERVAL '60' second ))\n" +
                "group by window_start,window_end,username").print();
        //4. sink - print() above already acts as the sink

        //5. execute - executeSql(...).print() submits and runs the job itself,
        //   so a separate env.execute() call is not needed here
    }
}

Results:

The results take a full minute to appear: a processing-time window only fires when the wall clock reaches the window end, so be patient!

Windows come in tumbling and sliding flavors, and time comes in event time and processing time; combining them pairwise gives four cases.

Below is sliding window + processing time:

package com.bigdata.sql;

import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

/**
 * @description: processing-time sliding window (HOP) demo
 * @program:FlinkDemo
 * @author: 闫哥
 * @create:2024-11-29 14:28:19
 **/
public class _04_FlinkSQLProcessTime_HOP {

    public static void main(String[] args) throws Exception {

        //1. env - set up the environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setRuntimeMode(RuntimeExecutionMode.AUTOMATIC);
        // obtain a table environment from env
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        tEnv.executeSql("CREATE TABLE table1 (\n" +
                        "  `username` string,\n" +
                        "  `price` int,\n" +
                        "  `event_time` as proctime() \n"+
                        ") WITH (\n" +
                        "  'connector' = 'kafka',\n" +
                        "  'topic' = 'topic1',\n" +
                        "  'properties.bootstrap.servers' = 'bigdata01:9092',\n" +
                        "  'properties.group.id' = 'g1',\n" +
                        "  'scan.startup.mode' = 'latest-offset',\n" +
                        "  'format' = 'json'\n" +
                        ")");

        // do not append a trailing ';' inside the statement
        tEnv.executeSql("select \n" +
                        "   window_start,\n" +
                        "   window_end,\n" +
                        "   username,\n" +
                        "   count(1) zongNum,\n" +
                        "   sum(price) totalMoney \n" +
                        "   from table(HOP(TABLE table1, DESCRIPTOR(event_time),INTERVAL '10' second, INTERVAL '60' second))\n" +
                        "group by window_start,window_end,username").print();


        //5. execute - executeSql(...).print() submits and runs the job itself,
        //   so a separate env.execute() call is not needed here
    }
}

If no data shows up in your console during testing and the windows never fire, try the following:

1. Recreate a new topic with a single partition (with multiple partitions, idle partitions can hold the watermark back, so windows never fire); see the commands below.

2. Spell out the full list of Kafka bootstrap servers: bigdata01:9092,bigdata02:9092,bigdata03:9092
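For step 1, a minimal sketch of recreating the topic with a single partition (assuming the stock Kafka CLI scripts, Kafka 2.2+, on the broker host; topic name as used above):

kafka-topics.sh --bootstrap-server bigdata01:9092 --delete --topic topic1
kafka-topics.sh --bootstrap-server bigdata01:9092 --create --topic topic1 --partitions 1 --replication-factor 1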

2. Window TopN (not a new technique)

Requirement: within each hour, find the Top 3 pages by click count.

Test data:
{"ts": "2023-09-05 12:00:00", "page_id": 1, "clicks": 100}
{"ts": "2023-09-05 12:01:00", "page_id": 2, "clicks": 90}
{"ts": "2023-09-05 12:10:00", "page_id": 3, "clicks": 110}
{"ts": "2023-09-05 12:20:00", "page_id": 4, "clicks": 23}
{"ts": "2023-09-05 12:30:00", "page_id": 5, "clicks": 456}
{"ts": "2023-09-05 13:10:00", "page_id": 5, "clicks": 456}
If there were no hourly requirement and we just wanted the overall Top 3 pages by clicks, the SQL would be as follows:
select * from (
  select
    page_id,
    totalSum,
    row_number() over (order by totalSum desc) px
  from (
    select
      page_id,
      sum(clicks) totalSum
    from kafka_page_clicks
    group by page_id
  )
) where px <= 3;
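With the test data above, this continuously updating query would converge to (page 5 occurs twice, 456 + 456 = 912):

page_id  totalSum  px
5        912       1
3        110       2
1        100       3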

Building on that SQL, here is the tumbling-window version:

select
  window_start,
  window_end,
  page_id,
  sum(clicks) totalSum
from table(
  tumble(table kafka_page_clicks, descriptor(ts), INTERVAL '1' HOUR)
)
group by window_start, window_end, page_id;


And on top of that, the ranking:
select
  window_start,
  window_end,
  page_id,
  pm
from (
  select
    window_start,
    window_end,
    page_id,
    row_number() over (partition by window_start, window_end order by totalSum desc) pm
  from (
    select
      window_start,
      window_end,
      page_id,
      sum(clicks) totalSum
    from table(
      tumble(table kafka_page_clicks, descriptor(ts), INTERVAL '1' HOUR)
    )
    group by window_start, window_end, page_id
  ) t2
) t1
where pm <= 3;
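With the test data, only the [12:00, 13:00) window fires (the 13:10:00 record pushes the watermark past 13:00), so the expected output is roughly:

window_start         window_end           page_id  pm
2023-09-05 12:00:00  2023-09-05 13:00:00  5        1
2023-09-05 12:00:00  2023-09-05 13:00:00  3        2
2023-09-05 12:00:00  2023-09-05 13:00:00  1        3

The [13:00, 14:00) window stays open until later data arrives.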

Now the table DDL; recall the shape of one record (the standalone DDL below uses a 3-second watermark delay, while the Java program further down uses 10 seconds; either works):

{"ts": "2023-09-05 12:00:00", "page_id": 1, "clicks": 100}

CREATE TABLE kafka_page_clicks (
  `ts` TIMESTAMP(3),
  `page_id` int,
  `clicks` int,
  watermark for ts as ts - interval '3' second
) WITH (
  'connector' = 'kafka',
  'topic' = 'topic1',
  'properties.bootstrap.servers' = 'bigdata01:9092',
  'properties.group.id' = 'g1',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json'
);

The complete program:
package com.bigdata.day08;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

/**
 * @description: window Top-N demo
 * @program:FlinkDemo
 * @author: 闫哥
 * @create:2023-11-28 15:23:46
 **/
public class _05TopNDemo {

    public static void main(String[] args) throws Exception {

        //1. env - set up the environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // (IDEA shortcuts: ctrl+y deletes the current line, ctrl+d duplicates it)
        StreamTableEnvironment tenv = StreamTableEnvironment.create(env);

        //2. source - load the data
        // Note: ts is a full date-time value, so it must be declared as TIMESTAMP in the DDL,
        // otherwise the WATERMARK clause fails; and since event time is used, a WATERMARK is required
        tenv.executeSql("CREATE TABLE kafka_page_clicks (" +
                "    `ts` TIMESTAMP(3),\n" +
                "    page_id INT,\n" +
                "    clicks INT,\n" +
                "  WATERMARK FOR ts AS ts - INTERVAL '10' SECOND \n" +
                ") WITH (\n" +
                "    'connector' = 'kafka',\n" +
                "    'topic' = 'topic1',\n" +
                "    'properties.bootstrap.servers' = 'bigdata01:9092',\n" +
                "   'scan.startup.mode' = 'group-offsets',\n" +
                "    'format' = 'json'\n" +
                ")");


        tenv.executeSql("select \n" +
                "   window_start,\n" +
                "   window_end,\n" +
                "   page_id,\n" +
                "   pm\n" +
                "  from   (\n" +
                "select \n" +
                "    window_start,\n" +
                "    window_end,\n" +
                "    page_id,\n" +
                "    row_number() over(partition by window_start,window_end order by totalSum desc ) pm\n" +
                "  from (\n" +
                "select \n" +
                "    window_start,\n" +
                "    window_end,\n" +
                "    page_id,\n" +
                "    sum(clicks) totalSum  \n" +
                "    from \n" +
                "   table ( \n" +
                "     tumble( table kafka_page_clicks, descriptor(ts), INTERVAL '1' HOUR ) \n" +
                "         ) \n" +
                "    group by window_start,window_end,page_id ) t2 ) t1  where pm <= 3").print();
        //4. sink - print() above already acts as the sink

        //5. execute - executeSql(...).print() submits and runs the job itself,
        //   so a separate env.execute() call is not needed here
    }
}

The final run output looks like this:
