Hive in Practice --- E-Commerce Consumer Behavior Analysis

Contents

Data Structure
  Customer table
  Transaction table
  Store table
  Review table

Uploading the Data
  Create HDFS directories for the data
  Upload the local files to HDFS

Creating External Tables
  Create the database
  Create the tables

Data Cleaning
  Generate new IDs for duplicate records in transaction_details
  Filter out unrated records in store_review
  Identify PII (personally identifiable information) fields and encrypt them
  Repartition the transaction data by month (YYYY-MM)

Customer Analysis
  1. Find the credit card type customers use most
  2. Find the top five job titles among customers
  3. Find the credit card most used by women in the United States
  4. Count customers by gender and country

Transaction Analysis
  1. Total revenue per month
  2. Total revenue per quarter
  3. Total revenue per year
  4. Total revenue per weekday
  5. Total revenue per time bucket (the time field needs cleaning first)
  6. Average spend per time bucket
  7. Average spend per weekday
  8. Number of transactions per year, month, and day
  9. Top 10 customers by transaction count
  10. Top 10 customers by total spend
  11. Customer with the fewest transactions in the period
  12. Unique customers per quarter
  13. Unique customers per week
  14. Maximum of customers' average spend over the whole period
  15. Customer who spent the most each month
  16. Customer with the most visits each month
  17. Top 5 most popular products by total sales value
  18. Top 5 best-selling products by purchase frequency
  19. Top 5 most popular products by number of customers

Store Analysis
  1. Most popular store by customer traffic
  2. Most popular store by customer spend
  3. Most popular store by number of transactions
  4. Most popular product in each store by unique customer count
  5. Employee-to-customer ratio for each store
  6. Revenue of each store by year and month
  7. Pie chart of total revenue per store
  8. Busiest time bucket for each store
  9. Loyal customers of each store
  10. Star store by highest revenue per employee

Review Analysis
  1. Find conflicting transaction mappings in ext_store_review
  2. Review coverage rate
  3. Customer distribution by review score
  4. Customer distribution by transactions
  5. Does a customer's best review always go to the same store?

Tool: Zeppelin

Data Structure

Customer table (customer_details)

field           details
customer_id     Int, 1 - 500
first_name      string
last_name       string
email           string, such as willddy@gmail.com
gender          string, Male or Female
address         string
country         string
language        string
job             string, job title/position
credit_type     string, credit card type, such as visa
credit_no       string, credit card number

Transaction table (transaction_details)

field           details
transaction_id  Int, 1 - 1000
customer_id     Int, 1 - 500
store_id        Int, 1 - 5
price           decimal, such as 5.08
product         string, the item bought
date            string, when the purchase was made
time            string, what time the purchase was made

Store table (store_details)

field            details
store_id         Int, 1 - 5
store_name       string
employee_number  Int, number of employees in the store

Review table (store_review)

field           details
transaction_id  Int, 1 - 8000
store_id        Int, 1 - 5
review_score    Int, 1 - 5

Uploading the Data

Create HDFS directories for the data

%sh
hdfs dfs -mkdir -p /shopping/data/customer
hdfs dfs -mkdir -p /shopping/data/store
hdfs dfs -mkdir -p /shopping/data/review
hdfs dfs -mkdir -p /shopping/data/transaction

Upload the local files to HDFS

%sh
cd /opt/stufile/storetransaction
ls -al

hdfs dfs -put ./customer_details.csv /shopping/data/customer
hdfs dfs -put ./store_details.csv /shopping/data/store
hdfs dfs -put ./store_review.csv /shopping/data/review
hdfs dfs -put ./transaction_details.csv /shopping/data/transaction

Creating External Tables

Create the database

%hive
-- drop the shopping database if it already exists
drop database if exists shopping cascade;
-- create the database
create database if not exists shopping;

Create the tables

%hive
use shopping;
create external table if not exists ext_customer_details
(
    customer_id string,
    first_name  string,
    last_name   string,
    email       string,
    gender      string,
    address     string,
    country     string,
    language    string,
    job         string,
    credit_type string,
    credit_no   string
)
    row format serde 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
    location '/shopping/data/customer'
    tblproperties ('skip.header.line.count' = '1');

create external table if not exists ext_transaction_details
(
    transaction_id string,
    customer_id    string,
    store_id       string,
    price          decimal(8, 2),
    product        string,
    purchase_date  string,
    purchase_time  string
) row format serde 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
    location '/shopping/data/transaction'
    tblproperties ('skip.header.line.count' = '1');

drop table if exists ext_store_details;
create external table if not exists ext_store_details
(
    store_id        string,
    store_name      string,
    employee_number string

) row format serde 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
    location '/shopping/data/store'
    tblproperties ('skip.header.line.count' = '1');

drop table if exists ext_store_review;
create external table if not exists ext_store_review
(
    store_id       string,
    transaction_id string,
    review_score   string
) row format serde 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
    location '/shopping/data/review'
    tblproperties ('skip.header.line.count' = '1');

TIP:

The default separator, quote, and escape characters for OpenCSVSerde are the comma (,), the double quote ("), and the backslash (\) respectively.

If you describe a table whose row format serde is org.apache.hadoop.hive.serde2.OpenCSVSerde, every column is shown as string, no matter which types were declared when the table was created.

tblproperties ('skip.header.line.count' = '1') tells the external table to skip (ignore) the header line when reading the data files.
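If the raw files ever use a different delimiter or quoting style, the same serde properties can be set explicitly. A minimal sketch (the values shown are simply the defaults spelled out; the table name ext_customer_details_demo is only for illustration):

%hive
use shopping;
-- same schema as ext_customer_details, with the OpenCSVSerde properties written out explicitly
create external table if not exists ext_customer_details_demo
(
    customer_id string,
    first_name  string,
    last_name   string,
    email       string,
    gender      string,
    address     string,
    country     string,
    language    string,
    job         string,
    credit_type string,
    credit_no   string
)
    row format serde 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
    with serdeproperties (
        'separatorChar' = ',',
        'quoteChar' = '"',
        'escapeChar' = '\\'
    )
    location '/shopping/data/customer'
    tblproperties ('skip.header.line.count' = '1');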

Data Cleaning

Generate new IDs for duplicate records in transaction_details

%hive
use shopping;
with basetb as (
    select row_number() over (partition by transaction_id order by transaction_id) as rn
         , transaction_id
         , customer_id
         , store_id
         , price
         , product
         , purchase_date
         , purchase_time
         , substr(purchase_date, 0, 7)                                             as purchase_month
    from ext_transaction_details),
     basetb2 as (
         select `if`(rn = 1, transaction_id, concat(transaction_id, '_fix', rn)) transaction_id
              , customer_id
              , store_id
              , price
              , product
              , purchase_date
              , purchase_time
              , purchase_month
         from basetb)
select *
from basetb2
where transaction_id like '%fix%';

Because the full result set is too large to display completely, a filter (where transaction_id like '%fix%') is appended at the end so that only the rows whose IDs were regenerated are shown.
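It can also help to confirm first that duplicate transaction_ids really exist in the raw data; a quick check (not part of the original post):

%hive
use shopping;
-- list transaction_ids that occur more than once in the raw external table
select transaction_id, count(*) as cnt
from ext_transaction_details
group by transaction_id
having count(*) > 1
order by cnt desc
limit 10;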

Filter out unrated records in store_review

Keep only the records that do have a review score

%hive
use shopping;
create view if not exists vm_store_review as
select *
from ext_store_review
where review_score <> '';


select * from vm_store_review;

Identify PII (personally identifiable information) fields and encrypt them

%hive
use shopping;
drop view if exists vm_customer_details;
create view vm_customer_details as
select customer_id,
       first_name,
       unbase64(last_name)                            as last_name,
       unbase64(email)                                as email,
       gender,
       unbase64(address)                              as address,
       unbase64(concat(unbase64(credit_no), 'hello')) as credit_no -- encode twice for extra obfuscation
from ext_customer_details;
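The view above only obfuscates the values, so for genuinely sensitive fields a one-way transformation may be preferable. A sketch of an alternative view, assuming Hive 2.1+ where the built-in sha2() and mask() functions are available (this is not part of the original post):

%hive
use shopping;
-- alternative: irreversible hashing for the card number, character masking for name/email/address
create view if not exists vm_customer_details_hashed as
select customer_id,
       first_name,
       mask(last_name)      as last_name, -- upper-case letters -> X, lower-case -> x, digits -> n
       mask(email)          as email,
       gender,
       mask(address)        as address,
       sha2(credit_no, 256) as credit_no  -- one-way SHA-256 hash
from ext_customer_details;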

Repartition the transaction data by month (YYYY-MM)
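
The insert overwrite below writes into a managed, partitioned table that the original post never shows being created. A minimal sketch of the assumed target table (the columns mirror ext_transaction_details; the purchase_month partition column is inferred from the insert statement):

%hive
use shopping;
-- target table for the monthly repartitioned data (assumed schema)
create table if not exists transaction_details
(
    transaction_id string,
    customer_id    string,
    store_id       string,
    price          decimal(8, 2),
    product        string,
    purchase_date  string,
    purchase_time  string
)
    partitioned by (purchase_month string);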

%hive
use shopping;
set hive.exec.dynamic.partition.mode=nonstrict;
with basetb as (
    select row_number() over (partition by transaction_id order by transaction_id) as rn
         , transaction_id
         , customer_id
         , store_id
         , price
         , product
         , purchase_date
         , purchase_time
         , substr(purchase_date, 0, 7)                                             as purchase_month
    from ext_transaction_details)
insert overwrite table transaction_details partition (purchase_month)
select `if`(rn = 1, transaction_id, concat(transaction_id, '_fix', rn)) transaction_id
     , customer_id
     , store_id
     , price
     , product
     , purchase_date
     , purchase_time
     , purchase_month
from basetb;


show partitions transaction_details;
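
With the monthly partitions in place, a single month can be read without scanning the whole table. A quick usage sketch (the partition value 2018-01 is only a placeholder; use one of the values returned by show partitions):

%hive
use shopping;
-- read one monthly partition only; partition pruning keeps the scan small
select count(*)             as txn_cnt,
       round(sum(price), 2) as revenue
from transaction_details
where purchase_month = '2018-01';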

Customer Analysis

1. Find the credit card type customers use most

%hive
use shopping;
select credit_type, count(credit_type) counts
from ext_customer_details
group by credit_type
order by counts desc;

2. Find the top five job titles among customers

%hive
use shopping;
select job, count(job) counts
from ext_customer_details
group by job
order by counts desc
limit 5;

3. Find the credit card most used by women in the United States

%hive
use shopping;
select credit_type, count(credit_type) counts
from ext_customer_details
where gender = 'Female' and country='United States'
group by credit_type
order by counts desc;

4. Count customers by gender and country

%hive
use shopping;
select country, gender,count(customer_id)
from ext_customer_details
group by gender, country;

Transaction Analysis

1. Total revenue per month

%hive
use shopping;
select substr(purchase_date,0,7) month,round(sum(price),2) sum
from ext_transaction_details
group by substr(purchase_date,0,7);

2. Total revenue per quarter

%hive
use shopping;
with basetb as (
    select concat_ws('-', cast(year(purchase_date) as string),
                     cast(quarter(purchase_date) as string)) as year_quarter,
           price
    from transaction_details)
select year_quarter ,sum(price) sumMoney from basetb group by year_quarter;

3. Total revenue per year

%hive
use shopping;
select year(purchase_date) years , round(sum(price),2) sum
from ext_transaction_details
group by year(purchase_date);

4. Total revenue per weekday

%hive
use shopping;
with basetb as (
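    -- dayofweek() returns 1 for Sunday through 7 for Saturday, so weekday - 1 maps 1..5 to Monday..Friday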
    select `dayofweek`(purchase_date) weekday, price
    from transaction_details)
select case
           when (weekday - 1) = 1 then 'Monday'
           when (weekday - 1) = 2 then 'Tuesday'
           when (weekday - 1) = 3 then 'Wednesday'
           when (weekday - 1) = 4 then 'Thursday'
           when (weekday - 1) = 5 then 'Friday'
           end as weekday,
       sum(price) sum
from basetb
group by weekday
having weekday between 2 and 6;

5. Total revenue per time bucket (the time field needs cleaning first)

%hive
use shopping;

with basetb1 as (
    select price,
           purchase_time,
           case
               when purchase_time like '%AM' then split(purchase_time, '\\s+')[0]
               when purchase_time like '%PM' then concat_ws(':',
                       cast(`if`((cast(split(purchase_time, ':')[0] as int) + 12) == 24, 0,
                                 cast(split(purchase_time, ':')[0] as int) + 12) as string),
                       split(split(purchase_time, ':')[1], '\\s+')[0])
               else purchase_time
               end time_format
    from transaction_details),
     basetb2 as (select price,
                        purchase_time,
                        (cast(split(time_format, ':')[0] as decimal(4, 2)) +
                         cast(split(time_format, ':')[1] as decimal(4, 2)) / 60) purchase_time_num
                 from basetb1),
     basetb3 as (select price,
                        purchase_time,
                        `if`(purchase_time_num > 5 and purchase_time_num <= 8, 'early morning',
                        `if`(purchase_time_num > 8 and purchase_time_num <= 11, ' morning',
                        `if`(purchase_time_num > 11 and purchase_time_num <= 13, 'noon',
                        `if`(purchase_time_num > 13 and purchase_time_num <= 18, 'afternoon',
                        `if`(purchase_time_num > 18 and purchase_time_num <= 22, 'evening','night'))))) as time_bucket
                 from basetb2)
select time_bucket, sum(price) sum
from basetb3
group by time_bucket;

6. Average spend per time bucket

%hive
use shopping;

with basetb1 as (
    select price,
           purchase_time,
           case
               when purchase_time like '%AM' then split(purchase_time, '\\s+')[0]
               when purchase_time like '%PM' then concat_ws(':',
                       cast(`if`((cast(split(purchase_time, ':')[0] as int) + 12) == 24, 0,
                                 cast(split(purchase_time, ':')[0] as int) + 12) as string),
                       split(split(purchase_time, ':')[1], '\\s+')[0])
               else purchase_time
               end time_format
    from transaction_details),
     basetb2 as (select price,
                        purchase_time,
                        (cast(split(time_format, ':')[0] as decimal(4, 2)) +
                         cast(split(time_format, ':')[1] as decimal(4, 2)) / 60) purchase_time_num
                 from basetb1),
     basetb3 as (select price,
                        purchase_time,
                        `if`(purchase_time_num > 5 and purchase_time_num <= 8, 'early morning',
                        `if`(purchase_time_num > 8 and purchase_time_num <= 11, ' morning',
                        `if`(purchase_time_num > 11 and purchase_time_num <= 13, 'noon',
                        `if`(purchase_time_num > 13 and purchase_time_num <= 18, 'afternoon',
                        `if`(purchase_time_num > 18 and purchase_time_num <= 22, 'evening','night'))))) as time_bucket
                 from basetb2)
select time_bucket, avg(price) avg
from basetb3
group by time_bucket;

7. Average spend per weekday

%hive
use shopping;
with basetb as (
    select `dayofweek`(purchase_date) weekday, price
    from transaction_details)
select case
           when (weekday - 1) = 1 then 'Monday'
           when (weekday - 1) = 2 then 'Tuesday'
           when (weekday - 1) = 3 then 'Wednesday'
           when (weekday - 1) = 4 then 'Thursday'
           when (weekday - 1) = 5 then 'Friday'
           end weekday,
    avg(price) avg
from basetb
group by weekday
having weekday between 2 and 6;

8. Number of transactions per year, month, and day

%hive
use shopping;
select distinct purchase_date,
       purchase_month,
       year(purchase_date),
       count(1) over (partition by year(purchase_date))                                         years,
       count(1) over (partition by year(purchase_date),month(purchase_date))                    months,
       count(1) over (partition by year(purchase_date),month(purchase_date),day(purchase_date)) days
from transaction_details;

9. Top 10 customers by transaction count

%hive
use shopping;
select
customer_id,count(1) as num
from transaction_details
group by customer_id
order by num desc
limit 10

10. Top 10 customers by total spend

%hive
use shopping;
select customer_id,
       sum(price) as sum
from transaction_details
group by customer_id
order by sum desc
limit 10

11. Customer with the fewest transactions in the period

%hive
use shopping;
select customer_id, count(transaction_id)
from transaction_details
group by customer_id
order by count(transaction_id)
limit 1;


12. Unique customers per quarter

%hive
use shopping;
with basetb as (
    select distinct concat_ws('-', cast(year(purchase_date) as string),
                     cast(quarter(purchase_date) as string)) as year_quarter,
           customer_id
    from transaction_details)
select  year_quarter, count(customer_id) counts
from basetb
group by year_quarter;


13. Unique customers per week

%hive
use shopping;
with basetb as (
    select distinct concat(cast(year(purchase_date) as string), '-', cast(weekofyear(purchase_date)as string)) weeks,
                    customer_id
    from transaction_details)
select weeks, count(customer_id) counts
from basetb
group by weeks;


14. Maximum of customers' average spend over the whole period

%hive
use shopping;
select customer_id,avg(price) avgs
from transaction_details
group by customer_id
order by avgs desc
limit 1;


15. Customer who spent the most each month

%hive
use shopping;
with basetb as (
    select purchase_month,
           customer_id,
           sum(price) sum_price
    from transaction_details
    group by purchase_month, customer_id),
     basetb2 as (
         select purchase_month,
                customer_id,
                sum_price,
                row_number() over (partition by purchase_month order by sum_price desc ) rn
         from basetb)
select purchase_month, customer_id, sum_price
from basetb2
where rn = 1;


16. Customer with the most visits each month

%hive
use shopping;
with basetb as (
    select purchase_month,
           customer_id,
           count(customer_id) counts
    from transaction_details
    group by purchase_month, customer_id),
     basetb2 as (
         select purchase_month,
                customer_id,
                counts,
                row_number() over (partition by purchase_month order by counts desc ) rn
         from basetb)
select purchase_month, customer_id, counts
from basetb2
where rn = 1;


17. Top 5 most popular products by total sales value

%hive
use shopping;
select product, sum(price) sum
from transaction_details
group by product
order by sum desc
limit 5;


18. Top 5 best-selling products by purchase frequency

%hive
use shopping;
select product,
count(1) counts
from transaction_details
group by product
order by counts desc
limit 5;


19. Top 5 most popular products by number of customers

%hive
use shopping;
select product,
count(distinct customer_id) counts
from transaction_details
group by product
order by counts desc
limit 5

Store Analysis

1. Most popular store by customer traffic

%hive
use shopping;
select store_name, count(distinct customer_id) counts
from transaction_details td
         join ext_store_details esd on td.store_id = esd.store_id
group by store_name
order by counts desc;

2. Most popular store by customer spend

%hive
use shopping;
select store_name, sum(price) sums
from transaction_details td
         join ext_store_details esd on td.store_id = esd.store_id
group by store_name
order by sums desc;

3. Most popular store by number of transactions

%hive
use shopping;
select store_name, count(td.store_id) counts
from transaction_details td
         join ext_store_details esd on td.store_id = esd.store_id
group by store_name
order by counts desc;

4. Most popular product in each store by unique customer count

%hive
use shopping;
with basetb as (
    select store_id, product, count(distinct customer_id) counts
    from transaction_details
    group by store_id, product),
     basetb2 as (
         select store_id,
                product,
                counts,
                rank() over (partition by store_id order by counts desc ) as rn
         from basetb)
select store_name, product, counts
from basetb2 tb2
         join ext_store_details esd on tb2.store_id = esd.store_id
where rn = 1;

5. Employee-to-customer ratio for each store

%hive
use shopping;
with t1 as (select count(1) c1, store_id
            from transaction_details td
            group by td.store_id)
select t1.store_id,
       esd.store_name,
       concat(substring(cast(esd.employee_number / t1.c1 as decimal(9, 8)) * 100.0, 0, 4), '%') proportion
from t1
         join ext_store_details esd on t1.store_id = esd.store_id;

6. Revenue of each store by year and month

%hive
use shopping;
select distinct *
from (
         select store_id,
                year(purchase_date)                                                                 year,
                sum(price) over (partition by store_id, year(purchase_date))                        sum_year,
                month(purchase_date)                                                                month,
                sum(price) over (partition by store_id, year(purchase_date), month(purchase_date))  sum_month
         from transaction_details
     ) tb;

 

7. Pie chart of total revenue per store

%hive
use shopping;
select store_id,sum(price)
from transaction_details
group by store_id

 

8. Busiest time bucket for each store

%hive
use shopping;
with basetb1 as (
    select store_id,
           customer_id,
           purchase_time,
           case
               when purchase_time like '%AM' then split(purchase_time, '\\s+')[0]
               when purchase_time like '%PM' then concat_ws(':',
                                                            cast(`if`(
                                                                    (cast(split(purchase_time, ':')[0] as int) + 12) == 24,
                                                                    0,
                                                                    (cast(split(purchase_time, ':')[0] as int) + 12)) as string)
                   , split(split(purchase_time, ':')[1], '\\s+')[0])
               else purchase_time
               end time_format
    from transaction_details),
     basetb2 as (select store_id,
                        customer_id,
                        purchase_time,
                        (cast(split(time_format, ':')[0] as decimal(4, 2)) +
                         cast(split(time_format, ':')[1] as decimal(4, 2)) / 60) purchase_time_num
                 from basetb1),
     basetb3 as (select store_id,
                        customer_id,
                        purchase_time,
                        `if`(purchase_time_num > 5 and purchase_time_num <= 8, 'early morning',
                             `if`(purchase_time_num > 8 and purchase_time_num <= 11, 'morning',
                                  `if`(purchase_time_num > 11 and purchase_time_num <= 13, 'noon',
                                       `if`(purchase_time_num > 13 and purchase_time_num <= 18, 'afternoon',
                                            `if`(purchase_time_num > 18 and purchase_time_num <= 22, 'evening',
                                                 'night'))))) as time_bucket
                 from basetb2)
select esd.store_name,
       tb3.time_bucket,
       count(customer_id) counts
from basetb3 tb3
         join ext_store_details esd on tb3.store_id = esd.store_id
group by esd.store_name, time_bucket;

9. Loyal customers of each store

A customer with more than 5 purchases at a store is treated as a loyal customer.

%hive
use shopping;
select *
from (
         select store_id, customer_id, count(1) counts
         from transaction_details
         group by store_id, customer_id) tb
where tb.counts > 5;

 

10. Star store by highest revenue per employee

%hive
use shopping;
with base as
         (
             select store_id, sum(price) s
             from transaction_details
             group by store_id
         )
select base.store_id,
       base.s / store.employee_number en
from base
         join ext_store_details store
              on base.store_id = store.store_id
order by en desc
limit 1;

Review Analysis

1. Find conflicting transaction mappings in ext_store_review

%hive
use shopping;
with basetb as (
    select row_number() over (partition by transaction_id) as row_number1, * from vm_store_review
)
select row_number1, a.transaction_id, a.store_id, b.store_id, a.review_score, b.review_score
from basetb a
         join vm_store_review b on a.transaction_id = b.transaction_id
where row_number1 > 1;

 

2. Review coverage rate

%hive
use shopping;
with t1 as (
    select count(1) c1
    from ext_store_review
    where review_score <> ''
),
     t2 as (
         select count(1) c2
         from ext_store_review
         where review_score = ''
     )
select concat(cast(c1 / (c1 + c2) * 100 as decimal(5, 2)), '%') Coverage
from t1 cross join t2;

 

3. Customer distribution by review score

%hive
use shopping;
select concat(round(sum(case review_score when '1' then 1 else 0 end) / count(*) * 100, 2), '%') as one_score,
       concat(round(sum(case review_score when '2' then 1 else 0 end) / count(*) * 100, 2), '%') as two_score,
       concat(round(sum(case review_score when '3' then 1 else 0 end) / count(*) * 100, 2), '%') as three_score,
       concat(round(sum(case review_score when '4' then 1 else 0 end) / count(*) * 100, 2), '%') as four_score,
       concat(round(sum(case review_score when '5' then 1 else 0 end) / count(*) * 100, 2), '%') as five_score
from ext_store_review;

4. Customer distribution by transactions

By total transaction amount per country

%hive
use shopping;
select country,
       sum(price) sum_price
from transaction_details td
         join ext_customer_details cd on td.customer_id = cd.customer_id
group by cd.country;

 

5. Does a customer's best review always go to the same store?

%hive
use shopping;
select td.store_id, customer_id, count(customer_id) counts
from transaction_details td
         join ext_store_review esr
              on esr.transaction_id = td.transaction_id
where esr.review_score = 5
group by td.store_id, td.customer_id;
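
The result above lists, per customer, how many 5-star reviews went to each store; to answer the yes/no question directly, one more aggregation can check whether all of a customer's 5-star reviews fall in a single store. A sketch building on the same join (not from the original post):

%hive
use shopping;
-- 'yes' means every 5-star review from that customer went to a single store
select td.customer_id,
       count(distinct td.store_id)                        as five_star_stores,
       `if`(count(distinct td.store_id) = 1, 'yes', 'no') as always_same_store
from transaction_details td
         join ext_store_review esr on esr.transaction_id = td.transaction_id
where esr.review_score = '5'
group by td.customer_id;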

 

 
