This article walks through using DuckDB to transfer data seamlessly from a MySQL database into Parquet files. The approach is more efficient and convenient than the traditional pandas-based method, and it can be driven either from the DuckDB CLI or programmatically from Python; the core SQL and the overall process are identical in both cases.
Advantages of the Parquet format
The Parquet file format is an open-source columnar storage format developed by the Apache Parquet project within the Apache Hadoop ecosystem.
Its main advantages are the following.
Columnar storage: only the columns that are needed are read, which reduces I/O volume and speeds up queries. For example, in a wide table with many columns, if a query touches only a few of them, Parquet can locate and read just those columns instead of scanning entire rows as a row-oriented format would.
High compression ratio: because data is stored by column, more effective compression encodings such as run-length encoding and delta encoding can be applied, typically saving substantial space compared with the same data in a row-oriented format.
Self-describing schema: a Parquet file carries its own metadata, and the schema and structure can be parsed directly from the data file. This makes the data easier to read and understand, and simplifies sharing and exchange between systems.
Data skipping: at query time, data blocks irrelevant to the query conditions can be skipped quickly, reducing unnecessary scans and improving query efficiency — an advantage that is especially noticeable on large datasets.
Predicate pushdown: filter conditions can be pushed down to the storage layer instead of being evaluated in the query engine, which reduces data movement and processing and speeds up queries.
Efficient aggregation: thanks to columnar storage, aggregations such as sums and counts can access just the columns they need, outperforming row-oriented formats.
Cross-platform support: Parquet files can be read and written by many data processing tools and platforms, such as Apache Spark, Apache Hive, Apache Impala, and Python's pandas, which improves data portability and interoperability.
Complex data types and structures: nested data such as JSON, as well as arrays, structs, and other complex types, can be stored and queried efficiently.
Broad language support: libraries and interfaces exist for many programming languages, including Java, Python, C++, and C#, making it easy to read, write, and process the data in different environments.
Schema evolution: the schema of a Parquet file can be modified without rewriting the whole file, so adding new columns or changing the type of existing columns is straightforward as the data structure changes.
Governance-friendly: fine-grained access control and data encryption are supported, making the format suitable for sensitive data and helpful for data governance and compliance.
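Several of these properties can be observed directly from DuckDB. The following sketch assumes a Parquet file named film.parquet (the one produced later in this article); parquet_schema() is DuckDB's built-in function for inspecting a file's embedded metadata:

```sql
-- Column pruning: only the two requested columns are read from disk,
-- not the full rows.
SELECT title, length
FROM read_parquet('film.parquet')
LIMIT 5;

-- Self-describing schema: the metadata travels with the file and can be
-- inspected without any external catalog.
SELECT name, type
FROM parquet_schema('film.parquet');
```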
Installing the MySQL extension
The mysql extension lets DuckDB read from and write to a running MySQL instance. Data can be queried directly from the underlying MySQL database, and tables can be loaded from MySQL into DuckDB and vice versa.
INSTALL mysql;
LOAD mysql;
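Before attaching, you can optionally verify that the extension is available; duckdb_extensions() is a built-in DuckDB table function that reports install and load status:

```sql
-- Confirm the mysql extension is installed and loaded.
SELECT extension_name, installed, loaded
FROM duckdb_extensions()
WHERE extension_name = 'mysql';
```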
Configuring the MySQL connection
The MySQL connection string specifies the connection parameters as a set of key=value pairs. Any option that is not provided falls back to its default, as shown in the table below. Connection information can also be supplied through environment variables: if an option is not given explicitly, the mysql extension will try to read it from the corresponding environment variable.
| Setting | Default | Environment variable |
|---|---|---|
| database | NULL | MYSQL_DATABASE |
| host | localhost | MYSQL_HOST |
| password | | MYSQL_PWD |
| port | 0 | MYSQL_TCP_PORT |
| socket | NULL | MYSQL_UNIX_PORT |
| user | ⟨current user⟩ | MYSQL_USER |
| ssl_mode | preferred | |
| ssl_ca | | |
| ssl_capath | | |
| ssl_cert | | |
| ssl_cipher | | |
| ssl_crl | | |
| ssl_crlpath | | |
| ssl_key | | |
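As the table suggests, a connection can also be configured entirely through the environment. A minimal sketch, assuming the variables were exported in the shell that launched DuckDB (the alias mysql_db is an arbitrary illustrative choice):

```sql
-- With MYSQL_HOST, MYSQL_TCP_PORT, MYSQL_USER, MYSQL_PWD and
-- MYSQL_DATABASE exported, the connection string can be left empty;
-- unspecified options are read from the environment variables above.
ATTACH '' AS mysql_db (TYPE MYSQL);
```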
To make a MySQL database accessible to DuckDB, use the ATTACH command together with the connection parameters:
ATTACH 'host=localhost user=root password=passw0rd port=3306 database=saklia' AS saklia (TYPE MYSQL);
USE saklia;
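Once attached, you can confirm what DuckDB can see before copying anything:

```sql
-- List attached catalogs and every table visible through them.
SHOW DATABASES;
SHOW ALL TABLES;
```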
Copying MySQL data into DuckDB
D create or replace table film_copy as from saklia.sakila.film;
D desc film_copy;
{"column_name":"film_id","column_type":"USMALLINT","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"title","column_type":"VARCHAR","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"description","column_type":"VARCHAR","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"release_year","column_type":"INTEGER","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"language_id","column_type":"UTINYINT","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"original_language_id","column_type":"UTINYINT","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"rental_duration","column_type":"UTINYINT","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"rental_rate","column_type":"DECIMAL(4,2)","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"length","column_type":"USMALLINT","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"replacement_cost","column_type":"DECIMAL(5,2)","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"rating","column_type":"VARCHAR","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"special_features","column_type":"VARCHAR","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"last_update","column_type":"TIMESTAMP WITH TIME ZONE","null":"YES","key":null,"default":null,"extra":null}
## Check the record count
D select count(*) from film_copy;
{"count_star()":1000}
Exporting data in Parquet format
Use the COPY ... TO statement to export the file, then check the structure and record count:
D copy film_copy to 'film.parquet';
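COPY also accepts explicit Parquet options, and because the attached MySQL catalog is directly queryable, the intermediate DuckDB table can be skipped entirely. A sketch (the ZSTD codec, the row-group size, and the file name film_direct.parquet are illustrative choices, not part of the original session):

```sql
-- Export with explicit Parquet options: compression codec and row-group size.
COPY film_copy TO 'film.parquet'
    (FORMAT PARQUET, COMPRESSION ZSTD, ROW_GROUP_SIZE 100000);

-- Or export straight from the attached MySQL table, skipping the
-- intermediate DuckDB copy.
COPY (FROM saklia.sakila.film) TO 'film_direct.parquet' (FORMAT PARQUET);
```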
## Confirm and verify
D select count(*) from read_parquet('film.parquet');
{"count_star()":1000}
D desc from read_parquet('film.parquet');
{"column_name":"film_id","column_type":"USMALLINT","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"title","column_type":"VARCHAR","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"description","column_type":"VARCHAR","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"release_year","column_type":"INTEGER","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"language_id","column_type":"UTINYINT","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"original_language_id","column_type":"UTINYINT","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"rental_duration","column_type":"UTINYINT","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"rental_rate","column_type":"DECIMAL(4,2)","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"length","column_type":"USMALLINT","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"replacement_cost","column_type":"DECIMAL(5,2)","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"rating","column_type":"VARCHAR","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"special_features","column_type":"VARCHAR","null":"YES","key":null,"default":null,"extra":null}
{"column_name":"last_update","column_type":"TIMESTAMP WITH TIME ZONE","null":"YES","key":null,"default":null,"extra":null}
The results match what we saw earlier. For large datasets, we can also produce Hive-partitioned output; reference code follows:
COPY (
    SELECT *,
        date_trunc('day', rundate) AS rundate_truncated
    FROM bike_readings_april
) TO 'bike_readings' (
    FORMAT PARQUET,
    PARTITION_BY (rundate_truncated),
    OVERWRITE_OR_IGNORE true
);
Splitting the export into multiple Parquet files simplifies storage and improves the performance of downstream systems by reducing the amount of data they must scan and process. Hive partitioning is commonly used in data-lake storage for efficient retrieval by distributed SQL query engines, which can use the partition key to reduce the number of files that need to be scanned.
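Reading the partitioned output back shows how the key becomes queryable. A sketch assuming the bike_readings layout produced above:

```sql
-- hive_partitioning = true exposes the directory key (rundate_truncated)
-- as a regular column; filtering on it lets DuckDB prune entire partition
-- directories instead of scanning every file.
SELECT rundate_truncated, count(*)
FROM read_parquet('bike_readings/*/*.parquet', hive_partitioning = true)
GROUP BY rundate_truncated;
```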
Summary
Navigating complex data extraction and format conversion takes not just skill but the right tools. As this article has shown, DuckDB streamlines the export process, providing a seamless bridge from MySQL to Parquet. By preserving data integrity, handling data types automatically, and eliminating tedious type conversions, DuckDB offers data engineers a compelling solution.