With the Way but no technique, technique can still be sought; with technique but no Way, one stops at technique.
Seata version used in this series: 2.0.0
Spring Boot version used in this series: 3.2.0
Spring Cloud version used in this series: 2023.0.0
Source code: https://gitee.com/pearl-organization/study-seata-demo
Table of Contents
- 1. Introduction
- 2. Reproducing the Problem
- 3. Client Integration
- 3.1 Adding the Dependency
- 3.2 Configuration
- 3.3 The undo_log Table
- 3.4 Enabling the Global Transaction
- 3.5 Startup
- 4. Testing
1. Introduction
In the previous articles, we deployed the Seata server and integrated it with Nacos, and we also built a microservice project implementing an e-commerce order-placement flow. Next, we will learn how to integrate the Seata client into a Spring Cloud application and solve the distributed transaction problem (using the default AT mode).
2. Reproducing the Problem
Suppose that during the order-placement flow, an exception occurs while deducting the account balance:
@Override
@Transactional
public ObjectResponse decreaseAccount(AccountDTO accountDTO) {
    // Deduct the balance
    int account = baseMapper.decreaseAccount(accountDTO.getUserId(), accountDTO.getAmount().doubleValue());
    // Simulate an exception
    if (1 == 1) {
        throw new RuntimeException("Deduction failed~~");
    }
    ObjectResponse<Object> response = new ObjectResponse<>();
    if (account > 0) {
        response.setStatus(RspStatusEnum.SUCCESS.getCode());
        response.setMessage(RspStatusEnum.SUCCESS.getMessage());
        return response;
    }
    response.setStatus(RspStatusEnum.FAIL.getCode());
    response.setMessage(RspStatusEnum.FAIL.getMessage());
    return response;
}
Before the operation, the database holds an account balance of 10000, a stock of 10000, and 0 orders. Calling the order endpoint http://localhost:8080/business/buy makes the account service throw an exception. Because a local transaction is active, the balance deduction is rolled back; the order service receives the exception from the account service and, likewise protected by its own local transaction, rolls back the inserted order.
However, the stock service itself never throws an exception, so its local transaction commits: the stock is deducted and the data becomes inconsistent:
3. Client Integration
3.1 Adding the Dependency
In this sample project, the following services all participate in the distributed transaction, so each of them needs the Seata client:
Spring Cloud Alibaba already provides a Seata integration starter for the Spring Cloud environment; just add the following dependency:
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
</dependency>
This pulls in the latest 2.0.0 version of the Seata client:
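The starter above carries no explicit version because it is expected to be managed by the Spring Cloud Alibaba BOM. A minimal sketch of that dependency management follows; the BOM version shown (2023.0.0.0-RC1) is an assumption and should be verified against the official compatibility matrix for Spring Cloud 2023.0.0:

```xml
<dependencyManagement>
    <dependencies>
        <!-- Spring Cloud Alibaba BOM; the version here is an assumption -
             check the compatibility matrix for Spring Cloud 2023.0.0 -->
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-alibaba-dependencies</artifactId>
            <version>2023.0.0.0-RC1</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
```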
3.2 Configuration
In the application.yml of every service, add the registry and configuration center settings:
seata:
  # Configuration center
  config:
    type: nacos
    nacos:
      # Fetch the Seata configuration from Nacos (the settings below must match the server side)
      namespace: 7032916a-19f1-482e-a3eb-8a62226c2e4d
      server-addr: 127.0.0.1:8848
      group: SEATA_GROUP
      data-id: seata.properties
  # Registry center
  registry:
    type: nacos
    nacos:
      # Discover the Seata server via Nacos (the settings below must match the server side)
      # Service name under which the Seata server is registered in Nacos
      application: seata-server
      server-addr: 127.0.0.1:8848
      group: DEFAULT_GROUP
      namespace: 7032916a-19f1-482e-a3eb-8a62226c2e4d
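Since the client pulls the rest of its configuration from Nacos, the seata.properties data-id published on the server side normally contains at least the transaction-group mapping that tells the client which server cluster to use. A minimal sketch follows; the cluster name `default` is an assumption that must match your server deployment:

```properties
# Map the client's transaction service group to a Seata server cluster
service.vgroupMapping.default_tx_group=default
```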
3.3 undo_log 表
在AT
模式中,需要在参与全局事务的数据库中添加undo_log
表:
CREATE TABLE `undo_log` (
`id` bigint NOT NULL AUTO_INCREMENT,
`branch_id` bigint NOT NULL,
`xid` varchar(100) CHARACTER SET utf8mb3 COLLATE utf8_general_ci NOT NULL,
`context` varchar(128) CHARACTER SET utf8mb3 COLLATE utf8_general_ci NOT NULL,
`rollback_info` longblob NOT NULL,
`log_status` int NOT NULL,
`log_created` datetime NOT NULL,
`log_modified` datetime NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `ux_undo_log` (`xid`,`branch_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;
Create the undo_log table in each of the seata_account, seata_order, and seata_stock databases.
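While a global transaction is in flight, AT mode writes before/after row images into this table and deletes them once the branch commits or rolls back, so the table is normally empty at rest. A quick way to observe this (a hypothetical inspection query, not part of the sample project):

```sql
-- Rows exist only between branch execution and global commit/rollback
SELECT xid, branch_id, log_status, log_created FROM undo_log;
```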
3.4 Enabling the Global Transaction
The order flow is initiated by the business service, which therefore acts as the transaction initiator (the TM). Add the @GlobalTransactional annotation to the initiating method to start a global transaction:
@GlobalTransactional
public Object handleBusiness() {
    ObjectResponse<Object> objectResponse = new ObjectResponse<>();
    // 1. Business request data
    BusinessDTO businessDTO = new BusinessDTO();
    // omitted..........
}
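The annotation also accepts a few commonly used attributes. A sketch follows; the attribute values here are illustrative, not taken from the sample project:

```java
// Illustrative attributes: a transaction name shown in the Seata console,
// an explicit rollback rule, and a global timeout in milliseconds.
@GlobalTransactional(name = "buy-order", rollbackFor = Exception.class, timeoutMills = 60000)
public Object handleBusiness() {
    // ...
}
```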
3.5 Startup
Start all of the business services and look through the console output for the key Seata-related log lines.
The Nacos registry is in use:
main] i.s.discovery.registry.RegistryFactory : use registry center type: nacos
The client obtains the Seata server address and subscribes to it:
main] com.alibaba.nacos.client.naming : init new ips(1) service: DEFAULT_GROUP@@seata-server@@default -> [{"instanceId":"192.168.142.1#8091#default#DEFAULT_GROUP@@seata-server","ip":"192.168.142.1","port":8091,"weight":1.0,"healthy":true,"enabled":true,"ephemeral":true,"clusterName":"default","serviceName":"DEFAULT_GROUP@@seata-server","metadata":{},"ipDeleteTimeout":30000,"instanceHeartBeatInterval":5000,"instanceHeartBeatTimeOut":15000,"instanceIdGenerator":"simple"}]
main] com.alibaba.nacos.client.naming : current ips:(1) service: DEFAULT_GROUP@@seata-server@@default -> [{"instanceId":"192.168.142.1#8091#default#DEFAULT_GROUP@@seata-server","ip":"192.168.142.1","port":8091,"weight":1.0,"healthy":true,"enabled":true,"ephemeral":true,"clusterName":"default","serviceName":"DEFAULT_GROUP@@seata-server","metadata":{},"ipDeleteTimeout":30000,"instanceHeartBeatInterval":5000,"instanceHeartBeatTimeOut":15000,"instanceIdGenerator":"simple"}]
main] com.alibaba.nacos.client.naming : [SUBSCRIBE-SERVICE] service:seata-server, group:DEFAULT_GROUP, clusters:default
It then opens a Netty connection to the Seata server and sends a TM registration request:
main] i.s.c.r.netty.NettyClientChannelManager : will connect to 192.168.142.1:8091
main] i.s.core.rpc.netty.NettyPoolableFactory : NettyPool create channel to transactionRole:TMROLE,address:192.168.142.1:8091,msg:< RegisterTMRequest{version='2.0.0', applicationId='stock', transactionServiceGroup='default_tx_group', extraData='ak=null
digest=default_tx_group,192.168.142.1,1710149283844
timestamp=1710149283844
authVersion=V4
vgroup=default_tx_group
ip=192.168.142.1
'} >
The TM registers successfully; the application ID is the service name (stock in this log), and the transaction service group is default_tx_group (the default, explained in a later article):
main] i.s.c.rpc.netty.TmNettyRemotingClient : register TM success. client version:2.0.0, server version:2.0.0,channel:[id: 0x9902a1f6, L:/192.168.142.1:53418 - R:/192.168.142.1:8091]
main] i.s.core.rpc.netty.NettyPoolableFactory : register success, cost 40 ms, version:2.0.0,role:TMROLE,channel:[id: 0x9902a1f6, L:/192.168.142.1:53418 - R:/192.168.142.1:8091]
main] i.s.s.a.GlobalTransactionScanner : Transaction Manager Client is initialized. applicationId[stock] txServiceGroup[default_tx_group]
After the data source is initialized, the RM is registered:
main] i.s.s.a.GlobalTransactionScanner : Resource Manager is initialized. applicationId[stock] txServiceGroup[default_tx_group]
main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
main] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection com.mysql.cj.jdbc.ConnectionImpl@15a0f9
main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
main] i.s.c.r.netty.NettyClientChannelManager : will connect to 192.168.142.1:8091
main] i.s.c.rpc.netty.RmNettyRemotingClient : RM will register :jdbc:mysql://127.0.0.1:3306/seata_stock
main] i.s.c.rpc.netty.RmNettyRemotingClient : register RM success. client version:2.0.0, server version:2.0.0,channel:[id: 0xd20fcff4, L:/192.168.142.1:53423 - R:/192.168.142.1:8091]
main] i.s.core.rpc.netty.NettyPoolableFactory : register success, cost 8 ms, version:2.0.0,role:RMROLE,channel:[id: 0xd20fcff4, L:/192.168.142.1:53423 - R:/192.168.142.1:8091]
Finally, the logs show that the AT-mode data source proxy is enabled by default, confirming that AT is the default transaction mode:
main] .s.s.a.d.SeataAutoDataSourceProxyCreator : Auto proxy data source 'dataSource' by 'AT' mode.
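If a particular service should not take part in AT mode, this automatic proxying can be switched off. A hedged fragment follows; the property comes from the Seata Spring Boot starter, assuming otherwise-default behavior:

```yaml
seata:
  # Disable the automatic AT data source proxy for this service
  enable-auto-data-source-proxy: false
```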
4. Testing
After restarting all of the services, call the order endpoint again. This time, although the stock service throws no exception itself, the global transaction fails and the stock branch is rolled back:
Checking the database, all of the data is consistent: the integration succeeded.