Deploying a seata-server Cluster on K8S

Seata has three roles: TC, TM, and RM. The TC (server side) is deployed as a standalone service, while the TM and RM (client side) are integrated into the business applications. High availability of seata-server relies on a registry, a configuration center, and a database.

The server-side storage mode (store.mode) currently supports file, db, and redis (raft and mongodb are planned). The file mode requires no changes and starts out of the box; the db setup steps are covered in detail below.

Note: file mode is a standalone mode. Global transaction session information is read and written in memory and persisted to the local file root.data, so performance is high.

db mode is the high-availability mode. Global transaction session information is shared through the database, at some cost in performance.

redis mode is supported by Seata-Server 1.3 and above. Performance is high, but there is a risk of losing transaction information, so configure Redis persistence appropriately for your scenario in advance.
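For illustration only (this article uses db mode), a minimal sketch of a redis store configuration in the server properties, assuming a single-node Redis; the key names follow Seata 1.x conventions and the host/port values are placeholders, so verify them against your version:

# Storage mode: redis instead of file/db
store.mode=redis
# single-node deployment (sentinel is also supported in newer versions)
store.redis.mode=single
store.redis.single.host=127.0.0.1
store.redis.single.port=6379
store.redis.minConn=1
store.redis.maxConn=10
store.redis.database=0
store.redis.password=
store.redis.queryLimit=100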

The following sections walk through building and running a highly available cluster based on the nacos registry and a MySQL database.

Principle and architecture

High availability of seata-server rests mainly on the database and the registry: global transactions are read from the database, so all instances share transaction state, while the registry manages the seata-server instances dynamically. The architecture diagram is as follows:

Deployment environment

Deployment steps

1. Create the seata_server database:

create database seata_server character set utf8mb4 collate utf8mb4_general_ci;

Create the tables:

-- -------------------------------- The script used when storeMode is 'db' --------------------------------
-- the table to store GlobalSession data
CREATE TABLE IF NOT EXISTS `global_table`
(
    `xid`                       VARCHAR(128) NOT NULL,
    `transaction_id`            BIGINT,
    `status`                    TINYINT      NOT NULL,
    `application_id`            VARCHAR(256),
    `transaction_service_group` VARCHAR(32),
    `transaction_name`          VARCHAR(128),
    `timeout`                   INT,
    `begin_time`                BIGINT,
    `application_data`          VARCHAR(2000),
    `gmt_create`                DATETIME,
    `gmt_modified`              DATETIME,
    PRIMARY KEY (`xid`),
    KEY `idx_status_gmt_modified` (`status`, `gmt_modified`),
    KEY `idx_transaction_id` (`transaction_id`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

-- the table to store BranchSession data
CREATE TABLE IF NOT EXISTS `branch_table`
(
    `branch_id`         BIGINT       NOT NULL,
    `xid`               VARCHAR(128) NOT NULL,
    `transaction_id`    BIGINT,
    `resource_group_id` VARCHAR(32),
    `resource_id`       VARCHAR(256),
    `branch_type`       VARCHAR(8),
    `status`            TINYINT,
    `client_id`         VARCHAR(64),
    `application_data`  VARCHAR(2000),
    `gmt_create`        DATETIME(6),
    `gmt_modified`      DATETIME(6),
    PRIMARY KEY (`branch_id`),
    KEY `idx_xid` (`xid`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

-- the table to store lock data
CREATE TABLE IF NOT EXISTS `lock_table`
(
    `row_key`        VARCHAR(128) NOT NULL,
    `xid`            VARCHAR(128),
    `transaction_id` BIGINT,
    `branch_id`      BIGINT       NOT NULL,
    `resource_id`    VARCHAR(256),
    `table_name`     VARCHAR(32),
    `pk`             VARCHAR(36),
    `status`         TINYINT      NOT NULL DEFAULT '0' COMMENT '0:locked ,1:rollbacking',
    `gmt_create`     DATETIME,
    `gmt_modified`   DATETIME,
    PRIMARY KEY (`row_key`),
    KEY `idx_status` (`status`),
    KEY `idx_branch_id` (`branch_id`),
    KEY `idx_xid` (`xid`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

CREATE TABLE IF NOT EXISTS `distributed_lock`
(
    `lock_key`   CHAR(20)    NOT NULL,
    `lock_value` VARCHAR(20) NOT NULL,
    `expire`     BIGINT,
    PRIMARY KEY (`lock_key`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('AsyncCommitting', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('RetryCommitting', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('RetryRollbacking', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('TxTimeoutCheck', ' ', 0);

-- For AT mode, the undo_log table must be created in each business database; the Seata server itself does not need it.
CREATE TABLE IF NOT EXISTS `undo_log`
(
    `branch_id`     BIGINT       NOT NULL COMMENT 'branch transaction id',
    `xid`           VARCHAR(128) NOT NULL COMMENT 'global transaction id',
    `context`       VARCHAR(128) NOT NULL COMMENT 'undo_log context,such as serialization',
    `rollback_info` LONGBLOB     NOT NULL COMMENT 'rollback info',
    `log_status`    INT(11)      NOT NULL COMMENT '0:normal status,1:defense status',
    `log_created`   DATETIME(6)  NOT NULL COMMENT 'create datetime',
    `log_modified`  DATETIME(6)  NOT NULL COMMENT 'modify datetime',
    UNIQUE KEY `ux_undo_log` (`xid`, `branch_id`)
) ENGINE = InnoDB AUTO_INCREMENT = 1 DEFAULT CHARSET = utf8mb4 COMMENT ='AT transaction mode undo table';
ALTER TABLE `undo_log` ADD INDEX `ix_log_created` (`log_created`);
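The script can then be run against the database created above, for example (assuming it is saved locally as seata-server.sql and the MySQL host is reachable):

mysql -h <mysql-host> -P 3306 -uroot -p seata_server < seata-server.sql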

Add the nacos configuration:

The configuration file (seataServer.properties) is as follows:

store.mode=db
#-----db-----
store.db.datasource=druid
store.db.dbType=mysql
# Adjust driverClassName to your MySQL version:
# MySQL 8 and above: com.mysql.cj.jdbc.Driver
# below MySQL 8:     com.mysql.jdbc.Driver
store.db.driverClassName=com.mysql.jdbc.Driver
store.db.url=jdbc:mysql://mysql:3306/seata_server?useUnicode=true&characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useSSL=false
store.db.user=root
store.db.password=qifu@2023
# initial number of pooled connections
store.db.minConn=1
# maximum number of pooled connections
store.db.maxConn=200
# maximum wait time when acquiring a connection, default 5000 ms
store.db.maxWait=5000
# global transaction table name, default global_table
store.db.globalTable=global_table
# branch transaction table name, default branch_table
store.db.branchTable=branch_table
# global lock table name, default lock_table
store.db.lockTable=lock_table
store.db.distributedLockTable=distributed_lock
# maximum number of global transactions fetched per query, default 100
store.db.queryLimit=100

# days to keep undo logs, default 7; covers log_status=1 records (appendix 3) and undo records not cleaned up normally
server.undo.logSaveDays=7
# interval of the undo-log cleanup thread, default 86400000 ms
server.undo.logDeletePeriod=86400000
# timeout for retrying phase-two commits; units ms, s, m, h, d (milliseconds, seconds, minutes, hours, days), default milliseconds; the default -1 means retry indefinitely
# formula: timeout >= now - globalTransactionBeginTime; when true, the transaction has timed out and is no longer retried
# note: once the timeout is reached no further retries are made, which risks data inconsistency; use with caution unless the business can reconcile the data itself
server.maxCommitRetryTimeout=-1
# timeout for retrying phase-two rollbacks
server.maxRollbackRetryTimeout=-1
# retry interval for global transactions stuck in the phase-two committing state, default 1000 ms
server.recovery.committingRetryPeriod=1000
# retry interval for the phase-two async-committing state, default 1000 ms
server.recovery.asynCommittingRetryPeriod=1000
# retry interval for the phase-two rollbacking state, default 1000 ms
server.recovery.rollbackingRetryPeriod=1000
# interval of the timeout-detection thread, default 1000 ms; transactions detected as timed out are moved to the rollback session manager
server.recovery.timeoutRetryPeriod=1000
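These properties must be published to nacos with data-id seataServer.properties in group SEATA_GROUP under the qifu-develop namespace, matching the seata.config.nacos block in the manifest below. Besides the console UI, this can be done with nacos's v1 open API; a minimal sketch, assuming the file is saved locally as seataServer.properties, that nacos is reachable at <nacos-host> (e.g. via a port-forward), and that auth is disabled (append an accessToken parameter if login is required):

# Publish the config; tenant is the namespace ID
curl -X POST "http://<nacos-host>:8848/nacos/v1/cs/configs" \
  --data-urlencode "dataId=seataServer.properties" \
  --data-urlencode "group=SEATA_GROUP" \
  --data-urlencode "tenant=qifu-develop" \
  --data-urlencode "content=$(cat seataServer.properties)"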

The seata-server.yaml file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: seata-ha-server-config
  namespace: qifu-develop
data:
  application.yml: |
    server:
      # port of the web console
      port: 7091
    spring:
      application:
        name: seata-server
    logging:
      config: classpath:logback-spring.xml
      file:
        path: ${user.home}/logs/seata
      extend:
        logstash-appender:
          destination: 127.0.0.1:4560
        kafka-appender:
          bootstrap-servers: 127.0.0.1:9092
          topic: logback_to_logstash
    console:
      # username and password for the web console
      user:
        username: seata
        password: seata
    seata:
      config:
        type: nacos
        nacos:
          # nacos address; an in-cluster k8s domain name must include the port
          server-addr: nacos-headless.qifu.svc.cluster.local:8848
          namespace: qifu-develop
          group: SEATA_GROUP
          username: nacos
          password: nacos
          # name of the configuration file created above
          data-id: seataServer.properties
      registry:
        type: nacos
        nacos:
          application: seata-server
          server-addr: nacos-headless.qifu.svc.cluster.local:8848
          group: SEATA_GROUP
          namespace: qifu-develop
          cluster: default
          username: nacos
          password: nacos
      #store:
      #  mode: db
      #  db:
      #    datasource: druid
      #    dbType: mysql
      #    driverClassName: com.mysql.jdbc.Driver
      #    url: jdbc:mysql://192.168.1.50:3306/seata?useUnicode=true&characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useSSL=false
      #    user: root
      #    password: root
      # port the seata service listens on
      server:
        service-port: 8091
      security:
        secretKey: SeataSecretKey0c382ef121d778043159209298fd40bf3850a017
        tokenValidityInMilliseconds: 1800000
        ignore:
          urls: /,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-fe/public/**,/api/v1/auth/login

---
apiVersion: v1
kind: Service
metadata:
  name: seata-ha-server
  namespace: qifu-develop
  labels:
    app.kubernetes.io/name: seata-ha-server
  #annotations:
  #  service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: lb-2zeabcdefghijklmn
  #  service.beta.kubernetes.io/alicloud-loadbalancer-force-override-listeners: 'true'
spec:
  #type: LoadBalancer
  type: NodePort
  ports:
    - port: 8091
      protocol: TCP
      targetPort: 8091
      name: http
    - port: 7091
      protocol: TCP
      targetPort: 7091
      name: web
  selector:
    app.kubernetes.io/name: seata-ha-server
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: seata-ha-server
  namespace: qifu-develop
  labels:
    app.kubernetes.io/name: seata-ha-server
spec:
  serviceName: seata-ha-server
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: seata-ha-server
  template:
    metadata:
      labels:
        app.kubernetes.io/name: seata-ha-server
    spec:
      containers:
        - name: seata-ha-server
          image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/seataio/seata-server:1.7.1
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8091
              protocol: TCP
            - name: web
              containerPort: 7091
              protocol: TCP
          resources:
            limits:
              cpu: "1"
              memory: 2Gi
            requests:
              cpu: 200m
              memory: 1Gi
          volumeMounts:
            - mountPath: /seata-server/resources/application.yml
              name: seata-config
              subPath: application.yml
      volumes:
        - name: seata-config
          configMap:
            name: seata-ha-server-config

Deploy the service:

kubectl apply -f seata-server.yaml
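After applying, the rollout can be verified with standard kubectl commands (the namespace and labels match the manifests above):

# check that all three replicas are running
kubectl get pods -n qifu-develop -l app.kubernetes.io/name=seata-ha-server
# check the node ports assigned to 8091 (service port) and 7091 (web console)
kubectl get svc seata-ha-server -n qifu-develop
# follow the log of the first replica
kubectl logs -f seata-ha-server-0 -n qifu-develop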

The seata-server instances now appear in the nacos service list:

The web console can be accessed through the NodePort:

Problems encountered

Problem:

After seata is deployed in k8s, a local client fails to connect to the seata service with an error:

This is because the local machine cannot reach the k8s pods directly; exposing the service through a NodePort alone does not help either, because the client discovers the server address from nacos rather than through the Service:

Inspecting the seata service in nacos shows that the registered IP is the pod IP:
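This can be confirmed with nacos's v1 instance-list API; a sketch, assuming nacos is reachable at <nacos-host> (namespaceId is the namespace ID used above):

# the ip fields in the response will be pod IPs
curl "http://<nacos-host>:8848/nacos/v1/ns/instance/list?serviceName=seata-server&groupName=SEATA_GROUP&namespaceId=qifu-develop"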

Solution:

When starting the seata service, register the node host's IP explicitly:

Modify the yaml file and add environment variables to the container spec:

env:
  - name: SEATA_IP
    value: 10.168.2.234
  - name: SEATA_PORT
    value: '30180'

As shown in the figure. Note that SEATA_PORT must match the nodePort exposed by the Service created in the next step.

After the service starts, check the seata service IP in nacos again; it now shows the node IP:

Then create another Service object to expose the port, and that's it; a sketch follows:
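A minimal sketch of such a Service, assuming the nodePort is fixed to 30180 so that it matches the SEATA_PORT registered above (the Service name is illustrative):

apiVersion: v1
kind: Service
metadata:
  name: seata-ha-server-nodeport
  namespace: qifu-develop
spec:
  type: NodePort
  ports:
    - name: http
      port: 8091
      targetPort: 8091
      # must equal the SEATA_PORT env var so clients can reach the registered address
      nodePort: 30180
      protocol: TCP
  selector:
    app.kubernetes.io/name: seata-ha-server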

The seata service can now be reached from a local machine.
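On the client side, nothing special is needed beyond pointing the application at the same registry. A minimal sketch of a Spring Boot client configuration, assuming seata-spring-boot-starter (or spring-cloud-starter-alibaba-seata); the tx-service-group name is illustrative:

seata:
  # transaction group of this application; the mapped value must be the
  # cluster name the servers registered with ("default" above)
  tx-service-group: my_tx_group
  service:
    vgroup-mapping:
      my_tx_group: default
  registry:
    type: nacos
    nacos:
      application: seata-server
      server-addr: <nacos-host>:8848
      group: SEATA_GROUP
      namespace: qifu-develop
      username: nacos
      password: nacos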
