Preface:
ClickHouse is widely used as an OLAP analytics engine, and importing and exporting data is the first problem users face. Since ClickHouse itself does not cope well with frequent single-row writes (it favors large batched inserts), real-time synchronization needs help from other services. This article presents a solution built around Canal (optionally combined with Kafka), and shows how, when data is sharded across multiple MySQL instances, databases, and tables, several MySQL tables can be written into a single ClickHouse table. Corrections and feedback are welcome.
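The preference for large batched inserts is the core reason a middle layer sits between the binlog stream and ClickHouse. As a rough illustration (this is not part of the canal stack; all names here are invented), a sync client would buffer single-row change events and flush them in batches, much like the adapter's `commitBatch` setting discussed later:

```python
# Sketch of buffering row-level change events so they reach ClickHouse as
# large batches instead of single-row INSERTs. All names are illustrative.
class BatchBuffer:
    def __init__(self, flush_size=3000):
        self.flush_size = flush_size   # mirrors the idea behind commitBatch
        self.rows = []
        self.flushed_batches = []      # stand-in for "INSERT ... VALUES" calls

    def add(self, row):
        self.rows.append(row)
        if len(self.rows) >= self.flush_size:
            self.flush()

    def flush(self):
        if self.rows:
            self.flushed_batches.append(list(self.rows))
            self.rows.clear()

buf = BatchBuffer(flush_size=2)
for event in [{"id": 1}, {"id": 2}, {"id": 3}]:
    buf.add(event)
buf.flush()  # flush the remainder
print([len(b) for b in buf.flushed_batches])  # → [2, 1]
```

A real sync layer would also flush on a timer so slow tables do not sit in the buffer forever; this sketch only shows the size-based trigger.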
My requirement was to synchronize data from multiple MySQL tables into ClickHouse in real time. I went through a lot of material online, but most of it was poorly written, and some setups simply threw errors once configured. Today I finally got it working, so I'm sharing the result here.
I mainly use canal to sync MySQL data into ClickHouse in real time; this article does not bring in Kafka or any other message queue.
Steps covered:
1. Configure MySQL
2. Install and configure canal
3. Install and configure the canal client (canal-adapter)
4. Create the ClickHouse database and table
5. Results
1. Configure MySQL to enable binlog
# Add the following to the MySQL config file /etc/my.cnf
vi /etc/my.cnf
# insert the following
server-id = 1
log_bin = /var/lib/mysql/bin.log
binlog-format = row # very important if you want to receive write, update and delete row events
# optional
expire_logs_days = 30
max_binlog_size = 768M
# setup listen address
bind-address = 0.0.0.0

Create a replication account
# Log in to MySQL and run the following to create the account maxwell with password 123456
CREATE USER 'maxwell'@'%' IDENTIFIED BY '123456';
GRANT SELECT, REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'maxwell'@'%';
FLUSH PRIVILEGES;

2. Install and configure canal
Download canal from its release page and pick the package you need; version 1.1.4 is used as the example.
# Download the canal deployer package
wget 
# Extract to a dedicated directory
mkdir -p /usr/tool/canal
tar zxvf canal.deployer-1.1.4.tar.gz -C /usr/tool/canal
# After extraction the directory layout is:
drwxr-xr-x 7 awwzc staff 238 12 14 23:34 bin
drwxr-xr-x 9 awwzc staff 306 12 14 23:32 conf
drwxr-xr-x 83 awwzc staff 2822 12 14 23:30 lib
drwxr-xr-x 4 awwzc staff 136 12 14 23:34 logs
# On startup canal treats each folder under conf/ as an instance.
# Go into conf/ and copy the example folder:
cp -R example/ maxwell/
# remove the example folder
rm -rf example
# Edit canal.properties (this change is required):
# instance list; conf/ must contain a directory with the same name
canal.destinations = maxwell
# Edit instance.properties under the maxwell folder and set the following
# entries to your own database:
vi conf/maxwell/instance.properties
# position info
canal.instance.master.address=192.168.0.102:3306
# username/password
canal.instance.dbUsername=maxwell
canal.instance.dbPassword=123456
# Start canal from the install directory; the server and instance log lines
# below indicate a successful start
bin/startup.sh
# Check the server log:
tail -200f logs/canal/canal.log
2019-12-14 23:34:47.247 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## set default uncaught exception handler
2019-12-14 23:34:47.312 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## load canal configurations
2019-12-14 23:34:47.334 [main] INFO com.alibaba.otter.canal.deployer.CanalStarter - ## start the canal server.
2019-12-14 23:34:47.406 [main] INFO com.alibaba.otter.canal.deployer.CanalController - ## start the canal server[192.168.0.111(192.168.0.111):11111]
2019-12-14 23:34:49.026 [main] INFO com.alibaba.otter.canal.deployer.CanalStarter - ## the canal server is running now ......
# Check the instance log:
tail -200f logs/maxwell/maxwell.log
2019-12-15 17:59:12.908 [destination = example , address = /192.168.0.102:3306 , EventParser] WARN c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - ---> begin to find start position, it will be long time for reset or first position
2019-12-15 17:59:12.913 [destination = example , address = /192.168.0.102:3306 , EventParser] WARN c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - prepare to find start position just last position
{"identity":{"slaveId":-1,"sourceAddress":{"address":"192.168.0.102","port":3306}},"postion":{"gtid":"","included":false,"journalName":"bin.000002","position":249315,"serverId":1,"timestamp":1576282583000}}
2019-12-15 17:59:13.015 [destination = example , address = /192.168.0.102:3306 , EventParser] WARN c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - ---> find start position successfully, EntryPosition[included=false,journalName=bin.000002,position=249315,serverId=1,gtid=,timestamp=1576282583000] cost : 105ms , the next step is binlog dump
# Stop canal with:
sh bin/stop.sh

3. Install and configure canal-adapter
Download canal-adapter from its release page and pick the package you need; version 1.1.4 is used as the example.
# Download the canal adapter package
wget 
# Extract
mkdir -p /usr/tool/canal-adapter
tar zxvf canal.adapter-1.1.4.tar.gz -C /usr/tool/canal-adapter
# After extraction the directory layout is:
drwxr-xr-x 7 awwzc staff 238 12 15 13:19 bin
drwxr-xr-x 9 awwzc staff 306 12 15 13:18 conf
drwxr-xr-x 87 awwzc staff 2958 12 15 13:18 lib
drwxr-xr-x 3 awwzc staff 102 12 15 13:19 logs
drwxr-xr-x 6 awwzc staff 204 12 15 13:09 plugin

Add the ClickHouse JDBC driver and its dependencies to the lib directory: httpcore-4.4.13.jar, httpclient-4.3.3.jar, clickhouse-jdbc-0.2.4.jar.

Edit conf/application.yml and adjust the canalServerHost, srcDataSources, and canalAdapters settings:

canal.conf:
  mode: tcp
  canalServerHost: 127.0.0.1:11111 # address of the canal server
  secretKey:
  # source data source configuration
  srcDataSources:
    defaultDS:
      # MySQL connection info
      url: jdbc:mysql://192.168.0.102:3306/maxwell?useUnicode=true
      username: root
      password: 123456
  canalAdapters:
  - instance: maxwell # canal instance name or mq topic name
    groups:
    - groupId: g1
      outerAdapters:
      - name: logger
      - name: rdb # rdb adapter type
        key: mysql1 # must match outerAdapterKey in the rdb mapping file
        properties:
          # ClickHouse database configuration
          jdbc.driverClassName: ru.yandex.clickhouse.ClickHouseDriver
          jdbc.url: jdbc:clickhouse://127.0.0.1:8123/maxwell
          jdbc.username: default
          jdbc.password:

1. Source and target database names differ, and source and target table names differ

# Edit the adapter's conf/rdb/mytest_user.yml mapping file to point at the
# source database and the target database
dataSourceKey: defaultDS
destination: maxwell
groupId: g1
outerAdapterKey: mysql1
concurrent: true
dbMapping:
  database: aqi_china
  table: tb_aqi_0
  targetTable: aqi_china.tb_aqi
  #targetPk:
  #  id: id
  mapAll: true
  #targetColumns:
  #  id:
  #  name:
  #  role_id:
  #  c_time:
  #  test1:
  #etlCondition: "where c_time>={}"
  commitBatch: 3000 # batch commit size

2. Multiple source tables writing into the same target table

Create multiple yml files under conf/rdb, each specifying a different source table name.

Start the adapter:
bin/startup.sh
# Watch the log; the lines below indicate a successful start
tail -200f logs/adapter/adapter.log

I hit an OutOfMemory exception on startup; it can be fixed by increasing the JVM memory settings in bin/startup.sh.
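For intuition on what the rdb adapter is doing once it runs: each binlog row change arrives from canal as a structured event carrying the database, table, operation type, and column values, and the adapter renders it into a statement against the mapped target table. A much-simplified sketch, with an invented event shape and helper function (the real canal message format and adapter logic are considerably richer):

```python
# Simplified sketch of turning a canal-style row event into an INSERT for the
# mapped target table. The event dict and render_insert() are illustrative,
# not canal's actual API.
def render_insert(event, table_mapping):
    """event: dict loosely resembling canal's flat message;
    table_mapping: (source_db, source_table) -> target table."""
    target = table_mapping[(event["database"], event["table"])]
    row = event["data"][0]
    cols = ", ".join(row.keys())
    vals = ", ".join(repr(v) for v in row.values())
    return f"INSERT INTO {target} ({cols}) VALUES ({vals})"

# Several sharded source tables map onto one ClickHouse table, which is
# exactly what the multiple conf/rdb/*.yml mapping files express.
mapping = {
    ("aqi_china", "tb_aqi_0"): "aqi_china.tb_aqi",
    ("aqi_china", "tb_aqi_1"): "aqi_china.tb_aqi",
}
event = {"database": "aqi_china", "table": "tb_aqi_0", "type": "INSERT",
         "data": [{"id": 1, "uid": 42}]}
print(render_insert(event, mapping))
# → INSERT INTO aqi_china.tb_aqi (id, uid) VALUES (1, 42)
```

Because both shard tables route to the same target, rows from tb_aqi_0 and tb_aqi_1 end up merged in aqi_china.tb_aqi.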
After starting, watch the log; a successful start prints:
2019-12-15 13:19:40.457 [main] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## start the canal client adapters.
2019-12-15 13:19:40.464 [main] INFO c.a.otter.canal.client.adapter.support.ExtensionLoader - extension classpath dir: /Users/awwzc/Documents/my_soft/tool/canal-adpater/plugin
2019-12-15 13:19:40.542 [main] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterLoader - Load canal adapter: logger succeed
2019-12-15 13:19:40.546 [main] INFO c.a.otter.canal.client.adapter.rdb.config.ConfigLoader - ## Start loading rdb mapping config ...
2019-12-15 13:19:40.637 [main] INFO c.a.otter.canal.client.adapter.rdb.config.ConfigLoader - ## Rdb mapping config loaded
2019-12-15 13:19:40.640 [main] ERROR com.alibaba.druid.pool.DruidDataSource - testWhileIdle is true, validationQuery not set
2019-12-15 13:19:40.951 [main] INFO com.alibaba.druid.pool.DruidDataSource - {dataSource-2} inited
2019-12-15 13:19:40.959 [main] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterLoader - Load canal adapter: rdb succeed
2019-12-15 13:19:40.986 [main] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterLoader - Start adapter for canal instance: example succeed
2019-12-15 13:19:40.986 [Thread-4] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterWorker - =============> Start to connect destination: example <=============
2019-12-15 13:19:40.986 [main] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## the canal client adapters are running now ......
2019-12-15 13:19:40.995 [main] INFO org.apache.coyote.http11.Http11NioProtocol - Starting ProtocolHandler ["http-nio-8081"]
2019-12-15 13:19:41.021 [main] INFO org.apache.tomcat.util.net.NioSelectorPool - Using a shared selector for servlet write/read
2019-12-15 13:19:41.048 [main] INFO o.s.boot.web.embedded.tomcat.TomcatWebServer - Tomcat started on port(s): 8081 (http) with context path ''
2019-12-15 13:19:41.053 [main] INFO c.a.otter.canal.adapter.launcher.CanalAdapterApplication - Started CanalAdapterApplication in 7.099 seconds (JVM running for 7.832)
2019-12-15 13:19:41.122 [Thread-4] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterWorker - =============> Start to subscribe destination: example <=============
2019-12-15 13:19:41.128 [Thread-4] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterWorker - =============> Subscribe destination: example succeed <=============

4. Create the ClickHouse database and table
# Create the database
CREATE DATABASE aqi_china;
# Create the table
CREATE TABLE IF NOT EXISTS aqi_china.tb_aqi (
    id Int32,
    uuid String,
    uid Int32,
    aqi Int32,
    url String,
    co String,
    dew String,
    h String,
    no2 String,
    o3 String,
    p String,
    pm10 String,
    pm25 String,
    so2 String,
    t String,
    w String,
    wg String,
    vtime String,
    ftime Int32
) ENGINE = MergeTree()
PARTITION BY uid
ORDER BY id
SETTINGS index_granularity = 8192;

5. Results
Let's take a look at the result:
Here is the log printed by the canal client:
And the data in ClickHouse: the rows have been synchronized successfully.
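A side note on the table definition above: with PARTITION BY uid and ORDER BY id, MergeTree physically groups the synchronized rows by uid and keeps them sorted by id within each partition, which is what makes per-uid range queries cheap. A toy illustration of that layout in plain Python (this models the idea only, not ClickHouse internals):

```python
from collections import defaultdict

# Toy model of MergeTree's PARTITION BY uid / ORDER BY id layout: rows are
# grouped into partitions by uid and kept sorted by id inside each partition.
def partition_rows(rows):
    parts = defaultdict(list)
    for row in rows:
        parts[row["uid"]].append(row)
    return {uid: sorted(chunk, key=lambda r: r["id"])
            for uid, chunk in parts.items()}

rows = [{"id": 3, "uid": 1}, {"id": 1, "uid": 2}, {"id": 2, "uid": 1}]
layout = partition_rows(rows)
print({uid: [r["id"] for r in chunk] for uid, chunk in layout.items()})
# → {1: [2, 3], 2: [1]}
```

If your queries mostly filter by something other than uid, pick that column for the partition key instead; a high-cardinality partition key (many distinct uids) can also create too many small parts, so choose it with care.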
I hope this helps anyone who has run into the same problem. This approach definitely works; if you hit any issues, feel free to leave me a comment.