Cleaning up and restoring table data in a Cassandra cluster

Goal: the project team needs to purge the data of one table in the production Cassandra cluster. This note uses a test cluster to verify whether TRUNCATE is workable, and whether the data can be recovered afterwards.

1. Environment preparation

Build a three-node cluster on Alibaba Cloud hosts, replication factor 2:

172.26.99.152
172.26.99.153
172.26.99.154

Install the Java JDK. If an older version is left over, remove it first.

(1) Check whether the system already ships with a JDK:

yum list installed | grep java

If so, uninstall the bundled Java environment:

yum -y remove java-1.7.0-openjdk*
yum -y remove tzdata-java.noarch

(2) List the Java packages available in the yum repositories:

yum -y list java*

(3) Install Java with yum (install 1.8.0 here; with 1.7, Cassandra will fail to start later):

yum install java-1.8.0

(4) Check the version that was just installed:

java -version

Unpack Cassandra and create a service user:

mkdir /CAS
cd /CAS
tar xzvf apache-cassandra-3.11.1-bin.tar.gz
mv apache-cassandra-3.11.1 cassandra
useradd cassandra
passwd cassandra
chown -R cassandra.cassandra /CAS
chmod 755 -R /CAS/cassandra

Edit the main configuration on each node:

su - cassandra
cd /CAS/cassandra/conf
vi cassandra.yaml

- seeds: "172.26.99.152"       -- change from 127.0.0.1 to the IP of one or more nodes; listing every node is not recommended, because repairing a damaged seed node is comparatively involved
listen_address: 172.26.99.152  -- change to the current node's own IP
rpc_address: 172.26.99.152     -- change to the current node's own IP

vi cassandra-env.sh

One parameter in cassandra-env.sh needs changing:

JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=172.26.99.152"  -- commented out by default; uncomment it and set the hostname to the current node's IP

Configure $JAVA_HOME and $CASSANDRA_HOME. A yum-installed JDK normally lives under /usr/lib/jvm/ (for example /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-0.b15.el6_8.x86_64 on one of my machines).

(1) Append the variables to the environment file (the quoted 'EOF' keeps $JAVA_HOME literal in /etc/profile, so it is expanded at login rather than while writing the file):

cat >> /etc/profile <<'EOF'
#java path
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.252.b09-2.el7_8.x86_64/jre
export JRE_HOME=$JAVA_HOME
export CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
#cassandra path
CASSANDRA_HOME=/CAS/cassandra
export CASSANDRA_HOME
EOF

(2) Make the settings take effect:

source /etc/profile

(The steps above broadly follow https://blog.csdn.net/dengjiexian123/article/details/53033119, except that the JAVA_HOME value differs.)

Start Cassandra:

su - cassandra
cd /CAS/cassandra/bin
./cassandra

If the Java version is too old or Java was not installed correctly, startup fails here with:

Cassandra 3.0 and later require Java 8u40 or later.

If it instead reports:

Unable to find java executable. Check JAVA_HOME and PATH environment variables.

check JAVA_HOME first; a quick test is to run $JAVA_HOME/bin/java -version.

Connect with cqlsh and look around:

[cassandra@node2 bin]$ ./cqlsh --request-timeout=9000 $HOSTNAME
Connected to Test Cluster at node2:9042.
[cqlsh 5.0.1 | Cassandra 3.11.1 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh> desc keyspaces;

system_traces  system_schema  system_auth  system  system_distributed

cqlsh> SELECT * FROM system_schema.keyspaces;

 keyspace_name      | durable_writes | replication
--------------------+----------------+-------------------------------------------------------------------------------------
        system_auth |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '1'}
      system_schema |           True | {'class': 'org.apache.cassandra.locator.LocalStrategy'}
 system_distributed |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
             system |           True | {'class': 'org.apache.cassandra.locator.LocalStrategy'}
      system_traces |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '2'}

(5 rows)

Create a keyspace with two replicas for the test:

cqlsh> create keyspace dbrsk WITH replication = {'class':'NetworkTopologyStrategy','datacenter1':2};
cqlsh> SELECT * FROM system_schema.keyspaces;

 keyspace_name      | durable_writes | replication
--------------------+----------------+---------------------------------------------------------------------------------------
        system_auth |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '1'}
      system_schema |           True | {'class': 
‘org.apache.cassandra.locator.LocalStrategy’}system_distributed | True | {‘class’: ‘org.apache.cassandra.locator.SimpleStrategy’, ‘replication_factor’: ‘3’} system | True | {‘class’: ‘org.apache.cassandra.locator.LocalStrategy’} dbrsk | True | {‘class’: ‘org.apache.cassandra.locator.NetworkTopologyStrategy’, ‘datacenter1’: ‘2’} system_traces | True | {‘class’: ‘org.apache.cassandra.locator.SimpleStrategy’, ‘replication_factor’: ‘2’}(6 rows)从源库查看待测试的表结构,并导出数据:cqlsh:dbrsk> desc t_card_info;CREATE TABLE dbrsk.t_card_info ( bankcard text PRIMARY KEY, bankname text, cardname text, cardtype text, city text, province text, updatetime bigint) WITH bloom_filter_fp_chance = 0.00075 AND caching = {‘keys’: ‘ALL’, ‘rows_per_partition’: ‘NONE’} AND comment = ‘卡信息’ AND compaction = {‘class’: ‘org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy’, ‘max_threshold’: ’32’, ‘min_threshold’: ‘4’} AND compression = {‘chunk_length_in_kb’: ’64’, ‘class’: ‘org.apache.cassandra.io.compress.LZ4Compressor’} AND crc_check_chance = 0.0 AND dclocal_read_repair_chance = 0.0 AND default_time_to_live = 0 AND gc_grace_seconds = 86400 AND max_index_interval = 2048 AND memtable_flush_period_in_ms = 0 AND min_index_interval = 128 AND read_repair_chance = 0.0 AND speculative_retry = ’99PERCENTILE’;cqlsh:dbrsk> copy t_card_info to ‘/tmp/t_card_info.csv’;Using 16 child processesStarting copy of dbrsk.t_card_info with columns [bankcard, bankname, cardname, cardtype, city, province, updatetime].Processed: 2726962 rows; Rate: 5524 rows/s; Avg. rate: 57918 rows/s2726962 rows exported to 1 files in 47.165 seconds.当前库建表并导入数据:cqlsh> use dbrsk;cqlsh:dbrsk> copy t_card_info from ‘/tmp/t_card_info.csv’;Using 1 child processesStarting copy of dbrsk.t_card_info with columns [bankcard, bankname, cardname, cardtype, city, province, updatetime].Processed: 690000 rows; Rate: 10883 rows/s; Avg. rate: 11617 rows/sProcessed: 1410000 rows; Rate: 13012 rows/s; Avg. 
rate: 11813 rows/sProcessed: 2115000 rows; Rate: 10324 rows/s; Avg. rate: 11783 rows/sProcessed: 2726962 rows; Rate: 5305 rows/s; Avg. rate: 11893 rows/s2726962 rows imported from 1 files in 3 minutes and 49.299 seconds (0 skipped).导入数据前:[root@node2 data]# du -sh *408K commitlog1.4M data4.0K hints4.0K saved_caches导入数据后:[root@node2 data]# du -sh *155M commitlog98M data4.0K hints4.0K saved_caches执行truncate操作并查看效果:cqlsh:dbrsk> truncate table t_card_info;[root@node2 dbrsk]# cd t_card_info-9e129520c31c11eab89c515b68839f7c/[root@node2 t_card_info-9e129520c31c11eab89c515b68839f7c]# lsbackups snapshots[root@node2 t_card_info-9e129520c31c11eab89c515b68839f7c]# du -sh *4.0K backups103M snapshots[root@node2 t_card_info-9e129520c31c11eab89c515b68839f7c]# cd snapshots/[root@node2 snapshots]# lstruncated-1594434747140-t_card_info[root@node2 snapshots]# cd truncated-1594434747140-t_card_info/[root@node2 truncated-1594434747140-t_card_info]# lsmanifest.json mc-10-big-Statistics.db mc-11-big-Filter.db mc-12-big-Data.db mc-12-big-TOC.txt mc-9-big-Statistics.dbmc-10-big-CompressionInfo.db mc-10-big-Summary.db mc-11-big-Index.db mc-12-big-Digest.crc32 mc-9-big-CompressionInfo.db mc-9-big-Summary.dbmc-10-big-Data.db mc-10-big-TOC.txt mc-11-big-Statistics.db mc-12-big-Filter.db mc-9-big-Data.db mc-9-big-TOC.txtmc-10-big-Digest.crc32 mc-11-big-CompressionInfo.db mc-11-big-Summary.db mc-12-big-Index.db mc-9-big-Digest.crc32 schema.cqlmc-10-big-Filter.db mc-11-big-Data.db mc-11-big-TOC.txt mc-12-big-Statistics.db mc-9-big-Filter.dbmc-10-big-Index.db mc-11-big-Digest.crc32 mc-12-big-CompressionInfo.db mc-12-big-Summary.db mc-9-big-Index.db在其他节点上,空间占用一致:[cassandra@node3 t_card_info-9e129520c31c11eab89c515b68839f7c]$ du -sh *4.0K backups101M snapshots数据被移动到snapshots文件夹中执行repair命令,snapshots中的数据不会被清理./nodetool repair dbrsk尝试从操作系统中删除snapshots文件夹。删除后数据库可以正常使用。重新导入数据,并进行表删除操作:cqlsh:dbrsk> drop table t_card_info ;[root@node2 t_card_info-9e129520c31c11eab89c515b68839f7c]# du -sh *4.0K backups103M 
snapshots
[root@node2 t_card_info-9e129520c31c11eab89c515b68839f7c]# ls
backups  snapshots
[root@node2 t_card_info-9e129520c31c11eab89c515b68839f7c]# cd snapshots/
[root@node2 snapshots]# ls
dropped-1594435864327-t_card_info

So after DROP and TRUNCATE the table's data ends up under snapshots/dropped-<timestamp>-<table> and snapshots/truncated-<timestamp>-<table> respectively. How, then, can it be restored?

4. Recovery with sstableloader

[cassandra@node2 bin]$ ./sstableloader -d 172.26.99.152 /tmp/dbrsk/t_card_info
WARN  11:04:46,472 Only 31.813GiB free across all data volumes. Consider adding more capacity to your cluster or removing obsolete snapshots
Established connection to initial hosts
Opening sstables and calculating sections to stream
Skipping file mc-21-big-Data.db: table dbrsk.t_card_info doesn't exist
Skipping file mc-22-big-Data.db: table dbrsk.t_card_info doesn't exist
Skipping file mc-23-big-Data.db: table dbrsk.t_card_info doesn't exist
Skipping file mc-24-big-Data.db: table dbrsk.t_card_info doesn't exist
Summary statistics:
   Connections per host    : 1
   Total files transferred : 0
   Total bytes transferred : 0.000KiB
   Total duration          : 2934 ms
   Average transfer rate   : 0.000KiB/s
   Peak transfer rate      : 0.000KiB/s

If the target table no longer exists, sstableloader skips every file and loads nothing, so the table has to be recreated by hand first (the schema.cql saved in the snapshot directory contains the statement):

[cassandra@node2 bin]$ ./cqlsh --request-timeout=90000 $HOSTNAME
Connected to Test Cluster at node2:9042.
[cqlsh 5.0.1 | Cassandra 3.11.1 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh> use dbrsk;
cqlsh:dbrsk> CREATE TABLE dbrsk.t_card_info (
         ...     bankcard text PRIMARY KEY,
         ...     bankname text,
         ...     cardname text,
         ...     cardtype text,
         ...     city text,
         ...     province text,
         ...     updatetime bigint
         ... ) WITH bloom_filter_fp_chance = 0.00075
         ...     AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
         ...     AND comment = '银行卡信息数据'
         ...     AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
         ...     AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
         ...     AND crc_check_chance = 0.0
         ...     AND dclocal_read_repair_chance = 0.0
         ...     AND default_time_to_live = 0
         ...     AND gc_grace_seconds = 86400
         ...     AND max_index_interval = 2048
         ...     AND memtable_flush_period_in_ms = 0
         ...     AND min_index_interval = 128
         ...     AND read_repair_chance = 0.0
         ...     AND speculative_retry = '99PERCENTILE';
cqlsh:dbrsk> exit

[cassandra@node2 bin]$ ./sstableloader -d 172.26.99.152 /tmp/dbrsk/t_card_info
WARN  11:05:57,753 Only 31.813GiB free across all data volumes. Consider adding more capacity to your cluster or removing obsolete snapshots
Established connection to initial hosts
Opening sstables and calculating sections to stream
Streaming relevant part of /tmp/dbrsk/t_card_info/mc-21-big-Data.db /tmp/dbrsk/t_card_info/mc-22-big-Data.db /tmp/dbrsk/t_card_info/mc-23-big-Data.db /tmp/dbrsk/t_card_info/mc-24-big-Data.db to [/172.26.99.154, /172.26.99.152, /172.26.99.153]
progress: [/172.26.99.154]0:0/4 0  % [/172.26.99.152]0:1/4 6  % total: 4% 1.172MiB/s (avg: 1.172MiB/s)
...
progress: [/172.26.99.154]0:0/4 58 % [/172.26.99.152]0:2/4 16 % [/172.26.99.153]0:1/4 13 % total: 26% 23.592MiB/s (avg: 9.406MiB/s)
...
progress: [/172.26.99.154]0:4/4 100% [/172.26.99.152]0:4/4 100% [/172.26.99.153]0:4/4 100% total: 100% 0.000KiB/s (avg: 10.816MiB/s)
Summary statistics:
   Connections per host    : 1
   Total files transferred : 8
   Total bytes transferred : 133.156MiB
   Total duration          : 12314 ms
   Average transfer rate   : 10.813MiB/s
   Peak transfer rate      : 17.530MiB/s

Conclusion: TRUNCATE is a workable way to clean out the table, and if something goes wrong the data can be restored with sstableloader, although the restore takes much longer than the truncate itself. Note that neither TRUNCATE TABLE nor DROP TABLE frees disk space in Cassandra: both move the table's SSTables into its snapshots folder, which has to be cleaned up separately.
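A footnote to the setup section: the three per-node edits to cassandra.yaml (seeds, listen_address, rpc_address) can be scripted with sed instead of made in vi. A minimal sketch; it operates on a miniature stand-in file generated on the spot (the real file is /CAS/cassandra/conf/cassandra.yaml and has many more keys), and node_ip/seed_ip are illustrative values:

```shell
# Illustrative per-node values from the test cluster.
node_ip=172.26.99.153
seed_ip=172.26.99.152

# Stand-in for cassandra.yaml; only the three lines we care about.
f=$(mktemp)
cat > "$f" <<'YAML'
    - seeds: "127.0.0.1"
listen_address: localhost
rpc_address: localhost
YAML

# Rewrite the seed list and the two address settings in place.
sed -i \
  -e "s/- seeds: \".*\"/- seeds: \"$seed_ip\"/" \
  -e "s/^listen_address:.*/listen_address: $node_ip/" \
  -e "s/^rpc_address:.*/rpc_address: $node_ip/" \
  "$f"

cat "$f"
# The file now reads:
#     - seeds: "172.26.99.152"
# listen_address: 172.26.99.153
# rpc_address: 172.26.99.153
```

On a real node the same sed invocation would be run against the actual config file, once per host with that host's own IP.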
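The number embedded in snapshot tags such as truncated-1594434747140-t_card_info is a Unix timestamp in milliseconds, so the moment of the truncate or drop can be read straight off the directory name. A small sketch (assumes GNU date, as on the CentOS hosts used above):

```shell
# Decode the epoch-milliseconds portion of a truncate snapshot tag.
# Observed tag format: truncated-<epoch_ms>-<table_name>
tag="truncated-1594434747140-t_card_info"
ms=${tag#truncated-}   # strip the prefix  -> 1594434747140-t_card_info
ms=${ms%%-*}           # keep the digits    -> 1594434747140
date -u -d "@$((ms / 1000))" +"%Y-%m-%d %H:%M:%S UTC"
# -> 2020-07-11 02:32:27 UTC
```

The same works for dropped-<epoch_ms>-<table_name> tags, which is handy when deciding whether a leftover snapshot is recent enough to keep.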
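Because truncated and dropped data linger under snapshots, it is worth scanning the data directories for leftover snapshot folders before disk space runs short. A sketch against a throwaway stand-in tree; on a real node, datadir would instead point at the cluster's data directory (here /CAS/cassandra/data/data):

```shell
# Build a stand-in directory tree mimicking a table data directory.
datadir=$(mktemp -d)
snap="$datadir/dbrsk/t_card_info-9e12/snapshots/truncated-1594434747140-t_card_info"
mkdir -p "$snap"
echo "fake sstable" > "$snap/mc-1-big-Data.db"

# Report every truncate/drop snapshot directory and its size.
find "$datadir" -type d \( -name 'truncated-*' -o -name 'dropped-*' \) -exec du -sh {} \;
```

On a live cluster, any directory this turns up can then be removed with nodetool clearsnapshot once it is no longer needed.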