How to Install Snappy on Hadoop + HBase


This article explains how to install snappy on Hadoop + HBase. Many people run into questions when setting this up, so I went through the available material and put together the simple, workable procedure below; hopefully it resolves your doubts about installing snappy on Hadoop + HBase. Follow along!

1. Check whether the snappy native library is installed. The command is:

bin/hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/test.txt snappy

If the output looks like this:

12/12/03 10:30:02 WARN metrics.SchemaConfigured: Could not determine table and column family of the HFile path file:/tmp/test.txt. Expecting at least 5 path components.
12/12/03 10:30:02 WARN snappy.LoadSnappy: Snappy native library not loaded
Exception in thread "main" java.lang.RuntimeException: native snappy library not available
at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:123)
at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:100)
at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:112)
at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.getCompressor(Compression.java:264)
at org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.<init>(HFileBlock.java:739)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.finishInit(HFileWriterV2.java:127)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.<init>(HFileWriterV2.java:118)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV2$WriterFactoryV2.createWriter(HFileWriterV2.java:101)
at org.apache.hadoop.hbase.io.hfile.HFile$WriterFactory.create(HFile.java:394)
at org.apache.hadoop.hbase.util.CompressionTest.doSmokeTest(CompressionTest.java:108)
then the snappy native library is not installed.

2. Download a snappy-*.tar.gz tarball (any version compatible with your HBase will do; mine was snappy-1.1.1.tar.gz) and extract it.

3. Enter the snappy directory and build it with two commands:

./configure
make

4. After make completes, a libsnappy.so file is produced (this is the library we need!). Normally it appears under the current directory at .libs/libsnappy.so, but it often ends up somewhere unexpected; as long as make succeeded, search from the root directory and you are sure to find it (see the sketch after this list).

5. Copy the generated libsnappy.so into HBase's lib/native/Linux-ARCH directory, where ARCH stands for amd64 or i386-32. Note that an amd64 HBase may not ship this directory, in which case you need to create it by hand:

mkdir /opt/hbase-0.98.6.1/lib/native/Linux-amd64-64

6. If you are still unsure where HBase looks for its libraries, you can raise the log level in the log4j file and debug from there (also sketched below).
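Putting steps 2 through 6 together, here is a minimal shell sketch. The HBase install path /opt/hbase-0.98.6.1 and the amd64 architecture are assumptions carried over from step 5, so adjust them for your machine; .libs/ is where an autotools build normally leaves the shared library:

# build snappy from the extracted source tree
cd snappy-1.1.1
./configure
make

# if the library is not in .libs/, search the filesystem for it
find / -name 'libsnappy.so*' 2>/dev/null

# create HBase's native directory for amd64 and copy the library in
mkdir -p /opt/hbase-0.98.6.1/lib/native/Linux-amd64-64
cp .libs/libsnappy.so* /opt/hbase-0.98.6.1/lib/native/Linux-amd64-64/

For the debugging tip in step 6, one hedged option is to set the root logger in HBase's conf/log4j.properties to DEBUG (log4j.rootLogger=DEBUG,console); at that level the native-library loader logs the paths it tries, which reveals where HBase expects libsnappy.so to be.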
7. Re-run the command from step 1. The output you see should now be:

12/12/03 10:34:35 INFO util.ChecksumType: Checksum can use java.util.zip.CRC32
12/12/03 10:34:35 INFO util.ChecksumType: org.apache.hadoop.util.PureJavaCrc32C not available.
12/12/03 10:34:35 DEBUG util.FSUtils: Creating file:file:/tmp/test.txt with permission:rwxrwxrwx
12/12/03 10:34:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
12/12/03 10:34:35 WARN metrics.SchemaConfigured: Could not determine table and column family of the HFile path file:/tmp/test.txt. Expecting at least 5 path components.
12/12/03 10:34:35 WARN snappy.LoadSnappy: Snappy native library is available
12/12/03 10:34:35 WARN snappy.LoadSnappy: Snappy native library not loaded
Exception in thread "main" java.lang.RuntimeException: native snappy library not available
at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:123)
at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:100)
at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:112)
at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.getCompressor(Compression.java:264)
at org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.<init>(HFileBlock.java:739)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.finishInit(HFileWriterV2.java:127)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.<init>(HFileWriterV2.java:118)
at org.apache.hadoop.hbase.io.hfile.HFileWriterV2$WriterFactoryV2.createWriter(HFileWriterV2.java:101)
at org.apache.hadoop.hbase.io.hfile.HFile$WriterFactory.create(HFile.java:394)
at org.apache.hadoop.hbase.util.CompressionTest.doSmokeTest(CompressionTest.java:108)
at org.apache.hadoop.hbase.util.CompressionTest.main(CompressionTest.java:138)

8. As you can see, snappy can now be found, but it is still not loaded. To get it loaded, you also need to copy Hadoop's native library into the same directory as libsnappy.so. Hadoop's native library path is hadoop-1.2.1/lib/native/Linux-ARCH/libhadoop.so; if it is not there, download the tar.gz for your Hadoop version from https://archive.apache.org/dist/hadoop/core/, extract it, and you will find the file you need inside (a copy sketch follows).
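A minimal sketch of that copy, assuming the same amd64 paths used earlier:

# put libhadoop next to libsnappy so both native libraries load together
cp hadoop-1.2.1/lib/native/Linux-amd64-64/libhadoop.so* /opt/hbase-0.98.6.1/lib/native/Linux-amd64-64/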
9. Run the test command (the one from step 1) once more, and this time you get:

12/12/03 10:37:48 INFO util.ChecksumType: org.apache.hadoop.util.PureJavaCrc32 not available.
12/12/03 10:37:48 INFO util.ChecksumType: Checksum can use java.util.zip.CRC32
12/12/03 10:37:48 INFO util.ChecksumType: org.apache.hadoop.util.PureJavaCrc32C not available.
12/12/03 10:37:48 DEBUG util.FSUtils: Creating file:file:/tmp/test.txt with permission:rwxrwxrwx
12/12/03 10:37:48 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/12/03 10:37:48 WARN metrics.SchemaConfigured: Could not determine table and column family of the HFile path file:/tmp/test.txt. Expecting at least 5 path components.
12/12/03 10:37:48 WARN snappy.LoadSnappy: Snappy native library is available
12/12/03 10:37:48 INFO snappy.LoadSnappy: Snappy native library loaded
12/12/03 10:37:48 INFO compress.CodecPool: Got brand-new compressor
12/12/03 10:37:48 DEBUG hfile.HFileWriterV2: Initialized with CacheConfig:disabled
12/12/03 10:37:49 WARN metrics.SchemaConfigured: Could not determine table and column family of the HFile path file:/tmp/test.txt. Expecting at least 5 path components.
12/12/03 10:37:49 INFO compress.CodecPool: Got brand-new decompressor
SUCCESS

Seeing SUCCESS means the installation succeeded and the snappy codec is ready to use. Done! That concludes this walkthrough of installing snappy on Hadoop + HBase; hopefully it has cleared up any confusion. Pairing theory with practice is the best way to learn, so go try it out!
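As a final hedged check (not part of the original procedure; the table and family names here are made up), you can drive the HBase shell from bash to create a snappy-compressed table and write to it:

# create a table whose column family is compressed with snappy, then probe it
echo "create 'snappy_test', {NAME => 'f', COMPRESSION => 'SNAPPY'}" | bin/hbase shell
echo "put 'snappy_test', 'r1', 'f:c1', 'v1'" | bin/hbase shell
echo "scan 'snappy_test'" | bin/hbase shell

If the native library were missing, the regions of this table would typically fail to open, so a clean scan is a good end-to-end signal.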
