Deploying a production Hadoop cluster on CentOS 6.5 and connecting to it with the C API


######## Installing a fully distributed Hadoop 2.6.0 cluster ########
#### Software and system versions ####
hadoop-2.6.0
Java version 1.8.0_77
CentOS 6.5, 64-bit

#### Preparation ####
Under /home/hadoop/, run mkdir Cloud and place the Java and Hadoop installation packages in /home/hadoop/Cloud.

#### Configure static IPs ####
master   192.168.116.100
slave1   192.168.116.110
slave2   192.168.116.120

#### Set the machine names (all as root) ####
su root
vim /etc/hosts
Append the following entries (fields separated by a space or tab):
192.168.116.100 master
192.168.116.110 slave1
192.168.116.120 slave2

Then set each machine's hostname and reboot it. (On CentOS 6 the hostname normally lives in /etc/sysconfig/network as a HOSTNAME= line; if your system provides /etc/hostname, editing that works too.)
On master:  vim /etc/hostname, set it to master, then shutdown -r now
On slave1:  vim /etc/hostname, set it to slave1, then shutdown -r now
On slave2:  vim /etc/hostname, set it to slave2, then shutdown -r now
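A quick sanity check once the machines are back up, assuming the hosts entries above are in place on every node:
hostname          # should print master, slave1 or slave2
ping -c 1 slave1  # run from master; every node should resolve the others
ping -c 1 slave2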

#### Install OpenSSH and set up passwordless login ####
As root (su root), run yum install openssh on every node.
Then, as the hadoop user on every node:
ssh-keygen -t rsa    (keep pressing Enter to accept the defaults)

Send the public keys of slave1 and slave2 to master:
scp /home/hadoop/.ssh/id_rsa.pub hadoop@master:~/.ssh/slave1.pub    (run on slave1)
scp /home/hadoop/.ssh/id_rsa.pub hadoop@master:~/.ssh/slave2.pub    (run on slave2)

On master:
cd ~/.ssh/
cat id_rsa.pub >> authorized_keys
cat slave1.pub >> authorized_keys
cat slave2.pub >> authorized_keys

Send the combined key file to slave1 and slave2:
scp authorized_keys hadoop@slave1:~/.ssh/
scp authorized_keys hadoop@slave2:~/.ssh/

Test the logins, answering yes to the first-connection prompts:
ssh slave1
ssh slave2
ssh master

Passwordless SSH login is now configured.
######## Set JAVA_HOME and HADOOP_HOME ########
su root
vim /etc/profile
Add:
export JAVA_HOME=/home/hadoop/Cloud/jdk1.8.0_77
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME=/home/hadoop/Cloud/hadoop-2.6.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Then run source /etc/profile. (Configure all three machines.)
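To confirm the variables took effect, assuming the paths above match where the archives were unpacked:
source /etc/profile
java -version      # should report 1.8.0_77
hadoop version     # should report Hadoop 2.6.0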
######## Configure the Hadoop files ########
In /home/hadoop/Cloud/hadoop-2.6.0/sbin:
vim hadoop-daemon.sh    and change the pid directory
vim yarn-daemon.sh      and change the pid directory
(By default the daemons drop their pid files in /tmp, which the OS may clean out; pointing them at a persistent directory keeps the stop scripts working.)
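A minimal sketch of that change, assuming /home/hadoop/Cloud/workspace/pids is a directory you have created; these are the variables the two scripts read:
In hadoop-daemon.sh:  HADOOP_PID_DIR=/home/hadoop/Cloud/workspace/pids
In yarn-daemon.sh:    YARN_PID_DIR=/home/hadoop/Cloud/workspace/pids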

In /home/hadoop/Cloud/hadoop-2.6.0/etc/hadoop:
vim slaves    Enter:
master
slave1
slave2

vim hadoop-env.sh    Add:
export JAVA_HOME=/home/hadoop/Cloud/jdk1.8.0_77
export HADOOP_HOME_WARN_SUPPRESS="TRUE"
vim core-site.xml    Enter:
<configuration>
  <property>
    <name>io.native.lib.available</name>
    <value>true</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
    <final>true</final>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/Cloud/workspace/temp</value>
  </property>
</configuration>
vim hdfs-site.xml    Enter:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/Cloud/workspace/hdfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/Cloud/workspace/hdfs/data</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
vim mapred-site.xml    Enter:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
</configuration>
Send the configured Hadoop to slave1 and slave2:
scp -r hadoop-2.6.0 hadoop@slave1:~/Cloud/
scp -r hadoop-2.6.0 hadoop@slave2:~/Cloud/
Send the Java package to slave1 and slave2:
scp -r jdk1.8.0_77 hadoop@slave1:~/Cloud/
scp -r jdk1.8.0_77 hadoop@slave2:~/Cloud/
At this point the Hadoop cluster configuration is complete.
######## Starting Hadoop ########
First format the NameNode:
hadoop namenode -format
(Because hadoop-env.sh and the system environment were set up earlier, this can be run from any directory.) If the log shows no errors, continue:
start-all.sh
Once everything is up, jps should show:
[hadoop@master ~]$ jps
42306 ResourceManager
42407 NodeManager
42151 SecondaryNameNode
41880 NameNode
41979 DataNode

[hadoop@slave1 ~]$ jps
21033 NodeManager
20926 DataNode

[hadoop@slave2 ~]$ jps
20568 NodeManager
20462 DataNode

The fully distributed hadoop-2.6.0 configuration is now complete.
Hadoop web interfaces:
localhost:50070   (HDFS NameNode)
localhost:8088    (YARN ResourceManager)
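Before moving on, a quick HDFS smoke test from the shell (a minimal check assuming the cluster came up cleanly; /test is just an arbitrary path chosen here):
hadoop fs -mkdir /test
hadoop fs -put /etc/hosts /test/
hadoop fs -ls /test
hadoop fs -cat /test/hosts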

######## Configuring the C API to connect to HDFS ########
find / -name libhdfs.so.0.0.0
vi /etc/ld.so.conf
Add:
/home/hadoop/Cloud/hadoop-2.6.0/lib/native/
/home/hadoop/Cloud/jdk1.8.0_77/jre/lib/amd64/server/
Then rebuild the dynamic-linker cache so the libraries are found at startup:
/sbin/ldconfig -v
Next, configure the CLASSPATH. List every Hadoop jar and print an export line for each:
find /home/hadoop/Cloud/hadoop-2.6.0/share/ -name '*.jar' | awk '{ printf("export CLASSPATH=%s:$CLASSPATH\n", $0); }'
The printed output looks like:
export CLASSPATH=/home/hadoop/Cloud/hadoop-2.6.0/share/hadoop/common/lib/activation-1.1.jar:$CLASSPATH
export CLASSPATH=/home/hadoop/Cloud/hadoop-2.6.0/share/hadoop/common/lib/jsch-0.1.42.jar:$CLASSPATH
...
Append all of the printed lines to the environment with vim /etc/profile, then source /etc/profile.
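A shorter alternative worth knowing, assuming the hadoop command is on the PATH: hadoop classpath prints the needed entries, and its --glob option (present in 2.6.0, if memory serves) expands the wildcards so the result can be exported directly:
export CLASSPATH=$(hadoop classpath --glob)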
Then write a C program to verify that the configuration works:
vim above_sample.c
The code is as follows:
##################################################################################
#include "hdfs.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>

int main(int argc, char **argv) {
    hdfsFS fs = hdfsConnect("192.168.116.100", 9000);  // slightly modified here: connect to the master by IP
    const char *writePath = "/tmp/testfile.txt";
    hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY | O_CREAT, 0, 0, 0);
    if (!writeFile) {
        fprintf(stderr, "Failed to open %s for writing!\n", writePath);
        exit(-1);
    }
    char *buffer = "Hello, World!";
    tSize num_written_bytes = hdfsWrite(fs, writeFile, (void *)buffer, strlen(buffer) + 1);
    if (hdfsFlush(fs, writeFile)) {
        fprintf(stderr, "Failed to 'flush' %s\n", writePath);
        exit(-1);
    }
    hdfsCloseFile(fs, writeFile);
    hdfsDisconnect(fs);
    return 0;
}
###############################################################################
Compile the C code:
gcc above_sample.c -I /home/hadoop/Cloud/hadoop-2.6.0/include/ -L /home/hadoop/Cloud/hadoop-2.6.0/lib/native/ -lhdfs /home/hadoop/Cloud/jdk1.8.0_77/jre/lib/amd64/server/libjvm.so -o above_sample
Run the resulting binary:
./above_sample
Check the logs and the HDFS directory tree to confirm that testfile.txt was created.
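To double-check from the C side as well, here is a companion sketch that reads the file back (a minimal example under the same assumptions as above_sample.c; the 256-byte buffer is an arbitrary choice, and it compiles with the same gcc command, swapping the file names):
##################################################################################
#include "hdfs.h"
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>

int main(int argc, char **argv) {
    hdfsFS fs = hdfsConnect("192.168.116.100", 9000);  // same NameNode as the write example
    const char *readPath = "/tmp/testfile.txt";        // the file above_sample created
    hdfsFile readFile = hdfsOpenFile(fs, readPath, O_RDONLY, 0, 0, 0);
    if (!readFile) {
        fprintf(stderr, "Failed to open %s for reading!\n", readPath);
        exit(-1);
    }
    char buffer[256];
    // hdfsRead returns the number of bytes actually read, or -1 on error
    tSize num_read_bytes = hdfsRead(fs, readFile, (void *)buffer, sizeof(buffer) - 1);
    if (num_read_bytes < 0) {
        fprintf(stderr, "Failed to read %s!\n", readPath);
        exit(-1);
    }
    buffer[num_read_bytes] = '\0';
    printf("Read %d bytes: %s\n", (int)num_read_bytes, buffer);
    hdfsCloseFile(fs, readFile);
    hdfsDisconnect(fs);
    return 0;
}
##################################################################################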
The C API connection to HDFS is now configured and working.

######## Cluster file operations ########
### Auto-distribution script: auto.sh
vim auto.sh
chmod +x auto.sh
./auto.sh jdk1.8.0_77 ~/Cloud/

The auto-distribution script:
####################
#!/bin/bash
# Copy a file or directory to every slave node.
nodes=(slave1 slave2)
num=${#nodes[@]}
file=$1       # what to send
dst_path=$2   # destination path on each node
for ((i = 0; i < num; i++)); do
    scp -r ${file} ${nodes[i]}:${dst_path}
done
####################
