Hadoop Installation and Configuration Tutorial


This article walks through installing and configuring Hadoop. The steps are simple and clear and easy to follow; work through them in order and you will end up with a working single-node cluster.
1. For now this is a single-machine setup: the NameNode and DataNode both run on the same host. The Hadoop version is 2.7.2, and the JDK is jdk-8u111-linux-x64.rpm.
2. Install the JDK:
rpm -ivh jdk-8u111-linux-x64.rpm
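To confirm the JDK installed correctly (a standard check, not part of the original steps):
java -version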
3. Generate an SSH key pair:
ssh-keygen -t rsa

The .ssh directory is created automatically under /root.
4. Append the public key to authorized_keys.
5. Tighten the permissions (see the sketch below).
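The original names steps 4 and 5 but omits the commands; a minimal sketch for the root user, assuming the default key path from step 3, is:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
ssh localhost   # should now log in without a password prompt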
6. Disable the firewall (see the sketch after step 7).
7. Extract the Hadoop archive:
tar zxf hadoop-2.7.2.tar.gz
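No firewall command is given for step 6; assuming a CentOS/RHEL host (the JDK comes as an rpm), the usual commands are:
systemctl stop firewalld && systemctl disable firewalld   # CentOS/RHEL 7
service iptables stop && chkconfig iptables off           # CentOS/RHEL 6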
8. Edit /etc/profile:
#java
export JAVA_HOME=/usr/java/default
export PATH=$PATH:$JAVA_HOME/bin
#hadoop
export HADOOP_HOME=/hadoop_soft/hadoop-2.7.2
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"
##export LD_LIBRARY_PATH=/hadoop_soft/hadoop-2.7.2/lib/native/:$LD_LIBRARY_PATH
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
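The new variables only take effect in a fresh login shell; to apply them in the current shell (a standard step the original omits):
source /etc/profile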

9. Edit the configuration files under hadoop-2.7.2/etc/hadoop/.
(1) core-site.xml. fs.defaultFS is the name and address of the NameNode:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.1.120:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop_soft/hadoop-2.7.2/current/tmp</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <value>4320</value>
  </property>
</configuration>
(2) hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/hadoop_soft/hadoop-2.7.2/current/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/hadoop_soft/hadoop-2.7.2/current/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.permissions.superusergroup</name>
    <value>staff</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
</configuration>
(3) yarn-site.xml:
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>192.168.1.120</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>192.168.1.120:18040</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>192.168.1.120:18030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>192.168.1.120:18025</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>192.168.1.120:18141</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>192.168.1.120:18088</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>86400</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-check-interval-seconds</name>
    <value>86400</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/tmp/logs</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
    <value>logs</value>
  </property>
</configuration>
(4) Copy mapred-site.xml.template to mapred-site.xml and edit it:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.http.address</name>
    <value>192.168.1.120:50030</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>192.168.1.120:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>192.168.1.120:19888</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/jobhistory/done</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/jobhistory/done_intermediate</value>
  </property>
  <property>
    <name>mapreduce.job.ubertask.enable</name>
    <value>true</value>
  </property>
</configuration>
(5) Edit the slaves file and add the host's IP:
192.168.1.120
(6) In hadoop-env.sh, set JAVA_HOME: find the line that exports JAVA_HOME and point it at the JDK (see below).
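The original does not show the edited line; assuming the JDK path used in step 8, it becomes:
export JAVA_HOME=/usr/java/default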

10. Format the filesystem:
hdfs namenode -format

11. Start the cluster: hadoop-2.7.2/sbin/start-all.sh
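One caveat not mentioned in the original: start-all.sh starts only HDFS and YARN, not the JobHistory server configured in mapred-site.xml. In Hadoop 2.7 it is started separately:
hadoop-2.7.2/sbin/mr-jobhistory-daemon.sh start historyserver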

12. Verify with jps:
6433 NameNode
6532 DataNode
7014 NodeManager
6762 SecondaryNameNode
6910 ResourceManager
7871 Jps
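As a further check (not in the original), the web UIs should also respond: the NameNode at http://192.168.1.120:50070 (the Hadoop 2.x default port) and the ResourceManager at http://192.168.1.120:18088 (the webapp address configured in yarn-site.xml).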

13. Basic Hadoop commands:
hadoop fs -mkdir /hadoop-test
hadoop fs -find / -name hadoop-test

hadoop fs -put NOTICE.txt /hadoop-test/

hadoop fs -rm -R
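Standard commands to confirm the upload before deleting (not shown in the original):
hadoop fs -ls /hadoop-test
hadoop fs -cat /hadoop-test/NOTICE.txt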
Thanks for reading. That covers the Hadoop installation and configuration tutorial; after working through it you should have a much better feel for the process, though you will still want to verify everything through hands-on practice. This is the Cloud Programming Development blog, where we will keep publishing articles on related topics; feel free to follow!
