Study Notes: Hadoop


Standalone / pseudo-distributed setup (Hadoop 1.2.1)

Packages: hadoop-1.2.1.tar.gz, jdk-6u32-linux-x64.bin

As root:
$ useradd -u 900 hadoop
$ mv jdk1.6.0_32 /home/hadoop
$ mv hadoop-1.2.1.tar.gz /home/hadoop
$ chown hadoop.hadoop /home/hadoop -R
$ su - hadoop

As the hadoop user:
$ ln -s jdk1.6.0_32 java
$ tar zxf hadoop-1.2.1.tar.gz
$ ln -s hadoop-1.2.1 hadoop

Edit the environment variables (point JAVA_HOME at /home/hadoop/java):
$ vim ~/hadoop/conf/hadoop-env.sh

Run the standalone example:
$ cd ~/hadoop
$ mkdir input
$ cp conf/*.xml input
$ bin/hadoop jar hadoop-examples-1.2.1.jar grep input output 'dfs[a-z.]+'

Set up passwordless login (the master must be able to log in to every slave without a password):
$ ssh-keygen
$ ssh-copy-id 172.25.60.1

Configure pseudo-distributed mode:
$ cd ~/hadoop/conf
$ vim slaves       ----> 172.25.60.1
$ vim masters      ----> 172.25.60.1

$ vim core-site.xml, add inside <configuration>:
<property>
    <name>fs.default.name</name>
    <value>hdfs://172.25.60.1:9000</value>
</property>

$ vim hdfs-site.xml, add inside <configuration>:
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>

$ vim mapred-site.xml, add inside <configuration>:
<property>
    <name>mapred.job.tracker</name>
    <value>172.25.60.1:9001</value>
</property>

Format a new distributed file system:
$ bin/hadoop namenode -format

Start the Hadoop daemons:
$ bin/start-all.sh

Check the Hadoop processes on each node:
$ jps

The Hadoop daemons write their logs to the ${HADOOP_HOME}/logs directory.

Browse the web interfaces of the NameNode and the JobTracker; by default they are at:
NameNode   - http://172.25.60.1:50070/
JobTracker - http://172.25.60.1:50030/

Copy the input files into the distributed file system:
$ bin/hadoop fs -put conf input

Run the example program shipped with the distribution:
$ bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'

Examine the output files, either by copying them from the distributed file system to the local file system:
$ bin/hadoop fs -get output output
$ cat output/*
or by viewing them directly on the distributed file system:
$ bin/hadoop fs -cat output/*

When all the operations are done, stop the daemons:
$ bin/stop-all.sh

Fully distributed mode (three nodes: server1, server2, server4)

On all three nodes, install rpcbind and nfs-utils and start the rpcbind and nfs services, then export the hadoop home directory:
$ vim /etc/exports
/home/hadoop *(rw,all_squash,anonuid=900,anongid=900)

On slaves 2 and 4, add the user and mount the shared home:
$ useradd -u 900 hadoop
$ mount 172.25.60.1:/home/hadoop/ /home/hadoop/

From node 1, make one ssh connection to each slave:
$ ssh 172.25.60.2
$ ssh 172.25.60.4

On the master:
$ cd ~/hadoop/conf
$ vim slaves
172.25.60.2
172.25.60.4
$ vim hdfs-site.xml      (change dfs.replication from 1 to 2)

Then delete tmp, reformat, and restart:
$ bin/hadoop namenode -format
$ bin/start-dfs.sh
$ bin/hadoop fs -put conf/ input
$ bin/start-mapred.sh
$ bin/hadoop jar hadoop-examples-1.2.1.jar grep input output 'dfs[a-z.]+'

$ bin/hadoop dfsadmin -report    # check the running state of each node
$ bin/hadoop fs -ls              # list the output files

Adding a node online: on the new node,
$ useradd -u 900 hadoop
$ mount 172.25.60.1:/home/hadoop /home/hadoop
$ su - hadoop
then add the node to the slaves file
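All of the *-site.xml edits above insert the same <property><name>…</name><value>…</value></property> pattern inside <configuration>. A small helper like the following can generate such a snippet for pasting; `hadoop_property` is a hypothetical helper written for these notes, not part of Hadoop:

```shell
#!/bin/sh
# hadoop_property: print a Hadoop <property> block for a name/value pair.
# Usage: hadoop_property NAME VALUE
hadoop_property() {
    printf '<property>\n'
    printf '    <name>%s</name>\n' "$1"
    printf '    <value>%s</value>\n' "$2"
    printf '</property>\n'
}

# Example: the core-site.xml entry used in this setup.
hadoop_property fs.default.name hdfs://172.25.60.1:9000
```

The same call works for the hdfs-site.xml and mapred-site.xml entries by swapping in dfs.replication or mapred.job.tracker.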
----->> 172.25.60.5
$ bin/hadoop-daemon.sh start datanode
$ bin/hadoop-daemon.sh start tasktracker

Removing a node online: migrate its data off first. On the server:
$ vim mapred-site.xml, add:
<property>
    <name>dfs.hosts.exclude</name>
    <value>/home/hadoop/hadoop/conf/hostexclude</value>
</property>
$ vim /home/hadoop/hadoop/conf/hostexclude    -------> 172.25.60.4
$ bin/hadoop dfsadmin -refreshNodes    #### refresh the node list

Recycle-bin (trash) feature:
$ vim core-site.xml, add:
<property>
    <name>fs.trash.interval</name>
    <value>1440</value>
</property>
(1440 = 60*24, i.e. one day, in minutes)

Experiment:
$ bin/hadoop fs -rm input/hadoop-env.sh
$ bin/hadoop fs -ls input        # check that the file is deleted
$ bin/hadoop fs -ls              # a new .Trash directory now appears
$ bin/hadoop fs -ls .Trash/Current/user/hadoop/input
Moving the file back to its original directory restores it:
$ bin/hadoop fs -mv .Trash/Current/user/hadoop/input/hadoop-env.sh input

Optimization: upgrade to Hadoop 2.6

Delete the old links, extract hadoop-2.6.4.tar.gz and jdk-7u79-linux-x64.tar.gz into the hadoop home directory, and change their ownership to hadoop.hadoop. As the hadoop user, link them to hadoop and java, then go into hadoop/etc/hadoop/:

$ vim hadoop-env.sh
export JAVA_HOME=/home/hadoop/java

$ cd ~/hadoop/etc/hadoop
$ vim core-site.xml
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://172.25.60.1:9000</value>
</property>

$ vim hdfs-site.xml
<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>

$ vim yarn-env.sh
# some Java parameters
export JAVA_HOME=/home/hadoop/java

$ cp mapred-site.xml.template mapred-site.xml
$ vim mapred-site.xml
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

$ vim yarn-site.xml
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>

$ vim slaves
172.25.60.4
172.25.60.5

$ bin/hdfs namenode -format
$ sbin/start-dfs.sh
$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/hadoop
$ bin/hdfs dfs -put etc/hadoop input
$ sbin/start-yarn.sh
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar grep input output 'dfs[a-z.]+'

Visit 172.25.60.1:50070 and 172.25.60.1:8088.

########## Replace the files under lib with the 64-bit build (otherwise startup prints WARN messages):
$ mv hadoop-native-64-2.6.0.tar /home/hadoop/hadoop/lib/native
$ tar xf hadoop-native-64-2.6.0.tar

########### Specify the node directories
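The restore step above relies on knowing where the trash keeps a deleted file: under the user's HDFS home, the original relative path reappears beneath .Trash/Current/user/<user>/. A tiny sketch of that path layout, assuming the structure observed in the experiment (the `trash_path` helper is hypothetical, for illustration only):

```shell
#!/bin/sh
# trash_path: print where HDFS trash keeps a deleted file, mirroring the
# layout seen in the experiment above: .Trash/Current/user/<user>/<path>,
# relative to the user's HDFS home directory.
trash_path() {
    user=$1
    path=$2    # path relative to the user's HDFS home, e.g. input/hadoop-env.sh
    printf '.Trash/Current/user/%s/%s\n' "$user" "$path"
}

# Example: the file deleted in the experiment.
trash_path hadoop input/hadoop-env.sh
```

Passing the printed path to `bin/hadoop fs -mv` with the original directory as the destination is exactly the restore command shown above.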
