Steps to Install and Deploy Hadoop 2.7 with Ambari 2.6


Apache Ambari is a web-based tool for provisioning, managing, and monitoring Apache Hadoop clusters. It supports most Hadoop components, including HDFS, MapReduce, Hive, Pig, HBase, ZooKeeper, Sqoop, and HCatalog, and manages them centrally; it is one of the five leading Hadoop management tools. Ambari can install secure (Kerberos-based) Hadoop clusters, providing role-based user authentication, authorization, and auditing, and integrates LDAP and Active Directory for user management. Ambari was chosen over CDH for this deployment because the latest CDH release only supports Hadoop 2.6.x, while the latest Ambari release supports Hadoop 2.7.3.

I. Installation and deployment
Following the official site http://ambari.apache.org/ and the Jianshu write-up https://www.jianshu.com/p/73f9670f71cf, the main steps are:
1. Set up passwordless SSH (mutual trust) between nodes
2. Disable the firewall and SELinux
3. Install ambari-server
4. Run ambari-server setup
5. Deploy the Hadoop components through the web UI
A sketch of steps 1-4 follows this list.
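None of these commands appear in the original; this is a minimal sketch, assuming CentOS 6 (matching the el6 kernel in the logs below) and an already-configured Ambari yum repository:
# 1. passwordless SSH: generate a key on the master, then copy it to every node
ssh-keygen -t rsa
ssh-copy-id root@<node>                  # repeat for each cluster node
# 2. disable the firewall and SELinux (CentOS 6 style)
service iptables stop && chkconfig iptables off
setenforce 0                             # also set SELINUX=disabled in /etc/selinux/config
# 3. install ambari-server on the master node
yum install -y ambari-server
# 4. interactive setup (JDK, database), then start the server
ambari-server setup
ambari-server start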
II. Adding a new node
1. Note that the key is the /root/.ssh/id_rsa file of the master1 node, prod-hadoop-master-01
2. Register the node
3. Choose the services to install (they can also be added afterwards)
4. Keep the default configuration
5. Confirm nothing has changed and start the deployment
6. Wait for the installation progress to finish; you can also go back to the dashboard and let it complete in the background
A quick SSH check for the new node follows this list.
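Before registering, it is worth confirming that the master's key works against the node being added (a hedged one-liner; <new-node> is a placeholder):
ssh -i /root/.ssh/id_rsa root@<new-node> hostname    # should print the node's hostname with no password prompt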
III. Supplement: installing components Ambari does not bundle
1. Fix ambari-server and ambari-agent placing their data directories under / by default
Stop the agent, move the directories onto the data disk, and symlink them back:
ambari-agent stop
mv /var/lib/ambari-agent /data/disk1/
ln -s /data/disk1/ambari-agent /var/lib/ambari-agent
mv /usr/hdp /data/disk1/
ln -s /data/disk1/hdp/ /usr/hdp
ambari-agent start
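A quick confirmation that the relocation took effect (not in the original):
ls -ld /var/lib/ambari-agent /usr/hdp    # both should now be symlinks into /data/disk1
df -h /data/disk1                        # agent data and HDP packages land on the large disk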
2. Integrating Ambari with Presto
References:
https://www.jianshu.com/p/0b5f52a959d5
https://github.com/prestodb/ambari-presto-service/releases
https://github.com/prestodb/ambari-presto-service/releases/download/v1.2/ambari-presto-1.2.tar.gz
[root@prod-hadoop-master-01 ~]# tar zxvf ambari-presto-1.2.tar.gz -C /var/lib/ambari-server/resources/stacks/HDP/2.6/services/
ambari-presto-1.2/
ambari-presto-1.2/configuration/
ambari-presto-1.2/configuration/connectors.properties.xml
ambari-presto-1.2/configuration/jvm.config.xml
ambari-presto-1.2/configuration/config.properties.xml
ambari-presto-1.2/configuration/node.properties.xml
ambari-presto-1.2/HISTORY.rst
ambari-presto-1.2/themes/
ambari-presto-1.2/themes/theme.json
ambari-presto-1.2/Makefile
ambari-presto-1.2/setup.py
ambari-presto-1.2/MANIFEST.in
ambari-presto-1.2/PKG-INFO
ambari-presto-1.2/package/
ambari-presto-1.2/package/scripts/
ambari-presto-1.2/package/scripts/presto_cli.py
ambari-presto-1.2/package/scripts/presto_worker.py
ambari-presto-1.2/package/scripts/presto_coordinator.py
ambari-presto-1.2/package/scripts/init.py
ambari-presto-1.2/package/scripts/params.py
ambari-presto-1.2/package/scripts/download.ini
ambari-presto-1.2/package/scripts/common.py
ambari-presto-1.2/package/scripts/presto_client.py
ambari-presto-1.2/setup.cfg
ambari-presto-1.2/ambari_presto.egg-info/
ambari-presto-1.2/ambari_presto.egg-info/dependency_links.txt
ambari-presto-1.2/ambari_presto.egg-info/not-zip-safe
ambari-presto-1.2/ambari_presto.egg-info/PKG-INFO
ambari-presto-1.2/ambari_presto.egg-info/top_level.txt
ambari-presto-1.2/ambari_presto.egg-info/SOURCES.txt
ambari-presto-1.2/LICENSE
ambari-presto-1.2/README.md
ambari-presto-1.2/metainfo.xml
ambari-presto-1.2/requirements.txt
[root@prod-hadoop-master-01 ~]# cd /var/lib/ambari-server/resources/stacks/HDP/2.6/services/
[root@prod-hadoop-master-01 services]# ls
ACCUMULO ATLAS FALCON HBASE HIVE KERBEROS MAHOUT PIG RANGER_KMS SPARK SQOOP stack_advisor.pyc STORM TEZ ZEPPELIN
ambari-presto-1.2 DRUID FLUME HDFS KAFKA KNOX OOZIE RANGER SLIDER SPARK2 stack_advisor.py stack_advisor.pyo SUPERSET YARN ZOOKEEPER
[root@prod-hadoop-master-01 services]# mv ambari-presto-1.2/ PRESTO
[root@prod-hadoop-master-01 services]# chmod -R +x PRESTO/*
[root@prod-hadoop-master-01 services]# ambari-server restart
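After the restart, you can confirm Ambari registered the new service definition (a hedged check, assuming the default admin/admin credentials and port 8080):
curl -s -u admin:admin http://localhost:8080/api/v1/stacks/HDP/versions/2.6/services/PRESTO
# a JSON service definition in the response means the PRESTO stack entry was picked up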
In the Ambari UI, add the Presto service: one coordinator node and two worker nodes.
3. Installing the Kylin component
https://blog.csdn.net/vivismilecs/article/details/72763665
Download and install:
tar -zxvf apache-kylin-2.3.1-hbase1x-bin.tar.gz -C /hadoop/
cd /hadoop/
ln -s /hadoop/apache-kylin-2.3.1-bin /hadoop/kylin    # assumed step, not shown in the original; implied by KYLIN_HOME=/hadoop/kylin below
chown -R hdfs:hadoop kylin/
vim /etc/profile    # add the KYLIN_HOME entries; a sketch follows below
source /etc/profile
echo $KYLIN_HOME
/hadoop/kylin
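The exact lines added to /etc/profile are not shown in the original; a minimal sketch consistent with the echo output above (the PATH entry is an assumption):
# appended to /etc/profile
export KYLIN_HOME=/hadoop/kylin
export PATH=$PATH:$KYLIN_HOME/bin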
Switch to the hdfs user and check that the environment is correctly set up:
su hdfs
hive          (enters the Hive CLI; quit; to exit)
hbase shell   (enters the HBase shell; Ctrl+C to exit)
[hdfs@prod-hadoop-data-01 kylin]$ bin/check-env.sh
Retrieving hadoop conf dir…
KYLIN_HOME is set to /hadoop/kylin
hdfs is not in the sudoers file. This incident will be reported.
Failed to create hdfs:///kylin/spark-history. Please make sure the user has right to access hdfs:///kylin/spark-history
Troubleshooting:
[hdfs@prod-hadoop-data-01 kylin]$ exit
[root@prod-hadoop-data-01 hadoop]# vim /etc/sudoers.d/waagent
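The entry written to the sudoers file is not shown in the original; a typical fix (an assumption) is to grant the hdfs user sudo rights:
hdfs ALL=(ALL) NOPASSWD: ALL    # assumed entry; tighten to specific commands per your security policy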
Verify:
[hdfs@prod-hadoop-data-01 kylin]$ bin/check-env.sh
Retrieving hadoop conf dir…
KYLIN_HOME is set to /hadoop/kylin
Start Kylin:
[hdfs@prod-hadoop-data-01 kylin]$ bin/kylin.sh start
Retrieving hadoop conf dir…
KYLIN_HOME is set to /hadoop/kylin
Retrieving hive dependency…
Retrieving hbase dependency…
Retrieving hadoop conf dir…
Retrieving kafka dependency…
Retrieving Spark dependency…
Start to check whether we need to migrate acl tables
Retrieving hadoop conf dir…
KYLIN_HOME is set to /hadoop/kylin
Retrieving hive dependency…
Retrieving hbase dependency…
Retrieving hadoop conf dir…
Retrieving kafka dependency…
Retrieving Spark dependency…
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/hadoop/apache-kylin-2.3.1-bin/tool/kylin-tool-2.3.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/disk1/hdp/2.6.5.0-292/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/hadoop/apache-kylin-2.3.1-bin/spark/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2018-05-24 14:23:21,974 INFO [main] common.KylinConfig:319 : Loading kylin-defaults.properties from file:/hadoop/apache-kylin-2.3.1-bin/tool/kylin-tool-2.3.1.jar!/kylin-defaults.properties
2018-05-24 14:23:22,016 DEBUG [main] common.KylinConfig:278 : KYLIN_CONF property was not set, will seek KYLIN_HOME env variable
2018-05-24 14:23:22,019 INFO [main] common.KylinConfig:99 : Initialized a new KylinConfig from getInstanceFromEnv : 494317290
2018-05-24 14:23:22,120 INFO [main] persistence.ResourceStore:86 : Using metadata url kylin_metadata@hbase for resource store
2018-05-24 14:23:24,034 DEBUG [main] hbase.HBaseConnection:181 : Using the working dir FS for HBase: hdfs://prod-hadoop-master-01.hadoop:8020
2018-05-24 14:23:24,034 INFO [main] hbase.HBaseConnection:258 : connection is null or closed, creating a new one
2018-05-24 14:23:24,168 INFO [main] zookeeper.RecoverableZooKeeper:120 : Process identifier=hconnection-0x7561db12 connecting to ZooKeeper ensemble=prod-hadoop-master-01.hadoop:2181,prod-hadoop-master-02.hadoop:2181,prod-hadoop-data-01.hadoop:2181
2018-05-24 14:23:24,176 INFO [main] zookeeper.ZooKeeper:100 : Client environment:zookeeper.version=3.4.6-292--1, built on 05/11/2018 07:09 GMT
2018-05-24 14:23:24,176 INFO [main] zookeeper.ZooKeeper:100 : Client environment:host.name=prod-hadoop-data-01.hadoop
2018-05-24 14:23:24,176 INFO [main] zookeeper.ZooKeeper:100 : Client environment:java.version=1.8.0_91
2018-05-24 14:23:24,177 INFO [main] zookeeper.ZooKeeper:100 : Client environment:java.vendor=Oracle Corporation
2018-05-24 14:23:24,177 INFO [main] zookeeper.ZooKeeper:100 : Client environment:java.home=/usr/local/java
2018-05-24 14:23:24,182 INFO [main] zookeeper.ZooKeeper:100 : Client environment:java.class.path=/hadoop/kylin/tool/kylin-tool-2.3.1.jar:1.8.1.jar:/hadoop/kylin/spark/jars/hadoop-mapreduce-client-jobclient-2.7.3.jar:/hadoop/kylin/spark/jars/chill-java-0.8.0.jar:jar:/hadoop/kylin/spark/jars/xercesImpl-2.9.1.jar:/hadoop/kylin/spark/jars/netty-3.8.0.Final.jar:/usr/hdp/current/ext/hbase/*
2018-05-24 14:23:24,191 INFO [main] zookeeper.ZooKeeper:100 : Client environment:java.library.path=:/usr/hdp/2.6.5.0-292/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.5.0-292/hadoop/lib/native/Linux-amd64-64:/data/disk1/hdp/2.6.5.0-292/hadoop/lib/native
2018-05-24 14:23:24,191 INFO [main] zookeeper.ZooKeeper:100 : Client environment:java.io.tmpdir=/tmp
2018-05-24 14:23:24,191 INFO [main] zookeeper.ZooKeeper:100 : Client environment:java.compiler=
2018-05-24 14:23:24,193 INFO [main] zookeeper.ZooKeeper:100 : Client environment:os.name=Linux
2018-05-24 14:23:24,193 INFO [main] zookeeper.ZooKeeper:100 : Client environment:os.arch=amd64
2018-05-24 14:23:24,193 INFO [main] zookeeper.ZooKeeper:100 : Client environment:os.version=2.6.32-696.18.7.el6.x86_64
2018-05-24 14:23:24,193 INFO [main] zookeeper.ZooKeeper:100 : Client environment:user.name=hdfs
2018-05-24 14:23:24,194 INFO [main] zookeeper.ZooKeeper:100 : Client environment:user.home=/home/hdfs
2018-05-24 14:23:24,194 INFO [main] zookeeper.ZooKeeper:100 : Client environment:user.dir=/hadoop/apache-kylin-2.3.1-bin
2018-05-24 14:23:24,195 INFO [main] zookeeper.ZooKeeper:438 : Initiating client connection, connectString=prod-hadoop-master-01.hadoop:2181,prod-hadoop-master-02.hadoop:2181,prod-hadoop-data-01.hadoop:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@66b72664
2018-05-24 14:23:24,237 INFO [main-SendThread(prod-hadoop-data-01.hadoop:2181)] zookeeper.ClientCnxn:1019 : Opening socket connection to server prod-hadoop-data-01.hadoop/172.20.3.6:2181. Will not attempt to authenticate using SASL (unknown error)
2018-05-24 14:23:24,246 INFO [main-SendThread(prod-hadoop-data-01.hadoop:2181)] zookeeper.ClientCnxn:864 : Socket connection established, initiating session, client: /172.20.3.6:50746, server: prod-hadoop-data-01.hadoop/172.20.3.6:2181
2018-05-24 14:23:24,256 INFO [main-SendThread(prod-hadoop-data-01.hadoop:2181)] zookeeper.ClientCnxn:1279 : Session establishment complete on server prod-hadoop-data-01.hadoop/172.20.3.6:2181, sessionid = 0x163882326e1003b, negotiated timeout = 60000
2018-05-24 14:23:24,892 DEBUG [main] hbase.HBaseConnection:181 : Using the working dir FS for HBase: hdfs://prod-hadoop-master-01.hadoop:8020
2018-05-24 14:23:24,944 INFO [main] imps.CuratorFrameworkImpl:224 : Starting
2018-05-24 14:23:24,947 INFO [main] zookeeper.ZooKeeper:438 : Initiating client connection, connectString=prod-hadoop-master-01.hadoop:2181,prod-hadoop-master-02.hadoop:2181,prod-hadoop-data-01.hadoop:2181 sessionTimeout=120000 watcher=org.apache.curator.ConnectionState@67207d8a
2018-05-24 14:23:24,950 INFO [main-SendThread(prod-hadoop-master-02.hadoop:2181)] zookeeper.ClientCnxn:1019 : Opening socket connection to server prod-hadoop-master-02.hadoop/172.20.3.5:2181. Will not attempt to authenticate using SASL (unknown error)
2018-05-24 14:23:24,951 INFO [main-SendThread(prod-hadoop-master-02.hadoop:2181)] zookeeper.ClientCnxn:864 : Socket connection established, initiating session, client: /172.20.3.6:60080, server: prod-hadoop-master-02.hadoop/172.20.3.5:2181
2018-05-24 14:23:24,952 DEBUG [main] util.ZookeeperDistributedLock:143 : 6616@prod-hadoop-data-01 trying to lock /kylin/kylin_metadata/create_htable/kylin_metadata/lock
2018-05-24 14:23:24,957 INFO [main-SendThread(prod-hadoop-master-02.hadoop:2181)] zookeeper.ClientCnxn:1279 : Session establishment complete on server prod-hadoop-master-02.hadoop/172.20.3.5:2181, sessionid = 0x3638801b4480045, negotiated timeout = 60000
2018-05-24 14:23:24,962 INFO [main-EventThread] state.ConnectionStateManager:228 : State change: CONNECTED
2018-05-24 14:23:25,031 INFO [main] util.ZookeeperDistributedLock:155 : 6616@prod-hadoop-data-01 acquired lock at /kylin/kylin_metadata/create_htable/kylin_metadata/lock
2018-05-24 14:23:25,036 DEBUG [main] hbase.HBaseConnection:337 : Creating HTable 'kylin_metadata'
2018-05-24 14:23:27,822 INFO [main] client.HBaseAdmin:789 : Created kylin_metadata
2018-05-24 14:23:27,823 DEBUG [main] hbase.HBaseConnection:350 : HTable 'kylin_metadata' created
2018-05-24 14:23:27,824 DEBUG [main] util.ZookeeperDistributedLock:223 : 6616@prod-hadoop-data-01 trying to unlock /kylin/kylin_metadata/create_htable/kylin_metadata/lock
2018-05-24 14:23:27,833 INFO [main] util.ZookeeperDistributedLock:234 : 6616@prod-hadoop-data-01 released lock at /kylin/kylin_metadata/create_htable/kylin_metadata/lock
2018-05-24 14:23:28,105 DEBUG [main] hbase.HBaseConnection:181 : Using the working dir FS for HBase: hdfs://prod-hadoop-master-01.hadoop:8020
2018-05-24 14:23:28,105 INFO [main] hbase.HBaseConnection:258 : connection is null or closed, creating a new one
2018-05-24 14:23:28,106 INFO [main] zookeeper.RecoverableZooKeeper:120 : Process identifier=hconnection-0xf339eae connecting to ZooKeeper ensemble=prod-hadoop-master-01.hadoop:2181,prod-hadoop-master-02.hadoop:2181,prod-hadoop-data-01.hadoop:2181
2018-05-24 14:23:28,106 INFO [main] zookeeper.ZooKeeper:438 : Initiating client connection, connectString=prod-hadoop-master-01.hadoop:2181,prod-hadoop-master-02.hadoop:2181,prod-hadoop-data-01.hadoop:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@2822c6ff
2018-05-24 14:23:28,109 INFO [main-SendThread(prod-hadoop-data-01.hadoop:2181)] zookeeper.ClientCnxn:1019 : Opening socket connection to server prod-hadoop-data-01.hadoop/172.20.3.6:2181. Will not attempt to authenticate using SASL (unknown error)
2018-05-24 14:23:28,109 INFO [main-SendThread(prod-hadoop-data-01.hadoop:2181)] zookeeper.ClientCnxn:864 : Socket connection established, initiating session, client: /172.20.3.6:50760, server: prod-hadoop-data-01.hadoop/172.20.3.6:2181
2018-05-24 14:23:28,115 INFO [main-SendThread(prod-hadoop-data-01.hadoop:2181)] zookeeper.ClientCnxn:1279 : Session establishment complete on server prod-hadoop-data-01.hadoop/172.20.3.6:2181, sessionid = 0x163882326e1003c, negotiated timeout = 60000
2018-05-24 14:23:28,138 INFO [close-hbase-conn] hbase.HBaseConnection:137 : Closing HBase connections…
2018-05-24 14:23:28,144 INFO [close-hbase-conn] client.ConnectionManager$HConnectionImplementation:1703 : Closing zookeeper sessionid=0x163882326e1003c
2018-05-24 14:23:28,152 INFO [close-hbase-conn] zookeeper.ZooKeeper:684 : Session: 0x163882326e1003c closed
2018-05-24 14:23:28,152 INFO [main-EventThread] zookeeper.ClientCnxn:524 : EventThread shut down
2018-05-24 14:23:28,154 INFO [Thread-8] zookeeper.ZooKeeper:684 : Session: 0x3638801b4480045 closed
2018-05-24 14:23:28,154 INFO [main-EventThread] zookeeper.ClientCnxn:524 : EventThread shut down
2018-05-24 14:23:28,162 INFO [close-hbase-conn] client.ConnectionManager$HConnectionImplementation:2167 : Closing master protocol: MasterService
2018-05-24 14:23:28,163 INFO [close-hbase-conn] client.ConnectionManager$HConnectionImplementation:1703 : Closing zookeeper sessionid=0x163882326e1003b
2018-05-24 14:23:28,168 INFO [main-EventThread] zookeeper.ClientCnxn:524 : EventThread shut down
2018-05-24 14:23:28,169 INFO [close-hbase-conn] zookeeper.ZooKeeper:684 : Session: 0x163882326e1003b closed
A new Kylin instance is started by hdfs. To stop it, run 'kylin.sh stop'
Check the log at /hadoop/kylin/logs/kylin.log
Web UI is at http://:7070/kylin
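The hostname is blank in the log above (left as-is). To confirm the UI is serving (hypothetical host placeholder; Kylin's default login is ADMIN / KYLIN):
curl -s -o /dev/null -w '%{http_code}\n' http://<kylin-host>:7070/kylin/    # 200 means the web UI is up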
