How to Configure HDFS High Availability on CentOS

To configure HDFS (Hadoop Distributed File System) high availability on CentOS, follow these steps:

1. Prerequisites

  • Hardware: at least three servers, enough to host the two NameNodes (active and standby) together with the JournalNode and ZooKeeper quorums; in a small cluster these roles can share machines.
  • Software: Java and Hadoop installed on every node (a sample host layout follows this list).
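The configuration files below refer to hosts by name. A minimal /etc/hosts sketch matching those names might look like the following; the IP addresses are placeholders for your own network:

192.168.1.11  namenode1
192.168.1.12  namenode2
192.168.1.21  journalnode1
192.168.1.22  journalnode2
192.168.1.23  journalnode3
192.168.1.31  zookeeper1
192.168.1.32  zookeeper2
192.168.1.33  zookeeper3
192.168.1.41  resourcemanager1
192.168.1.42  resourcemanager2

Because the fencing method configured later is sshfence, the user running Hadoop (assumed here to be hadoop, matching the key path in hdfs-site.xml) needs passwordless SSH between the NameNode servers:

ssh-keygen -t rsa -N "" -f /home/hadoop/.ssh/id_rsa
ssh-copy-id hadoop@namenode2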

2. Install Java

sudo yum install java-1.8.0-openjdk-devel 
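Verify the installation:

java -version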

3. Download and Extract Hadoop

wget https://archive.apache.org/dist/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz
tar -xzvf hadoop-3.3.1.tar.gz -C /usr/local/
mv /usr/local/hadoop-3.3.1 /usr/local/hadoop

The rename keeps the /usr/local/hadoop paths used throughout this guide valid.
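Optionally, put the Hadoop commands on your PATH for the current shell (the steps below use full paths, so this is a convenience only; add it to your shell profile if you want it to persist):

export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin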

4. Configure Hadoop

Edit the /usr/local/hadoop/etc/hadoop/hadoop-env.sh file and set the Java path:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk 

5. Configure core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
    </property>
</configuration>

6. Configure hdfs-site.xml

HA replaces the Secondary NameNode with a standby NameNode that performs checkpointing, so the configuration instead defines a logical nameservice (mycluster) with two NameNode IDs. The hostnames namenode1 and namenode2 below are placeholders for your own servers:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop/data/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop/data/datanode</value>
    </property>
    <!-- Logical nameservice and NameNode IDs; namenode1/namenode2 are placeholder hostnames -->
    <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>namenode1:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>namenode2:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>namenode1:9870</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>namenode2:9870</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://journalnode1:8485;journalnode2:8485;journalnode3:8485/mycluster</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/usr/local/hadoop/data/journalnode</value>
    </property>
</configuration>
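The directories referenced above do not exist yet. Before starting any daemon, create the ones that apply to each node's role (paths taken from the configuration; make sure the user running Hadoop can write to them):

mkdir -p /usr/local/hadoop/tmp
mkdir -p /usr/local/hadoop/data/namenode
mkdir -p /usr/local/hadoop/data/datanode
mkdir -p /usr/local/hadoop/data/journalnode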

7. Configure yarn-site.xml

<configuration>
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yarn-cluster</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>resourcemanager1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>resourcemanager2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>zookeeper1:2181,zookeeper2:2181,zookeeper3:2181</value>
    </property>
</configuration>
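ResourceManager HA stores its election and state information in ZooKeeper, so the ensemble at zookeeper1, zookeeper2, and zookeeper3 must be installed and running before step 13. A quick liveness check from any node (this uses ZooKeeper's ruok four-letter command, which newer ZooKeeper releases require you to whitelist, and assumes nc is installed; a healthy server answers "imok"):

echo ruok | nc zookeeper1 2181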

8. Configure mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

9. Start the JournalNodes

On every JournalNode server, run:

/usr/local/hadoop/bin/hdfs --daemon start journalnode
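You can confirm the daemon is up on each server with the JDK's jps tool:

jps | grep JournalNode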

10. Format the NameNode

On one of the NameNode servers (this one will be nn1), run:

/usr/local/hadoop/bin/hdfs namenode -format

This writes the initial metadata to dfs.namenode.name.dir and to the JournalNodes, which must already be running from step 9.

11. Start the First NameNode

On the NameNode server you just formatted, run:

/usr/local/hadoop/bin/hdfs --daemon start namenode

12. Bootstrap and Start the Standby NameNode

On the second NameNode server, pull a copy of the formatted metadata from the first NameNode, then start the daemon:

/usr/local/hadoop/bin/hdfs namenode -bootstrapStandby
/usr/local/hadoop/bin/hdfs --daemon start namenode
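Because this guide does not configure automatic failover (no ZKFC), both NameNodes come up in standby state. Promote the first one manually, using the nn1 ID defined in hdfs-site.xml:

/usr/local/hadoop/bin/hdfs haadmin -transitionToActive nn1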

13. Start the ResourceManagers

On the resourcemanager1 server, run:

/usr/local/hadoop/bin/yarn --daemon start resourcemanager

On the resourcemanager2 server, run:

/usr/local/hadoop/bin/yarn --daemon start resourcemanager
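You can check which ResourceManager won the election, using the rm1 and rm2 IDs from yarn-site.xml:

/usr/local/hadoop/bin/yarn rmadmin -getServiceState rm1
/usr/local/hadoop/bin/yarn rmadmin -getServiceState rm2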

14. Start the DataNodes

On every DataNode server, run:

/usr/local/hadoop/bin/hdfs --daemon start datanode

15. Verify the Configuration

Use the hdfs dfsadmin -report command to check the cluster status.
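To verify the HA setup specifically, check each NameNode's state and, if desired, exercise a manual failover (nn1 and nn2 as defined in hdfs-site.xml):

/usr/local/hadoop/bin/hdfs haadmin -getServiceState nn1
/usr/local/hadoop/bin/hdfs haadmin -getServiceState nn2
/usr/local/hadoop/bin/hdfs haadmin -failover nn1 nn2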

Notes

  • Make sure all nodes can reach one another over the network.
  • Make sure the firewall allows the required ports (an example follows this list).
  • Back up the configuration files and data regularly.
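For example, on CentOS with firewalld you might open the ports used in this guide; the list below covers the NameNode RPC and web ports (8020, 9870), the JournalNodes (8485), and ZooKeeper's client port (2181), and you should add the YARN and DataNode ports your deployment actually uses:

sudo firewall-cmd --permanent --add-port=8020/tcp
sudo firewall-cmd --permanent --add-port=9870/tcp
sudo firewall-cmd --permanent --add-port=8485/tcp
sudo firewall-cmd --permanent --add-port=2181/tcp
sudo firewall-cmd --reload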

By following these steps, you will have a highly available HDFS cluster running on CentOS.
