
AWS EMR Series - A Quick Look at the Multi-master Feature

May 17, 2019   |   AWS

Author: 김명수


What this post covers


· Overview
· Creating a multi-master cluster
· Verifying multi-master operation
· Wrap-up


 

In this post, we take a look at EMR's multi-master feature.

According to the AWS EMR release notes, release 5.23.0 added a new feature that supports three master nodes for master node high availability (HA).

The details are at the link below; let's run a quick test.


https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-release-5x.html#emr-5230-relnotes


 




Overview









    • AWS EMR

      • Managed Hadoop cluster platform

      • Runs widely used distributed frameworks such as Apache Spark, HBase, Presto, and Hive

      • Interacts with data in other AWS data stores such as Amazon S3 and Amazon DynamoDB

      • Instance counts can be scaled up or down manually or with Auto Scaling, and Spot Instances can be used to cut costs (see the CLI sketch after this list)

      • Product details: Link
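As a small, hedged illustration of the resizing point above, the AWS CLI can change the size of an existing instance group. This is only a sketch; the cluster ID and instance group ID below are placeholders, not values from this post.

# Resize an existing instance group (IDs are placeholders)
aws emr modify-instance-groups \
    --cluster-id j-XXXXXXXXXXXXX \
    --instance-groups InstanceGroupId=ig-XXXXXXXXXXXXX,InstanceCount=5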






 




Creating a Multi-master cluster






We will create the cluster from the AWS Console.

First, go to Advanced Options in the EMR console UI and select release 5.23.0.

Then check Enable under the Multi-master support section below.



 

Once Enable multi-master support is checked, Hue and Pig cannot be selected at all, and some applications can still be selected but are flagged with an "unsupported in Multi-master cluster" message.

Also, Zeppelin and JupyterHub can be checked, but they are not supported, so cluster creation fails if you proceed with them selected.



 

Next, when setting up instance groups in the Hardware Configuration step, Instance fleets are not supported.



 

You must choose Uniform instance groups; as shown below, the Instance count is fixed at three instances and "Multi-master enabled" is displayed.



 

Note that master nodes cannot be launched as Spot Instances.
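For reference, the same kind of cluster can be requested with the AWS CLI by asking for three master instances. This is a minimal sketch, not the exact cluster used in this post; the cluster name, key pair, subnet, and instance types are placeholders.

# Request a multi-master EMR 5.23.0 cluster (name, key pair, subnet, and instance types are placeholders)
aws emr create-cluster \
    --name "multi-master-test" \
    --release-label emr-5.23.0 \
    --applications Name=Hadoop Name=Ganglia \
    --ec2-attributes KeyName=my-key,SubnetId=subnet-0123456789abcdef0 \
    --instance-groups \
        InstanceGroupType=MASTER,InstanceCount=3,InstanceType=m5.xlarge \
        InstanceGroupType=CORE,InstanceCount=2,InstanceType=m5.xlarge \
    --use-default-roles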



 

After requesting cluster creation, the Master DNS field at the bottom shows three entries.



Selecting the master node group on the Hardware tab shows the three instances.



 

 




Verifying Multi-master operation






Now that three master nodes have been created, let's run a few quick checks.

:: Checking the HDFS Name Node


First, the Journal Node daemon is running on all three master nodes.
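The process listings below were presumably captured with something along these lines on each master node (the exact command is an assumption, not shown in the original):

# Look for the JournalNode process on a master node
ps -ef | grep -i journalnode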

hdfs 7454 1 0 05:21 ? 00:00:09 /usr/lib/jvm/java-openjdk/bin/java -Dproc_journalnode -Xmx1000m -server -XX:OnOutOfMemoryError=kill -9 %p -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -XX:OnOutOfMemoryError=kill -9 %p -server -XX:OnOutOfMemoryError=kill -9 %p -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop-hdfs-journalnode-ip-172-31-29-8.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,DRFA -Djava.library.path=:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,DRFAS org.apache.hadoop.hdfs.qjournal.server.JournalNode

 

hdfs 9517 1 0 05:00 ? 00:00:11 /usr/lib/jvm/java-openjdk/bin/java -Dproc_journalnode -Xmx1000m -server -XX:OnOutOfMemoryError=kill -9 %p -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -XX:OnOutOfMemoryError=kill -9 %p -server -XX:OnOutOfMemoryError=kill -9 %p -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop-hdfs-journalnode-ip-172-31-24-55.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,DRFA -Djava.library.path=:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,DRFAS org.apache.hadoop.hdfs.qjournal.server.JournalNode

 

hdfs 7660 1 2 05:57 ? 00:00:09 /usr/lib/jvm/java-openjdk/bin/java -Dproc_journalnode -Xmx1000m -server -XX:OnOutOfMemoryError=kill -9 %p -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -XX:OnOutOfMemoryError=kill -9 %p -server -XX:OnOutOfMemoryError=kill -9 %p -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop-hdfs-journalnode-ip-172-31-30-200.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,DRFA -Djava.library.path=:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,DRFAS org.apache.hadoop.hdfs.qjournal.server.JournalNode

 

The Journal status is also healthy.



 

And the Name Node daemon is running on two of the master nodes.

hdfs 7734 1 4 05:57 ? 00:00:17 /usr/lib/jvm/java-openjdk/bin/java -Dproc_namenode -Xmx3328m -server -XX:OnOutOfMemoryError=kill -9 %p -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -XX:OnOutOfMemoryError=kill -9 %p -server -XX:OnOutOfMemoryError=kill -9 %p -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop-hdfs-namenode-ip-172-31-30-200.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,DRFA -Djava.library.path=:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,DRFAS org.apache.hadoop.hdfs.server.namenode.NameNode

 

hdfs 9512 1 0 05:00 ? 00:00:30 /usr/lib/jvm/java-openjdk/bin/java -Dproc_namenode -Xmx3328m -server -XX:OnOutOfMemoryError=kill -9 %p -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -XX:OnOutOfMemoryError=kill -9 %p -server -XX:OnOutOfMemoryError=kill -9 %p -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop-hdfs-namenode-ip-172-31-24-55.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,DRFA -Djava.library.path=:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,DRFAS org.apache.hadoop.hdfs.server.namenode.NameNode

 

 

Opening the HDFS Name Node web UI on node 13.209.73.216 shows it in the active state.



 

 

And opening node 54.180.31.186 shows it in the standby state.



 

So 13.209.73.216 is the active Name Node and 54.180.31.186 is the standby. Let's try a failover.
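The active/standby state can also be checked from the command line with the standard HDFS HA admin tool. A minimal sketch follows; the NameNode IDs nn1 and nn2 are assumptions, and the first command lists the NameNode hosts actually configured on the cluster.

# List the NameNode hosts configured for the cluster
hdfs getconf -namenodes
# Query the HA state of each NameNode (nn1/nn2 are assumed IDs)
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2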

Now reboot the active node, 13.209.73.216.



 

After the reboot, node 54.180.31.186 immediately fails over to active, and once the rebooted 13.209.73.216 node comes back it is in the standby state.

As the screenshots below show, the failover behaved as expected.



 



 

 

 

:: Checking the YARN Resource Manager


First, the Resource Manager daemon is running on all three master nodes.

yarn 7795 1 5 05:57 ? 00:00:18 /usr/lib/jvm/java-openjdk/bin/java -Dproc_resourcemanager -Xmx2713m -XX:OnOutOfMemoryError=kill -9 %p -XX:OnOutOfMemoryError=kill -9 %p -Dhadoop.log.dir=/var/log/hadoop-yarn -Dyarn.log.dir=/var/log/hadoop-yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-ip-172-31-30-200.log -Dyarn.log.file=yarn-yarn-resourcemanager-ip-172-31-30-200.log -Dyarn.home.dir=/usr/lib/hadoop-yarn -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.root.logger=INFO,DRFA -Dyarn.root.logger=INFO,DRFA -Dsun.net.inetaddr.ttl=30 -Djava.library.path=:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native -classpath /etc/hadoop/conf:/etc/hadoop/conf:/etc/hadoop/conf:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/.//*:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-mapreduce/lib/*:/usr/lib/hadoop-mapreduce/.//*::/etc/tez/conf:/usr/lib/tez/*:/usr/lib/tez/lib/*:/usr/lib/hadoop-lzo/lib/*:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar:/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar:/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar:/usr/share/aws/emr/cloudwatch-sink/lib/*:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/etc/tez/conf:/usr/lib/tez/*:/usr/lib/tez/lib/*:/usr/lib/hadoop-lzo/lib/*:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar:/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar:/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar:/usr/share/aws/emr/cloudwatch-sink/lib/*:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-yarn/lib/*:/etc/hadoop/conf/rm-config/log4j.properties org.apache.hadoop.yarn.server.resourcemanager.ResourceManager

 

yarn 7446 1 1 05:21 ? 00:00:19 /usr/lib/jvm/java-openjdk/bin/java -Dproc_resourcemanager -Xmx2713m -XX:OnOutOfMemoryError=kill -9 %p -XX:OnOutOfMemoryError=kill -9 %p -Dhadoop.log.dir=/var/log/hadoop-yarn -Dyarn.log.dir=/var/log/hadoop-yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-ip-172-31-29-8.log -Dyarn.log.file=yarn-yarn-resourcemanager-ip-172-31-29-8.log -Dyarn.home.dir=/usr/lib/hadoop-yarn -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.root.logger=INFO,DRFA -Dyarn.root.logger=INFO,DRFA -Dsun.net.inetaddr.ttl=30 -Djava.library.path=:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native -classpath /etc/hadoop/conf:/etc/hadoop/conf:/etc/hadoop/conf:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/.//*:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-mapreduce/lib/*:/usr/lib/hadoop-mapreduce/.//*::/etc/tez/conf:/usr/lib/tez/*:/usr/lib/tez/lib/*:/usr/lib/hadoop-lzo/lib/*:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar:/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar:/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar:/usr/share/aws/emr/cloudwatch-sink/lib/*:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/etc/tez/conf:/usr/lib/tez/*:/usr/lib/tez/lib/*:/usr/lib/hadoop-lzo/lib/*:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar:/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar:/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar:/usr/share/aws/emr/cloudwatch-sink/lib/*:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-yarn/lib/*:/etc/hadoop/conf/rm-config/log4j.properties org.apache.hadoop.yarn.server.resourcemanager.ResourceManager

 

yarn 9509 1 0 05:00 ? 00:00:22 /usr/lib/jvm/java-openjdk/bin/java -Dproc_resourcemanager -Xmx2713m -XX:OnOutOfMemoryError=kill -9 %p -XX:OnOutOfMemoryError=kill -9 %p -Dhadoop.log.dir=/var/log/hadoop-yarn -Dyarn.log.dir=/var/log/hadoop-yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-ip-172-31-24-55.log -Dyarn.log.file=yarn-yarn-resourcemanager-ip-172-31-24-55.log -Dyarn.home.dir=/usr/lib/hadoop-yarn -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.root.logger=INFO,DRFA -Dyarn.root.logger=INFO,DRFA -Dsun.net.inetaddr.ttl=30 -Djava.library.path=:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native -classpath /etc/hadoop/conf:/etc/hadoop/conf:/etc/hadoop/conf:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/.//*:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-mapreduce/lib/*:/usr/lib/hadoop-mapreduce/.//*::/etc/tez/conf:/usr/lib/tez/*:/usr/lib/tez/lib/*:/usr/lib/hadoop-lzo/lib/*:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar:/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar:/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar:/usr/share/aws/emr/cloudwatch-sink/lib/*:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/etc/tez/conf:/usr/lib/tez/*:/usr/lib/tez/lib/*:/usr/lib/hadoop-lzo/lib/*:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar:/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar:/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar:/usr/share/aws/emr/cloudwatch-sink/lib/*:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-yarn/lib/*:/etc/hadoop/conf/rm-config/log4j.properties org.apache.hadoop.yarn.server.resourcemanager.ResourceManager

 

Connecting to the Resource Manager web UI on port 8088 only works against one of the three nodes.

In this case, node 54.180.31.186 serves the UI.



 

The other nodes do not respond.



 

Just as with the Name Node, rebooting that node makes a different node take over, as shown below. In this case, node 15.164.93.61 became reachable.
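The same check can be done from the command line with the YARN HA admin tool. This is a sketch; the Resource Manager IDs rm1, rm2, and rm3 are assumptions based on typical YARN HA configurations, not values confirmed in this post.

# Query the HA state of each Resource Manager (rm1/rm2/rm3 are assumed IDs)
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
yarn rmadmin -getServiceState rm3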



 

:: Checking Ganglia


The Ganglia web UI is accessible on all three master nodes.



 



 



 

 




Wrap-up






Although some EMR features and applications are not supported in this mode, the multi-master option makes more resilient cluster configurations possible, such as Name Node HA.

That concludes this quick look at the EMR multi-master feature.