First of all, you need to install Docker on your machine. After that, follow these steps.

**Step 1**

Open a terminal and run the command given below to pull the Hadoop image into Docker.

    docker pull liliasfaxi/spark-hadoop:hv-2.7.2

You can verify that the image was downloaded by running the command given below.

    docker images
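
If the pull succeeded, the listing should contain an entry roughly like this (the image ID, creation date and size will differ, so they are shown as placeholders here):

    REPOSITORY                 TAG        IMAGE ID       CREATED       SIZE
    liliasfaxi/spark-hadoop    hv-2.7.2   ...            ...           ...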

**Step 2**

Now, in this step, we have to create three different containers:

 - Master
 - Slave1
 - Slave2

But before we start making the containers, we have to create a **network** for them:

    docker network create --driver=bridge hadoop
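
To confirm that the network was created, you can list it:

    docker network ls --filter name=hadoop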

After creating the network, create the master container using the command given below.

**For Master**

    docker run -itd --net=hadoop -p 50070:50070 -p 8088:8088 -p 7077:7077 --name hadoop-master --hostname hadoop-master liliasfaxi/spark-hadoop:hv-2.7.2

After creating the master container, create the slave containers; just copy and paste the commands. Note that the `-p` flags on the master publish the NameNode web UI (50070), the YARN ResourceManager web UI (8088) and the Spark master port (7077) to the host.

**For Slave1**

    docker run -itd -p 8040:8042 --net=hadoop --name hadoop-slave1 --hostname hadoop-slave1 liliasfaxi/spark-hadoop:hv-2.7.2

**For Slave2**

    docker run -itd -p 8041:8042 --net=hadoop --name hadoop-slave2 --hostname hadoop-slave2 liliasfaxi/spark-hadoop:hv-2.7.2
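
Both slaves expose the NodeManager web UI (container port 8042) on different host ports, 8040 and 8041, so they don't clash. You can double-check the mappings with:

    docker port hadoop-slave1
    docker port hadoop-slave2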

Now, to check that the newly created containers (master, slave1 and slave2) are running properly, use the command given below.

    docker ps
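
If you only want the names and status, a compact view also works:

    docker ps --format "table {{.Names}}\t{{.Status}}"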

Now we have to start the SSH service in the master, slave1 and slave2 containers using the commands below.

    docker exec hadoop-master service ssh start
    docker exec hadoop-slave1 service ssh start
    docker exec hadoop-slave2 service ssh start 
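
You can optionally confirm that SSH came up in each container, for example:

    docker exec hadoop-master service ssh status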

After starting the services, we need to go inside the master container to confirm some configuration. We can do that with the command given below.

    docker exec -it hadoop-master bash 

Now, in the master's bash shell, we need to open the file named **core-site.xml** using the command given below (I am using vi instead of nano).

    vi $HADOOP_HOME/etc/hadoop/core-site.xml
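
For reference, in images like this one core-site.xml usually just points `fs.defaultFS` at the master container, so expect something along these lines (the exact port may differ in your copy of the image):

    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://hadoop-master:9000</value>
        </property>
    </configuration>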

Now verify that this file looks like the image below.

[![core-site.xml in the master container][1]][1]


If it does, move on: run **ls** in the master's bash shell and verify that the directory contains the expected Hadoop/HDFS files. Compare it with the image below.

[![Directory listing inside the master container][2]][2]


If it matches, we need to verify the master, slave1 and slave2 daemons using the commands below, one by one.

**For Master**

    docker exec -it hadoop-master jps

**For Slave1 and Slave2**

    docker exec -it hadoop-slave1 jps
    docker exec -it hadoop-slave2 jps

Your output should look something like this.

[![jps output for the master and slave containers][3]][3]
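
For reference, with this image the master typically runs the HDFS NameNode, SecondaryNameNode and the YARN ResourceManager, while each slave runs a DataNode and a NodeManager, so jps output along these lines (the PIDs will differ) is a good sign:

    # on hadoop-master
    370 NameNode
    559 SecondaryNameNode
    712 ResourceManager
    1123 Jps

    # on hadoop-slave1 / hadoop-slave2
    342 DataNode
    465 NodeManager
    987 Jps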


Congratulations! Hadoop is now installed on your machine, and you can manipulate files in it easily. Don't forget to access the web interfaces of Hadoop and YARN:

 - HDFS UI: http://localhost:50070 
 - YARN UI: http://localhost:8088
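
As a quick smoke test of file manipulation, you can run a couple of HDFS commands from the host (this assumes the daemons verified with jps above are running and that `hdfs` is on the PATH inside the container, as it is in most Hadoop images):

    docker exec -it hadoop-master hdfs dfs -mkdir -p /demo
    docker exec -it hadoop-master hdfs dfs -ls /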

  [1]: https://i.sstatic.net/mL3hsHqD.png
  [2]: https://i.sstatic.net/FZq3SAVo.png
  [3]: https://i.sstatic.net/JwYo972C.png