How to contribute a limited or specific amount of storage as a slave (DataNode) to the master (NameNode)

Sonam Kumari Singh
2 min read · Dec 11, 2023

Hello Readers,

Back with another task, this time with a new technology: Hadoop. Hadoop is an open-source Java framework for storing and processing large amounts of data. It uses distributed storage and parallel processing to break workloads into smaller tasks that can run simultaneously.

Generally, when we configure Hadoop, a DataNode contributes its entire Linux filesystem to the cluster, which may not be what we want. To contribute only a specific amount of storage from the node, we use Linux partitioning and mounting.

Prerequisites:
- Hadoop basics
- Linux partitioning
- A working HDFS cluster
- A new slave node

We will do this in two steps:
1. Partition the disk to be attached and mount it on a folder.
2. Configure the DataNode and connect it to the master (NameNode).

1. Partitioning of the Disk:

We will be using our Linux skills here.

Agenda: create a folder, partition the attached disk, and mount the partition on that folder. First, list the block devices. I am using the AWS cloud here, so the devices will be named accordingly.

# fdisk -l 
# fdisk /dev/sdb

## n -> create a new partition
## p -> primary partition
## w -> write the partition table and exit
## q -> quit without saving
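Before touching the real disk, the same keystroke sequence can be rehearsed safely on a throwaway image file. This sketch scripts the n / p / defaults / w sequence above; the real run targets /dev/sdb and needs root:

```shell
# Create a fake 100 MB "disk" as a plain file
truncate -s 100M /tmp/disk.img
# Feed fdisk the same keystrokes: n (new), p (primary),
# three ENTERs (accept default number / first / last sector), w (write)
printf 'n\np\n\n\n\nw\n' | fdisk /tmp/disk.img
# Verify: the partition table now lists one partition
fdisk -l /tmp/disk.img
```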

Format the new partition (fdisk created /dev/sdb1 on /dev/sdb):

# mkfs.ext4 /dev/sdb1

Create the mount point and mount the partition:

# mkdir /kp
# mount /dev/sdb1 /kp
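Optionally, to keep the mount across reboots, an /etc/fstab entry can be added (a sketch, using the /dev/sdb1 partition and /kp mount point from above):

```
/dev/sdb1  /kp  ext4  defaults  0  0
```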

2. CONFIGURATION AND NODE ADDITION

Inside the hdfs-site.xml file, point the DataNode's data directory at the mounted folder (/kp):

vi /etc/hadoop/hdfs-site.xml  
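A minimal hdfs-site.xml for the DataNode might look like the following sketch. /kp is the mount point created above; dfs.datanode.data.dir is the standard property that tells the DataNode where to store blocks (on older Hadoop 1.x setups the property is named dfs.data.dir):

```xml
<configuration>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/kp</value>
    </property>
</configuration>
```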
vi /etc/hadoop/core-site.xml
## inside this file we need to add the NameNode details
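A sketch of the core-site.xml entry on the DataNode, assuming the NameNode is reachable at the placeholder address namenode.example.com on port 9000 (substitute your master's hostname/IP and RPC port; the property is fs.defaultFS on Hadoop 2+, fs.default.name on 1.x):

```xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://namenode.example.com:9000</value>
    </property>
</configuration>
```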

Starting the services

$ hadoop-daemon.sh start datanode

Check the cluster storage; the newly added node should be reflected there:

# hadoop dfsadmin -report 

Hope you enjoyed the article. Follow for more ….
