Available Techniques In Hadoop Small File Issue
Saved in:
Main Authors: | , , |
---|---|
Format: | Article |
Language: | English |
Published: | Institute Of Advanced Engineering And Science (IAES), 2020 |
Online Access: | http://eprints.utem.edu.my/id/eprint/24343/2/AVAILABLE%20TECHNIQUES%20IN%20HADOOP%20SMALL%20FILE%20ISSUE.PDF http://eprints.utem.edu.my/id/eprint/24343/ http://ijece.iaescore.com/index.php/IJECE/article/view/20039/13737 |
Summary: | Hadoop has been a leading solution for storing and processing big data since its release in late 2006. Hadoop processes data in a master-slave fashion, splitting a large file into smaller pieces so they can be processed separately in parallel; this technique was adopted instead of pushing one large file through a single costly supercomputer to extract useful information. Hadoop performs very well when big data arrives as large files, but when it arrives as many small files the cluster can face performance problems: slow processing, delayed data access, high latency, and even a complete cluster shutdown. This paper highlights one of Hadoop's limitations that affects data processing performance, the "big data in small files" problem, which occurs when a massive number of small files is pushed into a Hadoop cluster and can drive the cluster to shut down entirely. The paper also highlights some native and proposed solutions to the small file problem, how they work to reduce its negative effects on a Hadoop cluster, and how they add extra performance to the storage and access mechanisms. |
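The summary refers to native solutions for the small file problem without naming them in this record. One well-known native approach is packing many small files into a single SequenceFile, so the NameNode tracks one large file instead of thousands of tiny ones. Below is a minimal sketch of that idea, assuming Hadoop 2.x client libraries are on the classpath; the local directory `small-files` and the HDFS target path are hypothetical, not taken from the article.

```java
import java.io.File;
import java.nio.file.Files;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SmallFilePacker {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical HDFS destination; one consolidated file replaces
        // many small ones, reducing NameNode metadata pressure.
        Path target = new Path("hdfs:///user/demo/packed.seq");

        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(target),
                SequenceFile.Writer.keyClass(Text.class),
                SequenceFile.Writer.valueClass(BytesWritable.class))) {
            // Each small local file becomes one key/value record:
            // key = original file name, value = raw file bytes.
            File[] smallFiles = new File("small-files").listFiles();
            if (smallFiles == null) {
                throw new IllegalStateException("input directory not found");
            }
            for (File f : smallFiles) {
                byte[] bytes = Files.readAllBytes(f.toPath());
                writer.append(new Text(f.getName()), new BytesWritable(bytes));
            }
        }
    }
}
```

A MapReduce job can then read the packed records through SequenceFileInputFormat, avoiding the per-file task and metadata overhead the abstract describes.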