Source code analysis of MapReduce input split computation
The question: I happened to see a claim that when a 260M file goes through MapReduce,
FileInputFormat.getSplits splits it into two pieces, 128M and 132M.
I asked boss Wu about it, and he said that's impossible: "Don't quote blog posts, prove it with the source code." So here we go!
So, straight to the source. The split logic lives in FileInputFormat.getSplits():
```java
public List<InputSplit> getSplits(JobContext job) throws IOException {
    StopWatch sw = new StopWatch().start();
    // minimum split size, default 1
    long minSize = Math.max(getFormatMinSplitSize(), getMinSplitSize(job));
    // maximum split size, default Long.MAX_VALUE
    long maxSize = getMaxSplitSize(job);

    // generate splits
    List<InputSplit> splits = new ArrayList<InputSplit>();
    // list the status of the input files
    List<FileStatus> files = listStatus(job);
    for (FileStatus file : files) {
        // get the file path and length
        Path path = file.getPath();
        long length = file.getLen();
        if (length != 0) {
            // get the block locations of the file
            BlockLocation[] blkLocations;
            if (file instanceof LocatedFileStatus) {
                blkLocations = ((LocatedFileStatus) file).getBlockLocations();
            } else {
                FileSystem fs = path.getFileSystem(job.getConfiguration());
                blkLocations = fs.getFileBlockLocations(file, 0, length);
            }
            // check whether the file can be split
            if (isSplitable(job, path)) {
                // block size, 128M by default
                long blockSize = file.getBlockSize();
                // the actual split size
                long splitSize = computeSplitSize(blockSize, minSize, maxSize);

                // bytes of the file not yet assigned to a split
                long bytesRemaining = length;
                // keep cutting splits while the remaining bytes exceed 1.1x the split size
                // private static final double SPLIT_SLOP = 1.1;   // 10% slop
                while (((double) bytesRemaining) / splitSize > SPLIT_SLOP) {
                    // index of the block that contains the current offset
                    int blkIndex = getBlockIndex(blkLocations, length - bytesRemaining);
                    // create a split
                    splits.add(makeSplit(path, length - bytesRemaining, splitSize,
                            blkLocations[blkIndex].getHosts(),
                            blkLocations[blkIndex].getCachedHosts()));
                    // subtract the bytes that have already been assigned
                    bytesRemaining -= splitSize;
                }

                // if anything is left over, it becomes the last split
                if (bytesRemaining != 0) {
                    int blkIndex = getBlockIndex(blkLocations, length - bytesRemaining);
                    splits.add(makeSplit(path, length - bytesRemaining, bytesRemaining,
                            blkLocations[blkIndex].getHosts(),
                            blkLocations[blkIndex].getCachedHosts()));
                }
            } else { // not splitable
                splits.add(makeSplit(path, 0, length, blkLocations[0].getHosts(),
                        blkLocations[0].getCachedHosts()));
            }
        } else {
            // Create empty hosts array for zero length files
            splits.add(makeSplit(path, 0, length, new String[0]));
        }
    }
    // Save the number of input files for metrics/loadgen
    job.getConfiguration().setLong(NUM_INPUT_FILES, files.size());
    sw.stop();
    if (LOG.isDebugEnabled()) {
        LOG.debug("Total # of splits generated by getSplits: " + splits.size()
                + ", TimeTaken: " + sw.now(TimeUnit.MILLISECONDS));
    }
    return splits;
}
```
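The split size itself comes from computeSplitSize, which is referenced above but not shown. In the Hadoop source it is essentially a clamp of the block size between minSize and maxSize; a minimal sketch of that logic:

```java
// Sketch of the logic behind FileInputFormat.computeSplitSize:
// take the block size, but never go below minSize or above maxSize.
protected long computeSplitSize(long blockSize, long minSize, long maxSize) {
    return Math.max(minSize, Math.min(maxSize, blockSize));
}
```

With the defaults (minSize = 1, maxSize = Long.MAX_VALUE), the split size is therefore exactly the block size, 128M.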
The minimum split size, getFormatMinSplitSize(), defaults to 1:

```java
protected long getFormatMinSplitSize() {
    return 1;
}
```
The maximum split size, getMaxSplitSize(), defaults to Long.MAX_VALUE:

```java
public static long getMaxSplitSize(JobContext context) {
    return context.getConfiguration().getLong(SPLIT_MAXSIZE, Long.MAX_VALUE);
}
```
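For reference, both bounds can be overridden from the job driver; a minimal, hypothetical snippet using the standard FileInputFormat setters (class name and sizes here are made up for illustration):

```java
import java.io.IOException;

import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

// Hypothetical driver fragment showing how minSize/maxSize feed into getSplits.
// These setters correspond to mapreduce.input.fileinputformat.split.minsize
// and mapreduce.input.fileinputformat.split.maxsize.
public class SplitSizeConfigExample {
    public static void main(String[] args) throws IOException {
        Job job = Job.getInstance();
        FileInputFormat.setMinInputSplitSize(job, 64L * 1024 * 1024);   // minSize = 64M
        FileInputFormat.setMaxInputSplitSize(job, 256L * 1024 * 1024);  // maxSize = 256M
    }
}
```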
And SPLIT_SLOP:

```java
private static final double SPLIT_SLOP = 1.1;   // 10% slop
```

So the 260M file really is split into two pieces of 128M and 132M: the first pass through the loop cuts a 128M split, leaving 132M, and since 132M / 128M ≈ 1.03 is below the 1.1 slop factor, the loop exits and the remainder becomes a single final split.
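To see the slop rule in action without running a job, here is a small standalone simulation (not Hadoop code) of the loop above for a hypothetical 260M file with a 128M block size and the default min/max sizes:

```java
// Standalone simulation of the SPLIT_SLOP loop in getSplits.
public class SplitSlopDemo {
    private static final double SPLIT_SLOP = 1.1;   // 10% slop, as in FileInputFormat

    public static void main(String[] args) {
        long blockSize = 128L * 1024 * 1024;      // 128M block size
        long splitSize = blockSize;               // defaults: max(1, min(Long.MAX_VALUE, blockSize))
        long bytesRemaining = 260L * 1024 * 1024; // hypothetical 260M input file

        while (((double) bytesRemaining) / splitSize > SPLIT_SLOP) {
            System.out.println("split: " + splitSize / (1024 * 1024) + "M");
            bytesRemaining -= splitSize;
        }
        if (bytesRemaining != 0) {
            System.out.println("last split: " + bytesRemaining / (1024 * 1024) + "M");
        }
    }
}
```

Running it prints one 128M split and one 132M last split, which matches the claim that started this whole discussion.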