dfs.client.block.write.replace-datanode-on-failure
Problem description
When using the HDFS API to append content to a file, running from IDEA on a Windows machine against HDFS on an Aliyun server, the operation fails with the following error:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[172.25.55.228:9866,DS-6ba619cf-5189-45c7-8dbc-56afa381ab0b,DISK]], original=[DatanodeInfoWithStorage[172.25.55.228:9866,DS-6ba619cf-5189-45c7-8dbc-56afa381ab0b,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:918)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:984)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1131)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:455)
Solution
Add the following configuration to the client code in IDEA:
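The original post showed this step only as a screenshot, so here is a minimal sketch of an append client with the fix applied. The namenode address hdfs://namenode-host:9000, the user name hadoop, and the file path /user/hadoop/test.txt are placeholders; substitute your own values.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.net.URI;
import java.nio.charset.StandardCharsets;

public class AppendDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // On a cluster with only one or two datanodes there is no spare node
        // to swap into the write pipeline, so tell the client not to attempt
        // a datanode replacement when the pipeline reports a failure.
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
        conf.set("dfs.client.block.write.replace-datanode-on-failure.enable", "true");

        // Placeholder namenode URI, user, and file path; adjust to your cluster.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode-host:9000"), conf, "hadoop");
        try (FSDataOutputStream out = fs.append(new Path("/user/hadoop/test.txt"))) {
            out.write("appended from IDEA\n".getBytes(StandardCharsets.UTF_8));
        }
        fs.close();
    }
}
```

With the policy set to NEVER, a failed datanode is simply dropped from the pipeline rather than replaced, which is the usual choice for clusters with three or fewer datanodes; on larger clusters the DEFAULT policy is safer, because it preserves replication for appended and hflushed blocks.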
Summary

On a cluster with a single datanode, any failure in the append pipeline leaves the client with no replacement node to try, so the DEFAULT replace-datanode-on-failure policy aborts the write with the IOException above. Configuring the client never to attempt a replacement lets the append continue on the surviving datanode(s).
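If you would rather not hard-code the properties, the same two settings can be placed in the hdfs-site.xml on the client's classpath instead; a sketch:

```xml
<!-- Client-side hdfs-site.xml: do not replace a failed datanode in the pipeline -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
```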