Graduation Project Dev Log 2017-12-01: Scan Timeout
【Preface】
This post describes a scan timeout problem I ran into during development.
【問(wèn)題描述】
剛剛完成了對(duì)索引表的定義和建議,并且在單元測(cè)試中對(duì)該表進(jìn)行插入和掃描時(shí)均未發(fā)現(xiàn)錯(cuò)誤。但是在對(duì)該表進(jìn)行整體更新時(shí),需要在掃描weather表的過(guò)程中對(duì)該表進(jìn)行不斷的更新操作。但是發(fā)現(xiàn)每次更新到第100條數(shù)據(jù)的時(shí)候就報(bào)scan的超時(shí)錯(cuò)誤。即使只更新一行數(shù)據(jù)中的某一列也是如此(只獲取區(qū)塊首的時(shí)候掃描量會(huì)大大下降),于是證明不是掃描量的問(wèn)題。具體報(bào)錯(cuò)如下
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hbase.client.ScannerTimeoutException: 1255382ms passed since the last invocation, timeout is currently set to 60000
    at org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:97)
    at com.zxc.fox.dao.IndexDao.updateAll(IndexDao.java:118)
    at com.zxc.fox.dao.IndexDao.main(IndexDao.java:38)
Caused by: org.apache.hadoop.hbase.client.ScannerTimeoutException: 1255382ms passed since the last invocation, timeout is currently set to 60000
    at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:417)
    at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:332)
    at org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:94)
    ... 2 more
Caused by: org.apache.hadoop.hbase.UnknownScannerException: org.apache.hadoop.hbase.UnknownScannerException: Name: 533, already closed?
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2017)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31443)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:313)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:241)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:310)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:291)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.UnknownScannerException): org.apache.hadoop.hbase.UnknownScannerException: Name: 533, already closed?
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2017)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31443)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1199)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:300)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:31889)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:200)
    ... 9 more
【Resolution】
I first checked the DataNode error logs and then followed two posts (reference blog 01, reference blog 02): configuring the timeout in code did not work, and neither did editing the Hadoop configuration files. After thinking the failure through and comparing against methods I had written earlier, I finally saw that I had nested one scan inside another. So I rewrote that part of the code: a single scan first collects all the city ids into a list, and the update loop then iterates over that list instead of running the inner scan. In practice this added no noticeable time cost, and it eliminated the scan timeout.
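In hindsight the cause is visible in the trace: the outer scanner sat idle while the inner work ran, so over 20 minutes (1255382 ms) elapsed between two next() calls against a 60000 ms lease, and the RegionServer had already discarded the scanner. A minimal sketch of the refactor, assuming the HBase 1.x client API shown in the stack trace; the table names (weather, index), the column (info:updated), and the updateAll signature are placeholders I made up, not the project's real schema:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class IndexUpdateSketch {
    // Pass 1: scan once, keep only the row keys, close the scanner.
    // Pass 2: drive the updates from the in-memory list, so the slow
    // writes no longer run between two scanner next() calls.
    static void updateAll(Connection conn) throws IOException {
        List<byte[]> cityIds = new ArrayList<>();
        try (Table weather = conn.getTable(TableName.valueOf("weather"));
             ResultScanner scanner = weather.getScanner(new Scan())) {
            for (Result r : scanner) {
                cityIds.add(r.getRow()); // read cheaply; do not write here
            }
        } // scanner lease released here, before any slow work starts

        try (Table index = conn.getTable(TableName.valueOf("index"))) {
            for (byte[] cityId : cityIds) {
                Put put = new Put(cityId);
                put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("updated"),
                        Bytes.toBytes(System.currentTimeMillis()));
                index.put(put); // may take arbitrarily long; no lease to expire
            }
        }
    }
}

This trades memory for safety: it works because the set of city ids easily fits in a list. For a table too large for that, the scan would instead have to be restarted periodically from the last row key seen.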
【體會(huì)】
出現(xiàn)該問(wèn)題可能是由多種原因造成的,先檢查一下。可以從如下幾方面考慮
1. 是否你自己每次的scan處理較耗時(shí)? -> ?優(yōu)化處理程序,scan一些設(shè)置調(diào)優(yōu)(比如setBlockCache(false) )
2. 是否每次scan的caching設(shè)置過(guò)大? ?-> ?減少caching (一般默認(rèn)先設(shè)100)
3. 是否是網(wǎng)絡(luò)或機(jī)器負(fù)載問(wèn)題? ? ?->? 查看集群原因
4. 是否HBase本身負(fù)載問(wèn)題? ? ? -> ? 查看RegionServer日志
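If the slow per-row work is unavoidable and the scan must stay open, the lease timeout and caching can also be adjusted on the client side. A hedged sketch using HBase 1.x property and method names; the 120000 ms figure is only an example, and the same hbase.client.scanner.timeout.period must be raised in the RegionServer's hbase-site.xml too, since the server enforces its own copy of the value:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Scan;

public class ScanTuningSketch {
    static Scan tunedScan(Configuration conf) {
        // Must be set before ConnectionFactory.createConnection(conf)
        // is called, or the running client will not pick it up.
        conf.setInt("hbase.client.scanner.timeout.period", 120000);

        Scan scan = new Scan();
        scan.setCaching(100);       // fewer rows per RPC -> next() renews the lease more often
        scan.setCacheBlocks(false); // a one-off full scan should not evict hot block-cache data
        return scan;
    }
}

Create the Configuration with HBaseConfiguration.create() and pass that same instance to ConnectionFactory.createConnection(), so the timeout applies to every scanner the connection opens.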
轉(zhuǎn)載于:https://www.cnblogs.com/420Rock/p/7943280.html
創(chuàng)作挑戰(zhàn)賽新人創(chuàng)作獎(jiǎng)勵(lì)來(lái)咯,堅(jiān)持創(chuàng)作打卡瓜分現(xiàn)金大獎(jiǎng)總結(jié)
以上是生活随笔為你收集整理的毕设开发日志2017-12-01-Scan超时的全部?jī)?nèi)容,希望文章能夠幫你解決所遇到的問(wèn)題。
- 上一篇: Mac MongoDB未正常关闭导致重启
- 下一篇: JMS(Java消息服务)与消息队列Ac