On a Netty ByteBuf Memory Leak
The Donghua vehicle-administration data collection platform I built earlier kept losing data. It did not happen often, but the cause still deserved a look, so today I raised Netty's log level to hunt for the problem. The code to raise the level:
    ServerBootstrap b = new ServerBootstrap();
    b.group(bossGroup, workerGroup)
     .channel(NioServerSocketChannel.class)
     .option(ChannelOption.SO_BACKLOG, 2048)
     .handler(new LoggingHandler(LogLevel.DEBUG))
     .childHandler(new ChildChannelHandler());

Setting the LogLevel to DEBUG is all it takes.
Then I settled in to watch the logs.
Note this line:
LEAK: ByteBuf.release() was not called before it's garbage-collected. Enable advanced leak reporting to find out where the leak occurred. To enable advanced leak reporting, specify the JVM option '-Dio.netty.leakDetectionLevel=advanced' or call ResourceLeakDetector.setLevel() See http://netty.io/wiki/reference-counted-objects.html for more information.

This message tells us that adding the line below is enough:
    ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.ADVANCED);

Setting the warning level to ADVANCED turns up far more detailed leak information. Checking the log again:
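As the warning text itself says, the same effect is available without a code change by launching the JVM with -Dio.netty.leakDetectionLevel=advanced. For the programmatic route, a minimal sketch of where the call belongs (the entry-point class here is hypothetical; the only requirement is that the call runs before buffers start being allocated):

    import io.netty.util.ResourceLeakDetector;

    public final class ObdServerMain { // hypothetical entry point
        public static void main(String[] args) throws Exception {
            // equivalent to the JVM option -Dio.netty.leakDetectionLevel=advanced
            ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.ADVANCED);
            // ... build and bind the ServerBootstrap shown earlier ...
        }
    }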
2017-01-19 10:35:59 [ nioEventLoopGroup-1-0:665092 ] - [ ERROR ] LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records: 5

#5:
io.netty.buffer.AdvancedLeakAwareByteBuf.readBytes(AdvancedLeakAwareByteBuf.java:435)
com.dhcc.ObdServer.ObdServerHandler.channelRead(ObdServerHandler.java:31)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:84)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:153)
io.netty.channel.PausableChannelEventExecutor.invokeChannelRead(PausableChannelEventExecutor.java:86)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:389)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:243)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:84)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:153)
io.netty.channel.PausableChannelEventExecutor.invokeChannelRead(PausableChannelEventExecutor.java:86)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:389)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:956)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:127)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:514)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:471)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:385)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:351)
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
io.netty.util.internal.chmv8.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1412)
io.netty.util.internal.chmv8.ForkJoinTask.doExec(ForkJoinTask.java:280)
io.netty.util.internal.chmv8.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:877)
io.netty.util.internal.chmv8.ForkJoinPool.scan(ForkJoinPool.java:1706)
io.netty.util.internal.chmv8.ForkJoinPool.runWorker(ForkJoinPool.java:1661)
io.netty.util.internal.chmv8.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:126)

#4:
Hint: 'ObdServerHandler#0' will handle the message from this point.
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:387)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:243)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:84)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:153)
io.netty.channel.PausableChannelEventExecutor.invokeChannelRead(PausableChannelEventExecutor.java:86)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:389)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:956)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:127)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:514)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:471)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:385)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:351)
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
io.netty.util.internal.chmv8.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1412)
io.netty.util.internal.chmv8.ForkJoinTask.doExec(ForkJoinTask.java:280)
io.netty.util.internal.chmv8.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:877)
io.netty.util.internal.chmv8.ForkJoinPool.scan(ForkJoinPool.java:1706)
io.netty.util.internal.chmv8.ForkJoinPool.runWorker(ForkJoinPool.java:1661)
io.netty.util.internal.chmv8.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:126)

#3:
io.netty.buffer.AdvancedLeakAwareByteBuf.release(AdvancedLeakAwareByteBuf.java:721)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:237)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:84)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:153)
io.netty.channel.PausableChannelEventExecutor.invokeChannelRead(PausableChannelEventExecutor.java:86)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:389)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:956)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:127)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:514)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:471)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:385)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:351)
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
io.netty.util.internal.chmv8.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1412)
io.netty.util.internal.chmv8.ForkJoinTask.doExec(ForkJoinTask.java:280)
io.netty.util.internal.chmv8.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:877)
io.netty.util.internal.chmv8.ForkJoinPool.scan(ForkJoinPool.java:1706)
io.netty.util.internal.chmv8.ForkJoinPool.runWorker(ForkJoinPool.java:1661)
io.netty.util.internal.chmv8.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:126)

#2:
io.netty.buffer.AdvancedLeakAwareByteBuf.retain(AdvancedLeakAwareByteBuf.java:693)
io.netty.handler.codec.DelimiterBasedFrameDecoder.decode(DelimiterBasedFrameDecoder.java:277)
io.netty.handler.codec.DelimiterBasedFrameDecoder.decode(DelimiterBasedFrameDecoder.java:216)
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:316)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:230)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:84)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:153)
io.netty.channel.PausableChannelEventExecutor.invokeChannelRead(PausableChannelEventExecutor.java:86)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:389)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:956)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:127)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:514)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:471)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:385)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:351)
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
io.netty.util.internal.chmv8.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1412)
io.netty.util.internal.chmv8.ForkJoinTask.doExec(ForkJoinTask.java:280)
io.netty.util.internal.chmv8.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:877)
io.netty.util.internal.chmv8.ForkJoinPool.scan(ForkJoinPool.java:1706)
io.netty.util.internal.chmv8.ForkJoinPool.runWorker(ForkJoinPool.java:1661)
io.netty.util.internal.chmv8.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:126)
#1:
io.netty.buffer.AdvancedLeakAwareByteBuf.skipBytes(AdvancedLeakAwareByteBuf.java:465)
io.netty.handler.codec.DelimiterBasedFrameDecoder.decode(DelimiterBasedFrameDecoder.java:272)
io.netty.handler.codec.DelimiterBasedFrameDecoder.decode(DelimiterBasedFrameDecoder.java:216)
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:316)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:230)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:84)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:153)
io.netty.channel.PausableChannelEventExecutor.invokeChannelRead(PausableChannelEventExecutor.java:86)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:389)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:956)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:127)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:514)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:471)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:385)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:351)
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
io.netty.util.internal.chmv8.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1412)
io.netty.util.internal.chmv8.ForkJoinTask.doExec(ForkJoinTask.java:280)
io.netty.util.internal.chmv8.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:877)
io.netty.util.internal.chmv8.ForkJoinPool.scan(ForkJoinPool.java:1706)
io.netty.util.internal.chmv8.ForkJoinPool.runWorker(ForkJoinPool.java:1661)
io.netty.util.internal.chmv8.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:126)

Created at:
io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:250)
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:155)
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:146)
io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:107)
io.netty.channel.AdaptiveRecvByteBufAllocator$HandleImpl.allocate(AdaptiveRecvByteBufAllocator.java:104)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:113)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:514)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:471)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:385)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:351)
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
io.netty.util.internal.chmv8.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1412)
io.netty.util.internal.chmv8.ForkJoinTask.doExec(ForkJoinTask.java:280)
io.netty.util.internal.chmv8.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:877)
io.netty.util.internal.chmv8.ForkJoinPool.scan(ForkJoinPool.java:1706)
io.netty.util.internal.chmv8.ForkJoinPool.runWorker(ForkJoinPool.java:1661)
io.netty.util.internal.chmv8.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:126)

That stack pinpoints the following lines in my code:
    ByteBuf buff = (ByteBuf) msg;
    byte[] req = new byte[buff.readableBytes()];

This confirms the problem is a ByteBuf memory leak, so I investigated from that angle and found that Netty 5 allocates ByteBufs through PooledByteBufAllocator by default. Pooled buffers must be released by hand, otherwise the memory leaks.
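For context, here is a hypothetical reconstruction of the read path in ObdServerHandler. Only the two lines above and the readBytes call the leak report points at (ObdServerHandler.java:31) come from the post; the rest is scaffolding, written against the stable Netty 4 API (the Netty 5 alpha visible in the stack traces names its adapter classes slightly differently). Note that the buffer is never released:

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;

    public class ObdServerHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            ByteBuf buff = (ByteBuf) msg;
            byte[] req = new byte[buff.readableBytes()];
            buff.readBytes(req); // the readBytes(...) frame from access record #5
            // ... parse req and hand the data off ...
            // BUG: buff.release() is never called, so the pooled buffer leaks
        }
    }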
So the fix is simply to release the ByteBuf once we are done with it, as sketched below.
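A minimal sketch of the corrected handler, assuming the message can be consumed and released inside channelRead (the try/finally placement is my choice, not the post's):

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.util.ReferenceCountUtil;

    public class ObdServerHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            ByteBuf buff = (ByteBuf) msg;
            try {
                byte[] req = new byte[buff.readableBytes()];
                buff.readBytes(req);
                // ... parse req and hand the data off ...
            } finally {
                ReferenceCountUtil.release(msg); // always drop our reference, even if parsing throws
            }
        }
    }

In Netty 4 one can also extend SimpleChannelInboundHandler, which releases the message automatically once channelRead0 returns.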
Here is one netizen's explanation of that call:
ReferenceCountUtil.release() is really a wrapper around ByteBuf.release() (inherited from the ReferenceCounted interface). ByteBuf in Netty 4 is reference-counted (Netty 4 implements an optional ByteBuf pool): every newly allocated ByteBuf has a reference count of 1; each additional reference to the ByteBuf requires a call to ByteBuf.retain(), and each dropped reference requires a call to ByteBuf.release(). When the reference count reaches 0, the object can be reclaimed. I am using ByteBuf only as the example here; other classes implement ReferenceCounted too, and the same rules apply to them.
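To make the counting concrete, a small illustration (allocator and buffer size chosen arbitrarily):

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.PooledByteBufAllocator;

    public class RefCountDemo {
        public static void main(String[] args) {
            ByteBuf buf = PooledByteBufAllocator.DEFAULT.buffer(16); // refCnt() == 1 on allocation
            buf.retain();                  // one more reference:  refCnt() == 2
            buf.release();                 // drop it again:       refCnt() == 1
            boolean freed = buf.release(); // last reference gone: refCnt() == 0
            System.out.println(freed);     // true: the buffer has been returned to the pool
        }
    }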
While chasing the problem I also wondered whether the data loss came from my Netty server using UDP, so here is how to tell whether a Netty program is using TCP or UDP:
On TCP and UDP
A socket can run over either TCP or UDP. The difference is that UDP does not guarantee every packet arrives correctly, so it performs better but is less reliable; TCP guarantees correct delivery, so its performance is not as good.
UDP is basically only suitable for things like live video streaming; our use case calls for TCP.
So how do the two differ in code? Here is an explanation I found online (note that it is written against the old Netty 3.x API, whose class names differ from the Netty 4/5 ones used earlier in this post):
For the ChannelFactory, UDP communication uses NioDatagramChannelFactory, while for TCP communication we chose NioServerSocketChannelFactory;
For the Bootstrap, UDP uses ConnectionlessBootstrap, while TCP uses ServerBootstrap.
As for the decoder, the encoder, and the ChannelPipelineFactory, UDP development is no different from TCP, so there is nothing to detail here.
The ChannelHandler is where UDP and TCP genuinely differ. UDP is connectionless: you can still obtain the current session's channel through the MessageEvent parameter's getChannel() method, but its isConnected() always returns false.
In UDP development, once the message-received callback has obtained the channel object for the current session, it can send data to the peer directly via channel.write(message, remoteAddress): the first argument is still the message object to send, and the second is the peer's SocketAddress.
The point that needs the most care is the SocketAddress: in TCP communication we can get it from channel.getRemoteAddress(), but in UDP communication we must get the peer's SocketAddress by calling getRemoteAddress() on the MessageEvent. (A sketch of the same split in the newer API follows.)
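Since the explanation above targets Netty 3.x, here is a minimal sketch of the same TCP-versus-UDP split in the Netty 4-style API this post's server uses (ports, handlers, and the echo logic are placeholders of my own, not from the post):

    import io.netty.bootstrap.Bootstrap;
    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.Channel;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.SimpleChannelInboundHandler;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.DatagramPacket;
    import io.netty.channel.socket.nio.NioDatagramChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;

    public class TcpVsUdpBootstrap {
        public static void main(String[] args) throws Exception {
            EventLoopGroup group = new NioEventLoopGroup();

            // TCP: connection-oriented; childHandler is installed once per accepted connection
            ServerBootstrap tcp = new ServerBootstrap();
            tcp.group(group)
               .channel(NioServerSocketChannel.class)
               .childHandler(new ChannelInitializer<Channel>() {
                   @Override
                   protected void initChannel(Channel ch) {
                       // add the frame decoder and business handler here
                   }
               });
            tcp.bind(9000).sync(); // port chosen arbitrarily for the sketch

            // UDP: connectionless; one handler sees every peer, and each inbound
            // DatagramPacket carries its sender() address for addressing the reply
            Bootstrap udp = new Bootstrap();
            udp.group(group)
               .channel(NioDatagramChannel.class)
               .handler(new SimpleChannelInboundHandler<DatagramPacket>() {
                   @Override
                   protected void channelRead0(ChannelHandlerContext ctx, DatagramPacket packet) {
                       // no connection exists, so the reply must name its recipient explicitly
                       ctx.writeAndFlush(new DatagramPacket(packet.content().retain(), packet.sender()));
                   }
               });
            udp.bind(9001).sync(); // port chosen arbitrarily for the sketch
        }
    }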