A Summary of Errors When Submitting Spark Code from Local IDEA to a Remote Cluster (Part 2)
Once the code could be submitted to the Spark cluster and run normally, the following error appeared:
Exception in thread "main" java.lang.OutOfMemoryError: PermGen spaceat java.lang.ClassLoader.defineClass1(Native Method)at java.lang.ClassLoader.defineClass(ClassLoader.java:800)at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)at java.net.URLClassLoader.access$100(URLClassLoader.java:71)at java.net.URLClassLoader$1.run(URLClassLoader.java:361)at java.net.URLClassLoader$1.run(URLClassLoader.java:355)at java.security.AccessController.doPrivileged(Native Method)at java.net.URLClassLoader.findClass(URLClassLoader.java:354)at java.lang.ClassLoader.loadClass(ClassLoader.java:425)at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)at java.lang.ClassLoader.loadClass(ClassLoader.java:358)at scala.collection.SeqViewLike$AbstractTransformed.<init>(SeqViewLike.scala:43)at scala.collection.SeqViewLike$$anon$4.<init>(SeqViewLike.scala:79)at scala.collection.SeqViewLike$class.newFlatMapped(SeqViewLike.scala:79)at scala.collection.SeqLike$$anon$2.newFlatMapped(SeqLike.scala:635)at scala.collection.SeqLike$$anon$2.newFlatMapped(SeqLike.scala:635)at scala.collection.TraversableViewLike$class.flatMap(TraversableViewLike.scala:160)at scala.collection.SeqLike$$anon$2.flatMap(SeqLike.scala:635)at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:58)at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:48)at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:46)at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:53)at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:53)at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:56)at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:56)at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:153)at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:829)at p.JavaSparkPi.main(JavaSparkPi.java:30) Exception in thread "Thread-3" java.lang.OutOfMemoryError: PermGen space Exception in thread "Thread-30" java.lang.OutOfMemoryError: PermGen space Exception in thread "Thread-33" java.lang.OutOfMemoryError: PermGen space?
Besides the problem above, the following error can also appear. The first reaction on seeing it is a memory overflow:

Job aborted due to stage failure: Total size of serialized results of 34 tasks (1033.9 MB) is bigger than spark.driver.maxResultSize (1024.0 MB)
Exception in thread "main"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "main"
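This second error names its own knob: spark.driver.maxResultSize, which defaults to 1g. As a stopgap the cap can be raised, for example via SparkConf. A hedged sketch follows (the class name is hypothetical and the 2g value is illustrative only; the diagnosis further down explains why this treats the symptom rather than the cause):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class RaiseResultSizeSketch {
    public static void main(String[] args) {
        // Illustrative sketch: lift the cap on serialized results returned to
        // the driver from its 1g default. The 2g value is not a recommendation.
        SparkConf conf = new SparkConf()
                .setAppName("JavaSparkPi")
                .set("spark.driver.maxResultSize", "2g");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... job code ...
        sc.stop();
    }
}

Meanwhile, the corresponding ApplicationMaster log on the YARN side shows the driver dropping its connection: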
2018-12-19 10:42:51,599 WARN  [shuffle-client-0] server.TransportChannelHandler : Exception in connection from /10.8.30.108:50610
java.io.IOException: Connection reset by peer
  at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
  at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
  at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
  at sun.nio.ch.IOUtil.read(IOUtil.java:192)
  at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
  at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:313)
  at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
  at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
  at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
  at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
  at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
  at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
  at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
  at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
  at java.lang.Thread.run(Thread.java:745)
2018-12-19 10:42:51,610 INFO  [dispatcher-event-loop-1] yarn.ApplicationMaster$AMEndpoint : Driver terminated or disconnected! Shutting down. tc-20024:50610
2018-12-19 10:42:51,614 INFO  [dispatcher-event-loop-1] yarn.ApplicationMaster : Final app status: SUCCEEDED, exitCode: 0
2018-12-19 10:42:51,623 INFO  [Thread-3] yarn.ApplicationMaster : Unregistering ApplicationMaster with SUCCEEDED
2018-12-19 10:42:51,637 INFO  [Thread-3] impl.AMRMClientImpl : Waiting for application to be successfully unregistered.
2018-12-19 10:42:51,743 INFO  [Thread-3] yarn.ApplicationMaster : Deleting staging directory .sparkStaging/application_1545188975663_0002
2018-12-19 10:42:51,745 INFO  [Thread-3] util.ShutdownHookManager : Shutdown hook called
All of these symptoms point to the program running out of memory. Why does it run out? Because when we call collect() on the result set, the entire result is gathered into one large collection on the driver side, which in this setup is our IDEA process. If the client machine's memory is not big enough, it overflows.

You can enlarge the memory, but that treats the symptom rather than the cause: in later work you never know how large the data will be, so deciding how much driver memory is appropriate becomes very hard to pin down. Instead, try to take the pressure off the driver when processing data: use foreachPartition so that all the processing happens on the executor side (a sketch follows the reference link below).
參考這篇文章執(zhí)行:https://segmentfault.com/a/1190000005365244?utm_source=tag-newest
這里注意一下,所有的數(shù)據(jù)都是按照row輸出在excutor端的不是我們的控制臺(tái)。
轉(zhuǎn)載于:https://www.cnblogs.com/gxgd/p/10179052.html
《新程序員》:云原生和全面數(shù)字化實(shí)踐50位技術(shù)專家共同創(chuàng)作,文字、視頻、音頻交互閱讀總結(jié)
以上是生活随笔為你收集整理的关于在本地idea当中提交spark代码到远程的错误总结(第二篇)的全部?jī)?nèi)容,希望文章能夠幫你解決所遇到的問題。
- 上一篇: JavaScript面向对象(一)——J
- 下一篇: 记录MNIST采用卷积方式实现与理解