
ERROR util.SparkUncaughtExceptionHandler: Uncaught exception in thread Thread

Original · Data Analysis · Author: xiaoyan5686670 · Posted: 2017-02-23 21:37:56
Start spark-shell against the standalone cluster:
$ ./spark-shell --master spark://qxy1:7077 
The following error is reported:
17/02/08 06:14:52 ERROR util.SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[appclient-registration-retry-thread,5,main]
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@38cd581 rejected from java.util.concurrent.ThreadPoolExecutor@2d551be3[Running, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = 0]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anonfun$tryRegisterAllMasters$1.apply(AppClient.scala:96)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anonfun$tryRegisterAllMasters$1.apply(AppClient.scala:95)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint.tryRegisterAllMasters(AppClient.scala:95)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint.org$apache$spark$deploy$client$AppClient$ClientEndpoint$$registerWithMaster(AppClient.scala:121)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2$$anonfun$run$1.apply$mcV$sp(AppClient.scala:132)
at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1119)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2.run(AppClient.scala:124)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
17/02/08 06:14:52 INFO storage.DiskBlockManager: Shutdown hook called
17/02/08 06:14:52 INFO util.ShutdownHookManager: Shutdown hook called
17/02/08 06:14:52 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-2ac45590-1e2f-427d-8480-a3d39ed227ec
17/02/08 06:14:52 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-8e009c17-65f9-4bda-bc15-5d16bc789ed8

The driver-side RejectedExecutionException is only a symptom: the AppClient's registration-retry thread pool was shut down after it repeatedly failed to register with the master, so the root cause has to be found in the master's own log. Checking /opt/spark-1.6.1-bin-hadoop2-without-hive/logs reveals the following errors:

17/02/08 05:56:44 ERROR akka.ErrorMonitor: dropping message [class akka.actor.ActorSelectionMessage] for non-local recipient [Actor[akka.tcp://sparkMaster@qxy1:7077/]] arriving at [akka.tcp://sparkMaster@qxy1:7077] inbound addresses are [akka.tcp://sparkMaster@192.168.233.159:7077]akka.event.Logging$Error$NoCause$
17/02/08 05:57:05 INFO master.Master: 192.168.233.159:49896 got disassociated, removing it.
17/02/08 05:57:05 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkDriver@192.168.233.159:49896] has failed, address is now gated for [5000] ms. Reason: [Disassociated] 
17/02/08 05:57:05 INFO master.Master: 192.168.233.159:49896 got disassociated, removing it.
17/02/08 06:00:08 ERROR akka.ErrorMonitor: dropping message [class akka.actor.ActorSelectionMessage] for non-local recipient [Actor[akka.tcp://sparkMaster@qxy1:7077/]] arriving at [akka.tcp://sparkMaster@qxy1:7077] inbound addresses are [akka.tcp://sparkMaster@192.168.233.159:7077]akka.event.Logging$Error$NoCause$
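
These master-side errors are the real cause. In Spark 1.6 the standalone master's Akka endpoint only accepts messages addressed with the exact host string it was bound to. Here the master is bound to the IP 192.168.233.159, so a driver connecting to spark://qxy1:7077 is rejected as a "non-local recipient" even though qxy1 resolves to that IP. Two quick checks can confirm the mismatch (a sketch; the master log file name below follows Spark's default naming scheme and is an assumption for this host):

# Confirm the hostname resolves to the master's IP, so DNS is not the problem:
$ getent hosts qxy1

# Check which URL the master actually advertised at startup
# (log file name assumed from the default spark-<user>-...Master-1-<host>.out pattern):
$ grep "Starting Spark master at" /opt/spark-1.6.1-bin-hadoop2-without-hive/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-qxy1.out
# Expected: ... Starting Spark master at spark://192.168.233.159:7077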


Analysis traced the cause to conf/spark-env.sh:
[hadoop@qxy1 conf]$ vi spark-env.sh

#!/usr/bin/env bash

# This file is sourced when running various Spark programs.
# Copy it as spark-env.sh and edit that to configure Spark for your site.

export SPARK_MASTER_IP=192.168.233.159   # If an IP address is used here, --master must be followed by the IP; if a hostname, by the hostname
export HADOOP_CONF_DIR=/opt/hadoop-2.6.2/etc/hadoop
export SPARK_DIST_CLASSPATH=$(/opt/hadoop-2.6.2/bin/hadoop classpath)
export JAVA_HOME=/opt/jdk1.8.0_77
export SPARK_WORKER_MEMORY=1g
export SPARK_DRIVER_MEMORY=1g
Starting with the IP in the --master URL instead of the hostname then works:
[hadoop@qxy1 bin]$ ./spark-shell --master spark://192.168.233.159:7077
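
An alternative fix (a sketch, not tested on this cluster) is to keep the hostname-based URL and instead bind the master to the hostname, then restart the standalone master so the new binding takes effect:

# In conf/spark-env.sh, bind the master to the hostname instead of the IP:
export SPARK_MASTER_IP=qxy1

# Restart the standalone master (these scripts ship with Spark under sbin/);
# workers may also need restarting so they re-register with the new address:
$ /opt/spark-1.6.1-bin-hadoop2-without-hive/sbin/stop-master.sh
$ /opt/spark-1.6.1-bin-hadoop2-without-hive/sbin/start-master.sh

# The original hostname-based URL should now work:
$ ./spark-shell --master spark://qxy1:7077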

Source: ITPUB博客 — http://blog.itpub.net/22569416/viewspace-2134183/ (please credit the source when reposting).
