Spark Standalone cluster, memory per executor issue


Hi, I launch my Spark application with the spark-submit script like this:

spark-submit --master spark://maatari-xxxxxxx.local:7077 --class estimatorapp /users/sul.maatari/ideaprojects/workshit/target/scala-2.11/workshit-assembly-1.0.jar --deploy-mode cluster --executor-memory 15g --num-executors 2

I have a Spark Standalone cluster deployed on 2 nodes (my 2 laptops), and the cluster is running fine. Each worker is set up with the defaults of 15 GB of memory and 8 cores per executor. I am seeing the following strange behavior: although I explicitly set the executor memory, and the setting shows up in the environment variables on the SparkConf UI page, the cluster UI says the application is limited to 1024 MB of executor memory. That looks like the default 1g value, and I wonder why it is being used.
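For what it is worth, the same setting can also be pinned programmatically through SparkConf. A minimal sketch of that approach (EstimatorApp and the 16-core cap here are placeholders, not my actual code):

import org.apache.spark.{SparkConf, SparkContext}

// Sketch: setting executor memory from inside the application rather than
// on the spark-submit command line (names and values are placeholders).
object EstimatorApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("EstimatorApp")
      .setMaster("spark://maatari-xxxxxxx.local:7077")
      .set("spark.executor.memory", "15g") // the value I expect the cluster UI to report
      .set("spark.cores.max", "16")        // optional: cap total cores across the 2 workers

    val sc = new SparkContext(conf)
    // ... application logic ...
    sc.stop()
  }
}

Whichever way it is set, my understanding is that whatever ends up in spark.executor.memory is what the cluster UI should report as memory per executor.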

(Screenshots: the cluster UI, and the Environment tab of the SparkConf UI.)

My application does indeed fail because of a memory issue; I know the application needs a lot of memory.

One last point of confusion is the driver program. Why, given that I am in cluster mode, does spark-submit not return? I thought that since the driver is executed on the cluster, the client, i.e. the machine submitting the application, should return immediately. This further suggests to me that my configuration is not right, or that things are not being executed the way I think they are.
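For reference, the documented spark-submit form puts all options before the application JAR (anything after the JAR is passed to the application itself), and my understanding of the two deploy modes is that client mode (the default) runs the driver on the submitting laptop, so spark-submit blocks until the job finishes, while cluster mode hands the driver off to a worker. A sketch of what I believe the cluster-mode invocation should look like (paths and class are mine; the ordering follows the docs, not what I actually ran):

spark-submit \
  --master spark://maatari-xxxxxxx.local:7077 \
  --deploy-mode cluster \
  --class estimatorapp \
  --executor-memory 15g \
  --total-executor-cores 16 \
  /users/sul.maatari/ideaprojects/workshit/target/scala-2.11/workshit-assembly-1.0.jar [application arguments]

As far as I know, --num-executors only applies to YARN; on Standalone the rough equivalent is --total-executor-cores combined with --executor-cores.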

Can anyone help me diagnose this?

