performance - How to tune a Spark application to avoid OutOfMemory exceptions
I have 11 nodes, each with 2 GB of memory and 16 cores. I try to submit my Spark application using this command:
./bin/spark-submit --class myapp.main --master spark://name:7077 --conf spark.shuffle.memoryFraction=0 --executor-memory 2g --deploy-mode client /home/mbala/fer/myjars7/etlpersist.jar /home/mfile80.csv
In the slaves file I didn't add the IP of the node from which I launch the command, because I think that in client mode the driver must run on that node.
But whenever I try to run it, I get an OutOfMemory exception (sometimes from GC overhead, sometimes from the heap). I have tried many of the solutions suggested on the Spark website and here on Stack Overflow, tried to minimize the code, and used the MEMORY_AND_DISK storage level, but I still have the problem.
PS: I use the following line because I found it suggested as a solution in a forum:
--conf spark.shuffle.memoryFraction=0
Should I minimize the number of cores? I think that if I use all 16 cores, 2 GB of memory won't be enough for the shuffle.
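For example, would capping the cores per executor like this help? (This is just my guess at the relevant flags for standalone mode; the values 4 and 44 are only an illustration, not something I have tested.)

./bin/spark-submit --class myapp.main --master spark://name:7077 --executor-cores 4 --total-executor-cores 44 --executor-memory 2g --deploy-mode client /home/mbala/fer/myjars7/etlpersist.jar /home/mfile80.csv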
Can you please try using smaller values for the --executor-memory and --driver-memory options in your command?
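For example, something along these lines (the exact values are only a starting point to experiment with, since your workers have just 2 GB of physical memory and the OS and driver also need some of it):

./bin/spark-submit --class myapp.main --master spark://name:7077 --executor-memory 1g --driver-memory 1g --deploy-mode client /home/mbala/fer/myjars7/etlpersist.jar /home/mfile80.csv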
When you set the executor memory to 2 GB, Spark does not hand all of it to your tasks. By default it assigns 60% of the heap (spark.memory.fraction = 0.6) to the unified region shared by storage and execution, and storage can take 50% of that region (spark.memory.storageFraction = 0.5). Hence only about 50% of the unified region remains available for execution.
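Roughly, for a 2 GB executor heap with the default settings (and assuming a recent Spark version, which also reserves about 300 MB of the heap for internal use), the numbers work out like this:

  2048 MB heap (--executor-memory 2g)
 - 300 MB reserved memory
 = 1748 MB usable
 x 0.6 (spark.memory.fraction)        = ~1048 MB unified region (execution + storage)
 x 0.5 (spark.memory.storageFraction) = ~524 MB for storage, leaving ~524 MB for execution (shuffles, sorts, joins)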
You should understand the concept of Spark memory management; it helps a lot when debugging such applications.