79791414

Date: 2025-10-15 16:44:55

I changed the following properties in /usr/local/spark/conf/spark-defaults.conf on each node:

spark.driver.host=<hostname of the node itself where you change this file>
spark.driver.bindAddress=<ip address of the node itself where you change this file>

So whichever node is elected to run the driver reads these properties locally and learns the driver's location: its own host/address.
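The per-node edit can be sketched roughly as follows. This is a minimal sketch, not the exact commands from the post: it assumes `hostname` and `hostname -I` are available on the node, and it writes to a temp file so the snippet runs anywhere; on a real node the target would be /usr/local/spark/conf/spark-defaults.conf.

```shell
# Sketch: append the node's own identity to its local spark-defaults.conf.
# A temp file stands in for /usr/local/spark/conf/spark-defaults.conf here.
conf=$(mktemp)
printf 'spark.driver.host=%s\n' "$(hostname)" >> "$conf"
printf 'spark.driver.bindAddress=%s\n' "$(hostname -I 2>/dev/null | awk '{print $1}')" >> "$conf"
cat "$conf"
```

Run once on every node, so each node records its own address rather than a shared one.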

I can now spark-submit to any node with deploy mode cluster, and the resource manager of the Spark standalone cluster elects a suitable node to become the driver, which may be a different node than the one on which spark-submit was executed, without any address-bind errors (unless a firewall or incorrect settings get in the way).
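For reference, such a submit looks roughly like the fragment below; the master URL, class name, and jar path are placeholders, not values from the post:

```shell
# Hypothetical invocation: the standalone master elects a worker to run
# the driver, so the application jar must be reachable from every node.
spark-submit \
  --master spark://<master-host>:7077 \
  --deploy-mode cluster \
  --class com.example.Main \
  /shared/path/app-assembly.jar
```

With the per-node spark.driver.* settings described above, the elected driver binds to its own address instead of the submitting machine's.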

A small fix, and the standalone cluster supports cluster deploy mode as described in the documentation.

Even though running with deploy mode client works fine (apart from possible latency differences depending on the network setup), cluster deploy mode simulates a real deployment. It surfaces deployment issues that then have to be solved, for instance with an assembly (fat) jar or by providing the missing jars in the environment.

Posted by: Rene