The file is written by the Spark worker nodes, so it must be written to a filesystem that is accessible to both the workers and the client. Keep this in mind when setting up a cluster in Docker: the workers must be configured with the same volumes as the master so they can read and write partitions. Missing volumes on the workers do not raise an error; the job simply produces an output directory containing only a _SUCCESS file.
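As a minimal sketch of this setup, the following `docker-compose.yml` excerpt mounts the same host directory into both the master and the worker. The image name, environment variables, and paths are illustrative assumptions, not taken from the original text; adapt them to your own cluster.

```yaml
# Hypothetical docker-compose.yml excerpt (illustrative names and paths).
services:
  spark-master:
    image: bitnami/spark:3.5
    environment:
      - SPARK_MODE=master
    volumes:
      - ./data:/opt/spark-data   # shared output location
  spark-worker:
    image: bitnami/spark:3.5
    environment:
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://spark-master:7077
    volumes:
      - ./data:/opt/spark-data   # must match the master's mount, or workers
                                 # silently write partitions elsewhere
```

If the `volumes` entry is omitted from the worker service, the write still reports success, but the partition files land inside the worker container rather than on the shared mount.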