As you mentioned, data is not written to BigQuery directly. The connector first writes it to a temporary Google Cloud Storage bucket and then loads it into BigQuery from there. To make this work, set the temporary bucket before calling the write statement:
bucket = "<your-bucket-name>"
spark.conf.set("temporaryGcsBucket", bucket)

wordCountDf.write.format('bigquery') \
    .option('table', 'projectname.dataset.table_name') \
    .save()