It sounds like an intermittent failure in the commit phase of your Spark write: Spark's default file committer finishes a job by renaming files out of a `_temporary` directory, and on Azure Data Lake those renames can fail when they collide with leftover files from an earlier failed run or with a concurrent writer. The fact that manually deleting the table resolves it temporarily points to stale leftover files or contention in the storage layer rather than a logic bug in the job itself. A few things to try:

- Tweak how you partition the output so fewer tasks commit into the same directory at once (first sketch below).
- Add a cleanup step before each run that clears out any leftover `_temporary` directories from previously failed runs (second sketch below).
- Check whether your Azure storage account is being throttled (HTTP 503 / server-busy responses in the storage metrics), since throttled requests can surface as failed renames.
- Enable retries around the file operations, both in the storage driver and at the application level, so a transient failure doesn't kill the whole job (third sketch below).
- If the cluster is resource-starved and tasks run long while holding temporary files open, scaling up Spark resources to handle more parallelism can also help.
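For the partitioning tweak, a minimal PySpark sketch: collapsing each partition value into a single shuffle partition means one task (and one commit-time rename) per output directory instead of many. The DataFrame, the `event_date` column, and `output_path` are all hypothetical placeholders for your own data and location.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partitioned-write").getOrCreate()

# Hypothetical example data; in practice this is whatever your job produces.
df = spark.createDataFrame(
    [("2024-01-01", 1), ("2024-01-01", 2), ("2024-01-02", 3)],
    ["event_date", "value"],
)

# Placeholder path: point this at your Data Lake output location.
output_path = "abfss://container@account.dfs.core.windows.net/tables/my_table"

# One shuffle partition per date value -> one writing task per output
# directory, which shrinks the window for rename conflicts at commit time.
(df
 .repartition("event_date")
 .write
 .partitionBy("event_date")
 .mode("overwrite")
 .parquet(output_path))
```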
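For the pre-run cleanup, one option is to delete any leftover `_temporary` directory under the target path through the Hadoop FileSystem API, so the cluster's existing ADLS credentials are reused. This sketch reuses `spark` and `output_path` from above and goes through PySpark's internal `_jvm`/`_jsc` handles, which work but aren't part of the public API.

```python
# Reuses `spark` and `output_path` from the sketch above.
jvm = spark.sparkContext._jvm
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()

tmp = jvm.org.apache.hadoop.fs.Path(output_path + "/_temporary")
fs = tmp.getFileSystem(hadoop_conf)

# Delete any _temporary directory a previous failed run left behind;
# recursive=True because the committer nests attempt dirs inside it.
if fs.exists(tmp):
    fs.delete(tmp, True)
```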
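For retries, the hadoop-azure ABFS driver has its own retry settings you can raise, and you can additionally wrap the whole write in an application-level retry with backoff. Treat the exact values below as a starting point rather than a recommendation, and note that driver settings are normally put in the cluster's Spark config at startup.

```python
import time

# ABFS driver retry settings (hadoop-azure); in practice these usually go
# into the cluster's Spark config at startup rather than being set here.
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
hadoop_conf.set("fs.azure.io.retry.max.retries", "10")
hadoop_conf.set("fs.azure.io.retry.backoff.interval", "5000")  # milliseconds

def write_with_retry(df, path, attempts=3, backoff_s=30):
    """Retry the whole write on transient storage errors.

    Transient rename/throttling failures often succeed on a clean retry;
    anything still failing after `attempts` tries is re-raised.
    """
    for attempt in range(1, attempts + 1):
        try:
            df.write.mode("overwrite").parquet(path)
            return
        except Exception:
            if attempt == attempts:
                raise  # exhausted retries: surface the real error
            time.sleep(backoff_s * attempt)  # back off a bit longer each try

write_with_retry(df, output_path)
```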