The idea is to break the 200GB file into smaller pieces and then use Cloud Functions. The way I see it, you could either split it by deploying a Cloud Run service (it has a memory cap of 16GB) or break it up manually. Then use a Cloud Function to transform each piece and load it into BigQuery.
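For the Cloud Function step, here's a minimal sketch of what the "load one piece into BigQuery" part could look like, assuming the pieces land as CSV files in a GCS staging bucket and the function is triggered on object finalize. The bucket, dataset, and table names are placeholders, and the actual transform (if your data needs one) would go before the load:

```python
# Sketch of a GCS-triggered Cloud Function that loads one piece into BigQuery.
# Assumes CSV pieces; project/dataset/table names below are placeholders.
from google.cloud import bigquery

def load_chunk_to_bq(event, context):
    """Triggered by google.storage.object.finalize on the staging bucket."""
    uri = f"gs://{event['bucket']}/{event['name']}"

    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )

    # Each invocation handles one piece; BigQuery appends it to the table.
    load_job = client.load_table_from_uri(
        uri, "my_project.my_dataset.my_table", job_config=job_config
    )
    load_job.result()  # wait for the load job to complete
```

That way each piece is processed independently, so you stay well under the Cloud Function memory/timeout limits instead of trying to push 200GB through a single invocation.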