I'm currently working on a similar requirement: bulk reading more than 3 lakh (300,000) records from the Catalyst Datastore and generating a CSV file. However, a single bulk read request can retrieve only 2 lakh (200,000) records at a time.
To work around this limitation, you can execute multiple bulk read requests iteratively, each fetching up to 2 lakh records, and then combine the results into the complete CSV file.
To fetch the next set of records, pass the `page` key in the bulk read API request: `page: 1` retrieves the first 2 lakh records, `page: 2` fetches the next 2 lakh, and so on, as in the sketch below.
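In case it helps, here's a minimal Python sketch of that loop using plain HTTP calls via `requests`. The endpoint path, header format, status value, and response field names (`job_id`, `download_url`, and so on) are my assumptions about the REST API's general shape, not verified values, so double-check everything against the docs. The pattern is the point: run one bulk read job per page, wait for each asynchronous job to finish, then append its rows (skipping the repeated header) until a page comes back with fewer than 2 lakh records.

```python
import csv
import io
import time

import requests

# All identifiers below (project ID, table name, token) are placeholders, and
# the endpoint path and response field names are assumptions about the shape
# of the Catalyst Bulk Read REST API -- verify against the official docs.
PROJECT_ID = "1234567890"
TABLE_ID = "EmpDetails"
OAUTH_TOKEN = "<your-oauth-token>"
BASE_URL = f"https://api.catalyst.zoho.com/baas/v1/project/{PROJECT_ID}/bulk/read"
HEADERS = {"Authorization": f"Zoho-oauthtoken {OAUTH_TOKEN}"}

PAGE_SIZE = 200_000  # one bulk read job returns at most 2 lakh records


def fetch_page(page: int) -> str:
    """Run one bulk read job for the given page and return its CSV text."""
    # Create the job; each page covers one 2-lakh slice of the table.
    body = {"table_identifier": TABLE_ID, "query": {"page": page}}
    job = requests.post(BASE_URL, json=body, headers=HEADERS).json()
    job_id = job["data"]["job_id"]  # field name assumed

    # Bulk read runs asynchronously, so poll until the job completes.
    while True:
        data = requests.get(f"{BASE_URL}/{job_id}", headers=HEADERS).json()["data"]
        if data["status"] == "Completed":  # status string assumed
            download_url = data["results"]["download_url"]  # field assumed
            break
        time.sleep(5)

    # Download the CSV this job produced.
    return requests.get(download_url, headers=HEADERS).text


def export_all(out_path: str) -> None:
    """Loop over pages and stitch the per-page CSVs into a single file."""
    with open(out_path, "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        page = 1
        while True:
            rows = list(csv.reader(io.StringIO(fetch_page(page))))
            if not rows:  # nothing came back: we're done
                break
            if page == 1:
                writer.writerow(rows[0])  # keep the header row only once
            data_rows = rows[1:]          # every page's CSV repeats the header
            writer.writerows(data_rows)
            if len(data_rows) < PAGE_SIZE:  # short page: the table is drained
                break
            page += 1


export_all("all_records.csv")
```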
You can refer to their official help documentation here.