Creating one child process per website can quickly overwhelm your system and exhaust resources. Instead, consider:

- **Using a task queue system:** Leverage a queue (e.g., BullMQ) to manage and distribute scraping jobs. Tasks can be processed concurrently with a controlled concurrency limit so the system never gets overloaded.
- **Pooling child processes:** Use a process pool (libraries like generic-pool can help). Create a limited number of child processes and reuse them across scraping tasks, which is far more resource-efficient than spawning one per site.
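To illustrate the first suggestion, here is a minimal, dependency-free sketch of a bounded-concurrency task queue. The `TaskQueue` class and `fakeScrape` helper are hypothetical names for illustration, not BullMQ's API; in production, BullMQ's `Worker` takes a `concurrency` option that gives you the same limiting plus Redis-backed persistence and retries.

```javascript
// Sketch of a bounded-concurrency queue (assumption: plain Node.js, no deps).
class TaskQueue {
  constructor(concurrency) {
    this.concurrency = concurrency; // max tasks running at once
    this.running = 0;
    this.pending = [];              // tasks waiting for a free slot
  }

  // Enqueue an async task; resolves with the task's result.
  push(task) {
    return new Promise((resolve, reject) => {
      this.pending.push({ task, resolve, reject });
      this._next();
    });
  }

  _next() {
    while (this.running < this.concurrency && this.pending.length > 0) {
      const { task, resolve, reject } = this.pending.shift();
      this.running++;
      task()
        .then(resolve, reject)
        .finally(() => {
          this.running--;
          this._next();
        });
    }
  }
}

// Usage: "scrape" 10 sites, but never more than 3 at a time.
const queue = new TaskQueue(3);
const fakeScrape = (url) =>
  new Promise((resolve) => setTimeout(() => resolve(`done: ${url}`), 10));

Promise.all(
  Array.from({ length: 10 }, (_, i) => queue.push(() => fakeScrape(`site-${i}`)))
).then((results) => console.log(results.length)); // prints 10
```

The key property is that `running` never exceeds the configured limit, so slow or hung scrapes simply back up in `pending` instead of spawning more work.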
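The pooling idea can be sketched the same way. The `WorkerPool` class below is a hypothetical illustration, not generic-pool's API; generic-pool's `createPool({ create, destroy }, { max })` implements this pattern for you (with eviction, timeouts, etc.), and in a real scraper the factory would call `child_process.fork()` instead of returning a plain object.

```javascript
// Sketch of a fixed-size worker pool (assumption: plain Node.js, no deps).
class WorkerPool {
  constructor(factory, size) {
    this.factory = factory; // creates a worker (in practice: fork a child)
    this.size = size;       // hard cap on live workers
    this.created = 0;
    this.idle = [];         // workers waiting for work
    this.waiters = [];      // acquire() calls waiting for a worker
  }

  acquire() {
    if (this.idle.length > 0) return Promise.resolve(this.idle.pop());
    if (this.created < this.size) {
      this.created++;
      return Promise.resolve(this.factory());
    }
    // Pool exhausted: wait until a worker is released.
    return new Promise((resolve) => this.waiters.push(resolve));
  }

  release(worker) {
    const waiter = this.waiters.shift();
    if (waiter) waiter(worker); // hand the worker straight to a waiter
    else this.idle.push(worker);
  }
}

// Usage: 5 jobs share at most 2 live "workers", which get reused.
let spawned = 0;
const pool = new WorkerPool(() => ({ id: ++spawned }), 2);

async function runJob(i) {
  const worker = await pool.acquire();
  try {
    return `job ${i} on worker ${worker.id}`;
  } finally {
    pool.release(worker); // always return the worker to the pool
  }
}

Promise.all([0, 1, 2, 3, 4].map(runJob)).then(() => console.log(spawned));
```

Because acquisition blocks once the cap is reached, the number of child processes stays constant no matter how many sites are queued, and each process amortizes its startup cost over many jobs.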