"The issue you're encountering is likely due to how `behavex` handles parallel execution. When you run your tests in parallel with `behavex`, it might be spinning up multiple instances of the test environment, which can lead to each instance executing the `after_all` hook separately. In `behave`, the `after_all` hook is designed to run only once after all features have finished executing in a single sequential run.
To address this issue, you can consider the following approaches:
1. **Check the `behavex` Documentation**: Look into the `behavex` documentation or source code to understand how hooks are handled during parallel execution. There may be a configuration option or flag that controls this behavior.
2. **Synchronization Mechanism**: Implement a guard so that the `after_all` actions are performed only once, no matter how many parallel processes complete. A file lock or an atomically created marker file works well for this; note that environment variables won't work here, since parallel workers cannot see changes to each other's process environment. See the first sketch after this list.
3. **Custom Hook for Parallel Execution**: If `behavex` does not provide a true suite-level `after_all` in a parallel context, you might need a custom script that runs after all parallel processes have completed, for example a wrapper that launches the suite and then generates the report once every worker has exited (see the second sketch after this list).
4. **Use a Single Process for Report Generation**: If feasible, run the report-generation step in a single process after all other tests have completed. This can be a separate command that you run manually or automate as the final stage of your CI/CD pipeline; the wrapper sketch below covers this case as well.
5. **Feedback to Maintainers**: If the behavior is unexpected, consider reaching out to the maintainers of `behavex` with a report of the issue. They might be able to provide a fix or a workaround.
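For option 2, a common pattern is an atomically created marker file: the first worker to reach `after_all` creates it and performs the one-time work, and every other worker skips it. Here's a minimal sketch, assuming the workers share a filesystem; the marker path and `generate_report()` helper are hypothetical placeholders, not `behavex` APIs:

```python
# environment.py -- run-once guard for after_all (minimal sketch).
# The marker path and generate_report() are hypothetical placeholders.
import os

_DONE_MARKER = "/tmp/suite_after_all.done"  # assumed shared path, cleaned before each run


def generate_report():
    """Placeholder for your one-time teardown/report step."""
    print("running one-time after_all work")


def _claim_marker(path):
    """Atomically create the marker file. Exactly one process
    succeeds; every other parallel worker hits FileExistsError."""
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False


def after_all(context):
    if _claim_marker(_DONE_MARKER):
        generate_report()
```

One caveat: the guard fires in whichever worker finishes *first*, so it guarantees at-most-once, not after-everything, and the marker must be deleted before each run (easiest from the step that launches `behavex`). If the one-time work needs to see the results of all workers, the wrapper approach below is safer.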
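For options 3 and 4, the simplest robust approach is to move the report step out of the hooks entirely and into a wrapper that runs after the `behavex` command returns. Since that command blocks until all of its parallel processes have exited, anything placed after it runs exactly once. A minimal sketch, with `generate_report()` again a placeholder and the arguments shown only as a typical invocation to adjust for your setup:

```python
# run_suite.py -- wrapper that defers one-time work until all
# parallel workers have exited (minimal sketch).
import subprocess
import sys


def generate_report():
    """Placeholder for your consolidated report step."""
    print("all workers finished; building the report")


def main():
    # behavex blocks until every parallel process has exited.
    result = subprocess.run(
        ["behavex", "--parallel-processes", "4", "features/"]
    )
    generate_report()  # runs exactly once, after the whole suite
    sys.exit(result.returncode)


if __name__ == "__main__":
    main()
```

The same idea works without the wrapper: make `python run_suite.py` (or the two commands in sequence) the final stage of your CI/CD pipeline.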