Data leakage is a real concern when evaluating models: samples that appear in validation/testing should not also be present in training. I examined the dataset on Kaggle, and it is reasonable to assume that different individuals produce distinct signal frequencies even when performing the same gesture. Could z-score normalization be applied per channel, per subject, to remove this subject-level variance? That should reduce subject bias and prevent your models from learning subject-specific patterns instead of gesture-specific ones. Additionally, verify that the class distribution within your training data is balanced.
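
As a minimal sketch of what I mean, assuming the signals sit in a pandas DataFrame with a `subject` id column, a `gesture` label column, and channel columns such as `ch1`...`ch8` (all of these names are hypothetical placeholders, not your actual schema):

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

def zscore_per_subject(df: pd.DataFrame, channel_cols: list[str]) -> pd.DataFrame:
    """Standardize each channel within each subject: (x - subject mean) / subject std."""
    out = df.copy()
    out[channel_cols] = (
        df.groupby("subject")[channel_cols]
          .transform(lambda col: (col - col.mean()) / col.std(ddof=0))
    )
    return out

def subject_holdout_split(df: pd.DataFrame, test_size: float = 0.2, seed: int = 0):
    """One possible way to avoid the leakage above: keep each subject's
    recordings entirely in either the training or the test split."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=test_size, random_state=seed)
    train_idx, test_idx = next(splitter.split(df, groups=df["subject"]))
    return df.iloc[train_idx], df.iloc[test_idx]

# Example usage (column names assumed):
# channels = [f"ch{i}" for i in range(1, 9)]
# df_norm = zscore_per_subject(df, channels)
# train_df, test_df = subject_holdout_split(df_norm)
# print(train_df["gesture"].value_counts(normalize=True))  # check class balance
```

The `value_counts` line at the end is the quick check I would run for the class-distribution point.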