You don’t need to retrain your entire model to adapt it to each user’s gesture style. A common approach is to collect a few samples of the user’s swipe gestures (e.g., 10–20 during a calibration step) and use these to fine-tune the detection threshold or train a small classifier on top of your pre-trained model’s embeddings.
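As a rough sketch of that calibration step, assuming your model exposes a per-gesture confidence score (the `calibrate_threshold` helper and the sample scores below are illustrative, not from any specific library):

```python
# Sketch: per-user detection threshold from a few calibration swipes.

def calibrate_threshold(confidences, margin=0.05):
    """Pick a threshold just below the user's typical confidence scores.

    confidences: model confidence on the user's calibration swipes.
    margin: safety gap so slightly weaker swipes still register.
    """
    # Lowest observed score minus a margin, clamped to [0, 1].
    low = min(confidences)
    return max(0.0, min(1.0, low - margin))

# e.g. 10 calibration swipes scored by the frozen pre-trained model
scores = [0.91, 0.88, 0.95, 0.84, 0.90, 0.87, 0.93, 0.89, 0.86, 0.92]
threshold = calibrate_threshold(scores)  # just below the weakest swipe
```

At runtime you then fire the gesture whenever the model's confidence exceeds `threshold`. A per-user margin like this is often enough before reaching for any retraining.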
For example:

1. Use your current model as a frozen feature extractor.
2. Capture user-specific gesture data and adjust the decision threshold based on the model's confidence scores on those samples.
3. If you need better accuracy, train a lightweight classifier (e.g., logistic regression or an SVM) on-device using the captured embeddings.
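The classifier step can be sketched as a tiny logistic-regression head trained directly on embedding vectors. Everything below is illustrative (the toy 2-D embeddings stand in for your model's real embedding output), written in plain Python so it maps easily onto an on-device implementation:

```python
# Sketch: a small logistic-regression head over frozen-model embeddings.
import math

def train_logreg(X, y, lr=0.5, epochs=200):
    """Train logistic regression by plain gradient descent.

    X: list of embedding vectors (list of list of float).
    y: labels, 1 = this user's swipe, 0 = other gesture/noise.
    """
    dim = len(X[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(X, y):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))       # sigmoid
            err = p - target                      # d(log-loss)/dz
            for i in range(dim):
                w[i] -= lr * err * x[i]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability that embedding x is the user's swipe."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy embeddings: user's swipes cluster near (1, 1), negatives near (-1, -1).
X = [[1.0, 0.9], [0.8, 1.1], [1.2, 1.0],
     [-1.0, -0.9], [-0.8, -1.1], [-1.2, -1.0]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logreg(X, y)
```

With 10–20 calibration samples and low-dimensional embeddings, a head this small trains in milliseconds on a phone, which is why it's a common choice over full-model fine-tuning.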
Frameworks like TensorFlow Lite (on-device training) and Core ML (updatable models via MLUpdateTask) support on-device personalization, so you can adapt to each user without redeploying the entire model. If on-device retraining isn’t possible, you can collect user data (with consent) and periodically fine-tune the model server-side, then ship the updated weights as a model update.