Variations in your model's accuracy and loss across runs usually stem from sources of randomness in training: random weight initialization, data shuffling between epochs, dropout, and non-deterministic GPU kernels. (The Adam optimizer itself is deterministic given identical inputs, and hardware differences are a very unlikely cause.) To improve reproducibility, set random seeds for every library involved, seed or disable data shuffling, enable your framework's deterministic operation mode where available, and keep the hardware environment consistent.
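
Since the source doesn't name a specific framework, here is a minimal seeding sketch using Python's standard library and NumPy; the commented-out lines show where framework-specific seeding (e.g., PyTorch or TensorFlow) would go and are assumptions, not part of the original:

```python
import random

import numpy as np


def set_seed(seed: int = 42) -> None:
    """Seed the common sources of randomness for reproducible runs."""
    random.seed(seed)      # Python's built-in RNG (e.g., data shuffling)
    np.random.seed(seed)   # NumPy RNG (e.g., weight initialization)
    # Framework-specific seeding (hypothetical; uncomment for your stack):
    # torch.manual_seed(seed)
    # torch.use_deterministic_algorithms(True)
    # tf.random.set_seed(seed)


# Re-seeding before each run makes the random draws identical.
set_seed(0)
first_run = np.random.rand(3)
set_seed(0)
second_run = np.random.rand(3)
print(np.allclose(first_run, second_run))  # identical draws after re-seeding
```

Seeding alone removes run-to-run variation from initialization and shuffling; fully deterministic GPU training additionally requires enabling your framework's deterministic-algorithms mode, usually at some cost in speed.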