Points to consider:
Any metric can be arbitrarily small, and logs usually display metrics rounded to a fixed number of decimal places, often the 4th. With that limit, any value below 0.00005 renders as 0.0000 and is effectively invisible in the logs. It can be worth letting the training run a little longer so the metric accumulates to larger, visible values.
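A minimal sketch of this effect, using Python string formatting (the 4-decimal format is illustrative; your logging framework may use a different precision):

```python
# A small but non-zero metric value.
metric_value = 3e-5

# Rendered at 4 decimal places -- the typical log precision -- it
# looks exactly like zero.
print(f"{metric_value:.4f}")   # prints "0.0000"

# Widening the format (or logging the raw value) reveals the truth.
print(f"{metric_value:.6f}")   # prints "0.000030"
```

If the logs allow it, printing the raw value or increasing the display precision is a quick way to tell "genuinely zero" apart from "too small to show".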
If it's a custom metric, thoroughly review your implementation. Especially if the other metrics are logging values that make sense, you must re-assess your custom code. From a software engineering perspective, it's even better if someone else can help you in a peer-review-style exercise.
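One practical way to review a custom metric is to cross-check it against an independent reference implementation on small, hand-checkable inputs. The function names below are illustrative, not from any particular library:

```python
def custom_mae(y_true, y_pred):
    # The hand-rolled implementation under review.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def reference_mae(y_true, y_pred):
    # An independent re-derivation used purely as a sanity check;
    # a well-tested library implementation works even better here.
    errors = [abs(t - p) for t, p in zip(y_true, y_pred)]
    return sum(errors) / len(errors)

y_true = [1.0, 2.0, 3.0]
y_pred = [1.5, 2.0, 2.0]

# Both implementations should agree on inputs you can verify by hand:
# errors are 0.5, 0.0, 1.0, so the mean absolute error is 0.5.
assert abs(custom_mae(y_true, y_pred) - reference_mae(y_true, y_pred)) < 1e-12
print(custom_mae(y_true, y_pred))  # prints 0.5
```

If the two disagree, or both agree on a value that a hand calculation contradicts, the custom implementation needs another look.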
Whenever possible, stick to the "native" metrics built into the package you are using. Once again, from a software engineering perspective, re-implementing well-known calculations or customizing them will inevitably add risk.
If you apparently did everything correctly and the values are still nonsense: make sure you are using the metric for the purpose it was originally designed. For instance, F1 and accuracy are typically label classification metrics (e.g., checking whether the model correctly predicts if a football/baseball game should occur or be suspended/delayed). A scenario where you want to predict a behavior (e.g., a curve) over a certain period of time is more likely a regression task. For regression tasks you want to know how far you are from the ground truth, so you are more likely to use metrics such as MAE and MRE, for example.
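A small illustration of the mismatch, assuming continuous predictions (the data is made up): exact-match accuracy on a regression output collapses to zero even when the model is close, while MAE tells the real story.

```python
def accuracy(y_true, y_pred):
    # Fraction of exact matches -- sensible for class labels,
    # essentially meaningless for continuous values.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    # Mean absolute error -- how far, on average, predictions land
    # from the ground truth.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [10.0, 20.0, 30.0, 40.0]
y_pred = [10.1, 19.8, 30.2, 40.1]  # close, but never exactly equal

print(accuracy(y_true, y_pred))  # 0.0 -- looks like the model learned nothing
print(mae(y_true, y_pred))       # ~0.15 -- the model is actually quite close
```

A metric stuck at exactly zero while the loss is clearly decreasing is a strong hint that the metric, not the model, is the wrong fit for the task.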
Get familiar with the data preprocessing and with how exactly you are feeding your model: do you need normalization? Do you need clipping (clamping)? Should you convert/map the original input values to a boolean (1s and 0s) representation?
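The three questions above can be sketched as plain functions; the thresholds and ranges here are illustrative and depend entirely on your data:

```python
def normalize(xs):
    # Min-max scaling into [0, 1].
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def clip(xs, lo, hi):
    # Clamp outliers into a fixed range.
    return [max(lo, min(hi, x)) for x in xs]

def binarize(xs, threshold):
    # Map continuous values to a 1s-and-0s representation.
    return [1 if x >= threshold else 0 for x in xs]

raw = [-5.0, 0.0, 5.0, 50.0]

print(clip(raw, 0.0, 10.0))   # [0.0, 0.0, 5.0, 10.0]
print(normalize(raw))          # [0.0, ~0.09, ~0.18, 1.0]
print(binarize(raw, 1.0))      # [0, 0, 1, 1]
```

Running a few raw samples through each step by hand, as above, is a quick way to confirm the model is actually seeing the values you think it is.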
Finally, good luck!