I tried fine-tuning after FL training, both on the server and on each client, and both approaches give better accuracy in evaluation. The fine-tuning is done in the traditional way, like this:
from sklearn.metrics import classification_report

# Pull the trained weights out of the federated state and copy them
# into a freshly built Keras model.
final_model_weights = iterative_process.get_model_weights(state)
best_model = create_dcnn_model()
for keras_w, fl_w in zip(best_model.trainable_weights, final_model_weights.trainable):
    keras_w.assign(fl_w)

# Fine-tune centrally on the server's data, then evaluate.
best_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
best_model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=3, batch_size=8)

predictions = best_model.predict(X_val)
labels = (predictions > 0.5).astype("int32")
print(classification_report(y_val, labels, target_names=['Not Eavesdropper', 'Eavesdropper']))
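The client-side variant I tried follows the same pattern: each client copies the global FL weights and then trains a few epochs on its own local data. As a minimal, framework-free sketch of that mechanic, here is a NumPy logistic-regression stand-in (the model, the `client_datasets` list, and `finetune_on_client` are all hypothetical placeholders for `create_dcnn_model` and the real per-client splits):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def finetune_on_client(global_w, global_b, X, y, epochs=3, lr=0.1):
    # Each client starts from a *copy* of the global FL weights
    # (mirroring keras_w.assign(fl_w) above), then trains locally.
    w, b = global_w.copy(), float(global_b)
    for _ in range(epochs):
        p = sigmoid(X @ w + b)            # forward pass
        grad_w = X.T @ (p - y) / len(y)   # binary cross-entropy gradient
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

rng = np.random.default_rng(0)
global_w, global_b = rng.normal(size=3), 0.0

# Hypothetical per-client data, standing in for each client's local split.
client_datasets = [
    (rng.normal(size=(20, 3)), rng.integers(0, 2, size=20).astype(float))
    for _ in range(2)
]

# Every client ends up with its own personalized copy of the weights.
finetuned = [finetune_on_client(global_w, global_b, X, y)
             for X, y in client_datasets]
```

The point of the sketch is only that client fine-tuning yields one personalized weight set per client, diverging from the shared global weights, rather than a single updated server model.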
This makes me wonder whether there is a problem or an error in the way I use the TensorFlow Federated iterative process.