The second arrangement — where the dataset is connected to both the learner and the evaluation widget ("Test and Score") — is the correct and recommended method in Orange.
Why? This structure ensures that the learner (e.g., Random Forest, Logistic Regression, or Naive Bayes) is trained inside the cross-validation loop managed by "Test and Score": on each fold, the learner is re-trained from scratch on the training portion of the data and scored only on the held-out portion. Because no information from the test fold reaches training, this prevents data leakage and gives an unbiased estimate of model performance.
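What "Test and Score" does conceptually can be sketched as a plain k-fold cross-validation loop. The sketch below is not Orange's actual code; the names (`cross_validate`, `fit_majority`, `predict_majority`) and the toy majority-class learner are illustrative only. The point it demonstrates is the one above: the model is re-fit *inside* the loop, on training folds only, so the held-out fold never influences training.

```python
from collections import Counter

def cross_validate(data, labels, fit, predict, k=5):
    """k-fold CV: the model is re-fit inside the loop on training
    folds only, so the held-out fold never leaks into training."""
    n = len(data)
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    scores = []
    start = 0
    for size in fold_sizes:
        test_idx = range(start, start + size)
        train_idx = [i for i in range(n) if i < start or i >= start + size]
        # training happens here, once per fold, on training data only
        model = fit([data[i] for i in train_idx], [labels[i] for i in train_idx])
        preds = [predict(model, data[i]) for i in test_idx]
        correct = sum(p == labels[i] for p, i in zip(preds, test_idx))
        scores.append(correct / size)
        start += size
    return sum(scores) / k

# toy "learner": always predicts the majority class seen in training
def fit_majority(X, y):
    return Counter(y).most_common(1)[0][0]

def predict_majority(model, x):
    return model

X = list(range(10))
y = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1]
acc = cross_validate(X, y, fit_majority, predict_majority, k=5)
print(round(acc, 2))  # → 0.7
```

Any learner that exposes a fit/predict pair can be dropped into the same loop, which is why the widget wiring question is independent of which model you choose.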
The first arrangement, where the data is not passed to the learner, still works because "Test and Score" receives the learner signal and handles training internally. However, explicitly wiring the data to the learner, as in the second diagram, makes the workflow clearer and easier to reproduce, and keeps the setup aligned with standard evaluation practice.
The choice of model (Tree, Logistic Regression, Naive Bayes, etc.) does not affect which scheme to use: the second setup remains correct for all learners.
In short: Use the second setup — it's structurally and methodologically sound, regardless of the model type.