Your L1-regularized logistic regression (i.e., a Lasso-style penalty) can pick different subsets of correlated features across runs because, when features are highly correlated, the penalty needs only one of them to carry the signal, and the choice among them is close to arbitrary: small perturbations in the data, the train/test split, or the solver can flip which one survives. Zeroed-out coefficients aren’t necessarily “worthless”; they may just be overshadowed by a correlated feature the model latched onto first.
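
You can see this instability directly by refitting on bootstrap resamples and checking which features get nonzero coefficients. Here is a minimal sketch using scikit-learn; the synthetic data, correlation strength, and `C` value are illustrative assumptions, not taken from your setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Two nearly identical (highly correlated) features plus one noise feature.
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)  # near-duplicate of x1
noise = rng.normal(size=n)
X = np.column_stack([x1, x2, noise])
y = (x1 + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Refit an L1-penalized logistic regression on bootstrap resamples and
# record which columns end up with nonzero coefficients.
picks = []
for _ in range(20):
    idx = rng.integers(0, n, size=n)  # bootstrap sample
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    clf.fit(X[idx], y[idx])
    nonzero = np.nonzero(np.abs(clf.coef_.ravel()) > 1e-8)[0]
    picks.append(tuple(nonzero))

# The selected subset typically flips between column 0 and column 1
# across resamples, even though both carry essentially the same signal.
print(picks)
```

If you want a more stable picture of which features matter, tally selection frequencies across many such resamples (stability selection) rather than trusting the support of a single fit, or switch to an elastic-net penalty, which tends to keep correlated features together instead of choosing one arbitrarily.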