In PyTorch, both .flatten() and .view(-1) reshape a tensor into 1D, but they differ subtly in behavior. .view(-1) requires the tensor to be contiguous in memory; if it isn't (e.g., after an operation like .transpose()), .view() throws a RuntimeError unless you call .contiguous() first. .flatten(), on the other hand, behaves like .reshape(): it returns a view when the memory layout allows it and otherwise copies the data, which makes it more robust but slightly slower in those cases because of the extra memory copy. So yes, .flatten() may copy data when needed, while .view(-1) never copies but is less flexible. Use .flatten() when you want safer, more general code, and .view(-1) when you know the tensor is contiguous and want a guarantee that no copy happens.
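Here is a minimal sketch of the difference, using a small 2x3 example tensor transposed to make it non-contiguous:

```python
import torch

x = torch.arange(6).reshape(2, 3)   # contiguous tensor
t = x.t()                           # transpose -> non-contiguous view

print(t.is_contiguous())            # False

# .view(-1) fails on the non-contiguous tensor
try:
    t.view(-1)
except RuntimeError as e:
    print("view failed:", e)

# Calling .contiguous() first works (this copies the data)
v = t.contiguous().view(-1)

# .flatten() handles the layout itself: it returns a view when it can,
# and copies when the strides make that impossible (as here)
f = t.flatten()

print(v.equal(f))                   # True: same values either way
```

Running this prints False, the RuntimeError message from .view(), and then True, showing that .flatten() quietly does what .contiguous().view(-1) does explicitly.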