When you do:
Decimal(1.2)
Python first has to take the binary floating-point approximation of 1.2 (roughly 1.1999999999999999556, because 1.2 cannot be represented exactly in binary) and then convert that float into a Decimal. That conversion step is slower than parsing a short string, and it hands you the full decimal expansion of the imprecise binary value:
>>> from decimal import Decimal
>>> Decimal(1.2)
Decimal('1.1999999999999999555910790149937383830547332763671875')
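That long string isn't something Decimal invents; it's the exact value of the float you passed in. A quick way to confirm this is plain float formatting (52 digits is enough here because the expansion terminates):

>>> format(1.2, '.52f')
'1.1999999999999999555910790149937383830547332763671875'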
On the other hand:
Decimal("1.2")
parses the string directly into an exact decimal representation. This avoids the intermediate float conversion, so it's faster and gives you exactly the number you wrote:
>>> Decimal("1.2")
Decimal('1.2')
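If you want to check both claims, exactness and speed, for yourself, here is a minimal sketch using only the standard-library decimal and timeit modules (the exact timings depend on your machine and Python version):

from decimal import Decimal
import timeit

# The float-based constructor carries the binary error into later arithmetic,
# while the string-based constructor keeps exact decimal values:
print(Decimal(1.2) * 3)      # long, slightly-off result
print(Decimal("1.2") * 3)    # 3.6

# Rough speed comparison of the two constructors; absolute numbers will vary:
print(timeit.timeit('Decimal(1.2)', setup='from decimal import Decimal'))
print(timeit.timeit('Decimal("1.2")', setup='from decimal import Decimal'))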