My observation is this: Schema.org content can be deployed in both the HEAD and the BODY, and the question is not cut and dried. The nuance lies in the size of the script and the tooling you use to deploy it, along with the page-load and time-to-paint implications of each choice.
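For grounding, here is what a typical in-page deployment looks like (a minimal sketch; the organization name and URLs are placeholders):

```html
<head>
  <!-- JSON-LD is not executed by the browser, so a small block like this
       adds payload weight only, not render-blocking work -->
  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Organization",
    "@id": "https://example.com/#org",
    "name": "Example Co",
    "url": "https://example.com/"
  }
  </script>
</head>
```

A block this size is harmless in the HEAD. A multi-kilobyte product graph, or one injected client-side by a tag manager, is where the trade-offs start: the former adds weight before first paint, while the latter keeps the initial HTML light but may be invisible to crawlers that do not render JavaScript.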
In a world of AI engines, we need to tackle this problem further upstream and create AI endpoints that serve complete JSON-LD Schema.org content files for real-time ingestion, rather than depending on page-by-page loading.
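To make that concrete, such an endpoint could serve a single consolidated file (a sketch; the URLs and @id values are hypothetical) carrying the whole entity graph, so a crawler ingests everything in one request instead of rendering every page:

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://example.com/#org",
      "name": "Example Co",
      "url": "https://example.com/"
    },
    {
      "@type": "Product",
      "@id": "https://example.com/products/widget#product",
      "name": "Widget",
      "brand": { "@id": "https://example.com/#org" }
    }
  ]
}
```

The cross-references via @id are what make this queryable as a graph rather than a pile of disconnected snippets.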
There is an initiative (see Schema.txt on GitHub) to direct this traffic via robots.txt to a dedicated schema.txt file that links out to JSON-LD files and catalogs Schema.org @id references. AI crawlers can then fetch exactly what they need to run semantic queries, the kind that will replace today's regular search. In turn, this makes your Schema.org metadata even more precious to the future of your website.
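Roughly, the plumbing could look like the sketch below. To be clear, this illustrates the idea, not the proposal's actual syntax: robots.txt has no standard directive for a schema index, so the pointer is shown as a comment, and the schema.txt format and file paths are hypothetical:

```
# robots.txt (sketch -- no standard directive for this exists today;
# the pointer below is a convention a crawler would need to understand)
User-agent: *
Allow: /
# Schema index: https://example.com/schema.txt

# schema.txt (hypothetical format: one JSON-LD file per line,
# paired with the Schema.org @id it catalogs)
https://example.com/schema/organization.jsonld  https://example.com/#org
https://example.com/schema/products.jsonld      https://example.com/products/#catalog
```

The precious part is the @id catalog: stable identifiers are what let an AI engine query your entities directly instead of scraping your pages.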