The Spark session configuration you mentioned appears to use a BigLake catalog implementation, which is not supported for BigQuery tables for Apache Iceberg. Note that BigQuery tables for Apache Iceberg are different from BigLake tables, which are a kind of external table that can be registered with the BigLake metastore.
For BigQuery-managed Iceberg tables, you should not modify the files in storage outside of BigQuery; doing so risks data loss. For querying the data, you can try a Spark session configured with an Iceberg catalog of type hadoop pointed at the table's storage location.
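As a rough sketch of that last suggestion, the standard Apache Iceberg Spark integration lets you register a Hadoop-type catalog via session configuration. The catalog name (`hadoop_cat`), the GCS warehouse path, and the table identifier below are placeholders you would replace with your own values; this assumes the Iceberg Spark runtime jar and a GCS connector are on the classpath, and it is read-only exploration, not a supported write path.

```shell
# Launch spark-sql with a Hadoop-type Iceberg catalog pointed at the
# storage location of the table's data/metadata files.
# "hadoop_cat" and the gs:// warehouse path are placeholders.
spark-sql \
  --packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.0 \
  --conf spark.sql.catalog.hadoop_cat=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.hadoop_cat.type=hadoop \
  --conf spark.sql.catalog.hadoop_cat.warehouse=gs://your-bucket/warehouse

# Then, inside the session (namespace/table names are placeholders):
#   SELECT * FROM hadoop_cat.db.my_table LIMIT 10;
```

Since this reads the Iceberg metadata directly from storage, it only works if the files are laid out in the standard Iceberg directory structure the Hadoop catalog expects; treat it as a read-only workaround rather than a replacement for querying through BigQuery.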