I know Spark evaluates lazily and pushes filters down to the data source while loading data. But it only pushes down dynamic filters for HDFS-backed sources; for an RDBMS, the filter conditions must be predefined (static predicates) for pushdown to happen.
In my case the filter is dynamic, since it is derived from the input view. As a result, Spark loads the complete transactions table from the database and only then joins it with the input view to filter the data.
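To illustrate, here is a minimal sketch of both cases, assuming a JDBC source (the connection URL, the `transactions` table, the `input_view` name, and the `txn_date`/`txn_id` columns are placeholders for my actual setup):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("pushdown-sketch").getOrCreate()

// Case 1: static predicate -- Spark pushes this down into the JDBC query,
// so the database only returns matching rows.
val static = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://db-host:5432/mydb") // placeholder connection
  .option("dbtable", "transactions")
  .load()
  .filter("txn_date >= '2023-01-01'")

static.explain() // plan shows PushedFilters: [GreaterThanOrEqual(txn_date, ...)]

// Case 2: dynamic filter via a join -- the keys only exist in the input view,
// so nothing is pushed to the database: the full table is read, then joined.
val inputView = spark.table("input_view") // placeholder: the dynamic key set
val transactions = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://db-host:5432/mydb")
  .option("dbtable", "transactions")
  .load()

val joined = transactions.join(inputView, Seq("txn_id")) // full scan + join
joined.explain() // no PushedFilters for the join keys
```

Comparing the two `explain()` outputs is what shows the difference: the static predicate appears under `PushedFilters` in the JDBC scan, while the join-based filter is applied only after the full table has been loaded.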