First, "scontrol show config" will display all system-wide config settings. Of course, there are many limits which are set per-association (some combination of user and account).
As a large-cluster admin, I beseech you to consider what you're doing. Yes, you have a lot of separate work-units. We love that! The point is that they should probably not be separate jobs - remember that a job array is just a shorthand for submitting many jobs. Each job-array member is a full-scale job that incurs the same cost to the cluster, so short job-array members are just as shameful as short single jobs.
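To make the "shorthand" point concrete (unit.sh is a placeholder for a per-unit script), these two end up as the same number of full-scale jobs competing for the scheduler's attention:

```bash
# A job array with 18000 members (rejected outright if it exceeds MaxArraySize)...
sbatch --array=1-18000 unit.sh     # unit.sh picks its work via $SLURM_ARRAY_TASK_ID

# ...versus 18000 individually submitted jobs:
for i in $(seq 1 18000); do sbatch unit.sh "$i"; done
```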
Instead of thinking "I have 18k 'jobs', how do I beat the BOFH admins and get them to run?", think "what is the most efficient resource configuration to run a single unit, how long does it take on average, and how many resources can I expect to consume from the cluster at once - so how should I lay out those work-units into separate jobs?"
Measure your units of work. By that I really mean measure their scaling and resource requirements. Is each of them serial? Measure their %CPU, measure how much memory they need. Heck, how much IO do they do? If you run one with plenty of memory but only one core and its %CPU is not 100%, you're probably waiting on IO. You should know the average %CPU of a unit, its required memory, and the amount of IO it performs. You should also understand any parallel scaling of your workload (treating it as serial is a perfectly fine initial assumption).
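A hedged sketch of how I'd measure one unit (process_one_unit and the input name are stand-ins for whatever your unit actually runs; the disk fields only appear if your cluster gathers IO accounting):

```bash
# Run one representative unit by itself; /usr/bin/time -v reports wall time,
# CPU time, and peak memory:
/usr/bin/time -v ./process_one_unit input_0001.dat

# After a test job finishes, Slurm's accounting has the same numbers:
sacct -j <jobid> --format=JobID,Elapsed,TotalCPU,MaxRSS,MaxDiskRead,MaxDiskWrite

# Rough %CPU = TotalCPU / (Elapsed * allocated cores); ~100% per core means
# compute-bound, much less usually means waiting on IO.
```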
What resources can you realistically expect to consume? You might not be allowed to use all of the cluster's nodes. You might be "charged" for both CPU and memory occupancy. The cluster certainly has limited IO capability, so how much of that can you use?
Now do a sanity check: what is your total resource demand, and how many resources can you realistically expect to grab? You can do this calculation for cores as well as for IO. This exercise will tell you whether you need to make your workflow more efficient (maybe optimizing jobs so they can share inputs, do node-local caching, tune the number of cores per task, etc.).
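Purely illustrative arithmetic, with every number made up, just to show the shape of the sanity check:

```bash
# 18000 serial units at ~20 min each, and suppose your association
# realistically lets you hold ~500 cores at once:
units=18000; min_per_unit=20; cores=500
echo "total demand: $(( units * min_per_unit / 60 )) core-hours"                    # 6000
echo "wall-clock at $cores cores: ~$(( units * min_per_unit / 60 / cores )) hours"  # ~12
# Same idea for IO: 18000 units * 2 GB read each = 36 TB in total; at a fair
# share of ~2 GB/s from the filesystem, that's ~5 hours of pure reading.
```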
Schedule batches of work-units onto the resource-units you have available. For instance, you could simply run 180 jobs of 100 work-units each. That could make sense if 100 units finish within your cluster's elapsed-time limits. If your work-units are quick, each such job could just be a sequence of those 100 units. If your cluster allows only whole-node jobs, you'll almost certainly want to use something like GNU Parallel to run several work-units at a time.
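Here's a sketch of that 180 x 100 layout, assuming 20-core nodes and a hypothetical process_one_unit program and input naming scheme; every number and name is a placeholder to adapt:

```bash
#!/bin/bash
#SBATCH --job-name=units
#SBATCH --array=0-179          # a job array is fine here: each member is a big chunk of work
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=20     # adjust to your node size / allocation rules
#SBATCH --time=04:00:00        # must comfortably cover 100 units at 20 in flight

# Each array member handles 100 consecutive work-units, running 20 at a time.
start=$(( SLURM_ARRAY_TASK_ID * 100 ))

seq "$start" $(( start + 99 )) | \
    parallel -j "$SLURM_CPUS_PER_TASK" ./process_one_unit input_{}.dat
```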
Now also think about your rate of resource consumption: for instance, if your self-scheduling works, what will be the aggregate IO created by all the concurrent jobs? Is that achievable on your cluster? Similarly, unless you have the cluster to yourself, not all your jobs will start instantly, so how does that affect your expectations?
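To make that concrete with made-up numbers: if each running unit averages 50 MB/s of reads and 180 jobs each keep 20 units in flight, that's 3600 x 50 MB/s = 180 GB/s of aggregate demand - far more than most shared filesystems will give one user - so you'd either stage inputs to node-local storage or deliberately run fewer jobs at once. And if only, say, 30 of the 180 jobs run concurrently because of queue limits, the whole campaign simply takes about six times longer than the ideal estimate.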
In other words, you need to self-schedule, not just throw jobs at the cluster. You can start with a silly configuration: submit one serial job that runs each of the 18k work-units one at a time. That will take forever, so if each work-unit consumes 100% of a CPU, use GNU Parallel (or even xargs!) to run several at the same time, still within one job (presumably allocating more cores for the job). A further step is to split the 18k across several such jobs. The right layout depends entirely on your jobs (their resource demands and efficient configurations), convolved with cluster policy (like core or job limits), and "clipped" by rate-like capacity such as IO.
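For instance, the intermediate step (one bigger job, several units in flight at once) could look roughly like this, with xargs doing the self-scheduling and all names being placeholders:

```bash
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=32     # more cores for the single job
#SBATCH --time=24:00:00

# One job works through every unit, keeping 32 running at any moment:
printf '%s\n' inputs/unit_*.dat | \
    xargs -P "$SLURM_CPUS_PER_TASK" -n 1 ./process_one_unit
```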