Some updates to the scheduler, effective now:
1) A limit of 128 slots per user.
No user can run more than 128 slots (i.e. CPU cores) at any one time.
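For reference, a per-user slot cap like this is typically expressed in Grid Engine as a resource quota set; a rough sketch (the rule name is made up):

```
{
   name         limit_user_slots
   description  "Cap each user at 128 slots"
   enabled      TRUE
   limit        users {*} to slots=128
}
```

A rule like this would be installed with `qconf -arqs` and can be inspected with `qquota`.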
2) Changes to h_vmem
h_vmem has previously been requested per CPU core, i.e. if you ask for 10G and 4 CPU cores, you would use 40G of h_vmem. This has been fine for the majority of users; however, a few keep forgetting, and new users aren't reading the documentation.
This has now been changed so that h_vmem applies to the whole job, not per slot. This makes more sense for users running multicore jobs, but it will prevent users from running MPI jobs across multiple nodes. Since no one is currently doing this, the change shouldn't be a problem.
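To illustrate what the change means at submission time (the script name is hypothetical):

```shell
# Before: h_vmem was multiplied by the slot count.
# 4 slots x 10G = 40G total for the job.
qsub -pe smp 4 -l h_vmem=10G myjob.sh

# Now: h_vmem covers the whole job, so request the total directly.
qsub -pe smp 4 -l h_vmem=40G myjob.sh
```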
Look into the possibility of moving back to mem_free rather than h_vmem, given the harsh nature of h_vmem (jobs that exceed it are killed). However, mem_free alone would let users break other users' jobs. Maybe force h_vmem to be set to the same value as mem_free, but don't use h_vmem as a consumable.
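One way to enforce "h_vmem mirrors mem_free" would be a server-side job submission verifier (JSV); a rough, untested sketch, assuming the standard bash JSV helper library shipped with Grid Engine:

```shell
#!/bin/bash
# Sketch: copy each job's mem_free request into h_vmem at submission time.
source "$SGE_ROOT/util/resources/jsv/jsv_include.sh"

jsv_on_verify() {
    # Hard -l resource requests arrive in the l_hard parameter list.
    mem=$(jsv_sub_get_param l_hard mem_free)
    if [ -n "$mem" ]; then
        jsv_sub_add_param l_hard h_vmem "$mem"
        jsv_correct "h_vmem set to match mem_free"
    else
        jsv_accept ""
    fi
    return
}

jsv_main
```

This keeps the kill-on-overrun protection without users having to request h_vmem themselves; whether h_vmem is also a consumable stays a separate scheduler-side decision.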
An overload queue with a higher (meaning lower-priority) nice value? Maybe restricted to 2 hours.
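If we go that route, the relevant queue settings would look something like this (queue name hypothetical, edited via qconf):

```
# Fragment of a hypothetical overflow queue definition (qconf -aq overflow.q)
qname       overflow.q
priority    19          # nice value: higher number = lower CPU priority
h_rt        02:00:00    # hard wall-clock limit of 2 hours
```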
Restrict the number of jobs allowed in the queue in order to promote the use of task arrays.
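For users hitting such a limit, a task array submits a single job record that expands into many tasks (the script name is hypothetical):

```shell
# Instead of 1000 separate qsub calls, submit one array job of 1000 tasks:
qsub -t 1-1000 process.sh

# Inside process.sh, the SGE_TASK_ID environment variable selects the
# task's input, e.g.:
#   infile="input.${SGE_TASK_ID}"
```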
Move away from the intel/amd queues, turning them into queues for long and slow jobs with time limits.