PBS Information Specific to Mimosa
- Warning: Interactive jobs are forbidden; they will be detected and killed by system scripts, and the offending user's privileges will be suspended.
- Because Mimosa is a Beowulf cluster, a user can be specific about which nodes the user's jobs will run on. See here for more information.
- Mimosa's PBS configuration does not give funded status priority over other statuses. Everyone on Mimosa has an equal chance of getting their jobs into the run state.
- In the PBS script used to submit jobs, the user should request the required number of nodes from PBS, rather than the number of CPUs as on the shared memory multiprocessors (Redwood, Sweetgum, Onyx).
Ex: #PBS -l ncpus=2 (for shared memory multiprocessors) and #PBS -l nodes=2 (for Mimosa).
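The node request above can be sketched as a complete Mimosa submission script. This is a minimal sketch; the job name, walltime, and program name (a.out) are hypothetical placeholders, not site defaults:

```shell
#!/bin/sh
# Hypothetical PBS script for Mimosa (Beowulf cluster): request
# whole nodes with -l nodes, not CPUs with -l ncpus.
#PBS -N my_mpi_job          # job name (placeholder)
#PBS -l nodes=2             # two cluster nodes, per the example above
#PBS -l walltime=01:00:00   # assumed one-hour limit

cd $PBS_O_WORKDIR           # run from the directory the job was submitted from
# $PBS_NODEFILE lists the nodes PBS allocated to this job
mpirun -np 2 -machinefile $PBS_NODEFILE ./a.out
```

The script would be submitted with qsub as usual; PBS places the job once two nodes of the requested type are free.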
PBS Information Specific to Redwood
- Redwood Upgrade:
- In March 2005 Redwood underwent a major upgrade: 128 more 1.3 GHz Itanium2 processors were added to the existing 64 900 MHz Itanium2 processors, so Redwood now effectively consists of 192 Itanium2 processors.
- Since there are two different types of processors on Redwood, two instances of PBS run to ensure that all the processors allocated to a job are of the same type: one instance allocates jobs to the older 64 900 MHz processors, and the other allocates jobs to the newer 128 1.3 GHz processors, depending on the job's requirements.
- To support this additional instance of PBS, the PBS commands qstat, qalter, ck_jobs_usage, fqstat, and qdel have been supplemented by qstat2, qalter2, ck_jobs_usage2, fqstat2, and qdel2 for the second PBS instance.
- Currently CPUs 0-7 are designated for interactive jobs, so any interactive job will run on these processors. For now there is no i-limit on Redwood; instead, all interactive processes compete with one another for resources on these 8 CPUs (which have about 8 GB of memory).
- Resource usage per job can be found using the ck_jobs_usage script.
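As a sketch of how the paired commands above are used (the job ID 1234 is a placeholder, not a real job):

```shell
# Query and manage jobs on the older 64 900 MHz processors
# (first PBS instance):
qstat            # list queued and running jobs
qdel 1234        # delete job 1234 (placeholder job ID)

# The same operations on the newer 128 1.3 GHz processors
# (second PBS instance) use the "2" variants:
qstat2
qdel2 1234
```

Each tool and its "2" counterpart behave identically; only the PBS instance they talk to differs.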
PBS Information Specific to Sweetgum
- Since Sweetgum has a shared memory multiprocessor architecture (like Redwood and Onyx), in the PBS script used to submit jobs the user should request the required number of CPUs from PBS, rather than the number of nodes as on Mimosa, which is a Beowulf cluster.
- Jobs submitted interactively are subject to a maximum total CPU limit (i-limit) of 30 minutes. Jobs exceeding this 30-minute i-limit will be automatically killed by the system.
- Resource usage per job can be found using the ck_jobs_usage script.
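The CPU request described above can be sketched as a Sweetgum submission script. Again a minimal sketch: the job name, CPU count, walltime, and program name are hypothetical placeholders:

```shell
#!/bin/sh
# Hypothetical PBS script for Sweetgum (shared memory multiprocessor):
# request CPUs with -l ncpus, not nodes with -l nodes.
#PBS -N my_omp_job          # job name (placeholder)
#PBS -l ncpus=4             # four CPUs on the shared memory machine
#PBS -l walltime=02:00:00   # assumed two-hour limit

cd $PBS_O_WORKDIR           # run from the directory the job was submitted from
./a.out                     # placeholder program name
```

Because all CPUs share one memory image, no machinefile is needed; PBS simply reserves the requested number of CPUs for the job.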