
Last Update: 2019-9-29

Moladad Nikbakht

HPC Physics

Computational Cluster of the Physics Department


Cluster pop-os runs a single-node operating system based on Ubuntu 18.04.

The cluster's compute node has the following specifications:

       •  Intel(R) Xeon(R) CPU E5-2699 v4 (logical processors: 44, CPU cores: 22, cache size: 56320 KB)

       •  32 GB memory

       •  VGA compatible controller: NVIDIA Corporation GP104GL [Quadro P5000]

Compilers and software installed on the cluster


      •   C++

      •   FORTRAN

      •   Intel compiler (icc)

      •   Parallel programming

                   --- OpenMP

                   --- MPI

      •   GPU-based programming

                   --- OpenCL

                   --- CUDA

      •   MATLAB

Queue system

To make optimal use of the cluster, a queue system based on Slurm has been implemented.

On this cluster, users must submit their program (job) using the commands defined by Slurm (sbatch or srun). Monitoring job status, allocating resources, emailing job status notifications, and more are all handled by Slurm. After receiving a user account, users can connect to the cluster at the address (<user>) via the ssh protocol and submit their jobs.

Cluster pop-os manages resources using Simple Linux Utility for Resource Management (SLURM). Slurm is a highly scalable cluster management and job scheduling system for large and small Linux clusters.

For more information, see: http://slurm.schedmd.com

List of partitions defined on the cluster

Users can submit their jobs under the partition defined for them.

Partition        Node     Restricted to   Priority   Time limit   Core limit   Memory limit
all              pop-os   none            1          1 hour       1 core       10%
phys-student     pop-os   students        3          72 hours     8 cores      30%
phys-academic    pop-os   faculty         2          72 hours     10 cores     40%
phys-larg-job    pop-os   none            4          100 hours    15 cores     50%






How resource limits are applied to users

An account is defined for each faculty member, and student user accounts fall under the account of their supervisor. Resource limits are applied to each account, in proportion to the CPUs and time used, as follows.

The GrpCPURunMins limit is used to manage jobs for users.
What is GrpCPURunMins (or whatever your scheduler calls it)?
It is a limit on the remaining CPU time per account or user. You can think of remaining CPU time as sum(job_core_count * job_remaining_time) over all of a user's or account's jobs.
If an account has 10 jobs that each use 2 cores and each have 24 hours remaining, the remaining CPU time is 10 * 2 cores * 24 hours = 480 CPU-hours = 28800 CPU-minutes. As the jobs continue to run, the remaining CPU time decreases because the 24 hours in the equation decreases.
Once a user/account reaches this limit, no more jobs are allowed to start for that association.

How to submit a job to the cluster, and the required commands

The Slurm job scheduler

This guide is an introduction to the Slurm job scheduler and its use on the pop-os system, and describes basic job submission and monitoring.
Jobs on pop-os are run as batch jobs, i.e. in an unattended manner. Typically a user logs in to the pop-os
login node (<user>@, prepares a job (which contains details of the work to carry out and the computer resources needed) and submits it to the job queue. The user can then log out (if he/she wishes) until the job has run, and return to collect the output data.
Jobs are managed by Slurm, which is in charge of
• allocating the computer resources requested for the job,
• running the job and
• reporting the outcome of the execution back to the user.

Running a job involves, at the minimum, the following steps
• preparing a submission script and
• submitting the job to execution.

 Preparing a submission script

If the sbatch command is used to submit a job to the cluster, a script file must be prepared as described below and submitted to the cluster.

A submission script is a shell script that
• describes the processing to carry out (e.g. the application, its input and output, etc.) and requests computer resources (number of cpus, amount of memory, etc.) to use for processing.

Example 1: a job by a student (user=neda) running a single-CPU job
Suppose test.cpp is a C++ code that is compiled with g++ and requires 1 CPU for 30 minutes.
In this case the job requires a single CPU (the smallest unit allocated on pop-os), so the user can use partition=phys-all with the following requirements:
• the job uses the pop-os node,
• the application is a single process,
• the job will run for no more than 1 hour,
• the job is given an arbitrary name, e.g. "test1", and
• the user should be emailed when the job starts, stops, or aborts.

the following submission script compiles and runs the application in a single job

#!/bin/bash
#SBATCH -A neda                     <-------------------------- set user account
#SBATCH -p phys-all                 <-------------------------- set partition
#SBATCH --time=00:30:00             <-------------------------- set wall-clock time (optional)
#SBATCH --error=job.%J.err          <-------------------------- set output file for errors (optional)
#SBATCH --output=job.%J.out         <-------------------------- set output file (optional)
#SBATCH --job-name="test1"          <-------------------------- set a name for the job
#SBATCH --mail-type=ALL             <-------------------------- set which status messages are mailed (optional)
#SBATCH --mail-user=????@znu.ac.ir  <-------------------------- set the email address for job status messages (optional)
g++ test.cpp -o myfirstcode
./myfirstcode

Example 2: a job by a student (user=neda) running a multi-CPU job
Suppose test.cpp is an OpenMP parallel C++ code that is compiled with g++, uses the LAPACK library, and requires 6 CPUs for 2 days.
In this case the job requires multiple CPUs. Since the user is a student, he/she should use partition=phys-student with the following requirements:
• the job uses the pop-os node,
• the application is a single process running 6 threads,
• the job will run for no more than 72 hours,
• the job is given an arbitrary name, e.g. "test2", and
• the user should be emailed when the job starts, stops, or aborts.

the following submission script compiles and runs the application in a single job

#!/bin/bash
#SBATCH -A neda                     <-------------------------- set user account
#SBATCH -p phys-student             <-------------------------- set partition
#SBATCH --nodes=1                   <-------------------------- set number of nodes (or --ntasks=1)
#SBATCH --cpus-per-task=6           <-------------------------- set number of CPUs per task
#SBATCH --time=2-00:00:00           <-------------------------- set wall-clock time (optional)
#SBATCH --error=job.%J.err          <-------------------------- set output file for errors (optional)
#SBATCH --output=job.%J.out         <-------------------------- set output file (optional)
#SBATCH --job-name="test2"          <-------------------------- set a name for the job
#SBATCH --mail-type=ALL             <-------------------------- set which status messages are mailed (optional)
#SBATCH --mail-user=????@znu.ac.ir  <-------------------------- set the email address for job status messages (optional)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK  <-------------------------- set the thread count from Slurm
g++ -Ofast test.cpp -llapacke -llapack -mcmodel=large -fopenmp -o mysecondcode
./mysecondcode

In large part, the script above is similar to the one for a single-CPU job, except that in this example #SBATCH --cpus-per-task=m is used to reserve m cores (here on a single node, i.e. --nodes=1) and to prepare the environment for an OpenMP parallel run with m threads on the pop-os node.


phys-all is the default partition and the one your jobs will go to if you do not specify a partition in your job submission script.

Users can also use the phys-large-job partition if their job requires larger resources (time, CPU, memory).

Submitting a job to the cluster with the sbatch command

Once you have a submission script ready (e.g. submit.sh), the job is submitted to the execution queue with the command sbatch submit.sh. The queueing system prints a number (the job ID) almost immediately and returns control to the Linux prompt. At this point the job is in the submission queue. Once you have submitted the job, it will sit in a pending state until resources have been allocated to it (the length of time your job spends in the pending state depends on a number of factors, including how busy the system is and what resources you are requesting). You can monitor the progress of the job using the command squeue (see below).

Once the job starts to run you will see files with names such as job.12.out, either in the directory you submitted the job from (default behaviour) or in the directory to which the script was explicitly instructed to change.

Checking job status with the squeue command

squeue is the main command for monitoring the state of systems, groups of jobs or individual jobs. The command squeue prints the list of current jobs. The list looks something like this:

JOBID   PARTITION       NAME    USER       ST   TIME    NODES   NODELIST(REASON)
2497    phys-academic   heat    ghanbari   R    10:07   1       pop-os
2499    phys-all        test1   neda       R    0:22    1       pop-os
2511    phys-student    test2   neda       PD   14:36   1       pop-os (Resources)
The first column gives the job ID, the second the partition (or queue) where the job was submitted, the third the name of the job (specified by the user in the submission script) and the fourth the owner of the job. The fifth is the status of the job (R=running, PD=pending, CA=cancelled, CF=configuring, CG=completing, CD=completed, F=failed). The sixth column gives the elapsed time for each particular job, and the last two columns give the number of nodes allocated and the list of nodes (or, for pending jobs, the reason they are waiting).

Slurm Job States

Your job will report different states before, during, and after execution. The most common ones are seen below, but this is not an exhaustive list.

ST   State        Meaning
CA   CANCELLED    Job was killed, either by the user who submitted it, a system administrator, or by the Cgroups plugin (for using more resources than requested).
CD   COMPLETED    Job has ended with a zero exit status, and all processes from the job are no longer running.
CG   COMPLETING   This status differs from COMPLETED because some processes may still be running from this job.
F    FAILED       Job did not complete successfully, and ended with a non-zero exit status.
NF   NODE_FAIL    The node or nodes that the job was running on had a problem.
PD   PENDING      Job is queued, so it is not yet running. See the Jobs that never start section for details on why jobs can remain in the pending state.
R    RUNNING      Job has been allocated resources and is currently running.
TO   TIMEOUT      Job exited because it reached its walltime limit.












Cancelling a job submitted to the cluster

Use the scancel command to delete a job, e.g. scancel 1121 to delete the job with ID 1121. A user can delete his/her own jobs at any time, whether the job is pending (waiting in the queue) or running. A user cannot delete the jobs of another user. Normally, there is a (small) delay between the execution of the scancel command and the time when the job is dequeued and killed.

Support for the Physics Department computing cluster, University of Zanjan

Dr. Moladad Nikbakht

Email: hpc_physics[at]znu.ac.ir




Copyright © 2019, University of Zanjan, Zanjan, Iran