Monitoring your jobs

There are various ways for you to monitor and check up on your running and completed jobs.

See the status of the nodes

The easiest way to see what is happening on the cluster is to check Ganglia, a web-based monitoring application that displays statistics about the cluster and its nodes. To view it, visit:

http://bert.ibers.aber.ac.uk/ganglia

There are a variety of statistics to view. The most useful is probably load_one, which shows the one-minute CPU load average on each node. You can also monitor the overall cluster averages along with memory and network usage.
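
If you prefer the command line, the qhost command (part of Grid Engine, the scheduler running on the cluster) gives a similar per-node view from the login node, printing one line per node with its architecture, CPU count, load average, and memory usage:

   
[user@bert ~]$ qhost
   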

Check on what you've submitted

Once you have submitted your job scripts, you may want to check on their progress. This is done with the qstat command, which lists your jobs. The output might look something like this:

  
[user@bert ~]$ qstat
job-ID  prior   name       user         state submit/start at     queue                          slots ja-task-ID 
-----------------------------------------------------------------------------------------------------------------
 758061 0.50042 k2bRC-a1.i user         r     07/20/2014 14:13:33 amd.q@node010.cm.cluster           1        
 758062 0.50042 k2bRC-a2.i user         r     07/20/2014 14:13:33 amd.q@node009.cm.cluster           1        
 758063 0.50042 k2bRC-a3.i user         r     07/20/2014 14:13:48 amd.q@node009.cm.cluster           1        
 758064 0.50042 k2bRC-a4.i user         r     07/20/2014 14:13:48 amd.q@node008.cm.cluster           1        
 758065 0.50042 k2bRC-a5.i user         qw    07/20/2014 14:14:03                                    1        
 758066 0.60208 k2bRC-a6.i user         qw    07/20/2014 14:14:18                                    1        
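
In the state column, r means the job is running and qw means it is queued and waiting for a free slot.

A couple of handy variations on this (both are standard commands, shown here as a sketch; replace username with your own login):

   
[user@bert ~]$ qstat -u username    # list the jobs of a particular user
[user@bert ~]$ watch -n 30 qstat    # redisplay your job list every 30 seconds
   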
   


Check the status of a job

You can use the qstat -j JOB_ID command to get detailed information about a running or queued job. Below is what you might see for a running job:

  
[user@bert ~]$ qstat -j 758061
==============================================================
job_number:                 758061
exec_file:                  job_scripts/758061
submission_time:            Sun Jul 20 14:13:32 2014
owner:                      user
uid:                        100000
group:                      users
gid:                        100000
sge_o_home:                 /ibers/ernie/home/user/
sge_o_log_name:             user
sge_o_path:                 /ibers/ernie/home/user/perl5/bin
sge_o_shell:                /bin/bash
sge_o_workdir:              /ibers/ernie/scratch/user/CGR/dots
sge_o_host:                 bert
account:                    sge
cwd:                        /ibers/ernie/scratch/user/CGR/dots
stderr_path_list:           NONE:NONE:k2bRC-a1.e
hard resource_list:         h_stack=512m,h_vmem=20.0G
mail_list:                  user@bert.cm.cluster
notify:                     FALSE
job_name:                   k2bRC-a1.i
stdout_path_list:           NONE:NONE:k2bRC-a1.o
jobshare:                   0
hard_queue_list:            amd.q
script_file:                k2bRC-a1.i
usage    1:                 cpu=22:21:14, mem=967478.88534 GBs, io=4.91085, vmem=13.401G, maxvmem=13.401G
scheduling info:            queue instance "intel.q@node003.cm.cluster" dropped because it is full
                            queue instance "amd.q@node008.cm.cluster" dropped because it is full
                            queue instance "intel.q@node004.cm.cluster" dropped because it is full
                            queue instance "amd.q@node009.cm.cluster" dropped because it is full
                            queue instance "amd.q@node007.cm.cluster" dropped because it is full
                            queue instance "amd.q@node010.cm.cluster" dropped because it is full
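
The usage line shows the CPU time, memory, and I/O the job has consumed so far, while scheduling info lists the queue instances the scheduler considered and why each was skipped. To pull out just the usage figures (a quick sketch using grep on the same job as above):

   
[user@bert ~]$ qstat -j 758061 | grep usage
usage    1:                 cpu=22:21:14, mem=967478.88534 GBs, io=4.91085, vmem=13.401G, maxvmem=13.401G
   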

   


Figuring out why your job is still in the queue
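
If your job is sitting in state qw, the scheduling info section of qstat -j JOB_ID (shown above) is the first place to look: it records why each queue instance was rejected, for example because the queue was full or because the job's resource requests (such as h_vmem) could not be satisfied. To see just that section for a queued job (a sketch using grep; adjust the line count to suit):

   
[user@bert ~]$ qstat -j 758065 | grep -A 10 "scheduling info"
   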