Cluster

From IIHE Wiki
Revision as of 12:24, 13 February 2014

IIHE local cluster

Overview

The cluster is composed of 4 machine types :

  • User Interfaces (UI)

These are the cluster front-ends : to use the cluster, you need to log into one of these machines

Servers : ui01, ui02

  • Computing Element (CE)

This server is the core of the batch system : it runs submitted jobs on the worker nodes

Servers : ce

  • Worker Nodes (WN)

This is the power of the cluster : they run jobs and send the status back to the CE

Servers : slave*

  • Storage Elements

This is the memory of the cluster : they contain data, software, ...

Servers : datang (/data, /software), lxserv (/user), x4500 (/ice3)

How to connect

To connect to the cluster, you must use your IIHE credentials (same as for wifi)

ssh username@icecube.iihe.ac.be

TIP : icecube.iihe.ac.be points automatically to an available UI (ui01, ui02, ...)
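If you connect often, an entry in your ~/.ssh/config saves some typing. A minimal sketch (the alias name icecube is just an example; replace username with your IIHE login):

```
Host icecube
    HostName icecube.iihe.ac.be
    User username
```

With this in place, ssh icecube is equivalent to the full command above.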


After a successful login, you'll see this message :

==========================================
Welcome on the IIHE ULB-VUB cluster

Cluster status http://ganglia.iihe.ac.be
IT Help support-iihe@ulb.ac.be
==========================================

username@uiXX:~$

Your default working directory is your home folder.


Directory Structure

Here is a description of most useful directories

/user/username

Your home folder

/data

Main data repository

/software

The custom software area

/software/src

Sources of software to install

/software/icecube

IceCube-specific tools

/software/icecube/i3_ports

This folder contains the I3 ports used by IceCube (meta-)projects

In order to use it, you must define the environment variable $I3_PORTS

export I3_PORTS="/software/icecube/i3_ports"

This variable is set only for the current session; to avoid defining it every time, you can add this command to your .bashrc
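For example, the following snippet appends the export line to ~/.bashrc (the grep guard is just a convenience to avoid adding the line twice) and also sets the variable for the current session:

```shell
# Persist $I3_PORTS for future sessions by appending it to ~/.bashrc;
# the grep guard skips the append if the line is already there
grep -q 'I3_PORTS' ~/.bashrc 2>/dev/null || \
    echo 'export I3_PORTS="/software/icecube/i3_ports"' >> ~/.bashrc

# Also set it in the current shell, since .bashrc is only read by new shells
export I3_PORTS="/software/icecube/i3_ports"
```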

/software/icecube/offline-software

This folder contains the offline-software meta-project

To use it, just run the following command (note the dot at the beginning of the line)

. /software/icecube/offline-software/[VERSION]/env-shell.sh

Available versions :

  • V14-02-00
/software/icecube/icerec

This folder contains the icerec meta-project

To use it, just run the following command (note the dot at the beginning of the line)

. /software/icecube/icerec/[VERSION]/env-shell.sh

Available versions :

  • V04-05-00
  • V04-05-00-jkunnen
/software/icecube/simulation

This folder contains the simulation meta-project

To use it, just run the following command (note the dot at the beginning of the line)

. /software/icecube/simulation/[VERSION]/env-shell.sh

Available versions :

  • V03-03-04
  • V04-00-08
  • V04-00-09
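The leading dot in all three commands above matters: env-shell.sh must be *sourced* so it can export variables into your current shell, because a child process cannot modify its parent's environment. A generic demonstration of the difference, using a stand-in script rather than the real env-shell.sh:

```shell
# Create a stand-in for env-shell.sh that exports a variable
cat > /tmp/env-demo.sh <<'EOF'
export DEMO_VAR="set-by-env-shell"
EOF

# Running it as a child process does NOT affect the current shell:
# DEMO_VAR stays unset here
bash /tmp/env-demo.sh
echo "after bash:   DEMO_VAR='${DEMO_VAR}'"

# Sourcing it (the leading dot) DOES modify the current shell:
# DEMO_VAR is now set here
. /tmp/env-demo.sh
echo "after source: DEMO_VAR='${DEMO_VAR}'"
```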

/ice3

Batch System

Queues

The cluster is divided into queues


Queue                    any   lowmem   standard   highmem   gpu
Description
CPU's
Walltime default/limit
Memory default/limit

Job submission

To submit a job, you just have to use the qsub command :

qsub myjob.sh

OPTIONS

-q queueName : choose the queue (default: any)

-N jobName : name of the job

-I : run the job in interactive mode

-m : mail options

-l : resources options
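The same options can also be written as #PBS directives inside the job script itself. A minimal sketch (the job name, queue, and walltime value are illustrative; to bash the #PBS lines are ordinary comments, so the script also runs standalone):

```shell
# Write a minimal job script; the #PBS directives mirror the qsub options above
cat > myjob.sh <<'EOF'
#!/bin/bash
#PBS -N test_job
#PBS -q standard
#PBS -l walltime=00:10:00
echo "Job running on $(hostname)"
EOF
chmod +x myjob.sh
```

Submit it with qsub myjob.sh; options given on the command line (e.g. qsub -q highmem myjob.sh) override the directives in the script.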


Job management

To see all jobs (running / queued), you can use the qstat command or go to the JobMonArch page (http://ganglia.iihe.ac.be/addons/job_monarch/?c=iihe)

qstat

OPTIONS

-u username : list only jobs submitted by username

-n : show nodes where jobs are running

-q : show how jobs are distributed across the queues


Useful links

Ganglia Monitoring (http://ganglia.iihe.ac.be) : Servers status

JobMonArch (http://ganglia.iihe.ac.be/addons/job_monarch/?c=iihe) : Jobs overview