Cluster

From IIHE Wiki


The custom software area<br>
==== /software/src ====
Sources of software to install<br>
==== /software/icecube ====
Icecube specific tools<br>
===== /software/icecube/ports =====
This folder contains the I3 ports used by icecube (meta-)projects<br>
In order to use it, you must define the environment variable $I3_PORTS<br>
<pre>export I3_PORTS="/software/icecube/ports"
</pre>
This variable is set only for the current session and will be unset after logout. To avoid typing this command each time, you can add it to your .bashrc
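For example, the export line can be appended to your .bashrc so it is defined at every login (a minimal sketch, using the path given above):

```shell
# Persist the I3_PORTS setting: append the export line to ~/.bashrc so that
# every new login shell defines it automatically.
echo 'export I3_PORTS="/software/icecube/ports"' >> ~/.bashrc

# The append does not affect the current session, so set the variable once by hand:
export I3_PORTS="/software/icecube/ports"
```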
'''''List installed ports'''''<br>
<pre>$I3_PORTS/bin/port installed
</pre>
'''''List available ports'''''<br>
<pre>$I3_PORTS/bin/port list
</pre>
'''''Install a port'''''<br>
<pre>$I3_PORTS/bin/port install PORT_NAME
</pre>
<br>
===== /software/icecube/offline-software =====
This folder contains the '''offline-software''' meta-project.<br>
To use it, just run the following command (note the leading dot, which sources the script into your current shell)<br>
<pre>. /software/icecube/offline-software/[VERSION]/env-shell.sh</pre>
Available versions:
*V14-02-00
===== /software/icecube/icerec =====
This folder contains the '''icerec''' meta-project<br>
To use it, just run the following command (note the leading dot, which sources the script into your current shell)<br>
<pre>. /software/icecube/icerec/[VERSION]/env-shell.sh
</pre>
Available versions:<br>
*V04-05-00
*V04-05-00-jkunnen
===== /software/icecube/simulation =====
This folder contains the '''simulation''' meta-project.
To use it, just run the following command (note the leading dot, which sources the script into your current shell)
<pre>. /software/icecube/simulation/[VERSION]/env-shell.sh
</pre>
Available versions:
*V03-03-04
*V04-00-08
*V04-00-09
*V04-00-09-cuda


=== /ice3 ===

Revision as of 11:44, 17 November 2014

IIHE local cluster

Overview

The cluster is composed of 4 machine types:

  • User Interfaces (UI)

This is the cluster front-end: to use the cluster, you need to log into these machines.

Servers: ui01, ui02

  • Computing Element (CE)

This server is the core of the batch system: it runs submitted jobs on the worker nodes.

Servers: ce

  • Worker Nodes (WN)

This is the power of the cluster: the worker nodes run jobs and send their status back to the CE.

Servers: slave*

  • Storage Elements

This is the memory of the cluster: these servers contain data, software, ...

Servers: datang (/data, /software), lxserv (/user), x4500 (/ice3)



How to connect

To connect to the cluster, use your IIHE credentials (the same as for the wifi)

ssh username@icecube.iihe.ac.be

TIP: icecube.iihe.ac.be & lxpub.iihe.ac.be point automatically to an available UI (ui01, ui02, ...)
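As a convenience, you can give the UI a short alias in your SSH client configuration (a sketch; "myusername" is a placeholder for your IIHE login):

```shell
# Append a Host alias to ~/.ssh/config so that "ssh icecube" is enough.
# "myusername" is a placeholder: replace it with your IIHE username.
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host icecube
    HostName icecube.iihe.ac.be
    User myusername
EOF
```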


After a successful login, you'll see this message:

==========================================
Welcome on the IIHE ULB-VUB cluster

Cluster status  http://ganglia.iihe.ac.be
Documentation   http://wiki.iihe.ac.be/index.php/Cluster
IT Help         support-iihe@ulb.ac.be
==========================================

username@uiXX:~$

Your default current working directory is your home folder.


Directory Structure

Here is a description of the most useful directories:

/user/{username}

Your home folder

/data

Main data repository

/data/user/{username}

Your data folder

/data/ICxx

IceCube datasets

/software

The custom software area

/ice3

This folder is the old software area. We strongly recommend building your tools in the /software directory instead

Batch System

Queues

The cluster is divided into several queues:


Queue     | Description                        | CPUs | Walltime default/limit                   | Memory default/limit
any       | default queue, all available nodes | 494  | 144 hours (6 days) / 240 hours (10 days) | 2 Gb
lowmem    |                                    | 88   |                                          | 3 Gb
standard  |                                    | 384  |                                          | 4 Gb
highmem   |                                    | 8    |                                          |
gpu       | GPU's dedicated queue              | 14   |                                          |


Job submission

To submit a job, just use the qsub command:

qsub myjob.sh

OPTIONS

-q queueName : choose the queue (default: any)

-N jobName : name of the job

-I : run the job in interactive mode

-m : mail options

-l : resources options
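These options can also be written inside the job script itself as #PBS directives, so that a plain `qsub myjob.sh` picks them up (a sketch assuming a Torque/PBS-style batch system; the queue, job name, and walltime values are placeholders):

```shell
#!/bin/bash
# Example job script: the #PBS lines are directives read by qsub,
# so "qsub myjob.sh" needs no extra command-line options.
#PBS -q highmem               # target queue (default: any)
#PBS -N my_analysis           # job name shown by qstat
#PBS -l walltime=12:00:00     # request 12 hours of walltime
#PBS -m ae                    # send mail on abort (a) and end (e)

# The body below runs on the worker node chosen by the CE.
msg="Job running on $(hostname)"
echo "$msg"
```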


Job management

To see all jobs (running / queued), you can use the qstat command or go to the JobMonArch page

qstat

OPTIONS

-u username : list only jobs submitted by username

-n : show nodes where jobs are running

-q : show the job repartition on queues
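For example, to check on your own jobs from a UI (a sketch; the guard only makes the snippet fail gracefully on machines without the batch client):

```shell
# List only your jobs together with the nodes they run on,
# then show how jobs are spread over the queues.
if command -v qstat >/dev/null 2>&1; then
    qstat -u "$USER" -n
    qstat -q
else
    echo "qstat not available here: log into a UI (ui01, ui02) first"
fi
```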


Useful links

Ganglia Monitoring : Servers status

JobMonArch : Jobs overview