
MetaCentrum

MetaCentrum is the national grid computing infrastructure of the Czech Republic, operated by CESNET and integrated into the national e-infrastructure. It connects and coordinates high-performance computing clusters, storage facilities, and specialized computing systems across universities and research institutes nationwide.

MetaCentrum has extensive official documentation that we recommend reading. Below we collect some relevant quick information and links.

Create an Account

To create an account, apply using the online form, and you can check the application status here.

After creating an account, please check that you are a member of the cucam and lic_vasp6 groups, which grant access to the cucam queue and to VASP 6+ versions. You can check your groups here. If you do not see them, please contact Lukas or Chris so they can ask MetaCentrum to add you.
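Once you can log in, group membership can also be checked directly from the shell on any frontend (a quick sketch; `id -nG` lists your Unix groups, and the two group names are the ones mentioned above):

```shell
# List all Unix groups of the current user, one per line, and keep
# only the two groups needed for the cucam queue and VASP 6+
id -nG | tr ' ' '\n' | grep -E '^(cucam|lic_vasp6)$' \
  || echo "not a member of cucam/lic_vasp6 yet"
```

If the command prints the fallback message, you are not (yet) in either group.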

Access the Cluster

MetaCentrum provides several frontend nodes that you can SSH into. You will typically use just one or two, but if your preferred frontend is down or slow, you can SSH into any other. Here is a list of all frontends:

Frontend address                             Frontend home
charon.metacentrum.cz                        /storage/liberec3-tul
elmo.metacentrum.cz                          /storage/praha5-elixir
nympha.metacentrum.cz, alfrid.meta.zcu.cz    /storage/plzen1
oven.metacentrum.cz                          /storage/brno2
perian.metacentrum.cz                        /storage/praha1
onyx.metacentrum.cz                          /storage/praha1
skirit.metacentrum.cz                        /storage/brno2
tarkil.metacentrum.cz                        /storage/praha1
tilia.metacentrum.cz                         /storage/pruhonice1-ibot
zenith.metacentrum.cz                        /storage/brno12-cerit

To access MetaCentrum, SSH to one of the frontends:

ssh username@tarkil.metacentrum.cz
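If you log in often, an entry in your ~/.ssh/config saves typing (a minimal sketch; replace username with your MetaCentrum login and tarkil with your preferred frontend):

```
Host tarkil
    HostName tarkil.metacentrum.cz
    User username
```

After this, ssh tarkil is enough.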

Working on MetaCentrum

MetaCentrum uses PBS Pro. For detailed information, see the official documentation.

Submit a Job

Jobs are submitted with qsub job.sh. The qsub assembler can help you build the resource-selection line for your node configuration. Below is a generic example of a submission script:

#!/bin/bash
#PBS -N fps
#PBS -q cucam
#PBS -l select=1:ncpus=24:mem=512gb:scratch_local=128gb
#PBS -l walltime=02:00:00

# Record the node name and scratch path so data can be recovered
# from scratch if the task fails or the job is killed
date                >  $PBS_O_WORKDIR/client.log
uname -n            >> $PBS_O_WORKDIR/client.log
echo "$SCRATCHDIR"  >> $PBS_O_WORKDIR/client.log

# Load a conda environment
source /path/to/your/conda/environment

# Copy input files into the scratch directory on the compute node
cp -r "$PBS_O_WORKDIR"/* "$SCRATCHDIR" || { echo "Copy to scratch failed" >&2; exit 1; }
cd "$SCRATCHDIR" || exit 1

mpirun vasp_std

# Copy output back to the submit directory
cp -r "$SCRATCHDIR"/* "$PBS_O_WORKDIR" || { echo "Copy back failed, data left in $SCRATCHDIR" >&2; exit 1; }

# Clean the SCRATCH directory
clean_scratch
exit 0
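Calling clean_scratch only at the end means it is skipped when the script crashes or the walltime runs out. A common pattern (a sketch; clean_scratch is the same MetaCentrum utility used above) is to register it as a trap near the top of the script instead:

```shell
# clean_scratch is provided on MetaCentrum compute nodes; define a no-op
# fallback so this snippet also runs outside the cluster
command -v clean_scratch >/dev/null 2>&1 || clean_scratch() { :; }

# Run clean_scratch whenever the script exits, including on the SIGTERM
# that PBS sends when the walltime is exceeded
trap 'clean_scratch' TERM EXIT
```

With the trap in place, the explicit clean_scratch call at the end of the script becomes redundant.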

Check Status and Modify a Job

To see the status of your jobs:

qstat -u your_username

Sometimes you might want to extend the walltime of a job that is taking longer than anticipated. You can use the qextend command:

qextend job_number.pbs-m1.metacentrum.cz 01:00:00

Here, job_number.pbs-m1.metacentrum.cz is your full job identifier and 01:00:00 is the time extension in hours:minutes:seconds. Keep in mind that every user has a limited time quota for job extensions.
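Since the extension format is plain hours:minutes:seconds, keeping track of how much of your quota you have used is simple arithmetic. A small illustrative helper (walltime_to_seconds is our own name, not a MetaCentrum command):

```shell
# Hypothetical helper: convert an HH:MM:SS walltime extension to seconds,
# e.g. to sum up how much of your extension quota you have consumed
walltime_to_seconds() {
  IFS=: read -r h m s <<< "$1"
  # 10# forces base-10 so values like 08 are not parsed as octal
  echo $(( 10#$h * 3600 + 10#$m * 60 + 10#$s ))
}

walltime_to_seconds 01:00:00   # prints 3600
```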

You can also delete a running job with:

qdel job_number.pbs-m1.metacentrum.cz