
Computing cluster of the Institute of Botany, Czech Academy of Sciences


The Institute of Botany (IBOT) of the Czech Academy of Sciences (CAS) operates, within the MetaCentrum infrastructure of CESNET, its own computing cluster and data storage based in Průhonice. In order to access these resources, users must be members of the MetaCentrum.


The Institute of Botany & MetaCentrum, usage of the computing resources

Every employee and student at a Czech research institution can get access to the MetaCentrum. MetaCentrum membership is obtained after completing an application. Usage of the computing resources must be carried out strictly in accordance with the MetaCentrum's rules of use. Employees of IBOT enjoy a higher priority on the Průhonice hardware, which means that their analyses will start sooner than non-IBOT requests. However, in order to benefit from this priority one must join the ibot group, which also grants access to the Průhonice data storage. If you wish to be part of the ibot group, please contact the cluster's administrator.

Login to frontend tilia.

MetaCentrum, and therefore the Průhonice cluster, runs in a Linux environment, which means that users must master basic UNIX commands in order to interact with the operating system. The user must prepare in advance a script containing all commands and operations that will be performed during the analysis. The job is executed by submitting the script with the qsub command. To submit a job to a queue on our cluster, the user must add the -q ibot parameter to the qsub command. The node on which the computations will be carried out can be specified with, e.g., -l cluster=carex or -l cluster=draba. Other parameters are available and can be composed with the qsub assembler tool (e.g. qsub -l walltime=1:0:0 -q ibot -l select=1:ncpus=4:mem=4gb:scratch_local=1gb -m abe script.sh).
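For illustration, the script submitted with qsub could look like the following minimal sketch; the data directory, input file and program name (mytool) are placeholders, and the SCRATCHDIR variable, module add command and clean_scratch helper follow common MetaCentrum conventions:

    #!/bin/bash
    # script.sh -- minimal sketch of a job script; paths and program names are placeholders

    # Directory with the input data and for the results (hypothetical path)
    DATADIR=/storage/pruhonice1-ibot/home/"$USER"/project

    # Clean the scratch directory when the job ends, even on error
    trap 'clean_scratch' TERM EXIT

    # Work in the local scratch space requested via scratch_local in qsub
    cd "$SCRATCHDIR" || exit 1

    # Copy the input data to scratch
    cp "$DATADIR"/input.fasta . || exit 1

    # Load the required software from the module system (mytool is a placeholder)
    module add mytool

    # Run the analysis
    mytool input.fasta > output.txt

    # Copy the results back to the storage; keep scratch on failure for debugging
    cp output.txt "$DATADIR"/ || export CLEAN_SCRATCH=false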

Although all the machines support hyperthreading, the queuing system can only manage physical cores. If users want to take advantage of hyperthreading, they need to book the entire node on which the analysis will be carried out, e.g. qsub -l walltime=1:0:0 -q ibot -l select=1:ncpus=8:mem=500gb:scratch_local=1600gb:hyperthreading=True:cluster=carex -l place=exclhost -m abe script.sh or qsub -l walltime=1:0:0 -q ibot -l select=1:ncpus=80:mem=1500gb:scratch_local=1600gb:hyperthreading=True:cluster=draba -l place=exclhost -m abe script.sh. The queueing system then does not prevent the analyses from exceeding the requested CPU resources, which consequently enables the use of hyperthreading.
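Inside such an exclusive job the script may then start more threads than the number of physical cores that were booked; a small sketch, where mytool and its --threads option are placeholders for the user's threaded application:

    # With place=exclhost the whole node is reserved, so all logical CPUs may be used
    LOGICAL_CPUS=$(nproc --all)   # e.g. 16 on a carex node, 160 on a draba node

    # mytool and --threads are placeholders for the user's threaded program
    mytool --threads "$LOGICAL_CPUS" input.fasta > output.txt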

Users without sufficient knowledge of the Linux command line should start by studying, e.g., the Course of work in Linux command line not only for MetaCentrum by Vojtěch Zeisek.

Access to the computing cluster of IBOT for employees and collaborators

  1. Complete the application for the MetaCentrum (you can register under any organization, not solely IBOT).
  2. Join the ibot group by contacting the cluster’s administrator. Membership of this group is not mandatory for using the computing resources, but grants a higher priority.
  3. It is recommended that computational tasks are prepared on the frontend node tilia.ibot.cas.cz (alias tilia.metacentrum.cz), although this is not a requirement.
  4. Data can be stored on the Průhonice data storage (again not a requirement) at /storage/pruhonice1-ibot/.
  5. When submitting to the computing queue, adding the -q ibot parameter to the qsub command ensures that the analysis will be carried out on our cluster, e.g. qsub -l walltime=1:0:0 -q ibot -l select=1:ncpus=1:mem=1gb:scratch_local=1gb -m abe script.sh (see also the sketch after this list).
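For illustration, a submission followed by a check of the job state could look like this sketch (script.sh stands for the user's own job script; qstat is the standard PBS status command):

    # Submit the job to the ibot queue; qsub prints the assigned job ID
    qsub -l walltime=1:0:0 -q ibot -l select=1:ncpus=1:mem=1gb:scratch_local=1gb -m abe script.sh

    # List the current user's jobs and their states (Q = queued, R = running)
    qstat -u "$USER"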

Employees and students from other organizations can submit tasks and run any applications on the Průhonice cluster using the same submission commands and following the same rules as for any other node in the MetaCentrum.

Login to the MetaCentrum infrastructure

All logins to the MetaCentrum infrastructure, including the application (and e.g. ownCloud or FileSender), use EduID:

Login using EduID.

After finding and selecting the Institute of Botany, the user is redirected to a page requesting login with institutional credentials. The user name and password for VERSO are used (without domain):

Login to VERSO of the Institute of Botany, CAS.

If a user from the Institute of Botany does not have a VERSO user name and password, it is necessary to visit https://praha.verso.eis.cas.cz/ and request that the forgotten password be sent (figure below; using the institutional e-mail or personal number). After obtaining the user name and password, the above login screen can be used.

Login to VERSO.

Disk array

Employees and collaborators of IBOT can take advantage of the data storage, with a total capacity of 180 TB, for running analyses on the Průhonice cluster and for long-term storage. The basic disk quota for members of the ibot group is 2 TB. Shared folders with access restricted to specific users can also be set up. The storage is backed up in the CESNET infrastructure. For a disk quota increase or for the creation of shared folders, please contact the administrator.
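To get a rough idea of how much of the 2 TB quota is already used, the standard du utility can be run on one's home directory on the storage (a sketch only; the path follows the layout described in the next section):

    # Summarise the size of the home directory on the Průhonice storage
    du -sh /storage/pruhonice1-ibot/home/"$USER"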

How to access the disk array

Access to the disk array is possible from any node in the MetaCentrum, where it can be found at /storage/pruhonice1-ibot/, with home directories located in /storage/pruhonice1-ibot/home/. The storage can also be accessed through a direct connection by SFTP, SCP or SSHFS (using any client application, such as FileZilla) at tilia-nfs.ibot.cas.cz (alias storage-pruhonice1-ibot.metacentrum.cz).

FileZilla
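A command-line alternative to FileZilla is a direct SFTP session or an SSHFS mount; a sketch, where USERNAME and the local mount point ~/pruhonice are placeholders:

    # Interactive SFTP session to the file server
    sftp USERNAME@tilia-nfs.ibot.cas.cz

    # Or mount the remote home directory locally via SSHFS (empty remote path = home)
    mkdir -p ~/pruhonice
    sshfs USERNAME@tilia-nfs.ibot.cas.cz: ~/pruhonice

    # Unmount when finished
    fusermount -u ~/pruhonice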

Shared folders

Shared folders are located on the storage in /storage/pruhonice1-ibot/shared/GROUP_FOLDER/, accessible from all MetaCentrum frontends and computing nodes. Each shared folder has its own UNIX group, administered in Perun. Any user can be a group administrator. Login uses MetaCentrum credentials; the group admin can then click "Group Manager", "Select Group", "Select VO: MetaCentrum" and select the group whose members should be administered.

All files, regardless of whether they are located in shared folders or in users' home directories, are counted towards the personal quota of the respective user (shared folders do not have a separate quota). If needed, users can request a quota increase.

Files newly created in the directory /storage/pruhonice1-ibot/shared/GROUP_FOLDER/ automatically get the group owner GROUP. If this does not happen, the user must change the group owner with chgrp -R GROUP /storage/pruhonice1-ibot/shared/GROUP_FOLDER or a similar command. It is then necessary to alter the permissions so that the data are accessible exclusively to members of the group, and at the same time fully accessible to all of them, using commands like find /storage/pruhonice1-ibot/shared/GROUP_FOLDER/ -type d -exec chmod 770 '{}' \; for directories and find /storage/pruhonice1-ibot/shared/GROUP_FOLDER/ -type f -exec chmod 660 '{}' \; for files. Because of the security of the files and the need to share them, users must be well aware of UNIX permissions! Wrong permission settings might result in data loss (e.g. through a badly behaving script of another user). These tasks can be done in graphical SFTP clients (e.g. FileZilla, WinSCP), although using the command line is faster and simpler. In case of issues, contact the maintainers.
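The commands above can be wrapped into a short helper script; a sketch only, where GROUP and GROUP_FOLDER must be replaced by the actual UNIX group and shared folder name:

    #!/bin/bash
    # Sketch: restore group ownership and group-only permissions in a shared folder
    GROUP=GROUP                                            # placeholder group name
    SHARED=/storage/pruhonice1-ibot/shared/GROUP_FOLDER    # placeholder folder

    # Everything in the shared folder should belong to the shared group
    chgrp -R "$GROUP" "$SHARED"

    # Directories: full access for owner and group, no access for others
    find "$SHARED" -type d -exec chmod 770 '{}' \;

    # Files: read/write for owner and group, no access for others
    find "$SHARED" -type f -exec chmod 660 '{}' \;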

Hardware overview

Whole rack with all equipment.

This equipment was purchased in 2019 with grants from the Czech Academy of Sciences.

High-performance computing (HPC) nodes

Nodes draba1, draba2 and draba3.

These three nodes (draba1, draba2 and draba3) are primarily intended for running computations requiring massive parallelization, such as Apache Hadoop and Spark. Each node is equipped with 4 Intel Xeon Gold 6230 CPUs (4x 20 cores / 4x 40 threads, 2.1 GHz, turbo 3.9 GHz), 1536 GB RAM and a 1920 GB NVMe disk (RAID 0). If you are interested in using Apache Spark, please contact Yann Bertrand.

Computing nodes carex1-carex6.

Standard computational nodes

Each of the six nodes (carex1 to carex6) is equipped with one AMD EPYC 7261 (Naples) CPU (8 cores / 16 threads, 2.5 GHz, turbo 2.9 GHz), 512 GB RAM and a 1920 GB NVMe disk (RAID 0). They are intended for general-purpose computation.


Data storage

File server and disk array.

The cluster's disk array is a QSAN XCubeSAN with 21x 14 TB 7200 RPM SAS3 disks and an SSD cache (SW Qcache COQ SSD-C and 2x SSD PAH WUSTR6440ASS200). The array can only be accessed through the file server tilia-nfs.ibot.cas.cz (storage-pruhonice1-ibot.metacentrum.cz), which is equipped with one AMD EPYC 7401P (Naples) processor (24 cores, 2 GHz, turbo 3 GHz), 128 GB RAM and a 1920 GB NVMe disk (RAID 0).

Frontend node

The node comprises 4 vCPUs (2.4 GHz), 8 GB RAM and a 50 GB SSD disk. Its usage follows the same procedures as for any other frontend node in the MetaCentrum. It is mainly useful to log in there in order to process data stored on the Průhonice data array and to submit jobs to the Průhonice cluster. It can be reached at tilia.ibot.cas.cz (alias tilia.metacentrum.cz), and home directories are located at /storage/pruhonice1-ibot/home/ on the data array.
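Logging in is done with a standard SSH client and MetaCentrum credentials; a minimal sketch, with USERNAME standing for the MetaCentrum login:

    # Log in to the Průhonice frontend
    ssh USERNAME@tilia.ibot.cas.cz

    # The home directory lives on the Průhonice data array
    pwd    # typically /storage/pruhonice1-ibot/home/USERNAME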

Database server

The server is equipped with 8 vCPUs (2.4 GHz), 32 GB RAM and a 60 GB SSD disk. It is mainly intended for running NoSQL databases such as Neo4j and MongoDB, which are used during Apache Hadoop and Spark computations on the HPC nodes, and for other specialized tasks.

Virtualization servers, where the frontend, the database server and other special servers are hosted.

Administration and support

Events related to the cluster (e.g. shutdowns or seminars) can be seen in the public calendar (ICS).

Mailing list and communication

A mailing list (archive) [cluster-ibot (at) metacentrum (dot) cz] is available; its members are all members of the ibot group (subscription is automatic, using the primary e-mail addresses under which users are registered in the MetaCentrum; the password is the same as for the MetaCentrum). It serves for announcements of news about the cluster as well as for discussion among users. All requests to the cluster administrators should be sent to [cluster (at) ibot (dot) cas (dot) cz].

The cluster calendar can be added in the institutional webmail by clicking on the calendar, adding a calendar from the directory and searching for the name cluster. It will then be displayed among the user's other calendars.

Adding cluster calendar.