Ares supercomputer
Access to Ares
Computing resources on Ares are assigned based on PLGrid computing grants. To perform computations on Ares, you need to obtain a computing grant through the PLGrid Portal and apply for access to the Ares service.
Once your grant is active and you have applied for service access, the request should be accepted within about half an hour. Please report any issues through the Helpdesk.
Available login nodes:
ares.cyfronet.pl
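Access to the login node is via SSH; a minimal example, where `<plgrid-username>` is a placeholder for your own PLGrid login:

```shell
# Replace <plgrid-username> with your PLGrid login.
ssh <plgrid-username>@ares.cyfronet.pl
```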
Nodes details
Ares is built with an InfiniBand EDR interconnect and nodes of the following specification:
| Partition | Number of nodes | CPU | RAM | RAM available for job allocations | Proportional RAM for one CPU | Proportional RAM for one GPU | Proportional CPU for one GPU | Accelerator |
|---|---|---|---|---|---|---|---|---|
| plgrid (includes plgrid-long) | 532 + 256 (if not used by plgrid-bigmem) | 48 cores, 2x Intel(R) Xeon(R) Platinum 8268 CPU @ 2.90 GHz (a total of 4 NUMA nodes, each with 12 cores; SMT/hyperthreading disabled) | 192 GB | 184800 MB | 3850 MB | n/a | n/a | n/a |
| plgrid-bigmem | 256 | 48 cores, 2x Intel(R) Xeon(R) Platinum 8268 CPU @ 2.90 GHz | 384 GB | 369600 MB | 7700 MB | n/a | n/a | n/a |
| plgrid-gpu-v100 | 9 | 32 cores, Intel(R) Xeon(R) Gold 6242 CPU @ 2.80 GHz | 384 GB | 368000 MB | n/a | 46000 MB | 4 | 8x Tesla V100-SXM2 |
Job submission
Ares uses the Slurm resource manager. Jobs should be submitted to the following partitions:
| Name | Time limit | Resource type (account suffix) | Access requirements | Description |
|---|---|---|---|---|
| plgrid | 72 h | -cpu | Generally available. | Standard partition. |
| plgrid-testing | 1 h | -cpu | Generally available. | Only for testing jobs; limited to 1 running or queued job, 2 nodes maximum, high priority. |
| plgrid-now | 12 h | -cpu | Generally available. | Intended for interactive jobs; limited to 1 running or queued job, 1 node maximum, the highest priority. |
| plgrid-long | 168 h | -cpu | Requires a grant with a maximum job runtime of 168 h. | Used for jobs with extended runtime. |
| plgrid-bigmem | 72 h | -cpu-bigmem | Requires a grant with CPU-BIGMEM resources. | Used for jobs requiring an extended amount of memory. |
| plgrid-gpu-v100 | 48 h | -gpu | Requires a grant with GPGPU resources. | GPU partition. |
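As an illustration, a minimal batch script for the plgrid partition might look as follows. The grant name `plgexamplegrant`, the job name, and the executable are placeholders, not real values:

```shell
#!/bin/bash
#SBATCH --job-name=example-job           # hypothetical job name
#SBATCH --partition=plgrid               # standard partition
#SBATCH --account=plgexamplegrant-cpu    # hypothetical grant name + "-cpu" suffix
#SBATCH --time=01:00:00                  # must stay within the partition's 72 h limit
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=3850MB             # proportional RAM per CPU on plgrid nodes

./my_program                             # hypothetical executable
```

Submit the script with `sbatch job.sh`.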
If you are unsure how to properly configure your job on Ares, please consult this guide: Batch system.
Accounts and computing grants
Please familiarize yourself with how to specify an account: Accounts and grants.
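For example, assuming a hypothetical grant named `plgexamplegrant`, the account passed to Slurm is the grant name with the resource-type suffix of the target partition appended:

```shell
# Hypothetical grant name; replace with your own.
sbatch -A plgexamplegrant-cpu -p plgrid job.sh            # CPU job
sbatch -A plgexamplegrant-gpu -p plgrid-gpu-v100 job.sh   # GPU job (requires GPGPU resources)
```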
Billing
General billing process is described here: Billing.
Storage
To avoid problems, use the environment variables instead of full paths to the filesystems.
Current usage, capacity and other storage attributes can be checked by issuing the hpc-fs command, see HPC tools.
Please check if your workload may benefit from using ramdisk or localfs: Storage features.
Note
The localfs feature is not available on Ares.
Important
In the $SCRATCH space, data older than 30 days and job directories older than 7 days are automatically deleted.
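A sketch of a job that stages its data through `$SCRATCH` rather than hard-coded paths, and copies results home before the automatic cleanup removes them. The grant name, input file, and executable are hypothetical:

```shell
#!/bin/bash
#SBATCH --partition=plgrid
#SBATCH --account=plgexamplegrant-cpu   # hypothetical grant name
#SBATCH --time=01:00:00

# Use the $SCRATCH variable rather than a hard-coded filesystem path.
WORKDIR="$SCRATCH/$SLURM_JOB_ID"
mkdir -p "$WORKDIR"
cp input.dat "$WORKDIR"                 # hypothetical input file
cd "$WORKDIR"
./my_program input.dat                  # hypothetical executable
cp results.dat "$HOME/"                 # copy results back before automatic deletion
```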
Software
Software is organized in Modules.
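Modules are handled with the standard `module` command set, for example:

```shell
module avail          # list available software modules
module load gcc       # load a module (name assumed; check `module avail` for exact names)
module list           # show currently loaded modules
module purge          # unload all modules
```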
Compilation should be done on a worker node inside a computing job. It is most convenient to use an interactive job for all compilation and application setup.
The login nodes do not include development libraries!
For the time being, please install your own software in $HOME or in your group directory.
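An interactive shell on a compute node can be requested as sketched below; the grant name is hypothetical, and the plgrid-testing partition is used here because of its high priority:

```shell
# Request a 1-hour interactive shell on one compute node.
# Replace plgexamplegrant with your own grant name.
srun -p plgrid-testing -A plgexamplegrant-cpu -N 1 -n 1 -t 01:00:00 --pty /bin/bash
```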
Support
Keep in mind the general rules.
In case of problems check the Support page.