Partner institutions of the SIVVP national project provide their users with various software. Additional software is made available according to user needs. Here you will find a list of the computational software provided by the individual project partners.
University of Žilina
Slovak University of Technology in Bratislava
Technical University of Košice
Institute of Experimental Physics, Slovak Academy of Sciences, Košice
Institute of Informatics, Slovak Academy of Sciences, Bratislava
Matej Bel University in Banská Bystrica
High performance computing (HPC) means using (super)computers and computing clusters to solve numerically or data intensive problems from various fields of research and technology. Examples include medicine (genetics, "in silico" drug design, …), physics (meteorological and climatological models, nuclear and particle physics, …), chemistry (properties of atoms and molecules, correlation between structure and reactivity, …), but also economics (investment risk, stock growth, …) and many other fields.
Supercomputers are systems with a high level of performance compared to common personal computers. Their performance level is determined with standardized tests, such as the LINPACK (LINear equations software PACKage) benchmark. Computing performance is measured in Flops (floating point operations per second); modern supercomputers perform in the range of teraFlops (10^12 Flops) and petaFlops (10^15 Flops). Aggregate computing power is also influenced by other parameters, such as the latency and throughput of the data networks used for communication between the processors and nodes of the supercomputer, the latency and throughput of communication with data storage, and others. All of these parameters determine the scalability of so-called parallel computations, which use multiple nodes, processors or cores at the same time.
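As a rough illustration (the figure of 4 double-precision operations per core per cycle is an assumption typical for processors of this generation, not a vendor specification), the theoretical peak of a single node with two 6-core processors clocked at 2.93 GHz can be estimated as

2 processors × 6 cores × 2.93×10^9 cycles/s × 4 Flops/cycle ≈ 1.4×10^11 Flops ≈ 140 GFlops,

while the sustained performance measured by a benchmark such as LINPACK is always somewhat lower.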
Parallel computing is the dominant paradigm in modern HPC. Multi-node, multi-processor and multi-core architectures are used by the vast majority of computing resources, with individual computing units communicating with each other and offering their aggregate performance as a single system. The connections are implemented by a high-speed network, such as InfiniBand, 10Gb/s Ethernet or another dedicated interconnect. Architectures of this type can (at least theoretically) be scaled to huge sizes, the only limiting factors being the electrical consumption, cooling and floor space of the data center.
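Programs on such distributed-memory machines typically communicate by message passing, most commonly through the MPI library (an assumption of this sketch; the project text does not prescribe a particular tool). A minimal example of how two processes, possibly placed on different nodes, can estimate the round-trip latency of the interconnect:

```c
/* Hypothetical sketch: measuring point-to-point latency between two MPI
   ranks, e.g. placed on different nodes connected by InfiniBand. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char byte = 0;
    const int reps = 1000;
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        if (rank == 0) {          /* rank 0 sends and waits for the echo */
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {   /* rank 1 echoes the message back */
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("average round-trip latency: %.2f us\n", (t1 - t0) / reps * 1e6);

    MPI_Finalize();
    return 0;
}
```

Built with an MPI compiler wrapper (e.g. mpicc) and launched with at least two processes (e.g. mpirun -np 2 ./a.out), rank 0 prints an estimate of the network round-trip time.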
A computing cluster consists of multiple computers, connected by a high-speed local network, that cooperate so that from the outside they behave like a single homogeneous system. Due to their low cost, clusters are usually used to increase performance and availability. They run software that enables high performance distributed computing, and cluster performance is often further increased with graphics accelerators. High availability (HA) can be implemented through redundancy of the key components of the cluster and ensures continuous operation even when some of its components fail.
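A minimal sketch (again assuming MPI as the distributed-computing software, which the text above does not name explicitly) of how processes spread over the cluster present themselves as parts of one system:

```c
/* Hypothetical sketch: each MPI process reports which cluster node it runs on,
   illustrating many machines acting as one system. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size, name_len;
    char node_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes in total */
    MPI_Get_processor_name(node_name, &name_len);

    printf("process %d of %d running on node %s\n", rank, size, node_name);

    MPI_Finalize();
    return 0;
}
```

Each process reports its rank and the node it was scheduled on, regardless of which physical machine in the cluster that happens to be.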
System: IBM dx360 M3
Number of compute nodes: 46
Processor: 2x 6-core Intel Xeon X5640 @2.27GHz
Memory: 96GB
Hard drive subsystem: 2x 500GB
Compute network: 40Gb/s Infiniband
Operating system: Scientific Linux 6.3
Accelerators: 2x NVIDIA Tesla M2070 (6GB RAM, 448 CUDA cores)
System: IBM iDataPlex dx360
Number of compute nodes: 52
Processor: 2x 6-core Intel Xeon X5670 @2.93GHz
Memory: 48GB
Hard drive subsystem: 1x 2TB
Compute network: 40Gb/s Infiniband
Operating system: Scientific Linux 6.4
Accelerators: 2x NVIDIA Tesla M2050 (6GB RAM, 448 CUDA cores)
System: IBM Blade system x HS22
Number of compute nodes: 24
Processor: 2x 6-core Intel Xeon X5640 @2.27GHz
Memory: 48GB
Hard drive subsystem: 500GB
Compute network: 40Gb/s Infiniband, 10Gb/s Ethernet
Operating system: Scientific Linux 6.3
System: IBM Blade system x HS23
Number of compute nodes: 19
Processor: 2x 8-core Intel Xeon E5-2650 @2.6GHz
Memory: 64GB
Hard drive subsystem: 500GB
Compute network: 40Gb/s Infiniband, 10Gb/s Ethernet
Operating system: Scientific Linux 6.3
System: IBM iDataPlex dx360 M3
Number of compute nodes: 2
Processor: 2x 6-core Intel Xeon X5640 @2.27GHz
Memory: 24GB
Hard drive subsystem: 2x 500GB
Compute network: 40Gb/s Infiniband
Operating system: Linux
Accelerators: 2x NVIDIA Tesla M2070 (6GB RAM, 448 CUDA cores)
System: IBM iDataPlex dx360 M4
Number of compute nodes: 1
Processor: 2x 8-core Intel Xeon E5-2650 @2.6GHz
Memory: 64GB
Hard drive subsystem: 2x 500GB
Compute network: 40Gb/s Infiniband
Operating system: Linux
Accelerators: 1x NVIDIA Tesla K20 (5GB RAM, 2496 CUDA cores)
System: IBM iDataPlex dx360 M3
Number of compute nodes: 30
Processor: 2x 6-core Intel Xeon X5640 @2.27GHz
Memory: 4GB
Hard drive subsystem: 2x 500GB
Compute network: 40Gb/s Infiniband
Operating system: Linux
Accelerators: 16x NVIDIA Tesla M2070 (6GB RAM, 448 CUDA cores), 10x NVIDIA Tesla K20 (5GB RAM, 2496 CUDA cores)
System: IBM iDataPlex dx360 M3
Number of compute nodes: 54
Processor: 2x Intel Xeon E5645 @2.4GHz
Memory: 48GB
Hard drive subsystem: 2x 500GB
Compute network: 40Gb/s Infiniband, 10Gb/s Ethernet
Accelerators: 2x NVIDIA Tesla M2070 (6GB RAM, 448 CUDA cores)
System: IBM iDataPlex dx360 M4
Number of compute nodes: 8
Processor: 2x Intel Xeon E5-2670 @2.6GHz
Memory: 64GB
Hard drive subsystem: 2x 500GB
Compute network: 40Gb/s Infiniband, 10Gb/s Ethernet
Accelerators: 2x NVIDIA Tesla K20 (5GB RAM, 2496 CUDA cores)
System: IBM iDataPlex x3550 M4
Number of compute nodes: 1
Processor: 1x Intel Xeon E5-2640 @2.5GHz
Memory: 8GB
Hard drive subsystem: 2x 500GB
Compute network: 40Gb/s Infiniband, 10Gb/s Ethernet
System: IBM iDataPlex dx360 M3
Number of compute nodes: 24
Processor: 2x Intel Xeon X5670 @2.93GHz
Memory: 48GB
Hard drive subsystem: 2x 500GB
Compute network: 40Gb/s Infiniband
Operating system: Linux
Accelerators: 2x NVIDIA Tesla M2070 (6GB RAM, 448 CUDA cores)
System: IBM iDataPlex dx360 M4
Number of compute nodes: 8
Processor: 2x Intel Xeon E5-2670
Memory: 64GB
Hard drive subsystem: 2x 500GB + 2x 900GB
Compute network: 40Gb/s Infiniband
Operating system: Linux
System: IBM iDataPlex dx360 M4
Number of compute nodes: 6
Processor: 2x Intel Xeon E5-2670
Memory: 128GB
Hard drive subsystem: 2x 900GB
Compute network: 40Gb/s Infiniband
Operating system: Linux
System: IBM iDataPlex dx360 M4
Number of compute nodes: 3
Processor: 2x Intel Xeon E5-2670
Memory: 64GB
Hard drive subsystem: 2x 900GB
Compute network: 40Gb/s Infiniband
Operating system: Linux
Accelerators: 2x NVIDIA Tesla K20 (5GB RAM, 2496 CUDA cores)
Number of cores: 4096
Memory space: 12144GB
Hard drive space: 483.2TB
Massively parallel processing (MPP) means running complex scientific calculations on a large number of computing cores. Unlike in cluster computing, the nodes are more densely integrated and better optimized for distributed computations. The higher density and number of processors, together with faster connections between computing nodes, ensure better and faster communication between processes and higher aggregate performance. MPP is appropriate for the most demanding scientific calculations with a large amount of communication between processes.
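A typical communication pattern in such tightly coupled codes is a collective operation in which every process takes part. A minimal sketch using MPI (an illustrative choice, not a statement about the software installed on the system below):

```c
/* Hypothetical sketch: a global sum across all processes, the kind of tightly
   coupled collective communication that MPP systems are optimized for. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* each process contributes a partial result (here simply its rank + 1) */
    double partial = rank + 1.0;
    double total = 0.0;

    /* combine the partial results from every process into a global sum */
    MPI_Allreduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d processes: %.0f\n", size, total);

    MPI_Finalize();
    return 0;
}
```

Collectives like this run in every iteration of many solvers, which is why the latency and throughput of the internal links matter so much on MPP systems.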
System: IBM Power 775
Number of computing cores: 4096
Memory: 32TB
Compute network: 5-48GB/s internal optical links, 10Gb/s Ethernet connection to the data storage
Capacity of external data storage: 600TB
Capacity of internal data storage: 300TB
Operating system: AIX
Aurel (IBM Power 775)
Shared memory processing (SMP) means using one of the various implementations of shared memory. Shared memory allows multiple processes or threads to access a common memory space without the need for a network. SMP computing jobs are usually run on a single node, where they are split into threads and parallelized across the processors. SMP architectures are appropriate for jobs that require large amounts of memory, access it frequently and exploit thread-level parallelism.
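A common way to express thread-level parallelism on a single node is OpenMP (an illustrative choice; other shared-memory models exist). A minimal sketch of a loop whose iterations are divided among the threads of one node, all working on the same arrays in shared memory:

```c
/* Hypothetical sketch: thread-level parallelism over shared memory with OpenMP;
   all threads of one node read and write the same arrays without any network. */
#include <omp.h>
#include <stdio.h>

#define N 10000000

int main(void) {
    static double a[N], b[N];
    double sum = 0.0;

    /* the loop iterations are split among the threads of a single node;
       the reduction clause combines the per-thread partial sums */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; i++) {
        a[i] = 0.5 * i;
        b[i] = 2.0;
        sum += a[i] * b[i];
    }

    printf("max threads: %d, dot product: %e\n", omp_get_max_threads(), sum);
    return 0;
}
```

Compiled with OpenMP support (e.g. gcc -fopenmp), the number of threads can be controlled with the OMP_NUM_THREADS environment variable.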
Type 1
System: IBM Power 755/750
Number of compute nodes: 18
Processor: 4x 8-core Power7 @3.3GHz
Memory per compute node: 128GB
Hard drive subsystem: 6x 600GB
Compute network: 40Gb/s Infiniband
Operating system: SUSE 11.3 Linux
Type 2
System: IBM Power 755/750
Number of compute nodes: 2
Processor: 4x 8-core Power7 @3.3GHz
Memory per compute node: 256GB
Hard drive subsystem: 6x 600GB
Compute network: 40Gb/s Infiniband
Operating system: SUSE 11.3 Linux
Type 3
System: IBM Power 780
Number of compute nodes: 1
Processor: 8x 8-core Power7 @3.3GHz
Memory per compute node: 1024GB
Hard drive subsystem: 6x 600GB
Compute network: 40Gb/s Infiniband
Operating system: SUSE 11.3 Linux