This infrastructure provides access to high‑performance computing (HPC) resources for research, development, and innovation. It offers massive computing power to solve complex problems, run simulations, process large volumes of data, or develop and train Artificial Intelligence models across different scientific and technological fields.

The infrastructure is made available to both the public and private sectors, as well as academic institutions, technological centres, and research organisations… 

The individuals who directly benefit from the high‑performance computing service are those who need to process large amounts of data, run highly complex simulations that exceed the capacity of conventional computers, design new materials, perform genomic analysis, or develop and train Artificial Intelligence models.

This high‑performance computing service uses cutting‑edge technology optimised to deliver maximum performance in intensive computational workloads.

Workload and queue management system (Workload Manager): Ensures fair and efficient distribution of resources. This tool is the brain of the cluster, allowing users to submit jobs, specify their resource needs (number of cores, memory, runtime), and prioritise task execution.
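As an illustration, on schedulers such as Slurm a job is typically described in a batch script that declares exactly the resources mentioned above. The document does not name the specific workload manager used here, so the directive syntax below is an assumption:

```shell
#!/bin/bash
# Hypothetical Slurm-style batch script; the directive names assume Slurm,
# which this service has not confirmed as its workload manager.
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --nodes=2                 # number of compute nodes
#SBATCH --ntasks-per-node=32      # tasks (cores) per node
#SBATCH --mem=64G                 # memory per node
#SBATCH --time=04:00:00           # maximum runtime (walltime)

# Launch the application across all allocated tasks
srun ./my_simulation
```

In the usual Slurm workflow the script would be submitted with `sbatch job.sh`, and the queue inspected with `squeue`.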

Parallel file system: A parallel file system is used to accelerate data access. It allows multiple compute nodes to access data simultaneously and with massive bandwidth, eliminating input/output (I/O) bottlenecks.

Parallelisation libraries and frameworks: The development and use of HPC applications are supported through optimised standards and libraries. The infrastructure supports the following technologies for the development and execution of parallel code:

  • MPI (Message Passing Interface): The de facto standard for distributed programming, enabling processes on different nodes to communicate and collaborate in solving a problem.

  • OpenMP (Open Multi-Processing): A shared‑memory programming model that simplifies the parallelisation of loops and code regions on a single node with multiple processors.

  • CUDA and OpenCL: To fully leverage the power of GPUs, we offer support for these frameworks, which allow developers to program directly on graphics processors and massively accelerate computations.
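The three technologies above can be sketched as typical compile-and-run invocations. The wrappers `mpicc` and `nvcc` are the standard tools shipped with MPI distributions and the CUDA toolkit; the source file names are placeholders, and the exact setup on this cluster is an assumption:

```shell
# MPI: compile with the MPI wrapper and launch 64 communicating processes
mpicc -O2 solver.c -o solver
mpirun -np 64 ./solver

# OpenMP: enable shared-memory threading at compile time (GCC flag),
# then control the thread count per node via the environment
gcc -O2 -fopenmp stencil.c -o stencil
OMP_NUM_THREADS=32 ./stencil

# CUDA: compile device code with nvcc and run on a node with a GPU
nvcc -O2 kernel.cu -o kernel
./kernel
```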

Execution of containerised applications: To ensure portability, reproducibility, and security of HPC applications, the infrastructure supports container use through Apptainer technology. This platform is specifically designed for high‑performance computing environments, allowing users to package an application and all its dependencies (libraries, frameworks, etc.) into a single portable file.
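A minimal Apptainer workflow looks like the following; the image name, definition file, and bind paths are illustrative, not paths on this service:

```shell
# Build a single portable image file (.sif) from a definition file
apptainer build myapp.sif myapp.def

# Run the containerised application, binding a host data directory inside
apptainer exec --bind /scratch/$USER/data:/data myapp.sif ./run_analysis

# GPU jobs can expose the host's NVIDIA driver to the container
apptainer exec --nv myapp.sif python train.py
```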

Programming environments and compilers: A complete environment is provided for developers, including high‑performance compilers from the GNU Compiler Collection (GCC) and Intel for C, C++, and Fortran, among other languages. Optimised scientific libraries such as BLAS, LAPACK, and FFTW are also available for efficient matrix manipulation and Fourier transforms.
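Linking against these libraries is an ordinary compile step. The library names below are the conventional ones (OpenBLAS as a BLAS implementation, FFTW 3) and have not been confirmed for this cluster:

```shell
# Compile C programs against optimised BLAS/LAPACK and FFTW
gcc -O2 matmul.c -o matmul -lopenblas -llapack
gcc -O2 transform.c -o transform -lfftw3 -lm

# With the Intel compiler, the MKL maths libraries are enabled by a flag
icx -O2 -qmkl matmul.c -o matmul
```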

Secure access and connectivity: To guarantee the confidentiality and integrity of data, external access to our HPC resources is carried out using secure protocols. Users can connect to the service network via VPN (Virtual Private Network), creating an encrypted tunnel that protects all communication between their workstation and the cluster. Once inside the network, access to login nodes is provided through SSH (Secure Shell), a protocol that allows secure and authenticated communication for remote command execution, file transfer, and job management, ensuring that only authorised users can interact with the system.
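Once the VPN tunnel is established, a typical session combines SSH for interactive work with scp or rsync for file transfer. The hostname and paths below are placeholders, not the service's real addresses:

```shell
# Connect to a login node (hostname is a placeholder)
ssh username@login.hpc.example.org

# Copy input data to the cluster, then retrieve results
scp input.dat username@login.hpc.example.org:/scratch/username/
rsync -avz username@login.hpc.example.org:results/ ./results/
```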

The high‑performance computing service is built on a robust and scalable infrastructure designed to ensure maximum availability and performance.

Data Processing Centre (Datacenter): The supercomputing cluster is hosted in the Government of Navarre’s Data Processing Centre, a secure and controlled environment for the hardware, equipped with advanced cooling systems, uninterruptible power supply (UPS), and fire protection systems.

Compute nodes: The core of the infrastructure is the compute nodes. These nodes are interconnected and configured for massive parallel workloads, including both nodes with traditional processors (CPU) and specialised nodes with hardware accelerators (GPU).

Storage systems: We offer different storage tiers to optimise data access:

  • High‑performance storage: For working data and temporary files, with a Lustre parallel file system that provides very high read/write speeds.

  • Long‑term storage: For permanent storage of results and project data, offering large capacity and backup systems.
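In practice this two‑tier layout means jobs read and write on the fast scratch space, and results are moved to long‑term storage once they finish. The directory names below are illustrative:

```shell
# Stage input data onto the high-performance (Lustre) scratch space
cp -r $HOME/project/input /scratch/$USER/run01/

# ... run the job against /scratch/$USER/run01 ...

# Archive results to long-term project storage after the job completes
mv /scratch/$USER/run01/output /projects/myproject/results/
```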

High‑speed, low‑latency network: The internal network infrastructure is based on low‑latency InfiniBand and Omni‑Path fabrics, enabling extremely fast communication between compute nodes and storage systems. This network is essential for the performance of applications requiring a high degree of parallelisation.

External network connection: We provide high‑capacity external network connections through RedIRIS, enabling the transfer of large volumes of data to and from the supercomputing service.

  • Gobierno de Navarra

  • Red Española de Supercomputación

  • NAITEC

  • Navarrabiomed

  • ADItech

  • Universidad Pública de Navarra

  • CIMA Universidad de Navarra

  • Polo de Innovación Digital de Navarra IRIS

  • Brocade

  • Cornelis Networks

  • Dell

  • H3C

  • Hitachi

  • HPE

  • Intel

  • Lenovo

  • NVIDIA