
Our customer is a university-affiliated research network with international partner institutions in the Mediterranean region. Driven by growing demands from data-intensive research, simulation, and visualization workloads, the IT department was looking for a high-performance, sovereign infrastructure outside the public cloud. The decision against a hyperscale cloud solution was made primarily out of concerns about data sovereignty, access rights, and the long-term availability of expandable hardware.
In addition to complete control over all infrastructure components, it was crucial that the system could be expanded and maintained independently as needed, without relying on third-party providers. Furthermore, the architecture had to be able to handle high write loads and large data volumes efficiently on a sustained basis.
Project period: Q2/2025
Project volume: approx. 370,000 €
Project Description
The aim of the project was to provide a powerful, future-proof server infrastructure to support data-intensive research and analysis tasks. The institution originally planned to use a cloud solution, but deliberately opted for a local infrastructure in order to:
- retain complete control over data and infrastructure
- become independent of individual providers and their pricing models
- create expandable resources for future requirements
- ensure maximum performance for parallel data access and computing tasks
- handle extreme write loads locally without relying on restrictive or expensive cloud storage solutions
Particular attention was paid to protecting sensitive research data, to the write endurance of the storage media used, and to long-term hardware compatibility.
Project Implementation
Memorysolution implemented and delivered a customized server solution based on its in-house Mustang Systems platform, consisting of:
Compute Nodes
- 4× Supermicro SuperServer SYS-620H-TN12R
- Each equipped with 2× Intel Xeon Gold 6342 CPUs (24C/48T, 2.80 GHz)
- 768 GB DDR4-3200 ECC RAM per node, with 20 free DIMM slots for expansion
- 6× 3.84 TB Samsung PM897 SSDs per system for fast, parallel data access
- Integration of one NVIDIA RTX A4000 GPU per node for graphics-intensive research or visualization tasks
- Dual-port 10GBase-T network via AIOM/OCP module for redundancy and performance
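Taken together, the four compute nodes provide a substantial resource pool. The following back-of-the-envelope sketch is illustrative only and simply totals the cores, memory, and node-local flash from the spec list above across the cluster.

```python
# Illustrative aggregate of the four compute nodes described above.
# All figures come from the spec list; no additional assumptions.

NODES = 4
CPUS_PER_NODE = 2
CORES_PER_CPU = 24            # Intel Xeon Gold 6342
RAM_GB_PER_NODE = 768         # DDR4-3200 ECC
DATA_SSDS_PER_NODE = 6
SSD_CAPACITY_TB = 3.84        # Samsung PM897

total_cores = NODES * CPUS_PER_NODE * CORES_PER_CPU            # 192 physical cores
total_threads = 2 * total_cores                                # 384 threads with Hyper-Threading
total_ram_tb = NODES * RAM_GB_PER_NODE / 1024                  # 3.0 TB RAM
node_flash_tb = NODES * DATA_SSDS_PER_NODE * SSD_CAPACITY_TB   # ~92 TB raw node-local flash

print(f"{total_cores} cores / {total_threads} threads, "
      f"{total_ram_tb:.1f} TB RAM, {node_flash_tb:.1f} TB raw node-local flash")
```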
Storage Infrastructure & Additional Nodes
- OS storage: 2× 480 GB SSDs per node (Samsung PM897)
- SAS controller with CacheVault supercap for protection against data loss in the event of a power failure
- Supplemented by additional storage systems with:
  - 6× KIOXIA PM7-V SAS4 SSDs at 6.4 TB (3 DWPD)
  - 21× KIOXIA PM7-V SAS4 SSDs at 3.2 TB (3 DWPD)
- All KIOXIA SSDs feature power loss protection, end-to-end data protection, and self-encrypting drive (SED) support
- RAID management via Broadcom MegaRAID 9560-8i controller including CacheVault module
- All systems redundantly connected and prepared for later GPU or memory expansion
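To put the 3 DWPD rating of the KIOXIA PM7-V drives into perspective, the following rough sketch translates the rating into a sustainable daily write volume and a total write budget for the drive set listed above. It assumes the five-year warranty period typical for enterprise SAS SSDs; that figure is an assumption, not taken from the project documentation.

```python
# Rough write-endurance estimate for the KIOXIA PM7-V drive set listed above.
# The 5-year warranty period is an assumption typical for enterprise SAS SSDs,
# not a figure from the project documentation.

DWPD = 3                       # drive writes per day (mixed-use rating)
WARRANTY_YEARS = 5             # assumed warranty period

drives = [
    {"capacity_tb": 6.4, "count": 6},
    {"capacity_tb": 3.2, "count": 21},
]

raw_capacity_tb = sum(d["capacity_tb"] * d["count"] for d in drives)            # 105.6 TB raw
daily_write_budget_tb = raw_capacity_tb * DWPD                                  # ~317 TB/day
lifetime_write_budget_pb = daily_write_budget_tb * 365 * WARRANTY_YEARS / 1000  # ~578 PB

print(f"Raw capacity: {raw_capacity_tb:.1f} TB")
print(f"Sustainable writes: {daily_write_budget_tb:.0f} TB/day "
      f"(~{lifetime_write_budget_pb:.0f} PB over {WARRANTY_YEARS} years)")
```

Under these assumptions, even sustained multi-terabyte daily write workloads stay well within the endurance budget of the array.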
Result
With the new server architecture, the research facility now has a high-performance, locally operated infrastructure that flexibly meets both current and future requirements:
- Higher computing power and I/O throughput compared to previous cloud configurations
- Full data and access control, independent of third-party providers
- Future-proof thanks to modular expandability and open standards
- Better risk management, with no exposure to account lockouts and no dependence on cloud service availability or vendor contracts
- Sustainable investment, as components can be stocked and replaced in-house
- High write endurance thanks to KIOXIA SSDs rated at 3 DWPD, optimized for write-intensive research scenarios
Particularly noteworthy: the decision in favor of Memorysolution was made not least because of the technical characteristics of the KIOXIA drives used. No competing offer provided comparable write performance and data retention reliability combined with the same degree of platform openness.