
Part III: Digital Sovereignty in Practice
What companies need to do to regain control over data, systems, and infrastructure.
Digital sovereignty has long been an issue primarily in the public sector. It is now also gaining importance in the private sector—especially where sensitive data must be processed, regulatory requirements met, or technological dependencies reduced.
This article examines what digital sovereignty actually means, which legal and technical frameworks are relevant, and how sovereign IT architectures can be realistically implemented today.
From Political Ideal to Operational Challenge
For a long time, the topic of digital sovereignty was primarily anchored in the public sector. Authorities, administrations, and publicly funded IT projects dealt with issues of data sovereignty, access rights, and legal jurisdictions early on – often driven by procurement guidelines, data protection requirements, or political expectations.
For many private-sector companies, however, the topic was initially not a priority. Criteria such as scalability, time-to-market, and operating costs dominated. The choice of a global hyperscaler was usually pragmatic – regulatory risks and legal dependencies played a subordinate role.
That is changing.
With the increasing use of sensitive data, the integration of AI systems, growing requirements for traceability, and the discussion about extraterritorial access (e.g., through the CLOUD Act), digital sovereignty is also becoming more important for companies in research, industry, healthcare, and finance.
Today, infrastructure decisions can no longer be made solely on the basis of performance and cost.
Jurisdiction, operational sovereignty, and controllability are becoming fixed parameters—not only in risk management, but also in strategic IT planning.
Legal Pressure: CLOUD Act, EUCS, GAIA-X
The CLOUD Act – a transatlantic weak link
Since 2018, the CLOUD Act has required US technology companies to disclose stored data to US authorities on request – even if that data is located outside the US, for example in European data centers.
This means that the physical storage location does not protect against access by US authorities if the company is subject to US law.
For many organizations, this is a fundamental problem – and not just in theory: as soon as sensitive or personal data is involved, such access conflicts with European data protection standards. The legal risk is difficult to quantify – and, in the worst case, impossible to control.
EUCS & EU Cloud Rulebook – New Rules for Providers
In response, the EU is working intensively on a certification scheme for cloud services: the EUCS (European Cybersecurity Certification Scheme for Cloud Services).
The aim is to create a uniform level of security for cloud offerings – while also incorporating requirements relating to legal jurisdiction, operation, and control.
The highest level of trust (Level 3) stipulates, among other things:
- Operation exclusively in the EU
- Administration by EU personnel
- No subjection to non-European legal systems
For many US-based providers, this level is virtually unattainable, as they are structurally tied to parent companies in the US.
Even the establishment of European subsidiaries is not always sufficient to ensure complete legal separation.
GAIA-X – European Standards Instead of Monoliths
Parallel to regulation, the GAIA-X initiative is pursuing a different approach: it does not want to be a provider, but rather a framework for a federated, interoperable, and traceable cloud ecosystem.
Instead of building new monoliths, providers, operators, and users are meant to develop common standards – for example, for:
- Transparency of dependencies
- Portability of data and workloads
- Trustworthy identities and certifications
The first GAIA-X labels have been awarded, tools such as the Cloud Data Engine for testing services exist, and GAIA-X is increasingly considered a requirement, especially in research and administrative projects.
Addendum: EU Data Act – New Obligations for Interoperability
With the EU Data Act, which entered into force in 2024, the EU is also addressing the portability and usability of data – including in cloud infrastructures. Providers will be required to enable data portability and easy switching between cloud services.
For IT strategies, this means that technological sovereignty will not merely be an option in the future – it will increasingly be required by law, particularly with regard to interoperability and the prevention of lock-in.
Sovereignty Means Control Over Data, Rights, and Operations
Digital sovereignty is not a state that is achieved once and then secured. It results from several factors – and must be actively shaped. Three levels are crucial in this regard:
1. Legal Control: Which Rules Apply in which Legal Jurisdiction?
- Is data processing subject exclusively to European law?
- Is there potential for access by third countries – directly or via provider relationships?
- Can compliance and audit requirements be demonstrably met?
In regulated industries in particular, it is crucial to know where data is processed—and who would have legal or technical access in the worst case.
2. Technological Control: How Transparent and Independent is the Infrastructure?
- Is the software stack traceable, documented, and auditable?
- Are there dependencies on proprietary APIs, licensing models, or platform mechanisms?
- Can the system be developed further in a modular fashion, or is it tied to a specific provider?
Transparent, open technologies reduce risks—not only legally, but also in terms of operation and further development.
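Whether such dependencies exist can be checked partly automatically, for instance by auditing a dependency manifest against a list of accepted open-source licenses. A minimal sketch – the allowlist and manifest format are illustrative, not a recommendation; in practice this data would come from an SBOM tool:

```python
# Illustrative allowlist of open-source licenses (SPDX identifiers).
OPEN_LICENSES = {"Apache-2.0", "MIT", "BSD-3-Clause", "GPL-3.0-only", "LGPL-3.0-only"}

def audit_dependencies(manifest: dict[str, str]) -> list[str]:
    """Return the names of dependencies whose license is not on the allowlist.

    `manifest` maps package name -> SPDX license identifier, e.g. as
    exported by a software bill of materials (SBOM) tool.
    """
    return sorted(
        name for name, license_id in manifest.items()
        if license_id not in OPEN_LICENSES
    )

manifest = {
    "ceph": "LGPL-3.0-only",
    "kubernetes-client": "Apache-2.0",
    "vendor-sdk": "Proprietary",
}

# Only the proprietary component is flagged for review.
assert audit_dependencies(manifest) == ["vendor-sdk"]
```

Run regularly in CI, such a check keeps the dependency picture visible instead of letting lock-in accumulate unnoticed.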
3. Operational Control: Who Manages, Operates, and Secures the Systems?
- Are the systems operated by internal staff – or by external service providers with global access?
- Are there protective mechanisms such as client separation, key sovereignty, or data localization?
- How traceable and documented are administrative access and processes?
Sovereignty does not mean operating everything yourself – but it does require the ability to control operations and responsibilities in a targeted manner and to secure them in a traceable way.
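The requirement that administrative access be traceable can be supported technically, for example with tamper-evident logging. The following is a minimal sketch, not a production audit system (field names and actors are illustrative): each entry carries a hash over its own content and the previous entry's hash, so any retroactive modification breaks the chain.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only log in which each entry is chained to its predecessor
    via a SHA-256 hash, making silent edits detectable."""

    def __init__(self):
        self.entries = []

    def record(self, actor: str, action: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "prev_hash": prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; returns False if any entry was altered."""
        prev_hash = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev_hash:
                return False
            payload = {k: v for k, v in entry.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()
            if digest != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

log = AuditLog()
log.record("admin@example.org", "rotated storage encryption key")
log.record("admin@example.org", "granted read access to project X")
assert log.verify()

# Tampering with a past entry is detected:
log.entries[0]["action"] = "something else"
assert not log.verify()
```

A real deployment would additionally sign entries and ship them to write-once storage, but the principle – traceability by construction rather than by policy alone – is the same.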

Technological Levers for Sovereign Architectures
Digital sovereignty cannot be achieved through guidelines or declarations of intent alone – it requires concrete technical decisions. Two approaches have proven themselves in practice: on-premise infrastructures with complete operational sovereignty and cloud models with targeted legal and operational safeguards.
On-premise: Maximum Control, Maximum Responsibility
The classic path to digital sovereignty remains in-house operation. On-premise infrastructures offer:
- Full control over data flows, storage locations, and administrative access
- No dependence on non-European jurisdictions
- High integration capability with existing security, network, or legacy systems
The price of this control is clear: responsibility for operation, maintenance, physical security, and compliance lies entirely with the operator. This makes on-premise particularly attractive where regulatory requirements are strict, sensitive data is processed, or operations can be planned for the long term.
Sovereign Cloud Offerings: Security Without Full In-house Operation
For many organizations, complete in-house operation is not realistic – whether for resource reasons or due to limited scalability. Sovereign cloud offerings offer an alternative here.
Typical approaches:
- European providers such as IONOS, Swisscom, or the Open Telekom Cloud
- Specialized partnerships, e.g., Google's "Sovereign Cloud" in collaboration with T-Systems
- Self-operated open source stacks based on Kubernetes, OpenStack, or Nextcloud
Large hyperscalers are also responding: Microsoft has announced with its "EU Data Boundary" that customer data from EU users will be processed exclusively in the EU and managed only by EU personnel. However, it remains to be seen whether this commitment will be legally enforceable in the event of a conflict.
The key point: sovereignty in the cloud does not mean giving up the advantages of scale – it means actively shaping the conditions of use.
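Shaping the conditions of use can also be automated: a provisioning pipeline can refuse resources whose region or operating entity falls outside an agreed policy. A minimal sketch – the policy values, region names, and metadata format are purely illustrative assumptions, not any provider's actual API:

```python
from dataclasses import dataclass

# Illustrative policy: which regions and operator jurisdictions are acceptable.
ALLOWED_REGIONS = {"eu-central-1", "eu-west-1", "de-fra-1"}
ALLOWED_OPERATOR_JURISDICTIONS = {"DE", "FR", "NL", "CH"}

@dataclass
class ResourceRequest:
    service: str
    region: str
    operator_jurisdiction: str  # legal home of the operating entity

def check_sovereignty_policy(req: ResourceRequest) -> list[str]:
    """Return a list of policy violations; an empty list means compliant."""
    violations = []
    if req.region not in ALLOWED_REGIONS:
        violations.append(f"region '{req.region}' is outside the allowed set")
    if req.operator_jurisdiction not in ALLOWED_OPERATOR_JURISDICTIONS:
        violations.append(
            f"operator jurisdiction '{req.operator_jurisdiction}' is not permitted"
        )
    return violations

ok = ResourceRequest("object-storage", "eu-central-1", "DE")
bad = ResourceRequest("object-storage", "us-east-1", "US")

assert check_sovereignty_policy(ok) == []
assert len(check_sovereignty_policy(bad)) == 2
```

Embedded as a gate in infrastructure-as-code workflows, such a check turns legal requirements into an enforceable technical constraint rather than a manual review step.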
Case Study: croit GmbH – Sovereignty Through Open-source Storage Infrastructure
We primarily support projects in which on-premise infrastructures are strategically desirable and technically necessary. The reasons for this vary – often it is a matter of regulatory requirements, depth of integration, performance, or long-term operational sovereignty.
A concrete example: the project with croit GmbH.
The goal was to build a sovereign, high-performance storage architecture for data-intensive customers – including universities, research institutions, and companies with particularly high requirements for data sovereignty, scalability, and openness.

The technical implementation was based on the open-source technologies Ceph and DAOS – supplemented by high-performance server and storage platforms from Memorysolution. Among other things, the following were used:
- Supermicro servers with AMD EPYC CPUs, Intel Xeon Scalable Gen 3, and Optane Persistent Memory
- NVMe drives with up to 30.72 TB per node
- Over 200 individually configured systems from the Mustang Systems series
- An additional 120 systems for accompanying co-location and university projects
The result: a scalable, completely manufacturer-independent storage environment with over 6 TB/s bandwidth in the IO500 benchmark – documentable, auditable, modularly expandable.
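For a sense of scale, the headline numbers above can be turned into rough per-node figures. This is only a back-of-envelope sketch using the figures from the text; real usable capacity and throughput depend on replication or erasure coding, drive counts per node, and network topology:

```python
# Rough back-of-envelope figures derived from the numbers in the text.
NODES = 200                  # individually configured systems
NVME_PER_NODE_TB = 30.72     # NVMe capacity per node (upper bound from text)
AGGREGATE_BW_TB_S = 6.0      # IO500 aggregate bandwidth (lower bound from text)

raw_capacity_pb = NODES * NVME_PER_NODE_TB / 1000
per_node_bw_gb_s = AGGREGATE_BW_TB_S * 1000 / NODES

print(f"Raw NVMe capacity: ~{raw_capacity_pb:.1f} PB")
print(f"Implied per-node bandwidth: ~{per_node_bw_gb_s:.0f} GB/s")
```

The implied ~30 GB/s per node is plausible for a handful of modern NVMe drives per system, which is what makes the aggregate figure credible without exotic hardware.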
Conclusion: Sovereignty is Not an End in Itself – but a Strategic Advantage
Digital sovereignty does not mean rejecting innovation or modern technologies – it means consciously determining the conditions under which they are used.
The question of operational sovereignty, data control, and legal protection is becoming increasingly important, especially in data-intensive, security-relevant, or research-critical environments. Those who rely on open systems, transparent architectures, and clearly defined operating models will achieve long-term independence—without having to sacrifice scalability or performance.
Outlook for Part 4
The next article will focus on the technological implementation of hybrid infrastructures: How can cloud and on-premise be meaningfully combined? And how can architectures be designed that deliver digital sovereignty, performance, and operational efficiency together?