EUROPEAN
COMMISSION
Brussels, 18.9.2020
SWD(2020) 179 final
COMMISSION STAFF WORKING DOCUMENT
Equipping Europe for world-class High Performance Computing in the next decade
Accompanying the document
Proposal for a Council Regulation
on Establishing the European High Performance Computing Joint Undertaking
{COM(2020) 569 final}
Table of Contents
Executive Summary
1. Introduction
2. The European strategy on HPC: state of play
   2.1 A brief overview of the Union’s policy in HPC
   2.2 The EuroHPC Joint Undertaking and its current mission and activities
   2.3 Main achievements of the EuroHPC JU in its first year of operations
   2.4 The EuroHPC strategy and its impact on the HPC value chain
   2.5 Lessons learnt from the EuroHPC JU governance and administration
3. HPC in a fast evolving environment
   3.1 The increasing importance of HPC for a wide range of applications
   3.2 HPC Market drivers
   3.3 Evolution of user requirements
   3.4 The convergence of HPC with AI
   3.5 Evolution of supercomputing technologies
   3.6 New computing paradigms: Neuromorphic and Quantum Computing
   3.7 Training and skills for the next decade
   3.8 New political guidelines and Commission priorities for the period 2019-2024
4. The Union’s HPC strategic approach for the next MFF (2021-2027)
   4.1 Rationale for a new mission of the EuroHPC JU in the next MFF
   4.2 The mission of the EuroHPC JU in the next MFF
5. The main activities of EuroHPC JU in the next MFF
   5.1 The new pillars of activity
   5.2 The supporting programmes of the next MFF
   5.3 Interactions and synergies with other strategic objectives and policies
   5.4 International Cooperation
Acronyms and abbreviations
List of Figures
Annex I: Market Analysis and Investments
   The economic impact of HPC
   The HPC market
   Europe and the HPC Market
   Uses of HPC with AI and Cloud
   HPC worldwide investments: the strategic race towards exascale computing
Annex II: HPC and AI
Annex III: Applications of HPC
   The data revolution and the strategic digital autonomy
   HPC and industry’s innovation potential
   Scientific leadership
   Societal challenges, policy making and national security
   HPC and the COVID-19 crisis
Endnotes and web references
Executive Summary
This Staff Working Document (SWD) outlines the continuation of Europe’s ambitious
strategic approach in High Performance Computing (HPC) for the next decade. HPC is an
essential digital infrastructure for achieving the Commission’s aim of maximising the benefits
of digitisation for everyone, as outlined in the Commission Communications on “A European strategy for data”1 and “Shaping Europe’s Digital Future”2, and one of the priority recovery investments identified in “Europe's moment: Repair and Prepare for the Next Generation”.3
The SWD accompanies the Commission proposal for a revised Regulation on the EuroHPC
Joint Undertaking (EuroHPC JU). It provides updated information complementing the JU’s Impact
Assessment carried out in 2018,4 which still largely applies, as the JU was only established in
late 2018. The SWD builds on the European strategy on HPC implemented in the period
2012-2020 and analyses the evolution of this strategy since the launch of the EuroHPC JU,
which has become the strategy’s main implementation body.
HPC is a critical capability for the digital transformation of our society, and is the “engine”
that powers the data economy, enabling key technologies like Artificial Intelligence (AI), data
analytics and cybersecurity to exploit the enormous potential of big data.
HPC is used in more than 800 scientific, industrial and public sector applications that play a
major role in boosting industry’s innovation capability, advancing science, and improving
citizens’ quality of life. Europe is today a leader in HPC applications in a wide range of areas
such as personalised medicine, weather forecasting, the design of new aeroplanes, cars,
materials, and drugs, and energy, engineering and manufacturing.
HPC enables many industrial sectors to innovate and to move up into higher value products
and services paving the way to novel industrial applications in combination with other
advanced digital technologies. HPC applications and infrastructures are essential in nearly
every field of research for deeper scientific understanding and breakthroughs from
fundamental physics to biomedicine. HPC is also an essential tool for researchers and policy-
makers to address major societal challenges, from climate change, smart and green
development, and sustainable agriculture to personalised medicine and crisis management. A
good example is the COVID-19 pandemic, where HPC is used, often in combination with AI,
to accelerate the discovery of new drugs, predict the virus’ spread, plan and distribute scarce
medical resources, and anticipate the effectiveness of different containment measures and
post-epidemic scenarios. Another good example is the EU Destination Earth initiative5, which
aims to use vast amounts of satellite and terrestrial data to build a high-precision digital model
of the Earth and, using HPC, to monitor and simulate natural and human activity. Destination
Earth would provide a large number of users with applications and services such as weather
forecasting, urban and rural planning, waste and water management, and oceanographic,
marine and frozen environment modelling. This will help speed up progress towards the EU’s
green transition objectives and assist in preparing for, as well as managing, major
environmental degradation and disasters.
Europe’s leading role in the data economy, its scientific excellence, and its industrial
competitiveness will increasingly depend on its capability to autonomously develop key HPC
technologies (reducing dependence on foreign providers), to provide access to world-class
supercomputing and data infrastructures, and to maintain global leadership in HPC applications.
To make this happen, a pan-European strategic approach is essential.
The EuroHPC JU was established in 2018 as a Joint Undertaking under Article 187 TFEU,
pooling resources from the EU, 32 countries (EU Member States and countries associated to
Horizon 2020), and two Private Members: the European Technology Platform for HPC
(ETP4HPC) and the Big Data Value (BDVA) Associations.
After two years of operation, the EuroHPC JU has substantially increased the overall
investment in HPC at European level and has started to deliver on its mission to restore
Europe’s position as a leading HPC power globally. By the end of 2020, it will deploy a first-
class supercomputing and data infrastructure accessible to public and private users all over
Europe. Investments under the current Regulation will also support HPC Competence Centres
throughout Europe, the development of HPC skills, and R&I in critical HPC hardware and
software technologies and applications, increasing the EU’s capability to autonomously
produce and use competitive HPC technology.
For the strategic investments made so far, the EuroHPC JU has used funds from the current
Multiannual Financial Framework (MFF). The implementation of the European HPC strategy
with funds from the next MFF requires a revision of the EuroHPC Council Regulation.6
This SWD describes the essential role HPC will play in the next MFF period for the EU’s
competitiveness, the digital transformation of Europe, and the creation of European public
common data spaces. It provides evidence of the importance of the EuroHPC JU’s activities,
and of the impact that its continuation will have on an increasing number of critical
technologies and applications in the next decade, notably for European leadership in low-
power processor technologies and in AI.7
The SWD also analyses the key socio-economic and
technological drivers affecting the future evolution of HPC and data infrastructures,
technologies and applications in the EU and worldwide, including the EU’s political priorities
for 2020-2025.
One of the most important technological drivers that the SWD considers is the emergence of
quantum computing. Initial quantum computing systems already exist, mostly in experimental
form; larger systems that could operate next to or be integrated with traditional HPC systems
are expected to become available in the period 2021-2027.
The revision of the Council Regulation provides an opportunity to update the mission and
objectives of the EuroHPC JU, taking into consideration these new drivers and the lessons
learnt from the JU’s current activities. This SWD suggests an updated mission for the JU:
By 2027, develop, deploy, extend and maintain in the Union a world-leading federated, secure
and hyper-connected supercomputing, quantum computing service and data infrastructure
ecosystem; support the production of innovative and competitive supercomputing systems
based on a supply chain that will ensure components, technologies and knowledge limiting
the risk of disruptions, and the development of a wide range of applications optimised for
these systems; widen the use of this supercomputing infrastructure to a large number of
public and private users; and support the development of key skills for European science and
industry.
The roadmap for the development and deployment of the federated computing infrastructure
of the EuroHPC JU is summarised below (including infrastructure investments for 2019-2020).
Investments in the period 2021-2027 will target an ambitious combination of interconnected
world-class HPC and quantum computing systems:
– HPC infrastructure (2019-2027): 3 pre-exascale and 5 petascale HPC systems (2019-2020), followed by several pre-exascale systems and 2 exascale HPC systems, and subsequently one or more exascale and post-exascale HPC systems;
– Quantum infrastructure (2019-2027): quantum simulators interfacing with HPC systems and a first generation of quantum computers, followed by further quantum simulators interfacing with HPC systems and a second generation of quantum computers.
In order to realise this mission and roadmap, the SWD identifies the following overall
objectives for the continuation of the EuroHPC JU:
1. Deploy and maintain in the Union a secure, hyper-connected and integrated world-class
HPC, quantum computing and data infrastructure, based on the best existing computing,
data and networking technologies;
2. Federate the HPC, quantum computing service and data infrastructure, interconnect it with
the European public data spaces and cloud ecosystem, and provide EU-wide services to a
wide range of public and private users;
3. Develop and support an innovative HPC and data ecosystem contributing to the standing
and technological autonomy of the Union in the digital economy, capable of producing
computing technologies and architectures, integrating them into leading supercomputing
systems, and developing advanced applications optimised for these systems;
4. Widen the use of HPC and develop the key skills that European science and industry need.
The main expected outcomes for the EuroHPC JU in the next decade would include:
– A federated, secure and hyper-connected European HPC and data infrastructure with mid-range supercomputers and at least two top-class exascale and two top-class post-exascale systems (integrating as much as possible European technology);
– Hybrid computing infrastructures integrating advanced computing systems – notably quantum simulators and quantum computers – in HPC infrastructures;
– A secure cloud-based HPC and data infrastructure for European private users;
– HPC-powered capacities and services based on European public data spaces for scientists, industry and the public sector;
– Next-generation technology building blocks (hardware and software) and their integration into innovative HPC architectures for exascale and post-exascale systems;
– Centres of Excellence in HPC applications and industrialisation of HPC software, with novel algorithms, codes and tools optimised for future generations of supercomputers;
– Large-scale industrial pilot test-beds and platforms for HPC and data applications and services in key industrial sectors;
– National HPC Competence Centres, ensuring a wide coverage of HPC in the EU, with specific services and resources for industrial innovation (including SMEs);
– A significant increase in Europe’s workforce with HPC skills and know-how;
– Reinforced data storage and processing capacities, and new services, in areas of public interest across the Member States.
To achieve these objectives, the SWD advocates an all-encompassing approach covering
national and EU investments, the participation of industrial Private Members, and close
collaboration with key European actors such as PRACE8
and GEANT.9
The EuroHPC JU
should also foster international collaboration that benefits the EU.
Finally, the SWD advocates for the EuroHPC JU to operate in synergy with actions in major
EU priority areas, namely AI, cybersecurity, quantum technologies, big data, European public
common data spaces, and advanced digital skills. This would require collaboration with
initiatives such as the Joint Undertaking on Key Digital Technologies, the Quantum
Technologies Flagship,10
and the European partnerships on AI, data, cybersecurity and the
European Open Science Cloud. The EuroHPC JU should also forge links and synergies with
other EU and national programmes and their stakeholders.
1. Introduction
This Staff Working Document (SWD) accompanies a Commission proposal for the revised
Regulation for the EuroHPC Joint Undertaking, covering the period 2021-2033.i The SWD
analyses the evolution of the main factors affecting the European strategy in High
Performance Computing (HPC), in particular since the establishment in 2018 of the EuroHPC
Joint Undertaking (EuroHPC JU).5
i This covers the next MFF (2021-2027), the period required for depreciating the operation of any supercomputer(s) that the JU may acquire at the very end of the MFF (typically 5 years), and the period required for winding up the JU.
The reasons behind the creation of the EuroHPC JU are presented in the corresponding 2018
Impact Assessment4
, which is still applicable for the continuation of the JU. This SWD
provides an update on the supporting data of that Impact Assessment and provides evidence to
support the continuation of the EuroHPC JU as a main instrument to implement the European
HPC strategy. It describes the key role that HPC will continue to play in the Union’s
competitiveness and the digital transformation of the European economy and society. It sets
out the increasing number of industrial, scientific and public sector applications and user
needs that will benefit from continuing the EuroHPC JU.
The SWD is structured as follows:
– Chapter 2 provides a brief overview of the EU policy on HPC and presents the role and achievements of the EuroHPC JU for the European strategy on HPC, together with the lessons learnt since its establishment in 2018;
– Chapter 3 analyses the main drivers that are likely to affect the evolution of HPC in the next decade. They include the increasing importance of HPC for a wide range of applications, the evolution of the market, user requirements, developments in the underlying technology, notably in quantum computing, and the new political factors to consider;
– Chapter 4 describes the suggested mission and mandate of the EuroHPC JU in the period 2021-2027, taking into consideration the drivers analysed in Chapter 3;
– Chapter 5 presents the suggested main implementation activities of the EuroHPC JU in the period 2021-2027 and the different instruments to support such implementation;
– The Annexes gather the supporting data, in particular on the economic impact of HPC, the evolution of the world market and investments, and the key convergence of HPC with AI. They also provide representative examples of key HPC applications in different domains, and illustrate the important role of HPC in tackling global crises such as the COVID-19 pandemic.
What is High Performance Computing (HPC) and why is it important?
The term “High Performance Computing” is used in this document as a synonym for high-end
computing, supercomputing or world-class computing, dealing with problems so demanding
that the massive computations needed to solve them cannot be performed by general-purpose
computers. Instead, they require very powerful systems, called HPC systems or
supercomputers.
A supercomputer interconnects a large number of processors (from a few hundred to several
thousand) working in parallel. The fastest supercomputersii are currently achieving petascale
performance, and the next frontier is exascale computing. Supercomputing power is
currently increasing so fast that world-class machines are becoming obsolete after, on
average, 5-6 years. As an example, an ordinary laptop today has the computing power of the
world's top supercomputer 25 years ago.
ii The most powerful supercomputer as of June 2020 is Fugaku (Japan), with 514 petaflops of peak performance (https://www.top-500.org/). A flop is one floating point operation per second. One petaflop is 10^15 flops (ten to the power of 15, or one million billion); one exaflop is 10^18 flops (ten to the power of 18, or one billion billion).
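To put these orders of magnitude in perspective, the short sketch below converts a few performance figures into the time needed for a fixed workload. It is a back-of-the-envelope illustration only: the laptop figure and the size of the workload are assumptions, not figures taken from this document, while the 150-petaflop value corresponds to the pre-exascale threshold mentioned later in section 2.2.

```python
# Illustrative comparison of computing scales (assumed figures, for illustration only).

PETAFLOP = 1e15  # floating point operations per second
EXAFLOP = 1e18

workload = 1e18  # hypothetical job of 10^18 floating point operations (assumption)

systems = {
    "typical laptop (assumed ~100 gigaflops)": 100e9,
    "petascale system (1 petaflop)": 1 * PETAFLOP,
    "pre-exascale system (~150 petaflops)": 150 * PETAFLOP,
    "exascale system (1 exaflop)": 1 * EXAFLOP,
}

for name, flops in systems.items():
    seconds = workload / flops
    print(f"{name}: {seconds:,.0f} s ({seconds / 3600:.2f} h)")
```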
HPC methodologies include modelling and simulation, and are frequently associated with big
data analytics, visualisation, AI (e.g. machine and deep learning), and other techniques. HPC
powers hundreds of applications across virtually all branches of science, industry and sectors,
including the public sector. Representative examples of such applications include:
personalised medicine, models for climate change, earth observation, precision agriculture,
engineering and manufacturing, cybersecurity, and oil and gas exploration. This SWD does
not aim to provide an exhaustive list of all HPC applications. However, numerous examples
are presented in the Annexes. Examples of the application areas currently supported in
projects funded under Horizon 202011
and the Connecting Europe Facility12
(CEF) include:
– Horizon 2020 supports HPC Centres of Excellence in areas such as weather and climate prediction, materials, energy, and biomolecular and biomedical sciences. Large-scale industrial pilots are supported in the areas of digital twins, health, precision agriculture and farming, and finance and insurance. A specific project (Exscalate4CoV, see box below) has been set up to develop solutions for COVID-19.
– Projects funded under CEF-Telecom address HPC supporting application areas such as smart agriculture, forestry evolution and fire control, air quality and pollution, atmospheric, marine and earth observation, and cultural heritage in the EU.
– In 2019, the EuroHPC JU launched a Call for HPC-powered innovative applications in sectors of societal and industrial relevance for Europe. Selected projects are expected to start in autumn 2020.
HPC and the COVID-19 pandemic
The global COVID-19 crisis illustrates how HPC in combination with other digital
technologies such as big data and AI can be critical in the fight against the pandemic by
supporting the decision-making on containment measures and dramatically accelerating the
development of a treatment or a vaccine.
An outstanding example of the European effort is the EU-funded “Exscalate4CoV” project
(EXaSCale smArt pLatform Against paThogEns – Corona Virus; see
https://www.exscalate4cov.eu/), which uses massive supercomputing resources (more than
120 Petaflops) from 4 European HPC systems to analyse the effectiveness of over 500 billion
pharmacological molecules against the COVID-19 viral proteins that are key for its
propagation. Using classical computing methods, the analysis of each molecule would take
several months, while the HPC simulation can do so in just 50 milliseconds. Over 100
molecules have been selected so far for biological screening and clinical testing that could
lead to eventual identification and production of a treatment.
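As a rough plausibility check on the figures quoted above (more than 500 billion molecules, about 50 milliseconds of HPC simulation per molecule), the sketch below multiplies them out to show why such a screening campaign is only feasible on massively parallel systems. It is a simplification under stated assumptions and does not describe the project's actual workflow; the degree of parallelism used is purely illustrative.

```python
# Rough arithmetic on the screening figures quoted above (illustrative only).
# Assumes a constant 50 ms of simulation per molecule and ignores I/O,
# ranking and post-processing stages of the real pipeline.

molecules = 500e9            # over 500 billion molecules in the library
time_per_molecule_s = 0.05   # ~50 milliseconds of HPC simulation per molecule

total_seconds = molecules * time_per_molecule_s
years_serial = total_seconds / (3600 * 24 * 365)
print(f"Aggregate simulation time: {total_seconds:.2e} s "
      f"(~{years_serial:,.0f} years if run one molecule at a time)")

# Spread over a large number of parallel execution streams (assumed figure),
# the same workload shrinks to a matter of hours:
parallel_streams = 1_000_000
print(f"With {parallel_streams:,} parallel streams: "
      f"~{total_seconds / parallel_streams / 3600:.1f} hours")
```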
The use of HPC resources with big data sets, deep learning methods and large-scale complex
computational models is also critical to effectively support policymakers during epidemic
emergencies. It helps to rapidly forecast the trajectory of the spread of an infectious disease,
plan the public health policy response, simulate the efficiency of different containment
measures and evaluate different post-epidemic scenarios.
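As an illustration of the kind of computational model referred to here, the sketch below integrates a deliberately minimal SIR (susceptible-infected-recovered) compartmental model and compares an unmitigated outbreak with an assumed containment measure that halves the contact rate. The model, its parameters and the population size are illustrative assumptions only; the epidemiological models actually run on HPC systems are far larger, spatially resolved and data-driven.

```python
# Minimal SIR compartmental model, for illustration only.
# Parameters and population size are assumptions, not data from this document.

def peak_infections(beta, gamma=0.1, population=1_000_000, infected0=100, days=365):
    """Integrate a basic SIR model with a daily time step and return peak prevalence."""
    s, i, r = population - infected0, infected0, 0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# Compare an unmitigated scenario with a measure that halves the contact rate (beta).
for label, beta in [("no measures", 0.3), ("contact rate halved", 0.15)]:
    print(f"{label}: peak infections ~{peak_infections(beta):,.0f}")
```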
2. The European strategy on HPC: state of play
2.1 A brief overview of the Union’s policy in HPC
In the last few years, the ambition of the Union has been to be among the world's top
supercomputing powers and provide everywhere in the Union an integrated world-class HPC
capability, high-speed connectivity and leading-edge data and software services.
The EuroHPC JU was established to implement the Union’s strategy on HPC developed in
several Commission Communications since 2012: COM(2012) 45 final13, COM(2016) 178 final14
and COM(2017) 228 final15, and further supported by the Competitiveness Council (May 201316
and May 201617), the European Council (June 201618), the European Parliament (January 201719),
and the Telecommunications Council (June 202020).
In the decade 2008-2018, more than EUR 1.2 billion of the EU budget was invested in HPC
activities through several programmes:
– In 2008-2013, around EUR 145 million from the 7th EU Framework Programme for Research and Innovation (FP7)21;
– In 2014-2020, around EUR 1135 million from Horizon 202011 and the Connecting Europe Facility12 (CEF, in particular under CEF-Telecom).
From 2014 onwards, a contractual public-private-partnership (cPPP) between the Commission
and the European Technology Platform (ETP4HPC) Association22
was established. This cPPP
lasted until the set-up of the EuroHPC JU in 2018.
For the next MFF, the EuroHPC JU would work in synergy with other major EU priority
areas identified in several Commission Communications, in particular “A European strategy
for data”1
, ”Shaping Europe’s Digital Future”2
and the “White Paper on Artificial
Intelligence”7
. HPC has also been identified in the Communication “Europe's moment:
Repair and Prepare for the Next Generation”3
as a strategic digital capacity that will be a
priority of European recovery investments such as the Recovery and Resilience Facility,
InvestEU and the Strategic Investment Facility.
2.2 The EuroHPC Joint Undertaking and its current mission and activities
The EuroHPC JU was established on 28 September 2018 by a Council Regulation6. Its current
Members are the Union (represented by the Commission), 32 Participating Statesiii (26
Member States and 6 Countries associated to Horizon 2020), and two Private Members: the
European Technology Platform for High Performance Computing Association (ETP4HPC)22
and the Big Data Value23 Association (BDVA). EuroHPC also relies on collaboration with
key European actors such as PRACE (Partnership for Advanced Computing in Europe) and
GEANT (the pan-European high-speed network for research and education).
iii 26 Member States (Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain and Sweden) plus 6 Associated Countries (Iceland, Montenegro, North Macedonia, Norway, Switzerland and Turkey).
Figure 1 - Map of the EuroHPC JU Participating Countries
The mission of the EuroHPC JU as stated in the current Regulation of 2018 is “to develop,
deploy, extend and maintain in the Union an integrated world-class supercomputing and data
infrastructure and to develop and support a highly competitive and innovative High-
Performance Computing ecosystem”, for the next-generation exascale supercomputing era
and beyond.
The EuroHPC JU has become the strategic instrument for supporting EU competitiveness in
the global digital economy. The JU mobilises critical national and Union efforts around a joint
strategy to provide Europe with world-class supercomputing capabilities and knowledge
according to its economic potential, aiming at technological autonomy in critical HPC
technologies and infrastructures.
The Union’s digital autonomy in HPC requires the capacity to develop world-class
infrastructures integrating as much as possible top-quality European technology. The
objectives of the current EuroHPC JU reflect the need to invest in both the deployment of a
world-class infrastructure and the development of a full European HPC ecosystem.iv
iv According to the Regulation 2018/1488, the main objectives of the JU are:
(a) To provide the research and scientific community, as well as the industry, including SMEs, and the public sector from the Union or countries associated to Horizon 2020 with the best available and competitive HPC and data infrastructure and to support the development of its technologies and its applications across a wide range of fields;
(b) To provide a framework for the acquisition of an integrated, demand-oriented and user-driven world-class supercomputing and data infrastructure in the Union;
(c) To provide Union-level coordination and adequate financial resources to support the development and acquisition of such infrastructure, which will be accessible to users from the public and private sector primarily for research and innovation purposes;
(d) To support an ambitious research and innovation agenda to develop and maintain in the Union a world-class HPC ecosystem, exascale and beyond, covering all scientific and industrial value chain segments, including low-power processor and middleware technologies, algorithms and code design, applications and systems, services and engineering, interconnections, knowledge and skills, for the next generation supercomputing era;
(e) To promote the uptake and systematic use of research and innovation results generated in the Union by users from science, industry, including SMEs, and the public sector.
Establishment, budget and activities under the current Regulation
The EuroHPC JU was initially established for the period 2019 to 2026. The initial co-
investment of the Union and the Participating States is around EUR 1.1 billion, with the
Union contributing EUR 536 million from the current MFF (2014-2020), and the remainder
coming from the Participating States. The Private Members will contribute an additional EUR
422 million in the form of in-kind contributions. The governance structure of the EuroHPC JU
is described in the Council Regulation. Currently, the Commission is managing the transitional
period until the JU reaches full autonomy (expected in Q4 2020). In accordance with its work
for 2019 and 2020, the EuroHPC JU is supporting the following two main types of activities:
1. Acquisition of HPC infrastructure: Eight sites were recently selected24 to host world-class
supercomputers. They are located in Member States and are also supported by many
Participating States of the JU, giving the initiative a very wide geographical coverage:
– Three precursors-to-exascalev supercomputers, aiming to be ranked among the world's top 5 supercomputers. The hosting entities and supporting consortia are the following:
BSC (Barcelona), Spain, with Croatia, Portugal and Turkey
CINECA (Bologna), Italy, with Austria, Slovenia, Slovakia and Hungary
CSC (Kajaani), Finland, with Belgium, Czech Republic, Denmark, Estonia, Norway, Poland, Sweden and Switzerland
The EuroHPC JU will own these systems and will support 50% of their total cost of ownership (TCO). The JU will have 50% of the total computing access rights; the other 50% will rest with the Member State of the hosting site and its supporting consortium.
– Five petascalevi supercomputers, aiming to rank among the global top-50 systems. The hosting entities and supporting consortia of partners are the following:
FCT, Portugal, with Spain
IT4Innovations, Czech Republic
LuxProvide S.A., Luxembourg
Sofia Tech Park, Bulgaria
IZUM, Slovenia
The EuroHPC JU and the Member State of the hosting site will jointly own these systems. The EuroHPC JU will support 35% of the acquisition costs of the systems, and will have 35% of the total computing access rights. The Member State of the hosting site (with its associated Participating States) will support the rest of the TCO and will have 65% of the computing access rights for its own use.
v Capable of executing more than 150 Petaflops, or 150 million billion calculations per second.
vi Capable of executing at least 4 Petaflops.
The EuroHPC JU signed specific hosting agreements with each of the 8 hosting sites that
define the acquisition and operation procedures and the funding arrangements for the new
supercomputers. The acquisition process started in the last quarter of 2019. The
supercomputers are expected to be installed during the second half of 2020 and will be
interconnected with each other and with the existing national supercomputers of the PRACE
members through the GEANT network.
2. Funding strategic Research and Innovation (R&I) actions through calls for proposals
contributing to the EuroHPC JU’s mission to develop and support a highly competitive
and innovative HPC ecosystem.
– For 2019, EuroHPC launched a Call for R&I actions25 for EUR 190 million that supports the development of essential technologies for exascale systems, industry-oriented HPC application platforms, and industrial software codes; innovation activities for manufacturing and engineering SMEs; and the establishment of HPC competence centres in the Participating States and the coordination of their activities at European level.
– For 2020, the plan is to invest another EUR 170 million in supporting the following R&I activities: funding the next phase of the European Processor Initiative (EPI)26 (see box below); the integration of European technologies in advanced pilots that will lay the basis for developing future European exascale supercomputers; training and education in HPC; and pilots on quantum simulators.
– In addition to the above, two Horizon 2020 Calls for proposals will be launched in 2020, one complementing the current 9 HPC Centres of Excellence27 for scientific leadership in HPC applications, and one on promoting International Cooperation28, with the aim of developing a strategic partnership in HPC with Latin America.
The European Processor Initiative (EPI) is an ambitious, large-scale, 5-year European
initiative aiming to develop critical low-power microprocessor technology. The EPI
consortium brings together 23 partners from 10 European countries. The first phase started in
December 2018 under Horizon 2020 with EUR 80 million of EU funding. The EuroHPC JU
plans to support the second phase of EPI as of 2020.
EPI will develop components covering not only the HPC sector but also broader markets and
domains that demand high-end computing capabilities (e.g. extreme-scale, big-data and
emerging applications based on edge computing, such as autonomous driving). A new
European semiconductor company (SiPearl) is tasked with commercialising the results of EPI.
The EuroHPC JU plans to launch a call in 2020 aiming to integrate the EPI-based
technologies into advanced pilots for exascale systems. The pilots will adopt a co-design
approach to bridge the gap between suppliers and users to define new architectures and better
computational methods and algorithms adapted to real application needs.
2.3 Main achievements of the EuroHPC JU in its first year of operations
In 2018, the Impact Assessment (IA)4
and a study by the European Investment Bank (EIB) 29
on financing supercomputing in Europe identified a range of obstacles to the development of
a successful HPC ecosystem in Europe. The EuroHPC JU has already removed a number of
these obstacles, bringing substantial improvement to the situation prior to its
establishment, as confirmed by the “Impact Assessment Study for Institutionalised European
Partnerships under Horizon Europe - Candidate Institutionalised European Partnership in
High-Performance Computing (Final Report)”30
.
The following paragraphs describe the main achievements of the JU in advancing from the
baseline situation analysed in the 2018 Impact Assessment and the EIB study on financing
supercomputing, for simplicity referred to as “EuroHPC IA” and “EIB study”, respectively.
1. The EuroHPC JU has substantially increased the level and quality of investments in
HPC at European level in a single and coordinated effort with Member Statesvii
The EuroHPC JU coordinates and pools EU and national investments
Overall, the EuroHPC JU provides a powerful single legal framework for pooling the
necessary EU, national and regional resources, mobilising public and private efforts at
European level to support the HPC ecosystem. All the Participating States provide
strong support to the JU and contribute to its success. The JU has now become the
most powerful and effective instrument for pooling resources, coordinating
investments and promoting collaboration between the EU and the Participating States
in organising specific actions, driven by an ambitious European HPC strategy.
vii The baseline, as defined in “EuroHPC IA” and the “EIB study”, is the following:
EuroHPC IA, “Problem driver number 1” (Public funding for HPC in EU/MS remains uncoordinated and
insufficient to cope with the demand)
“MS investments are insufficient and uncoordinated to acquire enough high-end HPC systems that satisfy
the demand… No MS has the capabilities to develop the necessary HPC ecosystem on its own in a
competitive timeframe with respect to the USA, China or Japan…. When compared to current investments
of the EU and MS, the gap with the USA can be estimated at least at EUR 700 million per year.”
EuroHPC IA, “Problem Nr 3” (Member States do not have a framework for joint procurement)
“… (the current situation) does not cover the coordination of national programmes, nor joint investments
for the procurement of systems, …. In Europe, the large fragmentation of HPC programmes and efforts,
the non-coordinated activities and the lack of a common procurement framework lead to a waste of
resources.... Europe thus misses the opportunity to take advantage of efficiency gains by aligning the
strategies and pooling resources.”
EIB Study: “Finding 2” (Fragmentation and limited coordination at the EU level has resulted in a
suboptimal investment climate and an underinvestment in strategic HPC infrastructures in Europe):
“…the majority of HPC centres tend to be standalone organisations with a close link to a local academic
institution or are embedded in a research cluster… resulting in fragmentation and limited coordination
across Europe… Furthermore, the financing for large-scale HPC facilities is challenging due to the large
amount of resources required and the need for long-term and sustained financing … (leading to)
significant underinvestment in this strategically important sector in Europe.”
The EuroHPC Research and Innovation Advisory Group (RIAG) draws up the
strategic R&I agenda of the JU with the support of the European industry players
participating as Private Members of the JU (the ETP4HPC and BDVA Associations).
This agenda reflects the global strategic developments in the field.
The EuroHPC JU has rationalised the implementation of national and EU investments
and programmes
At EU level, before the JU’s establishment, the Commission was using four different
programmes to implement HPC activities: FET, LEIT-ICT and e-Infrastructures
(under Horizon 2020), and the Connecting Europe Facility (CEF). EuroHPC represents a
clear improvement over that situation: by integrating all HPC activities into a single
instrument, it helped resolve the problem of fragmentation and lack of coordination of
the Union’s HPC resources.
At national level, the EuroHPC JU helps rationalise and optimise investments and
redefine national strategies and programmes by exploiting synergies and
complementarities with the relevant Union’s actions and European priorities.
The EuroHPC JU has increased the overall HPC investments in Europe
Regarding the overall level of funding in HPC, the JU has so far initiated an
unprecedented investment in European advanced digital infrastructures, responding to
the need identified in the EIB Study: “Increase financial support from the public
sector for strategic HPC infrastructure and services with an emphasis on improved
coordination and a strong public value investment approach”.
For the period 2019-2020 alone, the JU will mobilise public commitments of around
EUR 1.1 billion, representing a net increase of nearly EUR 250 million per year at
European level, compared with the situation before the creation of the EuroHPC JU.
2. By the end of 2020, EuroHPC will provide the EU with some of the world's best supercomputersviii
World and EU supercomputing capabilities
Europe was, and still is, a world leader in HPC applications, but its supercomputing
infrastructure is falling behind in the world ranking. A widely accepted headline
indicator of regional competitiveness in HPC is the number of systems in the “top-10”
and “top-500” lists of world supercomputers31
in each world region.
viii The baseline, as defined in the “EuroHPC IA” and the “EIB study”, is the following:
EuroHPC IA, “Problem Nr 1” (The EU does not have the best supercomputers in the world…)
Today, none of the 10 leading supercomputers in the world is located in the EU … Collectively, the EU
and the MS are significantly under-investing in HPC technology supply and infrastructures when
compared to USA, China or Japan.”
EuroHPC IA, “Problem Nr 2” (Supercomputers available in Europe do not satisfy the demand)
“Not only Europe does not have the best machines but it also cannot sufficiently satisfy the demand. …
the European scientific and engineering research community prefers to use USA supercomputing
facilities rather than PRACE…. Ultimately, our scientific and industrial leadership will become
dependent on the accessibility to the highest-end machines that are outside Europe.”
Figure 2 - World top 500 supercomputers - regional share
In June 2020, only two of the world’s top ten supercomputers were to be found in the
Union, ranking sixth and ninthix, compared with four systems in 2012. The current
supercomputing power available in the Union among the world top 500 supercomputers is
less than half that of the US or China. Out of these top 500 systems, 79 are installed in EU
Member States, as compared with 114 in the US and 226 in China.
Figure 3 - Share of HPC systems in global top-10 per country
With the acquisition of its supercomputing infrastructure planned for the second half
of 2020, the EuroHPC JU will start to bridge the gap. For example, its three precursor-
to-exascale systems are expected to rank in the world top 5, and its five petascale
supercomputers among the top 50 – see Figure 4.
Figure 4 - Computing power of world top 10 supercomputers
ix “HPC5” (ENI - IT) and “Marconi-100” (CINECA - IT), with peak performances of 51 and 29 petaflops
EuroHPC will also make it possible to address the first issue identified in the
EuroHPC IA, namely the lack of investment in Europe in world-class supercomputers,
thereby helping to address the digital divide across Europe with regard to access to
available resources.
The EuroHPC JU allows consortia of different Participating States to contribute to the
acquisition and operation of the supercomputers. This pooling mechanism gives
Participating States the right to allocate to national priorities and users a share of the
computing resources proportional to their financial contribution, and will have a very
important effect at national level: it will give national users in consortium countries
direct access to world-class resources that would otherwise never have been possible.
The EuroHPC JU will have an enormous effect in facilitating access to the best
supercomputer resources in Europe
EuroHPC will have a knock-on effect in widening the use of HPC in countries that have
never had “ownership” of such computing power. In total, 20 of the 32 countries
participating in the EuroHPC JU will be part of the consortia operating the centres. For
some Participating States, this will also mean hosting or enjoying national exclusive
access to a “top-500” supercomputer for the first time ever – resulting in a quantum
leap in supercomputing knowledge for their national stakeholders.
Figure 5 - Members of Consortia in EuroHPC JU supercomputers
With the exception of the Czech Republic, the other four sites selected for EuroHPC
petascale computers do not have a particular track record of hosting “top-500” HPC
facilities. This underlines the value proposition of participation in the partnership for
smaller Member States – they can reap the benefits of hosting facilities that they
would not otherwise have been likely to access.
The EuroHPC JU will dramatically increase the computing power supply for EU users
The EuroHPC IA identified a shortage of supply of supercomputing infrastructures in
Europe, and this remains the case today: given the lack of top-performing HPC
machines in the EU, the European scientific and engineering research community
prefers to apply for US supercomputing facilities (i.e. in the US Advanced Scientific
Computing Research (ASCR) Programme)32
rather than resources in PRACE.33
Current demand for HPC infrastructure and services in the Union far exceeds the
supply offered by public HPC centres and private operators. For instance, PRACE
Tier-0x
calls have an average oversubscription ratio of 3:1,34
and there is evidence that
a part of the scientific community in Europe, especially in the EU13, does not have
access to the level of supercomputing performance that they need for research
purposes.
A comparison between PRACE and its US counterpart, ASCR, shows the extent to
which European HPC facilities cannot satisfy the demand for them: ASCR awards ten
times more projects to European scientific and engineering communities than
PRACExi
. Even the five hosting PRACE members (Germany, France, Spain, Italy, and
Switzerland) obtained more projects from ASCR than the maximum PRACE could
offer per call. This is also true of some associated members of PRACE like Denmark
and the UK that do not provide computing systems to PRACE. In conclusion, there is
strong demand for HPC access in the EU that is not sufficiently satisfied by PRACE.
The new computational capacities which the EuroHPC JU will provide will efficiently
tackle this problem. EuroHPC will multiply by eight the computing power (in
petaflops), and by at least ten the available computing time, currently offered by
the PRACE Tier-0 top supercomputing systemsxii, meaning that many more users will
have access to top HPC resources for European-level use.
Figure 6 - European computing power in 2020 (forecast)
x Tier-0 systems are world-class supercomputers accessible at European level through the PRACE pan-European scheme for allocating HPC resources.
xi The comparison is relevant since both programmes have a similar allocation mechanism awarding one-year or multi-year core hours. The number of awarded projects is based on the last available ASCR report, for 2017.
xii The percentage of time dedicated to European use of the EuroHPC systems will vary between 50% for the pre-exascale systems and 35% for the petascale systems, whereas Tier-0 systems dedicate on average less than 40% of their time to PRACE.
3. EuroHPC will provide a European source of key technologiesxiii
EuroHPC will change the landscape of the European supply chain ecosystem
Compared to the baseline situation of the HPC technological supply chain described in
the EuroHPC IA, the situation has remained largely unchanged: Europe continues to consume
around 30% of the world's HPC resources, but European vendors hold barely a 5-6%
market share in the “top-500” systems (with ATOS as the most significant European
vendor, at 5.2%). No European company supplies key components like general
processors or accelerators.
EuroHPC will be in an excellent position to contribute to the EU’s digital autonomy in
critical technologies by supporting ambitious actions to develop an excellent
European HPC ecosystem capable of significantly increasing the production
of innovative technology. Europe’s investments and strengths in technology building
blocksxiv now need to be integrated into the next generation of supercomputers to help
European industry become a leading technology supplier and reinforce its position as a
world-leading user of HPC.
An opportunity for this is the EuroHPC JU’s planned 2020 calls, which will
fund the next phase of the European Processor Initiative (EPI) and the integration of
European technologies in advanced pilots for exascale computing. The transition to
exascale computing is already announced in the present EuroHPC JU Regulation. It
represents an opportunity for Europe’s supply industry to leverage technologies
across the computing continuum.
xiii The baseline, as defined in the “EuroHPC IA” and the “EIB study”, is the following:
EuroHPC IA, “Problem Nr 1” (… (the EU) is entirely dependent on non-European HPC supply chains with the increasing risk of not having access to latest strategic technology even if resources were available): “Our HPC technology supply chain is still weak, with an insignificant level of integration of European technologies into operational HPC machines… EU depends on other regions for the supply of critical technology for its HPC infrastructure. EU risks getting technologically deprived of strategic know-how for innovation and competitiveness…”
EuroHPC IA, Problem Nr 4 (The European HPC technology supply chain is weak and the integration of European technologies into operational HPC machines remains insignificant): “(In 2017) Europe consumes about 29% of HPC resources worldwide, but the EU industry provides only ~5% of such resources worldwide… In addition, close to one fifth of the top 500 HPC systems are located in the EU, and out of these, ~20% are provided by EU vendors (oscillating between 20% and 25% over the last years).”
EuroHPC IA, Problem Driver Nr 2 (European HPC system vendors face stiff competition from large foreign corporations, still to solve): “On the global market, the European suppliers face unequal treatment on public procurement. The USA and China restrict the development and procurement of the high-end machines to domestic suppliers…”
xiv Examples of such technology building blocks include: power-efficient nanoelectronics, interconnect and processor designs, middleware solutions, parallel programming and computing resource optimisation solutions, scientific and industrial codes, etc.
Establishing a world level playing field in HPC
The EuroHPC IA summarises the situation that HPC system vendors are facing in the
market as follows: there is no level playing field in the HPC market, and procurement
for the development and acquisition of high-end machines is restricted to domestic
suppliers for national security reasons.4
To improve this situation, the Governing Board of the EuroHPC JU has enforced the
mandatory use of Article 30.3 of the Horizon 2020 Model Grant Agreement35 in all the
R&I actions it supports, in order to avoid transfers of critical intellectual property to
third countries. The EuroHPC JU retains the right to object to a transfer of ownership
or the exclusive licensing of results if it is to a third party established in a non-EU
country not associated with Horizon 2020, if the JU considers that the transfer or
licence is not in line with EU interests regarding competitiveness or is inconsistent
with ethical principles or security considerations. The EuroHPC JU may also consider
using additional exploitation obligations (for example, that the first exploitation of a
given technology has to be in the Union) as provided by Article 28.1 of the Horizon
2020 Model Grant Agreement.
4. EuroHPC will increase and widen the use of HPC in the EUxv
Widening HPC use
Each industrial sector must understand the potential of HPC before specialised HPC
applications can be developed and exploited. Too often HPC use still requires
advanced knowledge about the sector and highly specialised technology skills. This is
particularly relevant for SMEs, as pointed out in the EIB Study:
“While data-driven high-tech SMEs are rapidly engaging in the adoption of HPC into
their businesses, a large number of more ‘traditional’ SMEs (such as engineering
SMEs manufacturing components for large automakers) still lack awareness of the
important opportunities HPC would provide to their businesses.”29
In its R&I 2019 Call,25
the EuroHPC JU supports actions that directly address this
situation: establishing HPC competence centres, enabling industrial applications and
codes to optimise the use of HPC and stimulating the innovation potential for
manufacturing and engineering SMEs, complemented by a Horizon 2020 call for new
HPC Centres of Excellence.27
These actions respond to a key recommendation of the
EIB Study: to “strengthen the uptake by HPC users, in particular for commercial
applications by industry, SMEs, and innovative companies and start-ups by
strengthening the role of HPC intermediaries via public support.”
xv The baseline, as defined in the “EuroHPC IA” and the “EIB study”, is the following:
EIB study, Finding 1 (Demand for HPC capabilities is rapidly increasing in key sectors of the European economy, such as aerospace, automotive, energy, manufacturing and financial services, while Europe’s more ‘traditional’ SMEs are lagging behind): “The rapid growth of the data economy is leading to a significant increase in demand for HPC infrastructure and services (including commercial uses)… However, an ongoing critical challenge is the need to better support researchers and entrepreneurs in appropriating this technology in line with their needs and adopting it in accordance with their requirements… there is a gap in HPC adoption and usage between large industry players and more ‘traditional’ SMEs.”
EuroHPC IA: the main consequence for Science and Industry of the low use of supercomputers is the “Loss of innovation / stiff competition to access to few available resources (particularly for SMEs)”.
– EuroHPC Competence Centres will be established in all Member States in 2020.
Their aim will be to increase the use of and expertise in HPC technologies, and help
bridge the digital skills gap. The Centres will act locally, to provide knowledge and
computing services supporting the real needs of businesses, in particular SMEs
that do not have the in-house resources to profit from the new technologies. The
Centres will be networked together and will be part of the pan-European network
of current and future Digital Innovation Hubs36
being established across the Union.
This networking will help exchange best practices, making the resources and
expertise of the Competence Centres available across the Union.
– Stimulating manufacturing and engineering SMEs to improve their innovation
potential by using advanced HPC services. The aim is to widen the HPC user base
by attracting new users in different application domains, and to provide an
effective mechanism for the inclusion of innovative, agile SMEs by lowering the
barriers for small players to enter the market and exploit new business
opportunities. This EuroHPC action will build on the Fortissimo actions37
, which
were highly successful in attracting new SME users to advanced cloud-based HPC
solutions based on modelling, simulation and/or high performance data analytics
(HPDA).
Industrial and commercial use of HPC infrastructures
According to the EIB study, there is a need for developing commercially oriented
business models based on secure and flexible HPC services and data infrastructures, as
shown by growing interest from European industry.
Today, the European HPC landscape is driven by the public sector, in both usage and
financing. Most of the high-end HPC capacity and use (over 90% of operating time) is
located at and allocated to universities or academic research centres. The remaining
10% is available for commercial use or to other HPC end users. However, HPC is of
strategic importance for companies, and HPC use involves company data that must be
stored and treated under strict security conditions and in compliance with privacy
regulations. These conditions cannot be easily met by publicly owned/operated HPC centres.
The EuroHPC JU is addressing this as follows: as soon as the JU’s supercomputers
become available (by the end of 2020), they will be accessible to industry players for
publicly funded R&I purposes. In addition, the EuroHPC JU Regulation explicitly
stipulates that up to 20% of the Union's access time of the EuroHPC systems can be
allocated for commercial purposes, following a pay-per-use service, based on market
prices.
On the advice of INFRAG, its Infrastructure Advisory Group, the EuroHPC JU is now
working on how to define and provide these access conditions to European industry
players, noting that the EuroHPC supercomputers will be at least ten times more
powerful than any industrial system installed in Europe.xvi
The JU will also investigate
the necessary usability, trust, and security needs of industrial users. As a result,
EuroHPC will play a decisive role in spreading and substantially increasing HPC use
by key EU industry players, creating a demand for more HPC resources for industrial
use, and setting the foundations for a strategic collaboration between public and
private HPC stakeholders.
xvi
The most powerful industrial system in the EU in November 2019 is Pangea III (Total company, France)
2.4 The EuroHPC strategy and its impact on the HPC value chain
Horizon 2020 is the main EU programme to have supported the Union’s efforts in
implementing the European HPC strategy and in developing the European HPC value chain.
Up to 2018, around EUR 430 million was committed to R&I activities, including technology
projects (EUR 313 million), Centres of Excellence (EUR 118 million) and Coordination and
Support actions (EUR 10 million). The EuroHPC JU is implementing the support from
Horizon 2020 for 2019-2020.
The impact of this support on the European HPC value chain has been analysed by European stakeholders38. The main conclusion is that Horizon 2020 has contributed to achieving significant impacts on the European HPC value chain: it has supported the development of key technology results (most of them up to TRL 5-7xvii) and applications, and has contributed to creating a stronger and more connected HPC ecosystem.
1. The European HPC value chain
The production and operation of HPC systems involve a complex value chain of system
hardware, system and application development software, applications, and transversal aspects.
HPC system hardware: Horizon 2020 has supported key projects in the system hardware value chain: EPI26 to reposition Europe in a processor market dominated by US technologies; Exanode39 for chip integration; Exanest40, EuroExa41 and Mango42 for interconnects; and cooling solutions industrialised by SMEs (such as Iceotope or Submer). In addition, the suite of Montblanc43 projects is a world-leading contributor to the emergence of ARM-processor HPC systems and has achieved key results in FPGAsxviii that have been commercialised by some SMEs (such as Maxeler).
HPC system and application development software: The contribution of the Horizon
2020 projects is very diverse. Domains in which significant results are expected include a
software stack for ARM-based processor systems, software for efficient use of the new
levels of the storage hierarchy, resource and energy management tools, scientific libraries,
domain specific languages for machine learning applications, programming environments
for heterogeneous systems (CPU, GPU and FPGA), and visualization tools for interactive
HPC. Most of the SMEs funded by the projects (e.g., Maxeler, Appentra, Arctur,
Synelexis and Kitware) have been effective in bringing project results to the market.
HPC applications: This part of the value chain is of utmost importance in Europe. Ten Centres of Excellence (CoEs) in HPC applications have been established under Horizon 2020. They have delivered results such as a weather and climate simulation framework ready for exascale systems, new advances in codes for material science, life science and carbon-free energy, and a methodology to assess and improve the performance of HPC codes. CoEs are also tasked with promoting HPC research applications within industry. Large-scale industrial pilots are supported in the areas of digital twins, health, precision agriculture and farming, and finance and insurance. Projects funded by the CEF-Telecom programme use HPC to support objectives in important application areas such as smart agriculture and forestry
xvii
The technology readiness level (TRL) scale is a method for estimating the maturity of technologies, from level 1 (basic principles) to level 9 (actual system proven through successful operations).
xviii
Field-programmable gate array (FPGA) is an integrated circuit that can be programmed in the field after
manufacture.
evolution and fire control, air quality and pollution, atmospheric, marine and earth
observation, and cultural heritage in the EU.
Besides the specific actions above, funded projects have also impacted the application
value chain: almost all the project consortia include application-oriented partners. In total, more than 100 codes have benefited from Horizon 2020 support. In addition, Horizon
2020 funding has contributed to developing tailored solutions for more than 100 European
SMEs, increasing their innovation potential. This funding made it possible to raise
awareness of HPC and AI and provide high-end computing capabilities to SMEs through
the PRACE SHAPE44
initiative, and to provide SMEs with cloud-based HPC services
through Fortissimo’s marketplace.37
Transversal aspects: The most important transversal aspect in Horizon 2020 was training, provided mainly by the Centres of Excellence and PRACE, with a total estimated value of around EUR 23 million. Both domain-specific training and training on general HPC topics were provided to more than 13 000 people.
2. Conclusions
Horizon 2020 and CEF funding have already produced very good results in the different
segments of the value chain. However, the impact has not yet changed the worldwide market
position of the European players and has not reached some parts of the value chain, such as
the industrial HPC application sector. This is because the investment has not been substantial
enough to impact the complete R&D value chain (except for investments in the European
Processor Initiative).
Projects are not managing to transform the day-to-day use of HPC, as the gap between the technologies developed (TRLs 5-7) and their use in production environments remains large. The majority of project partners are research organisations (accounting for 75% of the total funding of the FET projects), whose main driver is not the industrialisation of the results achieved. Adding projects that target the TRL 6/7-9 gap and taking a more programme-based approach should help to maximise the impact of future investments.
Enhanced and sustained training efforts will also be a major factor in fully exploiting not only
the next EuroHPC-funded pre-exascale and exascale supercomputers but also future
computing generations. Moving from simulation-centric HPC to integrating HPC in a full continuum of IT infrastructure, from the edge to HPC, is a major challenge. This will require developing a strong relationship between the HPC community and other ecosystems such as big data, AI and the Internet of Things (IoT). Here Europe can be a worldwide leader if the momentum created by Horizon 2020 continues.
2.5 Lessons learnt from the EuroHPC JU governance and administration
The EuroHPC JU has already acquired solid working experience,xix with extensive discussions among stakeholders on governance, administration and other operational and
xix
Examples include: the 13 meetings of the EuroHPC JU Governing Board with the regular participation of delegates from the European Commission and the 32 Participating States; the JU's advisory groups (RIAG and INFRAG), which have already held numerous meetings with the active involvement of the two Private Members (ETP4HPC and BDVA); the selection of the 8 hosting sites and the launch of the procurement of the 8 EuroHPC supercomputers; and the launch of the JU's 2019 and 2020 calls.
implementation aspects. The main lessons learnt so far can be summarised as follows:
Simplification of the co-funding scheme: The combination of EU and national funds
in the different EuroHPC activities needs to be simplified and optimised.
Recommendations include a single set of eligibility criteria for participation (instead
of 32 different national eligibility criteria); implementation of central management of
all financial contributions (except in duly justified cases), in line with Article 8(1)(c)
of the proposed Regulation establishing Horizon Europexx
; and flexibility in
introducing different percentages of EU and national funding to fund participants in
R&I activities.
More flexibility in defining the acquisition time and technology of new supercomputing systems, e.g. by not fixing in the Regulation the acquisition dates of future EuroHPC systems with a particular performance level, since this may clash with the market reality of available technologies. The target performance could instead be set when the acquisition decision is taken. In addition, the expected “performance” of the supercomputers should be defined flexibly, for example as a combination of computing power, application performance gains, expected ranking in world-class supercomputing categories, etc.
More flexibility in the resource allocation of the EuroHPC systems: By the end of
2020, the available computing power at the Union level will increase almost eight-fold.
There will then be a need for more choice and flexibility when allocating the EuroHPC
computing resources, notably by considering new user requirements taking advantage of
novel computing architectures (see next Chapter). Currently, on the advice of INFRAG, the EuroHPC JU Governing Board is drawing up more flexible resource allocation and priority access policies. Criteria under consideration include different access policies for scientific and industrial players; direct access for key European projects and initiatives without a call for expressions of interest (e.g. for HPC Centres of Excellence, HPC Competence Centres or other key EuroHPC projects); priority access for crisis management situations; etc.
Well-defined access policies for the industrial/commercial use of the EuroHPC infrastructure that would enable the full exploitation of the EuroHPC capabilities under either pre-competitive research access or commercial terms of use. In addition, such policies would need to provide an enhanced quality of service and address the usability, trust and security needs that industrial users have when accessing secure HPC resources.
A clearer framework for collaboration with PRACE and GEANT. Specific
arrangements may need to be established with PRACE for the tasks related to the
allocation of the access time to the JU’s systems, for training and dissemination activities,
and for developing a fully cloudified and federated HPC service and data infrastructure in
Europe. Similarly, the EuroHPC JU could make use of the experience of GEANT for
procuring dedicated connectivity for the EuroHPC supercomputers.
A better definition of the different contributions to the activities of EuroHPC. For
example, there is a need to further define the in-kind contributions of the Participating
xx
Council of the European Union (7942/19 COR 1 of 29 March 2019) - General Partial Agreement text on
Horizon-Europe 9, Art 8.1.(c) reflecting the common understanding between Council and Parliament
States and of the Private Members to the EuroHPC JU, and to better define the costs that EuroHPC can and cannot support for the acquisition and operation of supercomputers.
More flexibility in the contribution of Private Members and other private actors to
the activities of the EuroHPC JU, notably by including novel forms of cooperation, for
example co-funding specific HPC infrastructure for industrial use.
3. HPC in a fast evolving environment
This chapter identifies the evolution of key market, socio-economic and technological drivers
that will affect the way HPC is developing world-wide and in the Union in the next 5-7 years.
Part of this analysis draws on the Vision Paper prepared by the EuroHPC JU Industrial and Scientific Advisory Board45. The chapter analyses in particular the following main
drivers:
The increasing importance of HPC for a wide range of applications
The HPC market drivers
The evolution of the user requirements (from science and industry)
The convergence of HPC with AI and related technologies
The evolution of supercomputing technologies
New computing paradigms: Neuromorphic and Quantum computing
The training and skills required in the next decade
The new political guidelines and Commission priorities for the period 2019-2024
3.1 The increasing importance of HPC for a wide range of applications
During the last decade, political leaders in the developed countries have recognised the key
importance of HPC-powered applications to help transform their economies and societies.
Mastering HPC applications has become indispensable to advancing science, boosting
industrial competitiveness, improving the quality of daily life for citizens, and reinforcing
national security and technological autonomy.
Excellence in the development and use of applications exploiting the supercomputing
capabilities will be a major global driver for HPC. More than 800 HPC applications are used
across all scientific fields, branches of government and virtually all industries and sectors, and
Europe has traditionally been a world leader in this domain4. Several of the most used HPC codes are European (e.g. OpenFOAM for computational fluid dynamics, GROMACS for molecular dynamics, VASP and Quantum ESPRESSO for quantum materials modelling, Simulia Abaqus for finite element analysis and computer-aided engineering). European companies
have pioneered the industrial use of HPC applications such as crash test codes in automotive,
in-silico drug design and testing in pharma, or computer aided design in aerospace. The
importance of HPC applications as a key driver can be illustrated as follows:
HPC and the data revolution
The convergence of HPC, Artificial Intelligence (including machine and deep learning), big data and high performance data analytics (HPDA), and the cloud is already, and will continue to be, the main innovation driver in the “data revolution”, creating entirely new possibilities for HPC-powered applications to extract useful and usable knowledge from the huge amount of raw data produced every day. HPC is increasingly becoming the “engine” that powers this data revolution, and a key element in fulfilling the ambition of putting Europe in the driving seat of the global data economy. Section 3.4 analyses in further detail the convergence of HPC with Artificial Intelligence and related technologies.
HPC and industry’s innovation potential
HPC is a mainstream technology for the digital transformation of European industry. The use
of HPC applications and tools is expanding to all industries as it becomes more accessible
with today's and future broadband networks. HPC applications have traditionally enabled “computationally aware” industrial sectors such as manufacturing to move up into higher-value products and services. In particular, the use of HPC applications and services over the cloud will make it significantly easier for SMEs that lack the financial means for in-house HPC investment to develop and produce better products and services.
HPC and scientific leadership
HPC is now firmly established as the third pillar of modern research, alongside theory and
experimentation. HPC applications have become an essential component in nearly every field
of scientific research, thanks to steadily increasing computing power and widespread
availability of HPC infrastructure, in particular since the 1990s.
The applications of HPC in science are countless: in fundamental physics, advancing the frontiers of knowledge of matter or exploring the universe; in material sciences, designing new critical components for the pharmaceutical or energy sectors; in fluid dynamics and adaptive control problems, for the design of airplanes or the planning of smart cities; in modelling atmospheric and oceanic phenomena at planetary level; etc. The tremendous impact of HPC-powered bioinformatics is probably most visible in life sciences and medicine, for example in understanding the generation and evolution of epidemics and diseases and in their early detection and treatment.
Supercomputers will be essential for the success of the European “1+ Million Genomes” initiative launched in 201846, which aims to create a data space with access to at least 1 million sequenced genomes in the EU by 2022. Supercomputers will make possible, for example, the fast identification of genetic disease variants by processing billions of DNA sequences, or the screening of hundreds of billions of molecules in a few hours (rather than months or years) to identify potential candidates for a treatment or a vaccinexxi.
HPC's role in societal challenges and policy making
Citizens expect sustained improvements in their everyday life, while at the same time society
is confronted with an increasing number of complex challenges – at the local urban and rural level as well as at the planetary scale. HPC applications are a strategic resource for policy-making, helping us to understand our ever-changing world and providing much-needed evidence for designing efficient solutions to many global challenges.
The inter-disciplinary nature of HPC and the wide range of applications provides policy
makers with powerful tools in critical areas, for example: Weather and Climate change;
Health, demographic change and wellbeing; Secure, clean and efficient energy; Smart, green
xxi
The role of HPC applications in the COVID-19 global crisis is illustrated in Annex IV of this document.
and integrated urban planning; Food security, sustainable agriculture, marine research and the
bio-economy; and crisis management.xxi
HPC’s role in the Union’s technological autonomy and security
Supercomputers are essential for national security, defence and technological autonomy. They
are already used to increase cyber-security and in the fight against cyber-criminality, in
particular for the protection of critical infrastructures. The exponential rise in the economic losses associated with cybercrime (expected to reach EUR 5.4 trillion by 2021) reveals the need for developing secure applications and infrastructures that can anticipate and promptly react to an ever-increasing threat.
In cyber-security, HPC-powered applications will unlock the power of security tools thanks to their capability to speed up complex AI- and machine learning (ML)-driven software. Hybrid techniques combining HPC and AI (in particular ML techniques) will increasingly be used for more effective threat analysis and security event correlation, contributing to the development of self-healing and self-adaptive cyber-security systems: detecting anomalous system behaviour, insider threats and electronic fraud; detecting and fighting cyber-attack patterns very early (in a matter of hours instead of days) or potential misuse of systems; or taking automated and immediate actions even before hostile events occur.
HPC applications in combination with AI will be a game changer in defence and security.
Both the US and China have already closely linked HPC and AI developments in their defence programmes. US President Trump's executive order on Maintaining American Leadership in Artificial Intelligence47 makes the HPC-AI link explicit, directing the administration to prioritise the allocation of high-performance computing resources for AI-related applications.
3.2 HPC Market drivers
The impact of HPC in the most developed economies of the world is impressive.xxii HPC makes a growing contribution to the digitisation of critical industrial sectors that account for c. 53% of the Union's GDP. Investments in HPC have shown particularly strong growth rates in recent years. The overall HPC market is expected to reach EUR 39.6 billion in 2023, including expenditure on HPC servers, storage, software and technical support. In particular, expenditure on HPC servers will grow from EUR 12.33 billion in 2018 to EUR 17.9 billion in 2023. Europe maintains a relatively constant share of around one-quarter (c. 26%) of the overall spending on all categories of HPC systems, while the USA remains the world leader in HPC investments. The main market developments can be summarised as follows:
Strategic Investments in HPC
The return on investment (ROI) in HPC is excellent in the EU: on averagexxiii, every euro invested in 2018 returned EUR 43.2 in profits or cost savings and EUR 260 in revenues.
xxii
The data source in this section is Hyperion Research 2019 and all figures and sources of the market data
provided in this section can be found in Annex I of this document, unless otherwise indicated.
xxiii
This includes public and private investments
Spending levels in the strategically important high-end market of systems costing over EUR 2.2 million are an important measure of HPC leadership. The current situation of the EU is not very satisfactoryxxiv: it amounts to less than one third of the US's and less than half of China's supercomputing power in the world top-500 supercomputers.
The exascale race and the ICT market
Exascale performance is now driving investments in the high-end segment of
supercomputers. Between 2020 and 2025, the global spending on pre- and full-exascale
supercomputers is expected to total about EUR 8 billion.
Advances in exascale HPC technologies will affect a market worth EUR 1 trillion, out of
the EUR 5 trillion broader ICT market. For example, low-power processors and
accelerators for high-end computing will drive developments in technologies like Internet
of Things (IoT), cybersecurity, AI, robotics and augmented and virtual reality.
US vendors have almost 100% of the world-wide processor market, with Intel holding
more than 95% in several categories of processors (CPU, GPU, etc.).xxv
In the HPC sector,
Intel’s share is lower, and around 95% of accelerators in the top 500 supercomputers are
from NVIDIA.
The race to exascale and the convergence of HPC with AI and cloud technologies are changing the market, putting more emphasis on low-power processors, accelerators and GPGPU computing in data servers. This fosters the use of ARM-based technologies and allows US companies like AMD and NVIDIA to gain an increasing market share.
In the “top-500”, Chinese companies sell around 65% of the systems. Indigenous Chinese processors are present in only a few of those systems, but Chinese technology for exascale supercomputers is expected to enter the market in the next few years.
Solutions based on open hardware (in particular RISC-Vxxvi) are gaining momentum as a credible alternative to proprietary solutions for processors and accelerators across the computing continuum. Any company can use RISC-V without fear of losing access in the future (for instance due to commercial bans on technology exports). RISC-V will create opportunities for non-US companies to break the near-monopoly in chip design (e.g. the European RISC-V accelerators in the EPI project and other activities).
European HPC technology supply and market
On HPC supply (all segments), the US was the world leader in 2018 with 67% of global
HPC sales, followed by China (16.2%), Japan (3.5%) and the EU (c. 1.1%). No European
company supplies key components like general processors or accelerators.
Participation of EU vendors in the global HPC market is still weak. Out of all “top-500”
supercomputers, only 28 (5.6%) are supplied by EU manufacturers. 26 were supplied by
xxiv
This does not take into account the planned acquisition of EuroHPC JU in 2020 that will be visible in 2021.
xxv
CPU: Central processor Unit; GPU: Graphic Processor Unit, GPGPU: General-purpose computing on
graphics processing units.
xxvi
RISC-V is an open-source hardware instruction set architecture based on reduced instruction set computer
principles
one main EU vendor (Bull-Atos); 19 of these 26 supercomputers were purchased by clients in the EU and only seven by clients outside the EU.
Out of the 79 HPC systems in the top-500 list that are located in the EU, only 21 (26.5%)
were supplied by European manufacturers. This means that almost 75% of the European
HPC market is being supplied by non-EU manufacturers.
The markets for HPC, Artificial Intelligence and Cloud are converging
Expenditure related to the use of HPDA and AI will be the fastest-growing market segment for HPC. By 2023, the overall HPDA-AI market for HPC servers is expected to reach about EUR 5.76 billion, or about 32% of the EUR 18 billion worldwide market for HPC server systems, with a five-year compound annual growth rate (CAGR) of 15.4% (see the indicative calculation at the end of this section). The subset of HPC-based AI (machine/deep learning and other) is expected to reach EUR 2.43 billion by 2023, for a 2018-2023 CAGR of 29.5%.
Worldwide spending on public cloud services and infrastructure is expected to reach EUR
333 billion by 2022. The use of cloud for HPC workloads will jump substantially in the
next few years, especially due to the growth in hybrid cloud deployments. Worldwide, the
proportion of sites exploiting cloud computing to address parts of their HPC workloads
has grown to over 70% in 2019, helping the “democratisation of HPC”. By 2023, expenditure on cloud usage fees for HPC will reach around EUR 7.5 billion.
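For reference, the CAGR figures quoted above follow the standard compound-growth formula shown below; the worked number is purely indicative, back-calculating the implied 2018 baseline from the quoted 2023 HPDA-AI figure rather than citing an additional market estimate:
\[
\mathrm{CAGR} = \left(\frac{V_{\mathrm{end}}}{V_{\mathrm{start}}}\right)^{1/n} - 1
\qquad\Longrightarrow\qquad
V_{2018} \approx \frac{5.76}{(1+0.154)^{5}} \approx \mathrm{EUR}\ 2.8\ \mathrm{billion}.
\]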
3.3 Evolution of user requirements
In the next 5-10 years, the requirements of private and public users for supercomputing and data infrastructures are expected to evolve at a rapid pace. An analysis carried out in the Vision Paper45 shows that users require a computing and data environment allowing seamless access to, and execution of, complex workflows across the European hyper-connected supercomputing network. This environment would serve interdisciplinary user communities using algorithms and data to generate knowledge. To this end, future supercomputing systems would need to provide many new computing and data applications and services, such as the following:
Data-driven computingxxvii
and compute-intensivexxviii
applications: User demand is
now split almost equally between these two workload types, with many areas utilising
combinations of both, e.g. engineering.
Big-data management: Storage and I/O requirements are expected to grow even faster
than computing needs, in particular for data-driven and deep learning applications. These
requirements would have to be coupled with provisioning of a large-scale end-to-end data
e-infrastructure to collect, handle, analyse, visualise, and disseminate data.
Real-time and interactive computing: future applications that depend on human
intervention will require real-time and/or interactive computing, with examples including
product design, medical applications, smart cities, smart grids and digital twins.xxix
xxvii
Data-driven computing (e.g. deep/machine learning) is characterised by low arithmetic intensity, irregular
memory access, and fine grain recursive computations. Memory, network and data storage performance are
the rate-limiting factors.
xxviii
Compute-intensive applications – e.g. material science – are characterised by high arithmetic intensity and
regular memory access and have a rate limiting factor governed by floating-point throughput.
Urgent computing is closely related to real-time and interactive computing, the main
difference being its lack of predictability. Examples are typically disaster management
and decision support in the event of e.g. floods, blackouts or traffic jams.
The design of any future exascale and post-exascale supercomputer would need to be based
on a co-design approach that considers enhanced synergies between technology suppliers,
industrial and research users (in particular, advanced early adopters of new technologies) and
algorithm and application developers.
Evolution of industrial user requirements
One of the critical success factors of HPC usage in industry is the capability to adapt to the
specific industrial needs, which differ significantly from those of scientific users.
Regarding access to computing resources, the main differences are time and access
procedures, quality and type of services, security and data protection, and intellectual
property: Academic users have longer research timescales, usually months or years. They can
expect access decisions months after applying. In stark contrast, industrial users expect short
access time to computing resources to reduce lead-time (from the demand to the final
delivery), with access primarily provided through the cloud. They also expect flexibility to adapt to small changes, support for access configuration and/or set-up, and dedicated services and capacity for managing peaks of computing demand.
Industry also requires intellectual property protection for the results and, most importantly,
data security and confidentiality. This concerns the secure management of the whole data lifecycle: from the initial datasets, through the intermediate datasets produced during the computing/simulation process, to the final results. Finally, industry players require guarantees
of safe access to supercomputing capabilities when developing new industrial capabilities
based on confidential studies.
Some additional points identified in the Vision Paper45
are the following:
Large companies have a unique role in the HPC ecosystem, as they purchase many of the
world top-500 computers. They are generally already involved in HPC and collaborate in
many areas with public research. Many of them are already accessing supercomputers of
petaflop performance and their main requirement is for a flexible, competitive system
tailored to their expected use.
SMEs: Most European SMEs (and start-ups) are only now moving to numerical simulation and data analytics. They generally need support, training and coaching, through dedicated structures and activities, to make the switch to the new HPC-powered world.
For all industrial players there is an urgent need to adapt software codes to new computing
architectures. In some cases this can be difficult due to certification issues (e.g. airplane
simulations). Industry also needs to develop innovative applications capable of exploiting not
only the new architectures but also new paradigms (in particular AI-based ones). Digital twins are an example of the radically transformative role of HPC.
xxix
Digital twins are exact digital replicas of physical entities, products and constructions that reflect the static properties as well as the evolving behaviour – for more, see the section “HPC and industry's innovation potential” in Annex 3.
Industry and digital twins: A new generation of digital twins requires the powerful, agile
computing capabilities provided by HPC to facilitate global mobility and collaboration,
combining different technologies such as mixed reality tools, cloud rendering, real-time
simulation and analysis, IoT and deep learning/AI. New HPC-powered digital twins are able
to significantly accelerate product development and manufacturing processes, by generating
digital representations of their end-to-end business processes while providing new ways of
collaborating simultaneously in the virtual and physical world.
Finally, one of the biggest barriers to the efficient uptake and use of HPC by industry users is the limited awareness of the concrete benefits that HPC technologies can bring to them; wide outreach and dissemination are therefore needed. Entrepreneurs will be encouraged to invest in HPC if its value is demonstrated through successful business plans, innovations, etc. Proactive engagement at local, regional and EU level will have to be undertaken to effectively reach individual enterprises, industrial hubs, and SME networks and associations and, in particular, the network of Digital Innovation Hubs and the Enterprise Europe Network.48
In conclusion, industry users have specific usability, trust, and security requirements and need
secure HPC resources and HPC-powered platforms for industrial innovation. Any future HPC
initiative should address the following: specific access portals (e.g. cloud-based access), easy-to-use tools (e.g. for big data analytics), secure application workflows between industry
premises and a supercomputer, an appropriate high-bandwidth network infrastructure in all
European countries, etc. An important aspect is the certification of HPC centres as reliable
partners for industry, covering the aspects of usability, trust and security.
3.4 The convergence of HPC with AI
The convergence of HPC and Artificial Intelligence (AI) is critical for applications that rely
on big data and high performance data analytics (HPDA). Exascale computing in combination
with AI will have an enormous impact on the way computing is done.
By 2018, the amount of computational power used to train the largest AI models had doubled
every 3.4 months since 2012.49
AI and HPC are by their nature synergetic: for example, HPC
simulations generate huge amounts of data and AI techniques can make sense of it.
Conversely, HPC can be used to explain and understand AI methods, building trust in the
decisions made by AI. Annexes I and II present more details on key synergies and the market
evidence on the convergence of these two technologies.
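Purely as an indicative arithmetic illustration of what a constant 3.4-month doubling time implies (derived from the figure quoted above, not an additional data point):
\[
2^{12/3.4} \approx 2^{3.5} \approx 11.5,
\]
i.e. roughly an order of magnitude more training compute every year for the largest AI models, far faster than the historical Moore's Law doubling of roughly every two years.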
One of the factors facilitating this convergence is the need to deal with the explosion of data.
Besides the general growth of available data in the digital universe (e.g. more than 460 exabytes – one exabyte being 10¹⁸ bytes – will be generated every day by 2025)50, some key applications will require both extreme-scale computing capabilities and AI techniques to handle the huge volumes of data. The following examples illustrate the magnitude of the challenge:
Square Kilometre Array (SKA): the data production of SKA is estimated at 11 exabytes
daily, i.e. the same amount of data in a day as the entire planet produces in a year
today.51
Copernicus: Seven Sentinel satellites already in orbit deliver tens of terabytes of data every day. Six Copernicus services deliver information products whose ocean, atmosphere and climate model outputs rely on HPC. Copernicus is the biggest provider of Earth observation data in the world.52
Genome sequencing: As a single human genome takes up 100 gigabytes of storage
space, and more and more genomes are sequenced, an estimated 40 exabytes of storage
capacity will be required for human genomics by 2025.53
The High-Luminosity CERN Large Hadron Collider (LHC), the successor to the current
LHC, is planned to come online after 2025. By this time, the total computing capacity
required by the experiments is expected to be 50-100 times greater than today, with
data storage needs expected to be in the order of exabytes.54
In the more common version of the pairing, “HPC for AI”, AI uses an ecosystem that
requires HPC, from embedded systems, to edge computing systems and large computer
centres, all interconnected in a secure fabric that exchanges and processes huge amounts of
data. Autonomous vehicles are a perfect example: embedded low-power HPC
processors will enable real-time decisions in the vehicle, while less urgent decisions will be
made by interaction with edge computers. More strategic decisions will require complex AI-
powered models and simulations to be run on centralised HPC systems that feed on data
provided by vehicles and edge computers.
A symmetrical topic of importance is “AI for HPC”. This is less well developed, but the new
capabilities offered by AI will improve the development and deployment of HPC technologies
and solutions. In the traditional use of HPC for simulation, model-based approaches are being
challenged or replaced by hybrid modelling / AI techniques across many science and
engineering fields including physics, chemistry, and molecular biology. For example,
machine learning is used to control and drive complex HPC simulations, making them faster,
more accurate, and self-improving over time.
3.5 Evolution of supercomputing technologies
The trends in supercomputing for the next decade and beyond will be driven by
disaggregationxxx
, enabled by network improvements, and even greater specialisation, as
necessitated by the end of Moore's Law.55 This will be enabled by an open source computing
platform (hardware and software). Big data and AI techniques are key drivers in the global
race to master the next supercomputing frontier of exascale performance. The next generation
of supercomputers now under development in the USA and elsewhere is being optimised with
new processor types, memories, software and system designs to maximise HPDA and AI
workflows. An added benefit of this trend is that dedicated HPC architectures designed to
support such workflows and workloads would reduce the computing time for many tasks and
would therefore yield a more sustainable energy footprint.
The whole ICT technology landscape, driven by an exponential increase in big data and
AI services, displays performance shortcomings and limits similar to traditional HPC-
technologies, and ultimately related to the slowing of the Moore's law. As a
consequence, both big data and HPC markets are shifting toward a new hardware
landscape, where specialisation can be used to mitigate technology limits and meet the
increasing power and performance needs of applications (HPC and non-HPC). The
advent of open source software, and by extending the boundaries of open technology to
all aspects of the HPC system including hardware, provides the focus, flexibility and
xxx
Disaggregation is the separation into components, e.g. separating data-centre equipment, in particular servers,
into resource components to offer flexibility and ensuring optimal utilization.
freedom to build new, specialized systems that can meet new power and performance
requirements….45
Some of the main trends can be summarised as follows:
System architectures: Specialisation, heterogeneity, modularity and composability will
be the dominant paradigms at all levels of computer technology, allowing customised and
cost-effective supercomputing architectures that optimise the use of resources for a class
of applications. The future architectures of the exascale and post-exascale era will be
optimised by default to support heterogeneous modelling, simulation and AI tasks.
“The heterogeneous architecture that underlies El Capitanxxxi
is actually uniquely able
to host both artificial intelligence machine learning applications, and modelling and
simulation….We are already starting to think how to combine them to accelerate our
ability to simulate beyond the factor of ten that the hardware alone is going to give us
with El Capitan.” 56
Edge/fog computing: Further complexity arises with the federation of computing
resources between HPC systems and the "edge", consisting of datacentres, cloud
computing services, local clusters and data generation instruments as well as the IoT
devices (e.g. sensors, actuators, local computing systems). On-the-fly calculations at network level can reduce the need for expensive data transport.
High-speed intra-networking: This is at the core of HPC system design. Increasingly large systems require optical connections to span the large distances between the system racks. Furthermore, smart network technologies such as adaptive routing, dynamic network reconfiguration, network virtualisation, and in-network computation to reduce data movement and speed up collective operations will become highly important.
Reconfigurable computing: Being able to orchestrate the variety of computing resources will enable “reconfigurable computing” at system level, able to adapt to the requirements of individual users. Along with virtualisation and containerisation, each user can be assigned a “virtual cluster” meeting their specific needs.
Energy efficiency: Moore's law implies that energy and waste-heat densities grow exponentially too. New supercomputers need more electricity and have demanding requirements, i.e. 5-17 megawatts (MW) today and up to 50 MW in the near future, with switching loads of at least 10 MW turned on and off in milliseconds. Power management, energy efficiency, cooling and the recuperation of waste energy are key research fields for reducing the operating costs of these systems.
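To give an order of magnitude of the operating costs at stake, the following is a purely illustrative calculation assuming continuous operation and an electricity price of EUR 0.10 per kWh (an assumption made here for illustration, not a figure from this document):
\[
17\ \mathrm{MW} \times 8760\ \mathrm{h/year} \approx 149\ \mathrm{GWh/year} \approx \mathrm{EUR}\ 15\ \mathrm{million/year};
\qquad
50\ \mathrm{MW} \approx 438\ \mathrm{GWh/year} \approx \mathrm{EUR}\ 44\ \mathrm{million/year}.
\]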
3.6 New computing paradigms: Neuromorphic and Quantum Computing
Emerging computing paradigms will gradually find their way into the traditional supercomputing infrastructure, initially as specific accelerator components for certain applications and later on as main computing elements. Technologies that will progressively find their place in the computing continuum include CMOSxxxii scaling, RISC-V, 2.5/3D stacking, non-volatile memory (NVM), silicon photonics, memristive
xxxi
El Capitan is the US National Nuclear Security Administration (NNSA) supercomputer expected to reach
more than 1.5 exaflops, to be installed in Lawrence Livermore National Laboratory (LLNL) in late 2023.
xxxii
CMOS (complementary metal-oxide semiconductor) - semiconductor transistor technology.
devices, optical systems, analogue computing, dataflow architectures, “in-memory” computing, and more.
The effective use of non-traditional computing architectures, like quantum computers,
neuromorphic systems, digital annealers and data flow machines, requires close coupling and
interaction with HPC machines that can only be realised at system level in a modular and
composable manner.
Two novel computing approaches that are starting to show interesting complementarities with
HPC are neuromorphic and quantum:
Neuromorphic computing
The development of AI, in particular deep learning, has led to huge interest in neuromorphic architectures, which are inspired by a theoretical model of the neuron, or in “simulated annealing” processors. As more and more applications (or parts of applications) are mapped to this paradigm, it becomes worthwhile to develop specific circuits that implement only the operations and data paths required by this architecture.
Quantum computing
Quantum technology uses the properties of quantum effects – the interactions of
molecules, atoms, and even smaller particles, known as quantum objects. Quantum
computing is based on quantum bits (qubits), making it possible to compute millions of
possibilities in parallel, instead of one at a time as classical computers do. Quantum
computers could deliver impressive processing power for certain classes of computing problems, namely those related to prime number factoring or to large optimisation problems (representing a combinatorial explosion of computing possibilities). They would
make it possible to solve currently unsolvable problems such as the design of new
materials and drugs, the development of new medicines (conducting virtual drug trials or
analysing cancer cells to develop personalised treatments), cryptographic solutions,
complex logistics and scheduling problems, risk analysis calculations in finance, etc.
The first quantum systems, implementing up to 53 qubits, have already been built. However, several major technological advances will be needed to build a universal quantum computer with millions of qubits, which may not exist for at least one or two decades.
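As an indicative illustration of why qubit counts matter (a textbook relationship rather than a figure from the EuroHPC documents), an ideal register of n qubits spans a state space of 2^n complex amplitudes, so
\[
2^{53} \approx 9 \times 10^{15},
\]
meaning that even today's 53-qubit devices manipulate a state space that is already impractical to represent explicitly in the memory of classical supercomputers, while a universal, fault-tolerant machine is expected to require millions of physical qubits.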
Europe is investing major R&D efforts in the Quantum Technologies Flagship now
supported under Horizon 2020 and as of 2021 under Horizon Europe. Quantum computing
projects of the Flagship are developing some of the most advanced physical platforms in
the world. They are aiming to reach 50 to 100 qubits by mid-2021 and at least 500 qubits
by 2025, using different quantum technologies: trapped ion quantum computers,
superconducting quantum computers, and quantum computer prototypes based on other
technologies such as photonics and semiconductors, in combination with built-in quantum
error-correction approaches. They are also developing quantum software and
programming libraries, and applications addressing industrial use cases and solving other
concrete problems.
A good overview of the European roadmap in quantum computing technologies is
provided in the 2020 strategic research agenda of the Quantum Technologies Flagship.57
The software and applications challenge
Europe is the world leader in algorithmic and software development in many disciplines. Maintaining this leadership and fostering scientific breakthroughs and innovations will require significant investments in new algorithms and software optimisation tools, in order to fully exploit the potential of a modern HPC infrastructure.
One of the most critical aspects to consider in the exascale era is that most applications will
require new methods and workflows to exploit the available resources. Some applications will
continue to scale exceptionally well, while others are converging with data-driven
applications and machine-learning-type needs. Composability is the key: new approaches will
integrate codes and software components in complex and scalable workflows. New
programming environments and frameworks will enable the development of composable
codes with portable performance, and a higher level of abstraction, reducing the costs of
integrating new paradigms and closely matching architectural features of increasingly
diversified and specialised hardware platforms to come.
A significant effort will also be needed for new algorithmic developments. Next-generation computing requires an ambitious programme of algorithm development, integrated with infrastructure design and development and carried out over longer timescales. It also requires a critical system software stack with European components (i.e. a “European Open System Stack”) to best exploit the underlying architecture. This European Open System Stack will create an
open source environment for both hardware and software components, fostering access and
enabling the development of co-designed systems. This will also encourage additional
investments to generate new IPs in Europe (including licensed IPs), for example, open
software and repositories for scientific and industrial use, system tools and development
environments (e.g. debuggers, compilers, performance tools).
In the longer term, continued application leadership will use novel computing approaches
(e.g. quantum, neuromorphic) for which the necessary fundamental mathematical and
computer science algorithms are not in place yet. Special attention should be given to the
support of co-design applications where traditional programming environments are extended
to include new programming and performance tools and libraries for quantum computing.
3.7 Training and skills for the next decade
The Impact Assessment of the Digital Europe programme58,59
identified several systemic
issues in the ICT field and, by extension, in HPC: high demand for ICT workers, difficulties in recruiting ICT specialists, insufficient funding for digital reskilling, etc. Europe needs a
qualitative and quantitative leap in the education of its workforce in order to make it fit for the
digital age. There are clear indications that workforce availability and qualifications may be
the key bottleneck in the industrial and public sectors as well as in the academic system. The
efforts of PRACE since its start in 2010 have been importantxxxiii
but much more needs to be
done to alleviate the skills deficit.
Making the vision of European leadership in digital and HPC a reality will rely on attracting
the best talents, upgrading competences and skills throughout the European ecosystem, and
providing sufficient support to strengthen the knowledge base of HPC in Europe, with new competences, skills and profiles combining software expertise with an industrial understanding
xxxiii
PRACE has organised 652 training events gathering more than 16 000 participants (76.2% from academia, 4.6% from industry and 19.2% from government, non-profit and supercomputing centres), totalling 1 861 training days and nearly 50 000 person-training days. Since 2017, PRACE has developed 4 MOOCs with 8 online deliveries and 15 527 participants.
of frontier research in science and innovation. For example, to take full advantage of HPC,
Europe must train more researchers: there is a lack of computational scientists choosing to
focus on HPC. We still lack a practical strategy for integrating HPC into the already crowded
scientific and engineering curricula of European universities.
Targeted outreach, training and skill development actions, organised as part of the higher
education system, are needed to attract human resources to HPC and increase the workforce
skills and engineering knowledge in the European HPC ecosystem. Such actions would need
to cover, for example, generic and domain-specific HPC knowledge, computational science as
a career choice, and application and code development.
An important factor for the success of these measures is local access to skills development actions. Knowledge and expertise in advanced digital fields are not available in all regions of Europe. The combination of activities at both European and national/local level should ensure that access to the development of such expertise is made available in every Member State and its regions.
3.8 New political guidelines and Commission priorities for the period 2019-2024
Actions linked to HPC in the years to come will have to take into account the policy priorities
highlighted in President von der Leyen’s political guidelines60
for the period 2019-2024:
A Europe fit for the digital age – achieving technological autonomy: The HPC
strategy should contribute to the Union’s digital autonomy. It should ensure that Europe
develops an autonomous supply of critical advanced computing infrastructures,
technologies, and knowledge, and it should help provide the supercomputing and data
capacities that many key scientific and industrial applications need to fully exploit the data
revolution, in close combination with other key digital technologies, e.g. AI, HPDA,
cybersecurity and blockchain.
An economy that works for people – Digital transformation of the economy and
European leadership in the data economy: Europe’s HPC strategy should aim to
achieve excellence and maintain European world leadership in key HPC applications for
European industry (including SMEs), science, and the public sector. It should support the
next generation of industrial environments (e.g. using big data analytics, AI and IoT for
advanced digital twins) and enable industrial HPC codes, applications and software to
exploit the performance of current and future supercomputers. The strategy should also
address the digital divide, ensuring access to the European HPC ecosystem wherever users
are located across the EU and supporting Member States in providing local support for
HPC competence, knowledge and skills. This includes access to supercomputing
infrastructures, services and solutions adapted to industry’s needs (including SMEs),
easing and fostering the transition towards a wider uptake of HPC in Europe.
A European Green Deal – addressing global challenges: The strategy should contribute
to addressing the Sustainable Development Goals (SDGs) and in particular our
environmental, climate, and other big societal challenges as outlined in the Communication on the European Green Deal61
. The Destination Earth initiative5
will bring
together European scientific and industrial expertise to develop a very high precision
digital model of the Earth. This initiative will offer a digital modelling platform to
visualize, monitor and forecast natural and human activity on the planet in support of
sustainable development, again supporting Europe’s efforts for a better environment as set
out in the Green Deal. As required by Destination Earth, HPC-powered simulations and
applications will provide the tools to design efficient solutions transforming the increasing
number of complex environmental challenges into opportunities for social innovation and
economic growth. The EuroHPC JU is already setting the pace worldwide in the
development of low-power technologies for HPC that can be applied in larger sectors of
the ICT landscape, helping to reduce the carbon footprint of ICT solutions, for example
low-power processors and accelerators. Greener computing should be targeted through energy-efficient supercomputers and data centres, using, for example, dynamic power-saving and re-use techniques such as advanced cooling and the recycling of the heat produced.
A stronger Europe in the world: HPC is a strategic priority for Europe and will be key
to its national security, defence and technological autonomy. Particularly in combination
with AI and cybersecurity technologies, HPC will be crucial in helping the Union respond
to diverse and unpredictable security challenges. For example, supercomputers are
essential for nuclear simulation and modelling, protection of critical infrastructures, new
cryptographic solutions, and the fight against terrorism and crime (including cyber-
criminality and cyber-war).
In addition, in February 2020 the Commission presented its ideas and actions for shaping Europe's digital future62, to which the EuroHPC JU activities can make an important contribution:
Europe as a trusted digital leader: The HPC strategy should contribute to the Union’s
leadership in digital technologies that work for people, for a fair and competitive
economy, and for a sustainable society, as outlined by the Commission’s communications
on “A New Industrial Strategy for Europe”63
and “An SME Strategy for a sustainable and
digital Europe”64
. HPC is a key technology contributing to the Union’s strategy to become
an innovation-driven, value-based and inclusive digital economy and society.
Europe as a leader in trustworthy Artificial Intelligence: HPC is a critical tool for building trust in complex AI-based solutions, for example with massive simulations to
evaluate the associated risks and increase transparency and traceability of such solutions,
or with supercomputer-based support to certification and other features such as respect of
fundamental rights or non-discrimination.
Europe as a leader in the data economy: HPC is the “engine” that powers the data
revolution, and a key element to fulfil the ambition of putting Europe in the driving seat of
the global data economy as outlined by the European strategy for data1. Next-generation HPC infrastructures and technologies are key to supporting trustworthy and energy-efficient cloud-based solutions and to exploiting European public data spaces for the
benefit of businesses, researchers and public administrations.
Finally, the Staff Working Document “Identifying Europe's recovery needs”65
accompanying
the Commission Communication “Europe's moment: Repair and Prepare for the Next
Generation”3 identifies the HPC ecosystem as one of the key digital value chains with the potential to boost productivity and innovation, and one that requires considerable additional investment. Such investments in the HPC ecosystem will be one of the priorities of the
European recovery instruments (“Next Generation EU”) outlined in the Communication “The
EU budget powering the recovery plan for Europe”.66
4. The Union’s HPC strategic approach for the next MFF (2021-2027)
4.1 Rationale for a new mission of the EuroHPC JU in the next MFF
The Union needs to continue its ambitious strategic approach in HPC to support the building
and optimal use of the digital capacities that underpin economic prosperity and social
development, and bring the benefits of digital transformation to all European citizens and
businesses across the Union territory, including in less developed areas. During the past few
years, political leaders in Europe, but also in the USA and China, have recognised the ability of
leadership-class supercomputers to help transform economies, societies, and understanding of
the world. HPC has an increasing role in advancing science, boosting industrial innovation,
and improving people’s daily lives.
Europe's scientific capabilities, industrial competitiveness and technological autonomy
depend on unrestricted access to leading HPC and data technologies and full control over
world-class infrastructures and data, in order to keep pace with the growing demands and
complexity of the problems to be solved. In particular:
Impact on society: HPC is a strategic resource for policy-making. It helps us understand an ever-changing world and provides policy-makers with the tools to design efficient solutions addressing many complex global challenges such as global warming and climate change.
HPC is an essential technology for transforming these challenges into innovation
opportunities for growth and jobs. For example, HPC can be used to find ways of
providing secure, clean and efficient energy (e.g., evaluation of carbon reduction
measures, simulators for fusion energy, design of performant photovoltaic materials or
optimising turbines for electricity production); smart, green and integrated urban planning
(water and air quality, pollution control, traffic planning); and food security, sustainable
agriculture and the bio-economy (optimising the production of food and analysing sustainability factors such as pest and disease control, etc.).
Impact on economy: The convergence of HPC with AI, big data, HPDA and the cloud is a main innovation driver in the data economy. It creates entirely new possibilities to
extract useful and usable knowledge from the huge amount of raw data produced every
day. The computing power of HPC is the “engine” that powers the data economy. HPC is
an enabler of novel leading-edge technologies, applications and solutions that open new
opportunities for digitising European science, industry and the public administrations,
benefiting all areas of the economy in all regions of Europe. Economic sectors relying on
HPC include manufacturing, health and pharmaceuticals, automotive, oil and gas,
aviation, and chemicals: these account for 53.4% of the Union’s GDP, encompassing EUR
7.56 trillion in value.xxxiv
Impact on digital autonomy: Given the impact that digital technologies are having on our economy and society, the EU needs to ensure its strategic digital autonomy by securing access to essential supercomputing infrastructures and state-of-the-art HPC technologies. The availability of world-class HPC resources and technological knowledge in Europe will encourage researchers and innovators to stay in Europe and ensure that data produced by EU research and industry is processed here, instead of moving to other regions where high-end data and computing capabilities are available.
xxxiv
See Annex I “Market Analysis and Investments”.
Impact on industry’s innovation potential: HPC is today a mainstream technology for
the digitisation of industry. The use of HPC, in particular combined with AI and cloud
technologies, is expanding to all industries as current and future broadband networks
make it more accessible. HPC has enabled “computationally aware” industrial sectors like
engineering and manufacturing to move up into higher value products and services.
Moreover, HPC can play a radically transformative role in industry, paving the way to novel applications, for example a new generation of digital twins that use the powerful, agile computing capabilities of HPC to facilitate new ways of combining the virtual and physical worlds.
Impact on science: Over the past half century, the new domain of scientific computing
has become the third pillar of modern science, extending and complementing theory and
experimentation. The applications of HPC in science are countless, and it has become an
essential component in nearly every field of scientific research. Many recent
breakthroughs would not have been possible without access to the most advanced
supercomputers, for example the Nobel Prizes for Chemistry in 2013 and Physics in 2017.
Impact on EU security, defence and national security: HPC is recognised as a national strategic priority by the most powerful nations of the world. Supercomputers are at the forefront of nuclear simulation and modelling, and of the fight against cyber-criminality and for cyber-security, in particular for the protection of critical national infrastructures. Supercomputing is a new weapon in cyber-warfare, and is also increasingly used in the fight against terrorism and crime, e.g. for face recognition or for detecting suspicious behaviour in crowded public spaces.
Pursuing a common strategic EU approach in HPC is essential for realising the Union’s
and its Member States’ ambition to ensure a leading role and technological autonomy in
the digital economy. The EuroHPC JU will be a key instrument for implementing this
ambition.
As shown in Chapter 2, the EuroHPC JU has started to deliver on its mission after less than 18 months of operation. These first achievements confirm what the Union can accomplish when it acts and pools resources together with its Member States.
The continuation of the EuroHPC JU in the next MFF (2021-2027) would permit the Union and its Member States not only to consolidate but also to amplify considerably these first achievements, to the benefit of the whole society and economy. However, it would be necessary to adapt the JU's purpose to address the enormous challenges posed by the drivers for the next 10 years, as analysed in Chapter 3. The conclusions of the assessment of the EuroHPC JU in the “Impact Assessment Study for Institutionalised European Partnerships under Horizon Europe - Candidate Institutionalised European Partnership in High-Performance Computing (Final Report)”30 confirm that an Institutionalised Partnership under Art. 187 TFEU is the preferred option for the continuation of the EuroHPC JU, showing higher overall benefits than the other options.
4.2 The mission of the EuroHPC JU in the next MFF
The mission for the EuroHPC JU for the next decade would be: to develop, deploy, extend and maintain in the Union a world-leading federated, secure and hyper-connected supercomputing, quantum computing, service and data infrastructure ecosystem; to support the production of innovative and competitive supercomputing systems based on a supply chain that will secure components, technologies and knowledge, limiting the risk of disruptions, and the development of a wide range of applications optimised for these systems; and to widen the use of this supercomputing infrastructure to a large number of public and private users, supporting the development of key skills for European science and industry.
The EuroHPC JU would realise this ambitious mission by ensuring that the Union enjoys world-class supercomputing and data capabilities commensurate with its economic potential, matching the needs of European users, and with the required technological autonomy in critical HPC technologies.
The EuroHPC JU should put in place an all-encompassing approach that secures the commitment and support of national and EU investments, with the critical participation of the Union, the Member States and the Private Members, and collaboration with other key European players such as PRACE and GEANT.
A revised Council Regulation for the continuation of the EuroHPC JU will need to be adopted during 2020 to implement the above vision and mission, using the financial support from the relevant programmes of the next MFF without interrupting the EuroHPC JU's activities. This Regulation should also incorporate the lessons learnt since the establishment of the EuroHPC JU (e.g. on governance and administration), as discussed in section 2.5 of this document.
In the revised Regulation, which covers the period 2021-2033xxxv, the EuroHPC JU will need a clear mandate to address the following objectives involved in its overall mission:
Infrastructure Investment:
To invest in a secure, demand-oriented and user-driven world-class supercomputing (including quantum computing), service and data infrastructure (composed of the best existing computing and networking technologies, European where possible) for seamlessly providing advanced computing and data services to public and private users in Europe.
The first objective of the EuroHPC JU would be to support investments in advanced and highly interconnected supercomputing capabilities all over Europe, from petascale to exascale and post-exascale, integrating novel capabilities such as neuromorphic and quantum computing approaches. In particular, the EuroHPC JU would carry out ambitious actions to integrate quantum technologies in HPC infrastructures. By doing so, the JU will be able to meet the needs of European users and their applications, and to encourage a thriving scientific and industrial innovation ecosystem in Europe.
This objective has a particular focus on:
– The acquisition and deployment of a leading-class supercomputing and data
infrastructure;
– The integration of novel computing approaches and technology capacities as they
become available in hybrid infrastructures, e.g. quantum computing infrastructures;
– The hyper-connectivity of the infrastructure with state-of-the-art networking
technologies to securely interconnect all EuroHPC supercomputers and make them
widely accessible across Europe.
xxxv
This period covers the next MFF (2021-2027), the depreciation period of any supercomputer(s) that the JU may acquire at the very end of the MFF (typically 5 years), and the period required for winding up the JU.
Federating the supercomputing and data infrastructure, interconnecting it with the
European common data spaces, and providing EU-wide services to a wide range of users
The second objective is related to the federation of the supercomputing and data infrastructure
and its secure interconnection with the European common data spaces and cloud ecosystem
for seamlessly providing advanced computing and data services to public and private users in
Europe.
This objective will have a particular focus on:
– Federating the hyper-connected national and European HPC, quantum service and data
resources into a common platform, able to offer resources, tools and access services at
European level (for example, cloud-based HPC, HPDA tools, interactive/real-time
services, etc.);
– Interconnecting the federated supercomputing, quantum service and data infrastructure
with the European public data spaces and cloud ecosystem for seamless service
provisioning to a wide range of public and private users in Europe.
Technology Ecosystem development:
To further develop and maintain a competitive ecosystem in Europe, contributing to the technological autonomy of the Union in the digital economy, by supporting the development of advanced future computing technologies and architectures and their integration into leading supercomputing systems, and by supporting advanced applications optimised for such systems.
The third objective of the EuroHPC JU would be to support R&I activities in (i) next
generation low-power supercomputing technologies, innovative software and advanced
supercomputing systems for exascale and post-exascale computing that integrate these
technologies as well as other emerging supercomputing platforms (neuromorphic or
quantum); and (ii) innovative applications for public and private users that exploit the
capabilities of the new supercomputing systems while addressing in particular the emerging
convergence of HPC with AI, HPDA and cloud technologies.
By doing so, the JU will enable a European supply chain that ensures the development of components, technologies and knowledge, limiting the risk of disruptions in a wide range of key technology and application areas that reach beyond HPC and, in the long run, feed broader ICT markets with EU-made technologies. It will also support the HPC scientific and industrial user communities in undergoing a digital transformation and boost their leadership and innovation potential.
This objective has a particular focus on:
– The development and integration of technology elements in the full value chain, from
the processor components, basic software and tools, programming environments etc.
all the way to critical applications for science and industry;
– Ensuring technological autonomy in critical technologies, infrastructures and
applications (including e.g. cybersecurity and defence applications);
– Fostering a low-energy consumption approach to the development of HPC technology
and supercomputing systems;
– Ensuring European R&I activities are linked with the development, acquisition and
deployment of leading-class supercomputers and other infrastructure based on
European technology and components. This is related to the strong need to create a
chain that runs from R&I to the delivery and operation of world-class HPC systems
co-designed by users and suppliers in Europe;
– Using specific innovation procurement or targeted actions that combine financial
support of the necessary non-recurring engineering costs (R&I) with the acquisition of
the resulting operational supercomputers;
– Keeping in the EU the intellectual property (IP) generated by EuroHPC-funded
activities, and supporting the commercialisation and exploitation of this IP to benefit
the Union (subject to conformance with the relevant IP framework of the
corresponding funding programme);
– Fostering the development and use of scientific, industrial and public sector
applications in key domains for Europe, in particular combining HPC with other key
technologies such as AI, HPDA and cloud.
Widening HPC use and the development of key HPC skills that European science and
industry need.
The fourth objective of the EuroHPC JU would be to (i) ensure that its HPC and data
infrastructures are optimally adapted to the different needs of scientific and industrial users, in
order to ensure the wide uptake of HPC and make a major contribution to the digital
transformation of Europe; and (ii) invest in providing Europe with a knowledgeable leading
scientific community and the competences and skills critical for scientific leadership and for
the digital transformation of industry.
This objective has a particular focus on:
– Fostering the industrial access and use of HPC and data infrastructures for industrial
innovation, adapted to industrial needs (including SMEs), exploiting the current and
future HPC and data infrastructures, and easing the transition towards the wider uptake
of HPC;
– Developing the necessary skills for the digital transformation of science and industry,
taking into account synergies with other programmes and instruments, in particular the
Digital Europe programme.58
5. The main activities of EuroHPC JU in the next MFF
5.1 The new pillars of activity
The wide range and complexity of the future objectives of the EuroHPC JU require a high-
level structure to guide understanding and implementation of the JU’s current and future
activities. In particular, this structure would be a means of matching activities and objectives
with the planned funding programmes of the next MFF in the form of five pillars of activity:
HPC Infrastructure, HPC Federation and Services, HPC Technologies, HPC Applications,
and Leadership in HPC use and skills development:
1. Infrastructure
The objective of this pillar would be the acquisition and deployment in the Union of a world-class secure supercomputing, quantum computing, service and data infrastructure, composed of the best existing supercomputing, quantum computing and data technologies and hyper-connected with state-of-the-art communication networks (reaching terabitsxxxvi in the backbone). Parts of this infrastructure could be specifically dedicated to industrial use.
The infrastructure will progressively integrate the most advanced computing generation
systems: petascale, pre-exascale, exascale and post-exascale, as well as neuromorphic
technologies, quantum simulators, and quantum computers.
In quantum computing, the EuroHPC JU would invest in at least two generations of state-of-
the-art pilot quantum computers and quantum simulators and their integration in the JU’s
HPC infrastructures. These pilot systems would be based on European technologies that are
mainly funded under the Quantum Technologies Flagship (under Horizon 2020 and Horizon
Europe). They would have a proven capability to be operated and integrated in
supercomputing environments. They would be used either as stand-alone operational systems
or as computing accelerators to form “hybrid” machines, i.e. machines interconnected with
the EuroHPC JU’s supercomputers and blending quantum and classical approaches. Both
types and their software and programming tools would be openly available via the cloud, for
users to experiment and to develop future application libraries.
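As a purely illustrative sketch of the hybrid pattern described above (a classical HPC workflow offloading one step to a cloud-accessible quantum accelerator), the following minimal Python fragment uses a hypothetical QuantumAccelerator interface; none of the class names, methods or endpoints refer to an actual EuroHPC or vendor API.

    # Minimal sketch of a hybrid classical-quantum workflow (hypothetical API).
    # A classical HPC job prepares a problem, offloads one step to a quantum
    # accelerator exposed as a cloud service, and post-processes the result.
    import random

    class QuantumAccelerator:
        """Hypothetical stand-in for a cloud-accessible quantum accelerator."""
        def __init__(self, endpoint: str):
            self.endpoint = endpoint  # illustrative service endpoint, not a real one

        def sample(self, problem: list, shots: int = 1000) -> list:
            # Placeholder: a real accelerator would execute a quantum circuit or an
            # analogue simulation; here we simply return random bitstrings.
            return [random.randint(0, 1) for _ in range(shots)]

    def classical_preprocessing(size: int) -> list:
        # Classical HPC step: build the problem instance (e.g. couplings).
        return [random.uniform(-1.0, 1.0) for _ in range(size)]

    def classical_postprocessing(samples: list) -> float:
        # Classical HPC step: aggregate the accelerator output into a result.
        return sum(samples) / len(samples)

    if __name__ == "__main__":
        qpu = QuantumAccelerator(endpoint="https://qpu.example.eu")  # illustrative URL
        problem = classical_preprocessing(size=64)
        samples = qpu.sample(problem, shots=2000)
        print("estimated observable:", classical_postprocessing(samples))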
The following table provides the tentative roadmap for the development and deployment of a world-class EuroHPC JU supercomputing and quantum computing infrastructure over 2021-2027:
HPC systems:
– c. 2021-2024: several pre-exascale systems and 2 exascale HPC systems;
– c. 2025-2027: one or more exascale and post-exascale HPC systems.
Quantum systems:
– c. 2021-2022: first generation of quantum computers (stand-alone systems or in hybrid systems as accelerators of HPC);
– c. 2023-2024: fully programmable quantum simulators interfacing with HPC systems;
– c. 2025-2027: second generation of quantum computers (stand-alone systems and hybrid systems integrated in HPC).
xxxvi
A communication network capable of transferring data at 1 trillion (10^12) bits per second.
Figure 7 - EuroHPC: Pillars of Activity
The main activities of the infrastructure pillar are the following:
2021-2024: Acquire and deploy two leading-class exascale supercomputers that could be built, for example, with technology based on the efforts of the EPI Consortium. These supercomputers will be owned by the EuroHPC JU;
2022-2024: Acquire and deploy mid-range supercomputers complementing the top-ranked
systems above. These supercomputers will be co-owned by the EuroHPC JU and Member
States;
2021-2025: Develop and deploy hybrid supercomputing infrastructures, by integrating in
the HPC infrastructure the most advanced quantum simulators and/or future quantum
computing platforms, as follows:
– 2021-2022: start equipping major computing centres with the best available European
quantum computers, some interconnected with high-end HPC machines as accelerators
for specific applications, accessible via the cloud;
– 2023-2024: procure fully programmable quantum simulators reaching at least 1000
individual quantum units (atoms/ions);
– 2025-2026: build and deploy the second generation of quantum computers (based on
processors of at least 200 high fidelity qubits) as stand-alone systems or hybridised
with high-end HPC machines and accessible via the cloud.
2026-2027: Acquire leading-class post-exascale supercomputers. These supercomputers will be owned by the EuroHPC JU;
Support the acquisition and deployment of a secure supercomputing and data
infrastructure for industrial users;
Guarantee the hyper-connectivity of the above EuroHPC infrastructure by securely interconnecting all European supercomputing centres and making them widely accessible to public and private users across Europe.
2. Federation and Services
The objective of this pillar would be to provide EU-wide access to computing and data resources and services throughout Europe for the research and scientific community, industry (including SMEs) and the public sector. This pillar will address the federation of EU and national supercomputing resources and the provision of secure cloud-based services to a wide range of applications with different access and security needs, including services based on the use of European common data spaces.
The main activities of the federation and services pillar are the following:
Federating national and European HPC and data resources into a common platform, able
to securely offer HPC resources, tools and access services at European level (for example,
cloud-based HPC, HPDA tools, real-time simulations, etc.) for a wide range of public and
private users.
Developing and adapting the supercomputing and data infrastructure in highly flexible
configurations tailored to a wide range of application and computing needs of users from
academia, industry and the public sector, including for European Open Science Cloud
users. This will also address the development of interfaces to other public and private
cloud providers to offer HPC-based services with different security requirements.
Developing specific access and HPC-based services based on European common data spaces in areas of public interest across the Member States, addressing essential societal challenges such as transport and climate change.
Interconnecting securely the federated supercomputing and data infrastructure with the
cloud ecosystem for interoperability and service provisioning to a wide range of public
and private users in Europe.
3. Technologies
The activities in the technologies pillar would be organised with the aim of ensuring the development of a source of innovative HPC technology (hardware and software) in Europe.
A major objective of the pillar will be to support an ambitious research and innovation agenda
for developing a competitive and innovative supercomputing ecosystem addressing hardware
and software technologies, and their integration into computing systems. Focus will be on
energy-efficient HPC technologies that will cover the HPC sector and also broader technology
sectors (e.g. extreme-scale, high-performance big-data and emerging applications based on
edge computing).
Another major objective of the pillar will be to develop the technologies and systems required
for the interconnection and operation of classical supercomputing systems with other, often
complementary computing technologies, in particular neuromorphic or quantum computing.
The pillar will cover the entire scientific and industrial value chain, from research to
prototyping, piloting and demonstration.
Examples of activities that this pillar will support include:
Energy-efficient exascale and post-exascale computing architectures, technologies
and systems and their integration in pilot systems. This includes:
– Development of the next generation of technology building blocks for high-end
computing, including both hardware technologies (low-power processors and
accelerators, interconnects, etc.), and the software stack (programming models and
environments, compilers, optimisation tools, operating systems, etc.).
– The establishment of specialisation, heterogeneity, modularity and composability as the dominant paradigms at all levels of computer technology, to allow for customised and cost-effective supercomputing architectures that optimise the use of resources for a given class of applications.
– Integration of technology building blocks into novel HPC architectures for exascale
and post-exascale systems, from the first level of basic elements to system integration
in prototypes and pilots (up to pre-operational environments). This includes support for R&I in the hardware and software required for building top-class exascale machines, as well as in novel cooling technologies.
Novel algorithms and software codes and tools for advanced supercomputing
systems
– Developing a novel generation of mathematical methods and algorithms for European
leadership in digital twin technologies, notably those relying on modelling, simulation
and optimization methods enriched by data analytics and intensive computing;
– Codes and software components following the composability approach, integrated in complex and scalable workflows, including the development of European open software; productive programming environments and frameworks enabling the development of composable codes with portable performance and a higher level of abstraction, reducing the cost of integrating new paradigms and closely matching the architectural features of increasingly diversified and specialised future hardware platforms, in particular those based on open hardware and software.
Hybrid computing pilots, covering the developments needed to build pilot quantum
computing and simulation platforms and to interconnect them with the HPC infrastructure
and the developments needed to interconnect HPC with other computing platforms (e.g.
neuromorphic or other) and ensure their effective operation.
A co-design approach is necessary in technology development (in particular in the
prototyping and piloting phases) between suppliers and users, defining new architectures and
better computational methods and algorithms that are adapted to real application needs. Co-design ensures that hardware and software architectures fit the needs of key, mission-critical applications by applying the necessary technical trade-offs in system design. The pilots and prototypes demonstrating the viability of technologies for exascale performance will serve as ‘stepping stones’ towards future fully operational exascale systems. These prototypes would be installed in supercomputing centres for wide user testing and validation.
4. Applications
This pillar would aim to achieve excellence and maintain European leadership in HPC
applications that are key for European science, industry and the public sector. Scientific and
industrial HPC codes, applications and software packages in key areas for Europe will be co-
designed, developed, ported and optimised to fully exploit the performance of current and
future computing systems. Examples of activities in this pillar include:
Support to HPC-powered codes, applications and tools in all phases (such as in co-
design, development, porting, re-structuring, optimisation, up-scaling, re-engineering,
etc.) in critical domains for extreme scale computing and data performance. This support
could be implemented through a variety of actions, e.g.:
– For scientific users: promoting Centres of Excellence in HPC applications (CoEs)xxxvii, in areas where user communities, in collaboration with other HPC stakeholders, can develop or scale up existing parallel codes and applications to fully exploit future exascale and extreme-performance computing capabilities.
– For industry: large initiatives on industrialisation and deployment of HPC software
and codes, ensuring that professional industrial software codes and services (including
e.g. compilers, tools, standards, etc.) can be adapted to make full use of new HPC
performance capabilities. This includes the development of tools for modelling and
xxxvii
CoEs are user-driven focal points for application excellence in key scientific or industrial areas, and for co-
design with the European technology development to ensure that European technologies and systems fit the
needs of applications and their users.
simulation of complex industrial systems (such as systems of systems), for example to
simulate digital twins.
Development of large-scale industrial pilot test-beds and platforms for HPC applications
and services, including HPDA and AI-focused ones, addressing the feasibility, scaling,
and demonstration of secure HPC environments in key industrial sectors.
5. Leadership in HPC use and skills development
This pillar would aim to widen the scientific and industrial use of HPC applications, and to
provide Europe with a knowledgeable leading scientific community and skilled workforce. Its
activities should help the digital transformation of industry and strengthen the knowledge base
of HPC in Europe with new competences and skills. Examples of activities that this pillar
could support include:
Further supporting the development and coordination of national HPC Competence
Centres, and encouraging and supporting exchange of best practices, the sharing of
existing libraries of HPC codes and access to upgraded HPC application codes.
Facilitating the access to the best HPC and data intensive codes and tools in the most
innovative scientific and industrial applications available now and in the future across
Europe, notably through Centres of Excellence and Competence Centres. This includes
federating capabilities, exploiting available competences, and ensuring that application
knowledge and expertise has the widest geographical coverage in the Union.
Deployment of industry-oriented HPC infrastructure and associated tools, software environments and service platforms for industrial innovation. In particular, this addresses fair access to HPC infrastructures adapted to the needs of different industrial users, from large industry users to SMEs, e.g. in terms of flexibility, ease of use, on-demand capacity, trust, security and safety, confidentiality, dedicated storage, etc.
Specific actions for SMEs, enabling European SMEs to benefit from the use of computing
and simulation services in a fair and transparent way, e.g. similar to the current
Fortissimo37
experiments.
Supporting the development of digital skills, training and education, attracting human
resources to HPC and increasing Europe’s workforce skills and engineering knowledge:
– Empowering people working in HPC and its convergence with advanced digital
technologies such as data analytics, AI, blockchain, cybersecurity, etc. Such actions
could include for example: Master’s programmes in HPC and computational science;
short-term HPC training courses; job placements/traineeships involving the use of
HPC in real environments; HPC hackathons, hands-on schools and training through
research in advanced laboratories, etc.
– Industry-specific training, for example combined with consultancy and trial use of
HPC infrastructures through national points. For end-user SMEs, this could include
hands on training and solving real use cases, and SME-tailored courses and support
offerings like staff exchange programmes with research and academia.
Other awareness-raising and dissemination actions not specifically addressed above.
5.2 The supporting programmes of the next MFF
The Digital Europe Programme58, Horizon Europe67 and Connecting Europe Facility-268 are the main funding programmes in the next MFF (2021-2027) that could be used to finance the EuroHPC pillars of activity described above. The Commission's proposals for these programmes include provisions for supporting the JU's activities.
The Digital Europe programme (DEP) is the first EU programme specifically designed
to support the digital transformation of the European economy and society through
capacity and capability building. The Commission proposed a budget of EUR 9.2 billion
for DEP to align the next long-term EU budget with increasing digital challenges.
HPC is the biggest DEP priority area with a proposed EUR 2.7 billion budget. Additional
funding for HPC is also foreseen in the “Digital Skills” priority area, which has a total
budget of EUR 700 million. The EuroHPC JU will use DEP support for capacity-building activities, i.e. the activities described in the EuroHPC pillars on “Infrastructure” (the acquisition of both HPC and pilot quantum computing infrastructure), “Federation and Services”, and “Leadership in HPC use and skills”. In addition, the DEP could also support some of the activities in the “Applications” pillar.
Horizon Europe (H-E) is the new research and innovation (R&I) framework programme
for the period 2021-2027, succeeding Horizon 2020.11
The common understanding
reached with the Council on the Commission’s H-E proposal foresees support to HPC
related R&I activities under its Pillar II 'Global Challenges and Industrial
Competitiveness', cluster IV “Digital, Industry and Space”.
H-E would support the R&I activities included in the EuroHPC JU’s “Technologies”, and
“Applications” pillars. While there is no H-E-specific budget pre-allocated for HPC-
related activities (the budget to be allocated to the European Partnerships portfolio under
H-E is still to be defined), it is expected that the contribution from H-E would fund the JU's R&I activities. These activities do not include R&I support for quantum computing, which is to be funded under the Quantum Technologies Flagship.
Connecting Europe Facility-2 (CEF-2) is the successor to the previous CEF programme
to promote growth, jobs and competitiveness through targeted infrastructure investment at
European level. The EuroHPC JU will use CEF-2 funds to support a leading-class
communication backbone for interconnecting the supercomputing and data infrastructures
and the European common data spaces of the “Infrastructure” and the “Federation and
Services” pillars of EuroHPC.
The following table summarises how the three above funding programmes could be used to
implement the four overall objectives of the EuroHPC JU presented in Section 4.2 above.
EuroHPC objective / supporting programme(s):
– Infrastructure Investment: DEP √ (acquisition of computing systems); CEF-2 √ (networking of the JU's infrastructures)
– Federation of the JU's infrastructure and connection with data spaces: DEP √
– Ecosystem development: DEP √ (support to HPC-powered codes, applications and tools for industry); H-E √ (R&I activities for HPC technologies and innovative applications)
– Widening HPC use and the development of key HPC skills: DEP √
5.3 Interactions and synergies with other strategic objectives and policies
The EuroHPC JU should develop synergies and cooperation activities with other digital
strategic priorities and technologies included in the next MFF programmes. Examples
include:
Synergies in DEP: The JU should ensure synergy of its activities with the other DEP priority
areas, namely artificial intelligence, cybersecurity, advanced digital skills, and ensuring wide
use of digital technologies across the economy and society.
Artificial Intelligence: As explained in Section 3.4xxxviii, the convergence of HPC and AI is a critical technology and market driver for applications that rely on big data and HPDA. In particular, EuroHPC can play a key role in the Union's plans to promote support centres for data sharing in the European data spacexxxix that could accelerate the development and uptake of AI in different application sectors. This is especially so as the JU's new supercomputers are designed to fit the needs of AI applications. This calls in turn for increased co-design and balanced investments between AI algorithms, applications and next-generation supercomputers.
Cybersecurity: HPC is essential for the state-of-the-art cybersecurity equipment and infrastructure that the DEP will support. HPC computing power unlocks the full potential of security software and tools, usually in combination with AI-based approaches.xl There is a tangible need for supercomputing power in cybersecurity, as HPC minimises the time taken for massive checks and enables advanced solutions to prevent, identify and anticipate cyberattacks and to defend against cybercrime.
Advanced digital skills: The EuroHPC JU will seek synergies with the DEP priority area of digital skills. The JU is already supporting HPC skills development for science and engineering, e.g. through PRACE69 and the HPC Centres of Excellence. In the future, the HPC Competence Centres will coordinate their HPC training and skills development activities with the local ones in the JU's Participating States.
Ensuring the wide use of digital technologies across the economy and society: The EuroHPC JU is an example of the deployment of state-of-the-art digital technologies, infrastructures and services for a wide range of users. The JU will thus establish synergies with this priority area, and in particular with the Digital Innovation Hubs supporting the digitalisation of SMEs. For example, such hubs can act locally as
xxxviii
See section 3.4 and Annex II of this document for further details on the HPC-AI convergence
xxxix
See COM(2018) 232 final and COM(2018) 237 final of 25.04.2018.
xl
See Annex II of this document for examples on the use of HPC/AI in cybersecurity.
antennas for national HPC Competence Centres or even offer dedicated HPC services in synergy with Fortissimo37, the PRACE SHAPE44 programme, SESAMENet70, etc.
Synergies in Horizon Europe (H-E): The EuroHPC JU should focus on R&I activities for
high-end computing technologies. The JU would develop synergies with other H-E areas and
partnerships that will support R&I activities complementary to the EuroHPC JU’s. Examples
include:
Activities of the successor of the ECSEL Joint Undertaking71, or of the Quantum Technologies Flagship, which would support advanced computing technologies not specifically oriented towards high-end HPC, such as low-power processors for AI or automotive applications, or technologies based on quantum-computing components.
Big data technologies, methodologies and tools for privacy preservation, data interoperability, data provenance tracking, etc.
The European Open Science Cloud (EOSC)72: Some of the computing capacity of the EuroHPC systems could be offered to the EOSC research communities to support their supercomputing needs. This will be done by aligning the JU's accessibility structures with the user interface of the EOSC portal.73
Synergies with other programmes and initiatives: The EuroHPC JU should develop an open public infrastructure accessible to any public or private user. It will thereby remain open to many other European and national programmes and initiatives focusing on climate change, health data analysis, and crisis and emergency management. The JU will strive to forge links and synergies with all these related programmes and their stakeholders. In particular, the EuroHPC JU will contribute to the initiatives stemming from the European Strategy for Data1 by providing services for users exploiting the European common data spaces.
5.4 International Cooperation
As recommended in the Vision paper of the EuroHPC JU Industrial Advisory Board45, EuroHPC should promote and raise the level of international collaboration to solve global scientific and societal challenges, while promoting the competitiveness of the European HPC supply and user ecosystem. International collaboration activities of benefit to the Union could be established in the following areas of activity:
Access to the JU's supercomputing and data infrastructure: The EuroHPC JU could establish arrangements, based on clearly defined rules, for providing dedicated access to its infrastructures to users from other regions of the world. Such access is crucial for attracting and keeping talent, promoting innovation and exchanging knowledge for science and industry in Europe. Access should be guided by whether this collaboration is of clear interest to the European Union.
Applications: Most of the grand-challenge applications that are going to run on exascale platforms are developed through international scientific collaborations in which European scientists play a key role, contributing to developer environments, tools and system software. The EuroHPC JU can foster international collaboration in such applications addressing global challenges, with the overall aim of achieving European leadership in the application and use of HPC.
Technology supply: International collaboration can help the Union address the current dependence of the European HPC industry on non-European sources for critical technology and especially hardware components. For example, the EuroHPC JU could focus international cooperation on projects that enable European industry to fill the technology and knowledge gaps in the value chain, and/or help negotiate partnerships that include IP sharing with a financial return for European industry on the world market. To promote the latter, the EuroHPC JU could encourage the collaboration of global IT vendors with European partners, for example through the establishment of joint ventures, the creation of joint labs or other initiatives that respect the EU model of IP sharing and financial return.
Given the size of the investment that will be required, international collaboration would be
extremely beneficial for post-exascale systems development as well as the deployment of
heterogeneous supercomputer networks at a global scale.
Acronyms and abbreviations
AC Associated Country to the Horizon 2020 Programme
AI Artificial Intelligence
ASCR Advanced Scientific Computing Research
BDVA Big Data Value Association
CAGR Compound Annual Growth Rate
CEF Connecting Europe Facility
CoE Centre of Excellence
cPPP Contractual Public-Private Partnership
DEP Digital Europe Programme
DL Deep Learning
DSM Digital Single Market
EC European Commission
EIB European Investment Bank
EPI European Processor Initiative
ERIC European Research Infrastructure Consortium
ETP4HPC European Technology Platform for High-Performance Computing
EU European Union
Exascale Computing systems capable of 10^18 Floating Point Operations per Second
FET Future and Emerging Technologies
Flop Floating Point Operations per Second
FP7 7th EU Framework Programme for Research & Innovation
FPA Framework Partnership Agreement
GDP Gross Domestic Product
H2020 Horizon 2020 Framework Programme for Research & Innovation
HE Horizon Europe Framework Programme for Research & Innovation
HPC High-Performance Computing
HPDA High Performance Data Analytics
ICT Information and Communication Technology
INFRAG Infrastructure Advisory Group of the EuroHPC JU
IP/IPRs Intellectual Property / Intellectual Property Rights
ISV Independent Software Vendors
JU Joint Undertaking (as defined by Article 187 TFEU)
MFF Multi-annual Financial Framework
ML Machine Learning
MOOC Massive Open On-line Courses
MS Member State of the European Union
NSA (US) National Security Agency
PPP Public-Private Partnership
PRACE Partnership for Advanced Computing in Europe
Pre-exascale Computing power near the exascale performance (i.e. 0.1-0.6 exascale)
R&D / R&I Research and Development / Research and Innovation
RIAG Research and Innovation Advisory Group of the EuroHPC Joint Undertaking
ROI Returns on Investment
SME Small- and Medium-sized Enterprise
SRA Strategic Research Agenda
WP Work Programme
List of Figures
Figure 1 - Map of the EuroHPC JU Participating Countries.................................................... 9
Figure 2 - World top 500 supercomputers - regional share .................................................... 14
Figure 3 - Share of HPC systems in global top-10 per country............................................... 14
Figure 4 - Computing power of world top 10 supercomputers................................................ 14
Figure 5 - Members of Consortia in EuroHPC JU supercomputers ....................................... 15
Figure 6 - European computing power in 2020 (forecast) ...................................................... 16
Figure 7 - EuroHPC: Pillars of Activity .................................................................................. 41
Figure 8 - Return on investments (ROI) of HPC...................................................................... 53
Figure 9 - Secondary Impact of HPC on the US Economy...................................................... 54
Figure 10 - HPC Server market by region............................................................................... 55
Figure 11 - The Worldwide HPC server market ...................................................................... 56
Figure 12 – Global HPC Market by vendor shares................................................................. 56
Figure 13 - Vendors of systems installed in the EU................................................................. 58
Figure 14 – HPC in the Cloud market ..................................................................................... 60
Figure 15 - Projected Exascale Systems dates......................................................................... 62
Figure 16 - Projected Exascale and pre-exascale acceptance (2020-2025) ........................... 62
Figure 17 - R&D investments in the race to Exascale............................................................. 63
Figure 18 – Areas of contribution of HPC to Sustainable Development Goals ...................... 76
Annex I: Market Analysis and Investments
The economic impact of HPC
Worldwide ICT spending is expected to near EUR 5 trillion in 2020, of which EUR 1 trillion will correspond to new areas closely linked to HPC technologies: the Internet of Things (IoT), cybersecurity, AI, robotics, and augmented and virtual reality techniques.74
The business case behind commercial and industrial HPC use is relatively clear-cut. HPC use
has a critical impact on industries and businesses via advanced modelling, simulation, and
data analytics that address innovation challenges and support decision-making.
Return on investment (ROI) of HPC
The economic reach of HPC into the industrial infrastructure of the most developed economies is impressive, as was shown in the EuroHPC JU Impact Assessment4. Among other findings showing that HPC is a key contributor to jobs and economic output in critical industry sectors, the EuroHPC JU Impact Assessment showed that HPC has an excellent return on investment (ROI) in scientific and industrial projects carried out within Europe.75,76 In 2018, the results continued to indicate substantial returns for investments in HPC77:
Figure 8 - Return on investments (ROI) of HPC

Country   Average profit or cost saving (EUR per HPC EUR)   Average revenue (EUR per HPC EUR)
China     EUR 2.7                                           EUR 7.6
US        EUR 35.1                                          EUR 336
EU        EUR 43.2                                          EUR 260
Japan     EUR 223.2                                         EUR 1,085.1
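Read as ratios, these averages can be scaled to any investment level; the short Python sketch below is purely illustrative, applying a hypothetical EUR 1 million investment to the averages in Figure 8.

    # Illustrative reading of the ROI averages in Figure 8 (EUR returned per EUR
    # invested in HPC), scaled to a hypothetical EUR 1 million investment.
    ROI = {  # (profit or cost saving, revenue) per EUR of HPC investment
        "China": (2.7, 7.6),
        "US": (35.1, 336.0),
        "EU": (43.2, 260.0),
        "Japan": (223.2, 1085.1),
    }

    investment_eur = 1_000_000  # hypothetical HPC investment

    for region, (profit_ratio, revenue_ratio) in ROI.items():
        print(f"{region}: ~EUR {profit_ratio * investment_eur:,.0f} profit/cost saving, "
              f"~EUR {revenue_ratio * investment_eur:,.0f} revenue")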
The impact of HPC on GDP
A recent study confirms the above, showing that HPC-reliant US economic sectors contribute almost 55% of GDP to the US economy, encompassing USD 9.8 trillion (EUR 9 trillion) in value.78
Figure 9 - Secondary Impact of HPC on the US Economy
By analogy, these sectors account for 53.4% of the EU GDP, and encompass EUR 7.56
trillion in value.79
“If the United States were to cede global competitive advantage in yet
another technology industry (i.e., HPC), it would mean stiffer economic
headwinds for the U.S. economy and slower per-capita income growth.”126
An update of the economic data confirms the growing importance of these critical sectors for EU GDP and jobs.80 Six of the most important economic sectors in Europe (manufacturing, health and pharmaceuticals, automotive, oil and gas, aviation and chemicals) depend on HPC. In 2018, they represented more than 40% of the EU's GDP and around 80 million jobs.
Examples of the economic importance of key sectors where HPC can make a difference
The car industry is one of the most HPC-dependent sectors; in Europe it provides jobs for 13.8 million people and accounts for 7% of the EU's GDP (2018). HPC has enabled the R&D process to fully abandon early prototypes that previously required costly customised tools and machinery. Although some physical tests, such as crash tests, have not yet been fully replaced by simulation (partially due to regulatory demands), current HPC-supported prototypes are close to serial production. As the ICT component of cars increases, European carmakers are expanding their efforts to build the computing capacity they need as vehicles digitise and become driverless. In fact, they are now hiring more information technology specialists than mechanical engineers.
Weather: the weather affects 33% of the world's GDP.81 Every year, extreme weather events have an estimated impact in Europe of EUR 400 billion, affecting around 5% of the European population and causing around 3000 deaths. In the coming decades this may worsen, and two thirds of European citizens could be affected by weather-related disasters annually by the period 2071-2100.82 Studies foresee that if no further action is taken to tackle climate change, the combined negative effect on global annual GDP could be between 1.0% and 3.3% by 2060.83 In 1998-2017, the direct economic losses from disasters were valued at EUR 2617 billion, of which climate-related disasters caused EUR 2020.5 billion, or 77% of the total. This is up from 68% (EUR 805.5 billion) of the losses (EUR 1181.7 billion) reported between 1978 and 1997. Overall, reported losses from extreme weather events rose by 151% between these two 20-year periods (see the short check after this list).
The health and pharmaceutical sector employs almost 26 million people in Europe and represents 8.2% of the EU's GDP. HPC is used in designing and simulating the effects of new drugs and can speed up the diagnosis and treatment of diseases, including cancer, cardiovascular diseases and Alzheimer's disease.
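The shares and the 151% rise quoted in the weather example above follow directly from the loss figures cited there; the short Python check below uses only those numbers.

    # Quick check of the disaster-loss percentages quoted above (EUR billion).
    climate_1998_2017, total_1998_2017 = 2020.5, 2617.0
    climate_1978_1997, total_1978_1997 = 805.5, 1181.7

    print(f"climate share 1998-2017: {climate_1998_2017 / total_1998_2017:.0%}")  # ~77%
    print(f"climate share 1978-1997: {climate_1978_1997 / total_1978_1997:.0%}")  # ~68%
    print(f"rise between the two periods: "
          f"{climate_1998_2017 / climate_1978_1997 - 1:.0%}")                     # ~151%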
The HPC market
(Unless specifically referenced otherwise, the data in this section come from different market analyses84 and from the study “Impact Assessment Study for Institutionalised European Partnerships under Horizon Europe - Specific Part - Candidate Institutionalised European Partnership in HPC”30.)
The worldwide market for HPC has grown from about EUR 1.8 billion in 1990 to EUR 25
billion in 2018. This includes the following categories: servers, storage, software and
technical support. The forecast is that the overall HPC market will reach c. EUR 39.6 billion
in 2023 for a CAGR of 7.2%. 2018 turned out to be an exceptionally good year for the HPC
business. In particular, the global market for HPC servers grew by 15% from 2017 to 2018,
reaching EUR 12.2 billion in revenues worldwide. This tendency is confirmed by the data of
the first half of 2019, in which HPC server sales totalled EUR 6 billion.
North America clearly leads the global market (i.e. purchases of HPC servers) with a c. 44% share, followed by EMEA (Europe, the Middle East and Africa) with around 30% (c. 26% for Europe only), and Asia/Pacific (c. 19%).
Figure 10 - HPC Server market by region
Overall, the record EUR 12.3 billion market for HPC servers in 2018 can be broken down in
Supercomputers, Divisional HPC, Departmental, and Workgroup (see Figure 11).
Figure 11 - The Worldwide HPC server market
The global HPC sales income by vendor in 2018 shows that, on HPC supply, the USA is the absolute world leader. The only sizeable Europe-based vendor, Bull-Atos, holds a total market share of just 1.1%, well below its peak market share of 5% in 2011.
Figure 12 – Global HPC Market by vendor shares
Vendor Country 2018 sales ($ million) Share %
HPE/HP US 4,766 34.8%
Dell US 2,857 20.8%
IBM US 971 7.1%
Lenovo China 957 7.0%
Inspur China 788 5.8%
Sugon (Dawning) China 462 3.4%
HPE/Cray US 313 2.3%
Fujitsu Japan 269 2.0%
Penguin US 244 1.8%
NEC Japan 201 1.5%
Bull Atos France 150 1.1%
Other - 1,728 12.6%
Total 13,706 100%
The worldwide forecast projects that HPC server revenues will grow to c. EUR 17.7 billion in 2023, with a 2018-2023 CAGR of 7.8%. This 2023 figure includes EUR 1.2 billion for exascale supercomputers, EUR 2.4 billion for AI-dedicated HPC servers, and about EUR 4.9 billion in cloud usage fees. AI will be the fastest-growing HPC segment, with a projected 30% CAGR over the 2018-2023 period.
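These growth figures follow the standard compound annual growth rate formula, CAGR = (end value / start value)^(1 / number of years) - 1. The minimal Python check below, using only the rounded server-market figures quoted above, reproduces a 2018-2023 CAGR of roughly 7.7%, in line with the 7.8% cited.

    # CAGR check using the rounded figures above: HPC server revenues of
    # c. EUR 12.2 billion in 2018 growing to c. EUR 17.7 billion in 2023.
    def cagr(start: float, end: float, years: int) -> float:
        return (end / start) ** (1 / years) - 1

    print(f"2018-2023 server CAGR: {cagr(12.2, 17.7, 5):.1%}")  # ~7.7%, close to the 7.8% quoted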
Supercomputers category (systems over EUR 500K)
The supercomputer category had a particularly robust year in 2018, increasing by 23% compared with 2017 and reaching EUR 4.9 billion, making it the fastest-growing competitive segment of the HPC market. This tendency will continue in the following years: c. EUR 8.1 billion of expenditure is in the pipeline for pre-exascale and exascale systems scheduled to be installed worldwide between 2020 and 2025. That will provide a big boost to this category and to the overall HPC market over this timeframe.
However, in the strategically important high-end market of systems over EUR 2.25 million,
the current situation is not very satisfactory. The Union has only one supercomputer in the
top-1031
and five in the top-20 (November 2019), dropping from a peak of four and seven
systems respectively in 2012. Spending levels for these high-end supercomputers are an
important measure of HPC leadership.
Europe and the HPC Market
Integration of EU suppliers in the global HPC market is still weak. The following facts
illustrate the scale of the problem:
On HPC supply (all segments), the US is the world leader in 2018, having 67% of the
global HPC sales, followed by China (16.2%), Japan (3.5%) and the EU (c. 1.1%).
US vendors have almost 100% of the worldwide processor market, with Intel holding more than 95% in several categories of processors (CPU, GPU, etc.).85 No European company supplies key components such as general-purpose processors or accelerators.
Participation of EU vendors in the global HPC market is still weak. Out of all top-500
supercomputers, only 28 (5.6%) are supplied by EU manufacturers. 26 were supplied by
one main EU vendor (Bull-Atos), with 19 of these 26 supercomputers purchased by
clients in the EU and only 7 by other global clients.
Out of the 79 HPC systems in the top-500 list that are located in the EU, only 21 (26.5%)
were supplied by European vendors. This means that almost 75% of the European HPC
market is being supplied by non-EU manufacturers.
In the top-500, Chinese vendors integrate around 65% of the systems. Chinese indigenous
processors are present in only a few of those systems, but it is expected that Chinese
technology for the exascale supercomputers will likely enter the market in the next few
years.
Solutions based on open hardware (in particular RISC-Vxxvi) are gaining momentum as a credible alternative to proprietary solutions for processors and accelerators across the computing continuum. By 202186, a billion cores are expected to ship using the RISC-V architecture, growing to 62.4 billion cores in 2025. An interesting aspect of RISC-V is that any company can use it without any fear of losing access in the future (for instance due to commercial bans on technology exports).
RISC-V will create opportunities for non-US companies to break the near-monopoly in chip design. China now has two of its own RISC-V industry alliances87 with more than 185 members (including Huawei, Sanechips from ZTE, Bitmain, Alibaba, and Xiaomi's wearables partner Huami). The EU is supporting RISC-V solutions in the EPI project and in other activities.
Industries operating in weaker and less dense supply chains are generally less competitive
and are more at risk of being taken advantage of by suppliers and clients, due to market
power being concentrated in fewer actors. Companies operating in these environments
also have a harder time sourcing and nurturing talent and scaling up their activities.
Historically, Europe has been strong in parallel software development and a global leader
in exploiting HPC for innovation. The European share of the worldwide commercial HPC
software market closely matches its share of global spending in the HPC server market (an
estimated 26% in 2018).
Figure 13 - Vendors of systems installed in the EU

Manufacturer              Country            N.
Lenovo                    China              35
Atos/Bull                 France             18
HPE/Cray                  US                 10
IBM                       US                  3
NEC                       Japan               2
Huawei                    China               2
Intel                     US                  1
IBM/Lenovo                US/China            1
Lenovo/IBM                China/US            1
ClusterVision / Hammer    Netherlands/UK      1
NEC/MEGWARE               Japan/Germany       1
T-Platforms, Intel, Dell  Russia/US           1
Total                                        76
Uses of HPC with AI and Cloud
HPC/HPDA and AI
Three major forces – AI, cloud and exascale – are combining to raise the HPC industry to
heights exceeding expectations. Growth has been driven primarily by new buyers from the
enterprise moving into HPC for AI-related workloads, such as fraud detection, business
intelligence, affinity marketing, personalised medicine, smart cities and IoT. The convergence
of HPC and big data analytics is being driven by HPC users and the growing contingent of
commercial firms that are adopting HPC solutions to tackle data analytics. Worldwide and
European HPC server spending dedicated to HPDA will grow robustly.
HPDA-AI is growing faster than the overall HPC market, and the AI subset is growing faster than HPDA, though in absolute figures it will remain smaller. By 2023, the overall HPDA-AI market for HPC servers will reach about EUR 5.76 billion (a five-year CAGR of 15.4%), or about 32% of the EUR 18 billion worldwide market for HPC server systems. The subset of HPC-based AI (machine learning (ML), deep learning (DL) and other areas) is expected to reach EUR 2.43 billion by 2023, for a 2018-2023 CAGR of 29.5%.
The fastest-growing workloads are in AI (ML and DL). 87% of the surveyed cloud services
providers (CSPs) and 94% of the HPC system vendors said that their fastest-growing HPC
workloads are in the AI domain, more specifically ML and DL.
Half (50%) of the CSPs and half of the HPC system vendors said that more investment is also needed in simulation. For the foreseeable future, most HPC use will continue to be directed at modelling and simulation, and simulation will play an important role in emerging AI use cases. For instance, the RAND Corporation estimates88 that 8.8 billion miles of test driving would be needed for consumers to acquire 95% confidence in the safety of autonomous vehicles, and that physically testing this many miles would take 400 years. Many experts indicate that the only way to instil confidence in 5-10 years is with simulation, using AI algorithms on high-performance computers.
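The 400-year figure can be reproduced with a back-of-the-envelope calculation; the fleet size and average speed used below are illustrative assumptions of the kind underlying the RAND analysis, not figures taken from this document.

    # Back-of-the-envelope check of the "400 years" figure quoted above.
    # Illustrative assumptions (not from this document): a test fleet of 100
    # vehicles driving 24 hours a day, 365 days a year, at an average 25 mph.
    required_miles = 8.8e9          # miles of test driving cited above
    fleet_size = 100                # assumed number of vehicles
    average_speed_mph = 25          # assumed average speed
    miles_per_year = fleet_size * average_speed_mph * 24 * 365

    print(f"fleet mileage per year: {miles_per_year:,.0f} miles")   # ~21.9 million
    print(f"years needed: {required_miles / miles_per_year:,.0f}")  # ~402, i.e. roughly 400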
HPC in the Cloud
Worldwide, the proportion of sites exploiting cloud computing to address parts of their HPC workloads has grown to over 70% in 2019, helping the "democratisation of HPC", especially as virtualisation capabilities become more efficient and HPC-friendly. This is of particular relevance for the potential links with European commercial initiatives such as the recently announced GAIA-X89, a new European data infrastructure project that aims to grow an autonomous and self-determined digital ecosystem in Europe.
According to 2019 research from IDC90, worldwide spending on public cloud services and infrastructure is expected to reach EUR 333 billion by 202291, a five-year CAGR of 22.5%. Cloud has been slower to catch on in HPC circles. Hyperion Research estimates that while 70% of HPC sites run jobs in the public cloud, these jobs comprise just 10% of all workloads. Recent surveys on HPC show cloud users reporting that they run 33% of their HPC workloads in third-party clouds, while the HPC community as a whole runs 20% of workloads in cloud environments.
Despite the limitations of using HPC in clouds (e.g. moving mission-critical workloads off-
premises and high costs associated with data locality where large volumes of data are
involved), 2019 is a tipping point year for a significant and long-anticipated shift in market
attitudes toward running HPC workloads in clouds, resulting in an increase in the revenue
forecast from EUR 2.7 billion to EUR 3.6 billion for 2019 and totalling EUR 6.75 billion by
2023. This reflects a compound annual growth rate (CAGR) in 2018-2023 of 24.6%. By
applications, HPC in the cloud will be led by geosciences (27.3%), electronic design automation (26.0%) and biosciences (25.6%), followed by CAE at 24.7% and chemical engineering at 23.9%. The breakdown also shows relatively slower cloud growth (21.3%) for HPC performed in university/academic settings.
A study of HPC end users that are currently using public clouds confirms the growing
importance of third-party clouds from cloud services providers (CSPs) or system vendors to
run established and newer HPC workloads, such as ML and DL. On average, the surveyed
users run 33% of all their HPC work in third-party clouds; extrapolating from this group of
admitted cloud users to the whole HPC community drops that average to ~20%, representing
a major uptick from the 10% figure in Hyperion Research surveys 18 months ago.
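One simple way to see how a 33% workload share among cloud users translates into roughly 20% across the whole HPC community is to weight it by the share of sites using any cloud at all. The short calculation below uses only the figures quoted above and is a back-of-the-envelope illustration, not Hyperion Research's actual methodology.

    # Illustrative weighting of the survey figures quoted above; this is a
    # back-of-the-envelope check, not Hyperion Research's actual methodology.

    share_of_sites_using_cloud = 0.70      # sites running at least some jobs in public cloud
    cloud_share_among_cloud_users = 0.33   # workload share reported by those users

    community_wide_share = share_of_sites_using_cloud * cloud_share_among_cloud_users
    print(f"Implied community-wide cloud share: {community_wide_share:.0%}")  # ~23%, close to the ~20% quoted above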
HPC cloud computing is rounding a corner in the adoption curve. 40% of these users believe
that all their HPC jobs could be run in the cloud – pointing to substantial headroom for cloud
growth. The ultimate limiter of this growth may be data locality, the inefficiency of moving
large data volumes to third-party clouds when the data is already in the same locale as the
applications and computing resources.
Figure 14 – HPC in the Cloud market
The study also confirms that the cloud segment should be seen as a complement to on-premise
HPC computing, not as a threat. Most HPC work going to third-party clouds stems from pent-
up demand and users without on-premise HPC resources, not from jobs already being run on-
premise. An important new business source for HPC and for cloud computing includes
commercial enterprises whose requirements are pushing up into the HPC competency space.
CSPs and HPC system vendors have begun chasing these companies with increasing success.
As just noted, 40% of the surveyed HPC users said all of their HPC workloads could be run in
a third-party cloud environment. The remaining 60% of HPC users disagreed, saying that
some of their HPC workloads are not suitable for being run in an external cloud. A
coincidentally similar 63% of surveyed CSPs reported that there are HPC workloads they
advise customers not to run in the CSPs' cloud environments. Other recent Hyperion Research
studies and interactions with major CSPs and HPC users indicated that some sites keep
mission-critical and secure workloads on-premise as a matter of policy. There are certainly
other reasons for keeping some workloads out of external clouds, but the same sources point
to data locality as the principal long-term reason for keeping certain workloads on-premise.
HPC worldwide investments: the strategic race towards exascale computing
Unless specifically referenced otherwise, the data in this section come from different market analyses84.
Exascale computing: the opportunity for EU suppliers
One of the key objectives of the EuroHPC JU is to secure a European autonomous and
competitive HPC technology supply. The ambition is that such European technology may soon start being integrated into the future European HPC infrastructure. Representative
examples of such efforts are the European Processor Initiative (EPI) and the other technology
development activities launched in Horizon 2020.
The global race towards exascale gives the EU a new opportunity to regain a place in the computing landscape. Europe has all it takes to be a global player in HPC supply: power-efficient nano-electronics, interconnect and processor designs, middleware solutions, parallel programming and computing resource optimisation solutions, scientific and industrial codes, etc. Europe can exploit these strong assets to re-establish European industry as a leading technology supplier and to reinforce its position as a world leader in the use of HPC – Europe consumes around 30% of the world's HPC resources but supplies only about 5% of them.
A key goal of the EuroHPC JU efforts on the HPC supply side is to leverage technologies across the computing continuum. The development of European technologies is not for the sake of building the fastest supercomputer in the world (a "one of a kind" system), but rather to build "first of a kind" systems with technologies that reach beyond the HPC domain and, in the longer run, feed the broader ICT markets with EU-made technologies. The transition to exascale computing therefore represents an opportunity for the European supply industry: these technologies have a wide application area, from smartphones to embedded systems (for example in future driverless cars) and data servers, feeding a broader, trillion-scale ICT market within a few years of their introduction in high-end HPC.
There is a huge potential economic effect in the mass computing market from the investments
in HPC technologies: HPC leadership can provide a “first mover” advantage; the technologies
and skills needed to design, develop, and deploy leadership class systems often lead
requirements for other computing systems by many years. Understanding and driving
innovations at the leading edge can enable a valuable learning curve for the leaders; reacting
to these innovations can be expensive and can lead to loss of markets by the incumbents to the
innovators.
Between today and 2025, major government-sponsored efforts will drive development of
about 26 near-exascale and exascale systems, with total spending of about EUR 8.1 billion (in
the range of EUR 0.9 to EUR 1.8 billion per year). China may be the first to install an
exascale system within the next 18 months, with the USA following soon afterwards with the installation of the Aurora supercomputer. Japan will field the first $1 billion (EUR 0.9 billion) supercomputer ever, the Fugaku system.
Figure 15 - Projected Exascale Systems dates
Figure 16 - Projected Exascale and pre-exascale acceptance (2020-2025)
The projected levels of R&D investment associated with the development of the above systems (in addition to the purchase investments) are summarised below (November 2019):
Figure 17 - R&D investments in the race to Exascale
The exascale plans of the US and China are particularly interesting, inasmuch as both countries will probably be deploying the largest number of these systems over the next several years. If the projected performance estimates are accurate, not all will achieve 1 exaflops on Linpack. Two of the first three American exascale systems are expected to be delivered by late next year, with acceptance in 2022. These are Aurora, based on the Cray Shasta architecture with Intel Xeon CPUs and Intel Xe GPUs, for Argonne National Laboratory; and Frontier, based on Cray Shasta with AMD Epyc CPUs and future Radeon GPUs, for Oak Ridge National Laboratory. The third system, El Capitan, also based on the Cray Shasta architecture, is expected to be delivered to Lawrence Livermore National Laboratory within two years, with acceptance in 2023.
Annex II: HPC and AI
AI is globally recognised as one of the most strategic technologies of the 21st century, thanks
to the growth in computing power, availability of data and progress in algorithms. The recent
advances in digital technologies reflect the increasing importance of the convergence of HPC,
AI and big data, representing a fundamental transforming evolution of the use of HPC for
scientific, industrial and policy-making applications. This evolution can be characterised by three stages in the use of HPC:
– Modelling and simulation reduce the need for costly and time-consuming experiments or physical prototypes, and allow the study of properties that are impossible to test experimentally.
– High performance data analytics (HPDA) combines HPC with data analytics, involving the parallel processing of huge amounts of data. This provides much deeper insights into previously unexplored areas and systems of the highest complexity.
– Convergence with AI: the newest AI developments such as ML and DL are made possible by the increasing availability of sufficient amounts of training data, huge HPC processing power and new algorithms exploiting such computing power.
“Deep learning has been a game-changer for AI with a tremendous
improvement in performance for specific tasks such as image or speech
recognition, or machine translation. …. Significant advances in these
technologies have been made through the use of large data sets and
unprecedented computing power.”93
HPC for AI: AI needs HPC not only to execute the specific computing tasks of processing data, but also to build and feed the computational models for AI tasksxli. HPC generates huge amounts of data suitable for AI training. HPC can scale up the learning phase of neural networks, providing, for example, the computing power needed to implement unique levels of parallelism for the massive scaling of deep neural network training92, or for the auto-tuning of the choice of models (Auto DL and ML). HPC is also critical for generating trust in AI, providing explainability techniques and implementing tools for coupling formal methods with neural networks.
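The massive scaling of deep neural network training mentioned above typically relies on data parallelism: each node computes gradients on its own shard of the training data, and the gradients are averaged across nodes at every step (an all-reduce on a real machine). The sketch below simulates this pattern in plain NumPy with a simple linear model; all names and sizes are illustrative, and it is not tied to any particular EuroHPC system or framework.

    import numpy as np

    # Schematic data-parallel training step: each "worker" holds a shard of the
    # data, computes a local gradient, and the gradients are averaged (the role
    # played by an all-reduce on a real HPC system). A linear least-squares model
    # is used for simplicity; sizes are illustrative.

    rng = np.random.default_rng(0)
    n_workers, shard_size, n_features = 4, 256, 8
    w = np.zeros(n_features)                              # shared model parameters
    true_w = rng.normal(size=n_features)

    # Each worker gets its own data shard.
    shards = []
    for _ in range(n_workers):
        X = rng.normal(size=(shard_size, n_features))
        y = X @ true_w + 0.01 * rng.normal(size=shard_size)
        shards.append((X, y))

    def local_gradient(w, X, y):
        """Gradient of 0.5*||Xw - y||^2 / n on one worker's shard."""
        return X.T @ (X @ w - y) / len(y)

    lr = 0.1
    for step in range(200):
        grads = [local_gradient(w, X, y) for X, y in shards]  # computed in parallel
        g = np.mean(grads, axis=0)                            # stands in for an all-reduce
        w -= lr * g

    print("parameter error:", np.linalg.norm(w - true_w))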
AI for HPC: HPC (and HTC) also need AI. A wide range of AI techniques help HPC-powered tasks and applications, for example: inferring data flows from large-scale scientific instruments (stream access, support of end-to-end workflows); coupling learnt models and simulation codesxlii aiming at cognitive simulation; (in-situ, in-transit) post-processing of numerical simulations (optimising data movement and minimising energy); or better exploiting systems and computing centres (with AI-driven schedulers, preventive maintenance, optimisation of the infrastructures, etc.).
One of the medium-term challenges will be to achieve the optimum trade-off between the precision of the AI models and the associated computational cost, especially in IoT applications or scenarios with real-time requirements, where latencies are critical. This calls for HPC systems that support the efficient running of both compute-intensive and data-intensive workloads.
xli Data from GENCI (Grand équipement national de calcul intensif), France.
xlii The ACM Gordon Bell prizes 2018, which recognise outstanding achievement in HPC, were awarded to two HPC applications enhanced with AI techniques.
The HPC/AI synergies are emerging in many different domains. This can be illustrated in the
following three areas:
1. Digital transformation of Europe and industrial applications
The synergy between HPC and AI technologies is of crucial importance for the digitisation of Europe. In its Communication "Artificial Intelligence for Europe"93, the Commission proposes the set-up of industrial data platforms offering high-quality datasets in several application areas and the development of a single access point for all users to relevant AI resources in the EU, the "AI-on-demand platform". In addition to data, tools
and algorithms, this initiative will offer the necessary HPC power to analyse the huge
amounts of data and execute the advanced tools and algorithms necessary to fully exploit
the AI potential.
The digitisation of industry is bringing a revolution in the way businesses operate. The combination of computing with AI enables the 4th industrial revolution – "Industry 4.0".
“We're in the midst of a significant transformation regarding the way we
produce products thanks to the digitization of manufacturing. This transition
is so compelling that it is being called “Industry 4.0” to represent the fourth
revolution that has occurred in manufacturing. From the first industrial
revolution (mechanization through water and steam power) to the mass
production and assembly lines using electricity in the second, the fourth
industrial revolution will take what was started in the third with the adoption
of computers and automation and enhance it with smart and autonomous
systems fuelled by data and machine learning.”94
AI technologies require more and more computing power and data to perform advanced real-time analytics and create new high added-value businesses, services and applications. The combination of HPC, big data and AI is key to enhancing product quality, understanding customers' behaviour and reacting to unforeseen events much faster, bringing innovation and new jobs and opportunities.
This development in the industrial arena over the last few years has been made possible by the increase in the "raw" computational power that is readily available, the exponential growth of data, and the enhanced ability of AI techniques (in particular ML and DL) to learn and leverage information from such data. This is making supercomputers a necessity in a very broad range of industries such as biotechnology, finance, manufacturing, and oil and gas exploration.
This market trend is bound to continue as shown in Annex I. The technology combination
of HPC and AI is fostering the rapid development of new applications and HPC services
across multiple industrial sectors, not only in emerging markets, but also in the more
traditional parts of the economy, in particular if such services are available in secure and
easy-to-use cloud-based platforms.
2. Scientific applications
Scientific users have been early adopters of HPC technology since the 1990s, and the use of HPC in scientific applications illustrates the evolution of supercomputing that is now being observed in other industrial and policy-making areas: modelling and simulation, HPDA, and the extensive use of AI techniques.
The convergence of HPC with AI and big data is shaping a revolution in many scientific areas – to the extent that some already describe "data science" as the fourth pillar of the scientific method. Again, the most visible results of this confluence are found in the life sciences and medicine. In a few years, personalised medicine will become mainstream medicine, with diagnosis and treatments tailored both to the patient and to the state of the disease, and it will support medical analysis and decision-making, e.g. ML algorithms supporting the online analysis of X-rays against millions of other samples for general practitioners, or real-time support during operations.
3. Security and Cybersecurity
Security
HPC and AI are game-changing applications in security95, and both the US and China have already closely linked HPC and AI developments in their programmes.
US: US President Trump's executive order on Maintaining American Leadership in Artificial Intelligence47 makes the HPC-AI link explicit, asking the administration to prioritise the allocation of high-performance computing resources for AI-related applications. The US 2020 budget request96 proposes cuts in many science programmes but increases the HPC funding of the Department of Energy (DoE), which is in charge of the federal HPC national laboratories that will host the exascale US supercomputers and the exascale technology developments. The DoE budget includes substantial specific budget lines for AI (EUR 107 million), complementing the EUR 728 million for the Exascale Computing Initiative.
China: As the second biggest ‘player’ in general-purpose AI, China is increasingly
showing that it is capable of keeping pace with the US in this field. The overarching goal
is to “boost China's overall competence in AI”. These developments are catalysed by an
HPC industry, which is increasingly self-reliant: after the US government banned the sale
to China of Intel Xeon processors in April 2015, China was able to substitute its own,
native-built processors in the design of the Sunway Taihu Light, the world's fastest
supercomputer from 2016 to 2017.
Cybersecurity
In cybersecurity, HPC unlocks the power of security tools thanks to its capability to speed up complex AI- and ML-driven software. Hybrid techniques combining HPC and AI (in particular ML techniques) are used for more effective threat analysis and security event correlation. Novel techniques are developed every day using these hybrid tools, detecting anomalous system behaviour, insider threats and electronic fraud, and detecting and fighting cyber-attack patterns very early (in a matter of a few hours instead of a few days) or potential misuse of systems, allowing for automated and immediate actions even before hostile events occur.
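As a schematic illustration of the ML building block in such hybrid tools, the sketch below applies unsupervised anomaly detection (scikit-learn's IsolationForest) to synthetic security-event features; real deployments would process massive, continuously updated event streams on HPC infrastructure, and the feature set shown is purely hypothetical.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Schematic anomaly detection over security-event feature vectors (e.g. per-host
    # counts of logins, outbound connections, data volumes). Data are synthetic and
    # purely illustrative of the ML building block described above.

    rng = np.random.default_rng(42)
    normal_events = rng.normal(loc=0.0, scale=1.0, size=(5000, 6))
    suspicious_events = rng.normal(loc=4.0, scale=1.0, size=(20, 6))   # rare outliers
    events = np.vstack([normal_events, suspicious_events])

    detector = IsolationForest(contamination=0.005, random_state=0).fit(events)
    labels = detector.predict(events)            # -1 flags anomalous events

    print("events flagged as anomalous:", int((labels == -1).sum()))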
"By 2025, machine learning will be a normal part of security practice and
will offset some skills and staffing shortfalls." Gartner further states: "We
can't escape the fact that humans and machines complement each other, and
together they can outperform each alone. Machine learning reaches out to
humans for assistance to address uncertainty and aids them by presenting
relevant information."97
Hybrid HPC/AI is a security asset for companies dealing with increasingly sophisticated attacks. For example, advanced persistent threats are long-term attacks
performing continuous stealthy computer hacking. The undetected attacks linger inside
systems for weeks or months, moving across corporate infrastructure and getting past
security controls. Highly accurate and rapid event correlation and anomaly detection can
help uncover evidence of such attacks. HPC/AI-powered security tools give a long-term
advantage to organizations defending against cyberattacks.
European examples of the convergence of HPC/AI use
The combination of HPC with AI-based methods has a huge transformative impact in the
processing and the extraction of added value from massive amounts of data, in particular
using ML/DL approaches. Between AI and more traditional HPC modelling there is a positive
feedback that can further accelerate both techniques. For example, simulations produce huge
amounts of data to train the AI-based algorithms, and AI techniques accelerate the
parameters/phase space exploration to find optimal solutions to the simulated problems. There
are several examples illustrating this important transformative impact in Europe:
ANTAREX and Exscalate4CoV
ANTAREX98
is an example of how AI and HPC can substantially boost computer-aided drug discovery.
ANTAREX produced a platform, "Exscalate", capable of speeding up the drug discovery process by a factor of 100 using combined AI and HPC techniques. ANTAREX used the Marconi supercomputer (ranked #21 in the world) to run exascale-ready HPC/AI technologies that help shorten the path from the discovery of a health threat to the availability of a cure.
An Italian pharmaceutical SME (Dompé) is now able to optimise molecular docking so as to reduce the virtual screening process for the identification of new active substances by two orders of magnitude. This has been the most important use case for testing the ANTAREX technologies, helping to produce novel treatments against the Zika virus.
A total of 1.2 billion molecules (including all investigational and marketed drugs) targeting the Zika virus were screened and virtually tested using a massively parallel simulation on Marconi. This makes it the largest virtual screening experiment ever launched in terms of computational threads (1 million) and database size (1.2 billion molecules). The new computational techniques in the Dompé software resulted in the following savings: time to solution reduced from 52 to 3.5 days; energy consumption from 504 to 84 MWh; and cost to solution from EUR 70 K to EUR 12 K.
ANTAREX helped Dompé become competitive worldwide, running faster and greener, with a new process that paves the way for its growth from an SME into a large company.
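Virtual screening of this kind is embarrassingly parallel: every candidate molecule can be scored against the target independently, which is what allows millions of computational threads to be used at once. The sketch below shows the pattern with Python's multiprocessing module and a placeholder scoring function; it is not the ANTAREX/Dompé docking code, and the library size is scaled down for illustration.

    from multiprocessing import Pool
    import random

    # Schematic of the embarrassingly parallel virtual-screening pattern described
    # above: score every candidate molecule independently against a target and keep
    # the best hits. The scoring function is a placeholder, not a real docking code.

    def docking_score(molecule_id: int) -> tuple[int, float]:
        """Placeholder for an expensive docking/affinity calculation."""
        rng = random.Random(molecule_id)          # deterministic per molecule
        return molecule_id, rng.uniform(0.0, 1.0)

    if __name__ == "__main__":
        library = range(100_000)                  # stands in for a billion-molecule library
        with Pool() as pool:                      # on HPC, this becomes MPI ranks / job arrays
            scores = pool.map(docking_score, library, chunksize=1_000)
        top_hits = sorted(scores, key=lambda s: s[1], reverse=True)[:50]
        print("best candidate molecules:", [m for m, _ in top_hits[:5]])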
A spin-off result of ANTAREX has been the EU-funded supercomputing project "Exscalate4CoV"99, which exploits HPC and artificial intelligence (AI) technologies that complement traditional biology methods to find a treatment for the novel coronavirus disease (see Annex III, section "HPC and the COVID-19 crisis", for more detail).
The Human Brain Project Flagship100
The aim of the “Human Brain Project” (HBP) FET Flagship is to understand the functioning
of the human brain. HBP is using large-scale simulation and multi-scale modelling to produce
a detailed 3D map of the brain derived from many thousands of histological brain slices
imaged at ultrahigh resolution with modern microscopes.
Mapping brain areas is a very time-consuming, semi-automatic process that involves analysing complex patterns of cell distributions across different, independent subjects. Scientists aim to create a new generation of brain-mapping tools that exploit the most
advanced high-throughput imaging devices, ML algorithms and HPC infrastructures available
today. They have trained a deep convolutional neural network to classify texture in
microscopic scans of brain tissue into different brain areas. The network learns precise texture
features from existing annotations in microscopic images, and combines them with
information from existing atlases. The neuroscientists and data analysts have worked closely
with the Jülich Supercomputing Centre to run the application at scale on the GPU-
accelerated clusters JURECA and JURON. The use of this modern HPC infrastructure enables
the algorithm to process many large chunks of image data in order to capture both the cellular
detail and spatial context. Without HPC, running the network would be almost infeasible.
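A minimal sketch of the patch-classification pattern described above is shown below, using PyTorch with random tensors standing in for annotated histology patches; it is purely illustrative and is not the HBP network, data or training pipeline.

    import torch
    from torch import nn

    # Minimal sketch of patch-wise texture classification with a small CNN, in the
    # spirit of the brain-mapping workflow described above. Random tensors stand in
    # for annotated histology patches; this is not the HBP model or data pipeline.

    n_classes, patch = 10, 32                      # e.g. 10 brain areas, 32x32 patches
    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * (patch // 4) ** 2, n_classes),
    )

    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):                        # on HPC, distributed over GPU nodes
        patches = torch.randn(64, 1, patch, patch)           # stand-in image patches
        labels = torch.randint(0, n_classes, (64,))          # stand-in annotations
        loss = loss_fn(model(patches), labels)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()

    print("final training loss:", float(loss))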
Pl@ntNet
Pl@ntNet101
is an identification system that helps users identify plants from images. It is a research and citizen science project, initially supported by the Agropolis Foundation and developed since 2009 within the framework of a consortium bringing together Cirad, INRA, INRIA and IRD. Pl@ntNet is available for free as an app on the AppStore and on Google Play, and since its launch in February 2013 the application has attracted more than 12 million users.
The Pl@ntNet system works by comparing visual patterns transmitted by users via photos of the plant organs (flowers, fruits, leaves, etc.) that they seek to identify. These images are analysed and compared, using ML/DL techniques, against an image bank produced collaboratively and enriched daily. The system then offers a list of possible species with illustrations. This research is at the frontier of several fields (botany, ecology, computer science, citizen science) and aims in particular to contribute to the monitoring of plant biodiversity (more than 369,000 species of flowering plants in the world) on a global scale, thanks to the involvement of citizens across the planet. Every day, more than 140,000 people around the world use the application. The system currently covers more than 20,000 wild, ornamental and cultivated plants.
The HPC resources are provided by GENCI and are helping to improve the tools.
CALIOPE
CALIOPE102
is a system developed by the Barcelona Supercomputing Centre (BSC) which offers 48-hour air quality forecasts for Spain and Europe, thanks to the combination of different numerical simulation models (meteorology, emissions and photochemical transport) executed on the MareNostrum supercomputer with AI techniques.
CALIOPE analyses the air quality in a given area and the concentrations of the main air
pollutants (ozone, nitrogen dioxide, sulphur dioxide, and particles), providing the citizens
with a reliable forecast of air quality in the 24-48 hour range. Examples:
– For Barcelona, CALIOPE provides decision makers with the information they need to take preventive action, with forecasts that incorporate different emission reduction scenarios such as vehicle bans;
– For Mexico City, citizens have access to predictions of the presence of the main atmospheric pollutants 24 hours in advance, at resolutions accurate to one km2 and to the hour. The system will be complemented with predictions of the effects of the various measures and plans that the government may consider (emission reduction programmes, crisis management, etc.).
CALIOPE will further develop the use of AI to combine high-resolution video-based traffic data with instantaneous emission models, in order to improve predictions and propose better scenarios.
IoTwins103
Digital twins are a perfect case for the combination of HPC, AI, IoT and big data. IoTwins combines complex HPC, big data and ML techniques to help reposition manufacturing processes in Europe and make them more efficient and competitive. The testbeds under development will show the advantages of adopting digital twins in the different application domains.
Globally, more and more producer goods are planned, calculated, designed and simulated digitally. IoTwins is a unique and flexible platform for the creation of industrial digital twins. Within the project there will be 12 testbeds, each realising a digital twin.
CERN104
Physicists use the 26.7-km Large Hadron Collider (LHC) tunnel to accelerate particles almost
to light speed, smash them together and analyse the resulting shower of particles. Collisions in
the LHC generate particles that often decay in complex ways into even more particles. Up to
about 1 billion particle collisions can take place every second inside the LHC experiment's
detectors. It is not possible to read out all of these events. A 'trigger' system is therefore used
to filter the data and select those events that are potentially interesting for further analysis.
Only around 0.004 percent of the total data generated is kept. Even after the drastic data
filtering, the CERN Data Centre processes on average one petabyte of data per day. The LHC
experiments produce about 90 petabytes of data per year, and an additional 25 petabytes of
data are produced per year for data from other (non-LHC) experiments at CERN.
The High-Luminosity LHC, the successor to the LHC, is planned to come online after 2025.
By this time, the total computing capacity required by the experiments is expected to be 50-
100 times greater than today, with data storage needs expected to be in the order of exabytes.
All this imposes huge computational, storage, and analytic requirements. Two examples
illustrating the combination of advanced HPC, data and AI techniques are the following:
CERN demonstrated that AI-based models have the potential to act as orders-of-
magnitude-faster replacements for computationally expensive tasks in simulation, while
maintaining a remarkable level of accuracy. The time needed to create an electron shower is reduced from 17,000 milliseconds in the full simulation to only 7 milliseconds with the AI-trained model – this has a very important impact on the LHC's worldwide distributed CPU
budget, in which most of the half a million CPU-years equivalent is dedicated to
simulation. This kind of approach could help to realise similar orders-of-magnitude-faster
speedups for computationally expensive simulation tasks used in a range of experiments
in current and future accelerators.
CERN recently adopted two new innovations employing ML for the enhanced detection
and analysis of elementary particles: ML techniques can recognise specific patterns in the
billions of particle collisions that occur every second in the LHC, with innovative
algorithms to identify the different types of quarks in the detector. These techniques can
also increase the sensitivity of the data analysis when comparing the results and thus
verify theoretical models faster, to distinguish between them and, in some cases, exclude
large numbers of new physics models from the measurement results. This makes it possible to understand which unknown phenomena are still being overlooked today.
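The first of the two examples above (fast AI-based simulation) follows a general "surrogate model" pattern: an expensive simulation is run to generate training data, a model is fitted to it, and the trained model then replaces the simulator for new inputs at a fraction of the cost. The sketch below illustrates the pattern with scikit-learn and a cheap stand-in function; the CERN work itself relies on deep generative networks rather than the simple regressor shown here.

    import time
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Schematic "fast surrogate" pattern: run an expensive simulator to build a
    # training set, fit a model on it, then use the model in place of the simulator.
    # expensive_simulation() is a cheap stand-in, not a real shower simulation.

    def expensive_simulation(params: np.ndarray) -> float:
        time.sleep(0.001)                       # pretend this costs real compute time
        return float(np.sin(params).sum() + 0.1 * (params ** 2).sum())

    rng = np.random.default_rng(1)
    X_train = rng.uniform(-2, 2, size=(2000, 3))        # e.g. particle energy/angle inputs
    y_train = np.array([expensive_simulation(x) for x in X_train])

    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    surrogate.fit(X_train, y_train)

    X_new = rng.uniform(-2, 2, size=(5, 3))
    t0 = time.perf_counter()
    fast = surrogate.predict(X_new)                     # milliseconds
    t1 = time.perf_counter()
    slow = np.array([expensive_simulation(x) for x in X_new])
    print("surrogate vs simulation:", np.round(fast, 2), np.round(slow, 2))
    print(f"surrogate time: {t1 - t0:.4f} s")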
Weather at ECMWF and MeteoSwiss105
Climate modelling and weather prediction are a major application of HPC systems, delivering socio-economic benefits through advanced weather and climate forecasts. The
European Centre for Medium-Range Weather Forecasts (ECMWF) uses approximately 40
million observations daily in their models. For climate modelling, a single 30-year run from a
25 km resolution model produces in the order of 10 Terabytes of multivariate data. For
numerical weather prediction (NWP) or climate modelling, the data is fitted to fill a 3-D grid
over which multiple simulations are run over time.
With constantly changing weather patterns and a warming climate, weather forecasters need
to have improved prediction capabilities that extend across time and that provide higher
resolutions going down to a few kilometres. Supercomputers play a vital role thanks to their massive compute resources, but this is not enough: while a fivefold increase in data is expected by 2020, a 1,000-fold increase in model complexity is also expected.
The use of ML techniques helps improve the use of HPC resources and enhance parallelism. MeteoSwiss, the Swiss meteorological office, has successfully applied ML/DL techniques: it has seen a 40x performance boost and a 3x reduction in power consumption, with a finer 1 km resolution and a forecast that can be updated every 3 hours. This was done by porting traditional simulation codes to an accelerated GPU-based cluster that allows the effective execution of DL and AI techniques.
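The simulation codes ported in this kind of work are dominated by grid-based kernels such as the one sketched below: an explicit five-point stencil update over a 2-D field, written with whole-array operations so that the same expression can be offloaded to GPU array libraries with minimal changes. Grid size and coefficients are illustrative only, and the kernel is not taken from the MeteoSwiss code.

    import numpy as np

    # Schematic of the kind of grid-based kernel at the heart of weather and climate
    # codes: an explicit diffusion-like stencil update over a 2-D field, written with
    # array operations so the same expression can be offloaded to GPUs.

    nx, ny, steps, alpha = 512, 512, 100, 0.1
    field = np.zeros((nx, ny))
    field[nx // 2, ny // 2] = 1.0                 # initial perturbation

    for _ in range(steps):
        # 5-point Laplacian stencil on the interior points.
        lap = (field[:-2, 1:-1] + field[2:, 1:-1] +
               field[1:-1, :-2] + field[1:-1, 2:] -
               4.0 * field[1:-1, 1:-1])
        field[1:-1, 1:-1] += alpha * lap

    print("field total (conserved until the perturbation reaches the boundary):", field.sum())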
Annex III: Applications of HPC
This section completes and illustrates in more detail section 3.1 “The increasing importance
of HPC for a wide range of applications”.
The data revolution and the strategic digital autonomy
HPC has a key role in our industrial, scientific and societal development. HPC impacts almost
every aspect of our daily life. The Impact Assessment for EuroHPC2
analysed the global use
of HPC with more than 800 applications across all scientific fields, branches of government
and virtually all industries and sectors.
The convergence of HPC, AI, big data and high performance data analytics (HPDA), and
Cloud is the main innovation driver in the “data revolution”, creating entirely new
possibilities to extract useful and usable knowledge from the huge amount of raw data
produced every day50. By 2020, the entire digital universe is expected to reach 44 zettabytes (10²¹ bytes), i.e. the equivalent of 5.6 trillion (10¹²) bytes per human on the planet. And, by 2025, it is estimated that 463 exabytes (10¹⁸ bytes) of data will be created each day – the equivalent of around 213 million DVDs per day.
HPC is at the core of the Digital Single Market strategy106
. It is the “engine” that powers the
data revolution, and a key element to fulfil the ambition of putting Europe in the driving seat
of the global data economy. HPC is the enabler of novel leading-edge technologies,
applications and solutions that open new opportunities for digitising European science,
industry and public authorities, benefiting all areas of the economy and society.
In the Impact Assessment of the EuroHPC2
an analysis was provided of the current situation
and importance of HPC for digital autonomy. In a nutshell, the Union currently depends
ever more on foreign supply of key technological components for its supercomputing
infrastructure, making it vulnerable to changes in commercial or geostrategic policies of our
world competitors. The European HPC technology supply chain is weak and the integration of
European technologies into operational HPC machines remains insignificant. This has
important consequences:
– Lack of strategic knowledge in the Union for innovation and competitiveness;
– Data produced by EU research and industry is processed elsewhere because of lack of
corresponding capabilities in the EU.
– European researchers and innovators may move to those areas of the world where high
data and computing capacity is available.
Dependency on non-European resources and knowledge represents a clear risk for Europe's
technological autonomy and scientific and industrial leadership, with wide-ranging
consequences in security, privacy, data protection, commercial trade secrets, and ownership of
data in particular for sensitive applications.
“In many ways, control of computing equals control of information. What if
the personal email and social networks of a sizable portion of U.S. citizens
are hosted overseas? … The emerging Internet of Things has the potential for
providing a variety of innovations that could increase quality of life, but, in
doing so, exponentially increase the amount of sensitive digital information:
medical conditions from wearable diagnostic devices, audio from always-
listening artificial intelligence assistants, activity information from an array
of connected sensors in homes and in businesses. The use of HPC by foreign
entities to analyse the data acquired by these systems is a potential threat to
individual and societal privacy.”107
HPC and industry’s innovation potential
HPC is a mainstream technology for the digitisation of industry. The use of HPC is expanding
to all industries as it becomes more accessible with today's and future broadband networks.
HPC has traditionally enabled industrial sectors that are “computationally aware” like
manufacturing to move up into higher-value products and services. In particular, the use of HPC services over the cloud will make it significantly easier for SMEs that lack the financial means to invest in in-house HPC infrastructure to use HPC capabilities to develop and produce better products and services.
At European level, there are several successful examples of programmes supporting industrial
access and collaboration based on HPC capacities. For example, the PRACE Industry
Access108
provides dedicated resources from the PRACE supercomputers to industrial
projects, aiming at increasing the industrial uptake of HPC in the Tier-0 PRACE systems. The
HPC Centre of Excellence for Engineering Applications (Excellerat)109
supports key European engineering industries in using HPC for highly complex applications. The HPC Centre
of Excellence for Performance Optimisation and Productivity (POP2)110
provides
performance optimisation and productivity services for academic and industrial code(s) in all
domains.
HPC and digital twins
Digital twins are exact digital replicas of physical entities, products, real processes and plants that interact with each other. They can reflect static properties as well as evolving behaviour. Through the collection of large amounts of data, they can simulate different scenarios to define corrective actions, optimise efficiency and diagnose anomalies before they occur. Digital twins can play a radically transformative role in the digitisation of European industry:
For example, the digital twin of an automobile prototype is a digital, 3D
representation of every part of the vehicle, replicating the physical world so
accurately that a human could virtually operate the car exactly as he or she
would in the physical world and get the same responses, digitally simulated.
Companies are using these “digital twins” in a growing number of industrial sectors, making
it easier to design and operate complex products and processes ranging from wind turbines to
supermarket aisles. Three-dimensional (3D) digital twins were originally made for product
design and simulation to optimise the product lifecycle. IoT solutions were then incorporated
to generate real-time feedback between physical objects and their digital counterparts.
Digital twins can bring enormous value to companies: the time-to-market is shortened
drastically, because the digital twin can generate data throughout the complete life-cycle
before the real product is launched; and conclusions regarding the condition, usage, error
sources and more can be made and used to develop new and better products.
HPC is enabling a new class of digital twins. Many companies have already derived
immense value from digital twins. Yet, these traditional digital twins are limited in that they
cannot work on physical and virtual models simultaneously. A new generation of digital twins
requires the powerful, agile computing capabilities provided by HPC to facilitate global
mobility and collaboration, combining different technologies such as mixed reality tools,
cloud rendering, real-time simulation and analysis, IoT and DL/AI. These new digital twins
are able to significantly accelerate the product development and manufacturing processes, by
generating digital representations of their end-to-end business processes while providing new
ways of collaborating simultaneously in the virtual and physical world.111
Digital twin adoption and market size will continue to increase exponentially.112 113 Adoption of digital twins across products, machines and processes continues to skyrocket across enterprises. Deloitte forecasts that the global market for digital twin technologies will reach EUR 35 billion by 2025. By 2022, 40% of IoT platform vendors will integrate simulation platforms, systems and capabilities to create digital twins, 30% of Global 2000 companies will be using data from digital twins of IoT-connected products and assets, achieving gains of up to 25%, and 70% of manufacturers will use digital twins to conduct simulations and scenario evaluations, reducing equipment failure by 30%114.
SMEs
The use of HPC resources has recently come into reach for many SMEs. Until approximately
five years ago, the use of HPC resources had been considered too complex, costly and hence
out of reach for many smaller businesses. This was largely attributed to the lack of software
integration and limited cloud capacity for running HPC applications and providing users with
easily accessible HPC services. There are, however, continuing barriers to the effective use of
HPC by SMEs due to constraints related to access to adequate software packages that suit the
specific needs of SMEs.44
The HPC-specific activities for SMEs supported under Horizon 2020 show that there is growing demand from SMEs as new users of HPC:
– The Fortissimo and Fortissimo-2 actions37 were highly successful in attracting new SME users to advanced cloud-based HPC solutions based on modelling and simulation and/or HPDA. Fortissimo demonstrated the feasibility of setting up a "one-stop shop" marketplace where all the necessary skills and services along the simulation value chain for HPC and HPDA would be easily available and affordable on a pay-per-use basis for manufacturing SMEs. Fortissimo estimates that about 30 000 SMEs in Europe are likely to benefit from such a marketplace.
– SHAPE44 is a pan-European programme supporting SMEs in adopting HPC, and is supported
by the HPC resources provided by the HPC infrastructure of PRACE. SHAPE aims to
raise awareness and equip European SMEs with the expertise necessary to take advantage
of HPC-enabled innovation possibilities, thus increasing SMEs innovation potential.
SHAPE helps European SMEs overcome barriers to use HPC, such as cost of operation,
lack of knowledge and lack of resources, and facilitates the process of defining a workable
solution based on HPC and defining an appropriate business model.
The ambition in EuroHPC will be to develop further the strategies put forward by Fortissimo,
the HPC Competence centres, PRACE SHAPE, SESAMENet70
, Enterprise Europe Network
and Digital Innovation Hubs to enhance the HPC uptake by SMEs. Ensuring fairness, in
particular with regards to the ease of access, access time and the pricing of pay-per-use
access, will be decisive in achieving a broader uptake of HPC in SMEs.
On the other hand, evidence on excess demand from industry is somewhat ambiguous. A
study conducted by the EIB29
notes that HPC customers in Europe are primarily public entities in research and academia. These account for approximately 90-95% of the operating time on Europe's highest-performing systems, and only the remaining 5-10% is available for private use. The main commercial users of HPC are large corporations, while the
uptake amongst SMEs is limited, mainly due to lack of awareness of the benefits of HPC,
technical knowledge barriers, and the considerable capital costs required. Digital Innovation
Hubs and Enterprise Europe Network will have an active role to play in reaching out to SMEs
to promote HPC and its benefits and to encourage, guide, and facilitate SME access to HPC.
There is a trend towards HPC centres gradually opening up to cooperation with industry, and
some of the frontrunners have been operating successful industrial outreach programmes to
work with the private sector. These centres partly finance themselves via these activities, but
the EIB has noted that some public HPC centres lack viable business models due to legal
limitations in raising revenues from commercial activities.
Scientific leadership
HPC and scientific computing and simulation are now firmly established as the third pillar of
modern research, alongside theory and experimentation115
. Thanks to steadily increasing
computing power with the introduction of massively parallel computer systems and
widespread availability of HPC infrastructure (in particular since the 1990s), HPC has quickly
become an essential component in nearly every field of scientific research.
PRACE is the only pan-European scheme allocating high-end computational resources to
scientific computational projects with a common scientific and technical peer-review based on
excellence. The allocation of projects and resources awarded in the PRACE scheme gives a good indication of the areas currently showing the greatest demand for high-end computing resources33:
Figure 18 - Resources awarded in PRACE per area
Areas | Resources awarded | Projects awarded
Chemical Sciences and Materials | 24% | 26%
Fundamental Constituents of Matter | 21% | 15%
Engineering | 18% | 18%
Universe Sciences | 15% | 15%
Biochemistry, Bioinformatics and Life sciences | 13% | 15%
Earth System Sciences | 7% | 7%
Mathematics and Computer Sciences | 2% | 3%
Today, many of the recent breakthroughs simply would not be possible without HPC.
“The exponentially increasing advances of scientific computing can easily be
taken for granted, but these accomplishments have only been realised
because of substantial, and ongoing, investments in research and
infrastructure… The reason for these advances is that science has become
interwoven with computing over the last half-century… What these advances
have in common is that at one point they were all considered absurdly
difficult and far beyond the capabilities of mathematics, models and available
computers – but they became possible to solve when a large number of
individuals invested decades of effort into using computing to model
problems more difficult than anybody had imagined before.” 116
Access to a leading-class HPC infrastructure with the most advanced supercomputers is
essential to address major scientific challenges that we face today. The use of supercomputers
has been instrumental in the Nobel Prize in Chemistry in 2013, awarded for the development of multiscale models for complex chemical systems, and in the Nobel Prize in Physics in 2017, awarded for the discovery of gravitational waves.
“A world class European computational infrastructure will expand the
Frontiers of Fundamental Sciences, extending and complementing
experiments. …. This fundamental research advances the state-of-the-art of
scientific computing and helps attract new generations to science,
technology, engineering and mathematics.”
The applications of HPC in science are countless. For example, in fundamental physics,
advancing the frontiers of knowledge of matter in CERN experiments, or exploring the
universe with data from advanced telescopes such as Hubble or the Square Kilometre Array;
in material sciences, for the design of new components critical for the pharmaceutical or
energy sectors among many other fields; in fluid dynamics and adaptive control problems for
the design of airplanes or planning of smart cities; in recognition of natural spoken language;
in modelling the atmospheric and oceanic phenomena at planetary level, etc.
It is probably in the field of life sciences and medicine where the tremendous impact of
bioinformatics is already very visible, for example in understanding generation and evolution
of diseases (in particular cancer) and their early detection and treatment. This is made possible by the fast identification of genetic disease variants by supercomputers processing billions of DNA sequences. HPC is also critical for simulating the human brain to study its structure, from the re- or de-generation of neurons to much more complex cerebral structures and functions, leading for example to valuable insights for the prevention and cure of Alzheimer's disease.
Societal challenges, policy making and national security
Societal Challenges and policy making
Citizens expect sustained improvements in their everyday life, while at the same time society
is confronted with an increasing number of complex challenges – at the local urban and rural
level as well as at the planetary scale. HPC is an essential tool for transforming those
challenges into innovation and bringing opportunities for growth and jobs that the EU
economy needs.
HPC is a strategic resource for policy-making, helping us to understand our ever-changing
world, and providing a much-needed evidence for designing efficient solutions in many of the
global challenges. Given the inter-disciplinary nature of HPC and the wide range of
applications, citizens and policy makers will benefit from an increased level of computational
resources in areas such as:
– Weather and climate change: HPC underpins climate study and prediction (weather forecasting, catastrophe prevention and civil protection planning, etc.).
– Health, demographic change and wellbeing: the development of new therapies will heavily rely on HPC for understanding the nature of disease, discovering new drugs, and customising therapies to the specific needs of a patient.
– Secure, clean and efficient energy: HPC is a critical tool in developing fusion energy, designing high-performance photovoltaic materials and optimising turbines for electricity production.
– Smart, green and integrated urban planning: the control of large transport infrastructure in smart cities will require the real-time analysis of huge amounts of data in order to provide multivariable decision and data analytics support on a mobile device or in a car. In addition, HPC can be used for monitoring water and air quality, pollution control, etc.
– Food security, sustainable agriculture, marine research and the bio-economy: HPC is used to optimise the production of food and analyse sustainability factors (e.g. the control of plagues and diseases, etc.).
– Crisis management: in the last few years, a huge number of people have been forced to leave their homes. One of the major issues is to forecast refugee movements, which would allow decision makers and NGOs to allocate humanitarian resources accordingly. An HPC simulation framework can help to accurately predict massive refugee movements coming from various conflict regions of the world.
The “Impact Assessment Study for Institutionalised European Partnerships under Horizon
Europe - Candidate Institutionalised European Partnership in High-Performance Computing
(Final Report)”30
has identified eight (of 17) Sustainable Development Goals (SDG) where
next-generation HPC systems ought to make a meaningful contribution, alongside an
informed judgement as to the extent of the potential contribution to each SDG. Some of these
impacts could materialise via HPC contribution to Earth-observation services, weather
forecasting, ocean forecast and climate services, disaster prevention and crisis management
systems such as those from Copernicus (e.g. Copernicus emergency monitoring service,
Copernicus Climate change service, Copernicus marine environmental monitoring service,
and others). Furthermore, HPC can also lead to increased rapid response capabilities. For
example, EuroHPC is already discussing special access criteria for emergency access to
EuroHPC machines, to deal with disaster situations requiring computing power at short notice (floods, earthquakes, pollution, disease propagation, etc.)117.
Figure 18 – Areas of contribution of HPC to Sustainable Development Goals
SDG | Extent of the contribution
SDG 2 End Hunger | Via applications of HPC (medium)
SDG 3 Good Health and Well-being | Via applications of HPC (high)
SDG 4 Quality Education | Societal-level (medium)
SDG 7 Affordable and Clean Energy | Via applications of HPC (high)
SDG 8 Decent Work and Economic Growth | Direct contribution (high)
SDG 9 Industry, Innovation, and Infrastructure | Direct contribution (high)
SDG 13 Climate Action | Via applications of HPC (high)
SDG 16 Peace, Justice, and Strong Institutions | Via applications of HPC (medium)
EuroHPC supports world-leading efforts in HPC-powered simulations and applications of
direct relevance to the goals of a European Green Deal, notably through HPC Centres of
Excellence. These Centres focus on critical challenges such as Weather and Climate
(ESiWACE)118
, Energy (EoCoE-2)119
, Biomedical Applications (CompBioMed2)120
,
Biomolecular research (BioExcel-2)121
, Materials (MaX)122
, Solid Earth geophysical
monitoring (ChEESE)123
, or complex Global Challenges (HiDALGO).124
One of the most striking examples of the use of HPC for societal challenges is weather and
climate change, where exascale performance is absolutely needed to predict the size and paths
of storms and floods earlier and more accurately, saving lives and reducing the economic
impact of the increasingly damaging effects of climate change:
The last twenty years have seen dramatic losses of human lives and economic
output from climate-related disasters worldwide. According to the UN Office
for Disaster Risk Reduction125
, climate-related and geophysical disasters
were responsible for 1.3 million deaths between 1998 and 2017, and a further
4.4 billion injured, homeless, displaced or in need of emergency assistance.
91% of all disasters were caused by floods, storms, droughts, heatwaves and
other extreme weather events.
National security
HPC is also essential for national security, defence and national sovereignty. HPC is
recognised as a national strategic priority in the most powerful nations. Supercomputers are on the front line for nuclear simulation and modelling, and for fighting cyber-criminality and ensuring cyber-security, in particular the protection of critical infrastructures. HPC is also increasingly used in the fight against terrorism and crime, for example for face recognition or for detecting suspicious behaviour in cluttered public spaces.
“Leadership in high performance computing remains indispensable to a
country's industrial competitiveness, national security, and potential for
scientific discovery… Advanced, high performance computing increasingly
determines a nation's economic as well as defence security.”126
“During the past five years, political leaders in the U.S., Europe, and China
have recognized the ability of leadership-class supercomputers to help
transform their economies, their societies, and their understanding of the
natural world. In Japan and other developed countries, leadership-class
supercomputers have played a major role in advancing science, boosting
industrial competitiveness, and improving the quality of daily life for average
citizens.”127
HPC is a new weapon in cyber-war. The US has put HPC at the heart of cybersecurity
practices in the public domain,128
identifying and analysing abundant opportunities for HPC
use and elaborating concepts, planning and a roadmap toward HPC-based cybersecurity to
alleviate the cybersecurity dilemma on a national scale.
Security Roundup: Ukraine blocked a Russian hack of its critical
infrastructure: Ukrainian security services this week (July 2018) said they
stopped an attempted cyberattack against a chlorine distribution plant.
Russia has repeatedly targeted Ukraine, including devastating attacks on its
power grid. In this case, Russian hackers apparently used VPN Filter
malware – the same that infected half a million routers in May – to try to disrupt
the operations at the plant, which provides clean water throughout the
country. Ukraine didn't offer many details about how exactly it thwarted the
attack, but did say it headed off "possible catastrophic consequences."129
"… national security requires the best computing available, and loss of
leadership in HPC will severely compromise our national security …
National Security modelling and simulation using HPC play a vital role in
the design, development, and analysis of many – perhaps almost all – modern
weapons systems and national security systems …. Simply put, leading-edge
HPC is now instrumental to getting a world-class, large-scale engineering
system out the door …”107
The exponential rise in the economic losses associated with cybercrime also reveals the need for secure and efficient infrastructures and for technologies that can anticipate and promptly react to an ever-increasing threat:
Every day, the AV-TEST Institute registers over 350,000 new malicious
programs (~83.4% malware and 16.6% potentially unwanted applications (PUA) in 2018).130
“Four years ago in 2015, the global cost of malware was an already-
staggering EUR 450 billion. In just a short time, however, the economic toll
of cybercrime has grown fourfold, to EUR 1.8 trillion. At the current
trajectory, the total cost will reach EUR 5.4 trillion by 2021…. January 2019
saw the release of nearly two billion hacked records (that) included data
from 202 million Chinese citizens and a database of FBI investigations… In
2018, the cost of a data breach increased by 6.4% to EUR 3.5 million. In the
US only it’s more like EUR 7.12 million… (in 2019) organizations and
individuals will pay EUR 10.35 billion, either as a cost of remediating
ransomware damage or simply as a cost of paying a ransom…. Crypto-
jacking malware steals your CPU cycles to mine cryptocurrency, and it’s
some of the fastest-growing malware out there, with 8 million attempts per
month at the beginning of 2018….”131, 132
HPC and the COVID-19 crisis
The use of HPC resources with big data sets, deep learning methods and large-scale complex
computational models is also critical to effectively support policymakers during epidemic
emergencies, by rapidly forecasting the trajectory of the spread of an infectious disease,
planning the public health policy response, as well as simulating the efficiency of different
containment measures and evaluating the different post-epidemic scenarios.
The Commission works in close collaboration with the PRACE members to mobilise
additional supercomputing resources in an urgent/priority access scheme for computational
research targeting COVID-19, with a specific call133
to provide researchers with access to
supercomputing resources for their coronavirus related activities.
European supercomputers are boosting their efforts in the search for a coronavirus treatment. The EU-funded supercomputing project "Exscalate4CoV"99 exploits HPC and artificial intelligence (AI) technologies that complement traditional biology methods to
find a treatment for the novel coronavirus disease, with support from supercomputers,
biological institutes, research centres and pharmaceutical companies.
Led by Dompé, a pharmaceutical company based in Italy, the project brings together three powerful supercomputing centres – CINECA in Italy, BSC in Spain and JSC in Germany – as well as several large biochemical institutes and research centres from seven European countries. E4C will receive EUR 3 million of EU funding over 18 months.
The project aims to identify a possible treatment for COVID-19 patients. It uses a "drug library" containing 500 billion molecules, matching them against the digitised proteins of the virus to discover which combination of molecules would inhibit the virus. These operations
have already given promising results, with over 50 potential antiviral molecules identified
from the computer simulations so far. Biologists and biochemists are now working on the
biological screening of these identified molecules.
After this phase, the selected molecules will go through clinical testing to identify a possible
treatment for patients. The project has already started discussions with the European
Medicines Agency on the regulatory process that will be required when moving to the clinical
testing.
The success of E4C also depends on the number of active molecules for the matching
operations. In order to enlarge its “drug library”, the project, with the support of the European
Federation of Pharmaceutical Industries and Associations134
, launched an open call for
collaboration with European pharmaceutical industries.
Other initiatives such as the CompBioMed Centre of Excellence120
are using worldwide HPC resources to work on the following issues135:
– identifying new antiviral drugs by screening libraries of potential drugs, including those that have already been licensed to treat other diseases;
– accelerating vaccine development by identifying virus proteins or parts of proteins that stimulate immunity;
– studying the spread of the virus within communities;
– analysing the origin and structure of the SARS-CoV-2 genome;
– studying how the SARS-CoV-2 virus interacts with human cells to turn them into virus factories.
International efforts
Other countries are also devoting substantial HPC resources to tackling the COVID-19 crisis.
Just a few examples:
- The US has set up the "COVID-19 High Performance Computing Consortium"136, a
public-private partnership in which the Federal government, industry and academic
leaders come together to provide access to high-performance computing resources in
support of COVID-19 research. This complements the US National Science Foundation
(NSF), which is provisioning advanced cyberinfrastructure to further research on
COVID-19137.
- China's supercomputer Tianhe-1138 has been dedicated to fighting COVID-19, in
particular to training AI models that analyse hundreds of chest scans from patients in a
few seconds and distinguish COVID-19 pneumonia from non-COVID-19 pneumonia
with nearly 80% accuracy, dramatically outperforming early test kits as well as human
radiologists.
- The most powerful supercomputer in the world, the Japanese Fugaku139, is also helping
to combat the COVID-19 pandemic by giving priority to research selected by the
Japanese Ministry of Education, Culture, Sports, Science and Technology.
- The Joint Supercomputer Centre of the Russian Academy of Sciences (RAS)140 is
working to develop drugs against COVID-19 with massive molecular dynamics and
quantum chemistry simulations, in particular by studying the virus "spike" protein and
its interactions with the human protein ACE2, which serves as the entry point for
SARS-class viruses.
- The National Supercomputing Centre (NSCC)141 in Singapore has offered its resources
to Singapore scientists studying COVID-19, issuing a fast-track call for projects to use
the ASPIRE 1 petascale supercomputer.
Endnotes and web references
1
Communication "A European Strategy for data" - COM(2020) 66 final
2
Communication “Shaping Europe’s Digital Future” – COM(2020) 67 final
3
Communication “Europe's moment: Repair and Prepare for the Next Generation” – COM(2020) 456 final
4
SWD(2018) 6 final - Impact assessment accompanying the “Proposal for a Council Regulation on
establishing the EuroHPC Joint Undertaking", Annex 5, 2017
5
Communication “A European strategy for data” COM(2020) 66 final, https://ec.europa.eu/digital-single-
market/en/destination-earth-destine
6
Council Regulation establishing the Joint Undertaking on High Performance Computing (EU) 2018/1488 of
28 September 2018, OJ L252/1-34, 08.10.2018
7
White paper on Artificial Intelligence - A European approach to excellence and trust – COM(2020) 65 final
8
PRACE (Partnership for Advanced Computing in Europe) www.prace-ri.eu
9
GEANT, the pan-European high-speed network for scientific excellence, research, education and innovation
www.geant.org
10
https://ec.europa.eu/digital-single-market/en/quantum-technologies
11
Horizon 2020 https://ec.europa.eu/programmes/horizon2020/en
12
Connecting Europe Facility (CEF) https://ec.europa.eu/inea/en/connecting-europe-facility
13
Communication "High-Performance Computing: Europe's place in a global race" - COM(2012) 45 final -
14
Communication "European Cloud Initiative – Building a competitive data and knowledge economy in
Europe" COM(2016) 178 final.
15
Communication on the Mid-Term Review of the Digital Single Market Strategy - COM(2017) 228 final.
16
Competitiveness Council adopting conclusions on the HPC Communication on 24 May 2013, Doc. 9808/13.
17
Competitiveness Council conclusions on the ECI Communication on 29-30 May 2016, doc 9357/16.
18
Conclusions of the European Council of 28 June 2016.
19
European Parliament, Report on the European Cloud Initiative (2016/2145(INI)), ITRE Committee, 26
January 2017.
20
Council conclusions on shaping Europe's digital future, 09 June 2020, doc 8711/20.
21
7th European Framework Programme for Research and Innovation (FP7)
https://ec.europa.eu/research/fp7/index_en.cfm
22
European Technology Platform (ETP4HPC) Association http://www.etp4hpc.eu/
23
Big Data Value Association http://www.bdva.eu/
24
Digital Single Market: Europe announces eight sites to host world-class supercomputers, 7 June 2019
http://europa.eu/rapid/press-release_IP-19-2868_en.htm
25
EuroHPC Call for proposals for R&I actions 2019, https://ec.europa.eu/digital-single-
market/en/news/eurohpc-joint-undertaking-launches-first-research-and-innovation-calls
26
European Processor Initiative Framework Partnership Agreement (FPA) https://www.european-processor-
initiative.eu/
27
Topic INFRAEDI-05-2020: Centres of Excellence in exascale computing
https://ec.europa.eu/research/participants/data/ref/h2020/wp/2018-2020/main/h2020-wp1820-
infrastructures_en.pdf
28
https://ec.europa.eu/research/participants/data/ref/h2020/wp/2018-2020/main/h2020-wp1820-fet_en.pdf
29
“Financing the future of supercomputing: How to increase the investments in high performance computing in
Europe”, EIB 2017, https://www.eib.org/en/publications/financing-the-future-of-supercomputing
30
Forthcoming external study: “Impact Assessment Study for Institutionalised European Partnerships under
Horizon Europe - Candidate Institutionalised European Partnership in High-Performance Computing (Final
Report)”, Technopolis (2020), supported by DG RTD
31
Top world supercomputers, https://www.top500.org/
32
ASCR facilities available at https://science.energy.gov/user-facilities/user-facilities-at-a-glance/ascr/
33
PRACE resources http://www.prace-ri.eu/prace-resources/
34
PRACE KPIs at: http://www.prace-ri.eu/prace-kpi/
35
Horizon 2020, Annotated Grant agreement,
https://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/amga/h2020-amga_en.pdf
36
https://ec.europa.eu/digital-single-market/en/digital-innovation-hubs
37
Fortissimo and Fortissimo 2 booklet - https://www.fortissimo-
project.eu/sites/default/files/Fortissimo_SS_Booklet_web_0.pdf
38
EXDCI project, https://exdci.eu/
39
Exanode project, http://exanode.eu/
40
Exanest project, https://www.exanest.eu/
41
Euroexa project, https://euroexa.eu/
42
Mango project, https://cordis.europa.eu/project/id/671668
43
Montblanc projects, https://www.montblanc-project.eu/
44
SME HPC Adoption Programme in Europe, https://prace-ri.eu/prace-for-industry/shape-access-for-smes/
45
“The EuroHPC mission and strategy for the next decade”, EuroHPC JU Industrial and Scientific Advisory
Board, internal document for the EuroHPC JU Governing Board
46
European “1+ Million Genomes” initiative launched in 2018, https://ec.europa.eu/digital-single-
market/en/european-1-million-genomes-initiative
47
US President Trump, Executive order on maintaining American leadership in AI, February 2019
https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-
intelligence/
48
Enterprise Europe Network https://een.ec.europa.eu/
49
OpenAI analysis 2019, https://www.technologyreview.com/s/614700/the-computing-power-needed-to-train-
ai-is-now-rising-seven-times-faster-than-ever-before/
50
World Economic Forum, “How much data is generated each day?”,
https://www.weforum.org/agenda/2019/04/how-much-data-is-generated-each-day-cf4bddf29f/
51
Square Kilometre Array (SKA), https://www.skatelescope.org/the-ska-project/
52
ESA "https://www.esa.int/Our_Activities/Observing_the_Earth/Copernicus/Copernicus_20_years_on"
53
Genomic data challenges of the future, the Medical Futurist 2018, https://medicalfuturist.com/the-genomic-
data-challenges-of-the-future/
54
CERN computational and storage needs https://home.cern/science/computing/storage
55
Science Direct, Moore’s law https://www.sciencedirect.com/topics/computer-science/moores-law
56
Cray Wins NNSA-Livermore ‘El Capitan’ Exascale Contract, HPCWire, 13 August 2019
https://www.hpcwire.com/2019/08/13/cray-wins-nnsa-livermore-el-capitan-exascale-award/
57
https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=65402
58
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2018%3A434%3AFIN
59
Commission Staff Working Document – Impact Assessment accompanying the Proposal for a Regulation of
the European Parliament and of the Council establishing the Digital Europe programme for the period 2021-
2027 {COM(2018) 434 final} - {SEC(2018) 289 final} - {SWD(2018) 306 final}
60
A Union that strives for more, European Commission 2019, https://ec.europa.eu/commission/sites/beta-
political/files/political-guidelines-next-commission_en.pdf
61
Communication “European Green Deal” - COM(2019) 640 final
62
https://ec.europa.eu/info/sites/info/files/communication-shaping-europes-digital-future-feb2020_en_4.pdf
63
Communication “A New Industrial Strategy for Europe” - COM(2020) 102 final.
64
Communication “An SME Strategy for a sustainable and digital Europe” - COM(2020) 103 final.
65
Staff Working Document “Identifying Europe's recovery needs” - SWD(2020) 98 final
66
Communication “The EU budget powering the recovery plan for Europe” – COM(2020) 442 final
67
Horizon Europe (HE) - COM/2018/436 final - 2018/0225 (COD) - https://eur-lex.europa.eu/legal-
content/EN/TXT/?uri=COM%3A2018%3A436%3AFIN
68
Connecting Europe Facility (CEF-2) - COM/2018/438 final - https://ec.europa.eu/commission/sites/beta-
political/files/budget-may2018-cef-regulation_en.pdf
69
PRACE training events https://events.prace-ri.eu/category/1/
70
SesameNet https://sesamenet.eu/
71
ECSEL Joint Undertaking for Electronic Components and Systems https://www.ecsel.eu/
72
EOSC https://ec.europa.eu/research/openscience/index.cfm?pg=open-science-cloud
73
EOSC portal: https://www.eosc-portal.eu/
74
IDC Spending Forecast, 2018 https://www.idc.com/promo/global-ict-spending/forecast
75
Creating Economic Models Showing the Relationship Between Investments in HPC and the Resulting
Financial ROI and Innovation — and How It Can Impact a Nation's Competitiveness and Innovation, IDC
2013, https://www.hpcuserforum.com/ROI/
76
Study SMART 2014/0021 for the EC "High-Performance Computing in the EU: Progress on the
Implementation of the European HPC Strategy"; IDC 2015. https://publications.europa.eu/en/publication-
detail/-/publication/5a7cfd63-d59a-4211-8b28-0c72cad7068c/language-en
77
Economic Models Linking HPC and ROI, Hyperion 2018, https://www.hpcuserforum.com/ROI/
78
HPC engagement opportunities for Government, Academia and Industry, Hyperion 2017
https://www.hpcuserforum.com/presentations/Wisconsin2017/HyperionUSHPCOpportunitesforEngagement.
pdf
79
Eurostat 2018, https://ec.europa.eu/growth/index_en
80
Eurostat 2018 https://ec.europa.eu/growth/index_en, DG GROW https://ec.europa.eu/growth/index_en,
EFPIA https://www.efpia.eu/media/361960/efpia-pharmafigures2018_v07-hq.pdf, and
https://en.wikipedia.org/wiki/List_of_largest_oil_and_gas_companies_by_revenue
81
How HPC is Helping the Future of Weather Prediction, Weather Analytics 2017,
https://www.cloud28plus.com/emea/content/https---verneglobal-com-blog-how-hpc-is-helping-the-future-of-
weather-prediction
82
Extreme weather deaths in Europe 'could increase 50-fold by next century'
https://www.theguardian.com/science/2017/aug/04/extreme-weather-deaths-in-europe-could-increase-50-
fold-by-next-century
83
The Economic Consequences of Climate Change, OECD 2015, https://www.oecd-
ilibrary.org/environment/the-economic-consequences-of-climate-change_9789264235410-en
84
List of sources:
Hyperion Research Update 2019 (November): https://hyperionresearch.com/wp-
content/uploads/2019/06/Hyperion-Research-ISC19-Breakfast-Briefing-Presentation-June-2019.pdf and
https://www.hpcwire.com/2019/11/21/hyperion-ai-driven-hpc-industry-continues-to-push-growth-
projections/
Hyperion Research Update 2019 – https://hyperionresearch.com/wp-
content/uploads/2019/06/Hyperion-Research-ISC19-Breakfast-Briefing-Presentation-June-2019.pdf,
Worldwide Public Cloud Services Spending Forecast to Reach $210 Billion This Year,
https://www.idc.com/getdoc.jsp?containerId=prUS44891519, Hyperion 2019 update
https://insidehpc.com/2019/06/hpc-market-five-year-forecast-bumps-up-to-44-billion-worldwide/
HYPERION RESEARCH UPDATE: Research Highlights In HPC, HPDA-AI, Cloud Computing,
Quantum Computing, and Innovation Award Winners, 2019, https://hyperionresearch.com/wp-
content/uploads/2019/06/Hyperion-Research-ISC19-Breakfast-Briefing-Presentation-June-2019.pdf
85
WCCTech 2019, AMD and the market 2019, https://wccftech.com/amd-cpu-market-share-highest-since-
2013-ryzen-threadripper-epyc/
86
https://venturebeat.com/2019/12/11/risc-v-grows-globally-as-an-alternative-to-arm-and-its-license-fees/
87
https://technode.com/2019/07/24/chinas-chipmakers-risc-v-sanctions/
88
RAND, https://www.rand.org/content/dam/rand/pubs/research_reports/RR1400/RR1478/RAND_RR1478.pdf
89
GAIA-X, Dotmagazine 2019, https://www.dotmagazine.online/issues/on-the-edge-building-the-foundations-
for-the-future/gaia-x-a-vibrant-european-ecosystem
90
Worldwide Public Cloud Services Spending Forecast, IDC 2019,
https://www.idc.com/getdoc.jsp?containerId=prUS44891519
91
HPCwire October 2019, https://www.hpcwire.com/solution_content/ibm/cross-industry/spooky-hpc-cloud-
computing-stats-just-in-time-for-halloween/
92
Arm A64fx and Post-K: Game Changing CPU & Supercomputer for HPC and its Convergence with Big
Data / AI, Satoshi Matsuoka 2019,
https://www.hpcuserforum.com/presentations/april2019/Rikenmatsuoka.pdf
93
Communication "Artificial intelligence for Europe"- COM(2018) 237 final
94
What is Industry 4.0?, Forbes 2018, https://www.forbes.com/sites/bernardmarr/2018/09/02/what-is-industry-
4-0-heres-a-super-easy-explanation-for-anyone/
95
Artificial Intelligence and the future of Defence, The Hague Centre for Strategic Studies 2017,
https://hcss.nl/sites/default/files/files/reports/Artificial%20Intelligence%20and%20the%20Future%20of%20
Defense.pdf
96
US President Trump budget request for 2020, 2019, https://www.whitehouse.gov/wp-
content/uploads/2019/03/budget-fy2020.pdf
97
Gartner Top 6 Security and Risk Management Trends for 2018, Gartner 2018,
https://www.gartner.com/smarterwithgartner/gartner-top-5-security-and-risk-management-trends/
98
ANTAREX project, http://www.antarex-project.eu/ and https://www.exscalate.eu/
99
Exscalate4CoV project, https://www.exscalate4cov.eu/
100
Human Brain project (HBP), https://www.humanbrainproject.eu/en/
101
Pl@ntnet, https://identify.plantnet.org/
102
CALIOPE project, http://www.bsc.es/caliope/es/ and http://www.aire.cdmx.gob.mx/pronostico-
aire/index.php
103
IoTwins, http://www.hpc.cineca.it/news/iotwins-project-digital-twins-industrial-plants
104
CERN and the use of AI in CERN: https://home.cern/science/computing/storage,
https://www.hpcwire.com/2018/08/14/cern-incorporates-ai-into-physics-based-simulations/,
https://bits-chips.nl/artikel/cern-uses-vub-ai-methods-to-decode-the-universe/
105
https://www.cray.com/sites/default/files/Tractica-White-Paper_Use-Cases-for-AI-in-HPC.pdf
106
Digital Single Market, Digitising the European industry
107
U.S. Leadership in High Performance Computing (HPC) - A Report from the NSA-DOE Technical Meeting
on High Performance Computing, December 2016,
https://www.nitrd.gov/nitrdgroups/images/b/b4/NSA_DOE_HPC_TechMeetingReport.pdf
108
PRACE Industry Access - http://www.prace-ri.eu/prace-supports-industry/
109
Excellerat Centre of Excellence, https://www.excellerat.eu/
110
POP2 Centre of Excellence, https://pop-coe.eu/
111
The cloud enables next generation digital twin, Microsoft, 2018, https://cloudblogs.microsoft.com/industry-
blog/manufacturing/2018/08/20/the-cloud-enables-next-generation-digital-twin/
112
Digital twins, Deloitte 2020 https://www2.deloitte.com/us/en/insights/focus/tech-trends/2020/digital-twin-
applications-bridging-the-physical-and-digital.html
113
Digital Twin Predictions: The Future Is Upon Us, PTC 2019, https://www.ptc.com/en/product-lifecycle-
report/digital-twin-predictions
114
IDC FutureScape: Worldwide IoT 2018 Predictions, IDC 2017,
https://www.idc.com/research/viewtoc.jsp?containerId=US43161517
115
https://www.researchgate.net/publication/220405901_Research_advances_by_using_interoperable_e-
science_infrastructures
116
The Scientific Case for Computing in Europe 2018 – 2026, PRACE 2018, https://www.eu-maths-in.eu/wp-
content/uploads/2018/05/MSO-vision.pdf
117
Workshop on EuroHPC Systems Access Policy, Sergi Girona, 2019:
https://www.youtube.com/watch?v=DIQthdbBl_Y
118
ESIWACE Centre of Excellence, https://www.esiwace.eu/
119
EoCoE Centre of Excellence, https://www.eocoe.eu/
120
CompBioMed Centre of Excellence, https://www.compbiomed.eu/
121
BioExcel Centre of Excellence, https://bioexcel.eu/
122
MaX - Materials design at the Exascale, http://www.max-centre.eu/
123
Cheese Centre of Excellence, https://cheese-coe.eu/
124
Hidalgo Centre of Excellence, https://hidalgo-project.eu/
125
Economic losses, poverty & disasters: 1998-2017, UNDRR report 2018,
https://www.unisdr.org/we/inform/publications/61119
126
The Vital Importance of High-Performance Computing to U.S. Competitiveness, ITIF,
https://itif.org/publications/2016/04/28/vital-importance-high-performance-computing-us-competitiveness
127
Investigation of the Ripple Effects of Developing and Utilizing Leadership Supercomputers in Japan,
Hyperion https://www.r-ccs.riken.jp/r-ccssite/wp-content/uploads/2016/12/IDC-Study-for-Riken-Ripple-
Effects_final.pdf
128
National Cyber Defence High Performance Computing and Analysis: Concepts, Planning and Roadmap,
https://prod-ng.sandia.gov/techlib-noauth/access-control.cgi/2010/104766.pdf
129
Wired, 2018, https://www.wired.com/story/security-roundup-ukraine-blocked-a-russian-hack-of-its-critical-
infrastructure/
130
The Independent IT-Security Institute (AV-TEST), 2019, https://www.av-test.org/en/statistics/malware/
131
Malware Statistics, Trends and Facts in 2019, Safety detectives 2019,
https://www.safetydetectives.com/blog/malware-statistics/
132
21 Terrifying Cyber Crime Statistics, Data Connectors 2018, https://dataconnectors.com/technews/21-
terrifying-cyber-crime-statistics/
133
http://prace-ri.eu/prace-support-to-mitigate-impact-of-covid-19-pandemic/
134
EFPIA association, https://www.efpia.eu/
135
https://sciencebusiness.net/network-updates/ucl-researchers-are-using-worlds-most-powerful-
supercomputers-tackle-covid-19
136
https://covid19-hpc-consortium.org/
137
https://www.hpcwire.com/off-the-wire/nsf-provisioning-advanced-cyberinfrastructure-to-further-research-on-
covid-19/
138
https://www.scmp.com/news/china/society/article/3075153/coronavirus-chinese-supercomputer-uses-
artificial-intelligence
139
https://www.riken.jp/en/news_pubs/news/2020/20200407_1/index.html
140
https://www.hpcwire.com/2020/03/31/russian-supercomputer-employed-to-develop-covid-19-treatment/
141
https://www.hpcwire.com/off-the-wire/singapores-national-supercomputing-resources-joins-the-fight-
against-covid-19/