EESI2 is recognised by many organisations around the world. It organises and contributes to numerous international meetings in order to lead the way in the research and development of software for efficient Exascale applications.

The worldwide outreach of EESI2 is reflected in a major achievement: its contribution to the creation of the Big Data and Extreme-scale Computing (BDEC) initiative.

Since 2013, a series of international workshops has been organised in the USA, Japan and Europe.

Other formal collaborations have been established since 2014 with ETP4HPC and PRACE, two key European initiatives focusing on HPC.

Within its areas of expertise, EESI2 contributes to the ETP4HPC roadmap. As a result, most of the 2014 EESI2 recommendations described in the Guidance section have been included in the draft ETP4HPC Work Programme 2016/2017 proposed to the European Commission for funding.

This section reflects the 2015 vision and recommendations of more than 150 experts worldwide in scientific and industrial applications and in all the sciences and technologies required for Exascale.

It takes into account existing strengths in the European HPC and ICT communities. It addresses key strategic areas for which there is an urgent need for funded programmes of work, beyond classical and conservative HPC approaches, to develop and improve European competitiveness and to achieve leadership.
The EESI data-centric vision was elaborated in 2014. Though very new in Europe, it is essential for approaching the ultra-complex and interdependent challenges of Extreme Computing and Extreme Data.

Discover the three pillars to be funded by the European Commission to build efficient Exascale applications.

Recommendations in this pillar concern programming models and methods, heterogeneity management, software engineering, and cross-cutting issues such as resilience, validation and uncertainty quantification, with a strong focus on the specificities of Exascale in these domains.
Recommendations encompass the following programs:

    • High productivity programming models for Extreme Computing,
    • Holistic approach for extreme heterogeneity management of Exascale supercomputers,

EESI2 experts consider that Exascale will require heterogeneity at an unprecedented level. This means embedding in the same system near-data, data-parallel and function-specific accelerators; general-purpose multicores with multiple ISAs, or with the same ISA but multiple energy-performance trade-offs; networking accelerators; and both volatile and non-volatile memories. These recommendations are coherent with the vision of the European Technology Platform for High Performance Computing (ETP4HPC).
This set of recommendations aims to design and develop new, efficient HW/SW APIs for the integrated management of heterogeneous systems, near-data technologies and energy-aware devices, to enable Exascale-ready applications.
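
To make the idea concrete, here is a minimal, purely illustrative sketch of what an energy-aware placement decision behind such an API could look like; the Device descriptor, the select_device function and all the numbers are invented for this sketch and do not correspond to any existing API or hardware.

    from dataclasses import dataclass

    @dataclass
    class Device:
        """Toy descriptor of one heterogeneous execution resource (hypothetical)."""
        name: str
        gflops: float   # sustained compute rate
        watts: float    # power draw under load

    def select_device(devices, flops, energy_weight=0.5):
        """Pick the device minimising a normalised time/energy trade-off.

        energy_weight = 0.0 favours pure speed, 1.0 favours pure energy.
        """
        times = {d.name: flops / (d.gflops * 1e9) for d in devices}
        energies = {d.name: times[d.name] * d.watts for d in devices}
        t_min, e_min = min(times.values()), min(energies.values())
        def cost(d):
            return ((1 - energy_weight) * times[d.name] / t_min
                    + energy_weight * energies[d.name] / e_min)
        return min(devices, key=cost)

    devices = [Device("big-core", gflops=50.0, watts=90.0),
               Device("little-core", gflops=10.0, watts=4.0),
               Device("accelerator", gflops=200.0, watts=300.0)]

    # A latency-critical kernel favours raw speed, a power-capped job efficiency.
    print(select_device(devices, flops=1e12, energy_weight=0.1).name)  # accelerator
    print(select_device(devices, flops=1e12, energy_weight=0.9).name)  # little-core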

    • Software Engineering Methods for High-Performance Computing,

To cope with the complexity of Exascale systems in terms of very large numbers of nodes, heterogeneity of resources, energy awareness or fault tolerance, to mention just a few challenges, new productivity-enhancing methods and tools are needed to use the manpower available for the development and maintenance of HPC software more effectively, maximizing the potential of applications under given resource constraints.

    • Holistic approach to resilience,

The evolution of software and hardware technologies will lead to an increase in fault rates that will translate into higher error and failure rates at Exascale. HPC hardware by itself cannot detect and correct all errors and failures. Maintaining European resilience capabilities is mandatory for developing efficient Exascale applications. But beyond the development of individual resilience techniques, a holistic detection/recovery approach covering and orchestrating all layers, from the hardware to the application, appears necessary to run simulations and data-analytics executions to completion and produce correct results.
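
As a purely illustrative sketch of one ingredient of such an approach, the fragment below shows application-level checkpoint/restart, the recovery baseline any holistic scheme builds on; the file name, interval and toy solver are invented for the example.

    import os
    import pickle

    CHECKPOINT = "state.ckpt"   # hypothetical checkpoint file name

    def save_checkpoint(step, state):
        """Persist solver state atomically (write to a temp file, then rename)."""
        tmp = CHECKPOINT + ".tmp"
        with open(tmp, "wb") as f:
            pickle.dump((step, state), f)
        os.replace(tmp, CHECKPOINT)

    def load_checkpoint():
        """Resume from the last checkpoint after a failure, or start fresh."""
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT, "rb") as f:
                return pickle.load(f)
        return 0, {"u": 1.0}

    def run(n_steps=100, ckpt_every=10):
        step, state = load_checkpoint()
        while step < n_steps:
            state["u"] *= 1.001          # one toy timestep
            step += 1
            # A holistic scheme would also verify results here (e.g. against
            # physical invariants) to catch silent errors before checkpointing.
            if step % ckpt_every == 0:
                save_checkpoint(step, state)
        return state

    print(run())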

    • Verification, Validation and Uncertainty Quantification (VVUQ) tools evolution, for better exploitation of Exascale capacities.

The mathematical background behind uncertainty analysis is very strong and comes from the field of statistics. Europe can claim world-leading experts on these topics, but the link between this community and the HPC community must be strengthened. The recommendation aims at preparing a unified European VVUQ package for Exascale computing by identifying and solving the problems that limit the usability of these tools on many-core configurations; facilitating access to VVUQ techniques for the HPC community by providing software that is ready for deployment on supercomputers; and making methodological progress on VVUQ methods for very large computations.
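
As a minimal sketch of the simplest technique such a package would industrialise, the example below propagates an input uncertainty through a toy model with non-intrusive Monte Carlo sampling; the model and the assumed input distribution are invented for the example.

    import numpy as np

    rng = np.random.default_rng(42)

    def model(k, t=1.0, u0=1.0):
        """Toy 'simulation': exponential decay u(t) = u0 * exp(-k * t)."""
        return u0 * np.exp(-k * t)

    # Uncertain input: decay rate known to ~10 % (assumed Gaussian).
    k_samples = rng.normal(loc=1.0, scale=0.1, size=10_000)

    # Non-intrusive propagation: every sample is an independent run of the
    # unmodified solver, which is why this maps naturally onto large machines.
    outputs = model(k_samples)

    mean, std = outputs.mean(), outputs.std()
    lo, hi = np.percentile(outputs, [2.5, 97.5])
    print(f"u(1) = {mean:.4f} +/- {std:.4f}, 95% interval [{lo:.4f}, {hi:.4f}]")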

Recommendations in this area concern specific and disruptive algorithms for Exascale computing, taking a step change beyond "traditional" HPC. This work will lead to the design and implementation of extremely efficient, scalable solvers for a wide range of applications.
The following recommendations are proposed for funding by the European Commission:

    • Algorithms for Communication and Data-Movement Avoidance

The objective of this recommendation is to coordinate the multiple groups working on many different aspects and in different algorithmic areas.
The scope is to explore novel algorithmic strategies, far beyond the well-known communication-hiding techniques, to minimize data movement as well as the number of communication and synchronization instances in extreme computing; minimization should happen both at the local level (e.g. within a multiprocessor, across a memory hierarchy) and at the remote level (e.g. across the network).
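
The local level of this idea can be illustrated with classical loop tiling, assuming a toy two-level memory model in which one tile fits in fast memory; the traffic count below is a back-of-the-envelope estimate, not a measurement.

    import numpy as np

    def blocked_matmul(A, B, b):
        """Tiled matrix multiply; returns (C, words moved from slow memory).

        With b x b tiles held in fast memory, slow-memory traffic drops from
        O(n^3) words for element-wise access to O(n^3 / b), the kind of
        saving communication-avoiding algorithms formalise and attain.
        """
        n = A.shape[0]
        C = np.zeros((n, n))
        words_moved = 0
        for i in range(0, n, b):
            for j in range(0, n, b):
                for k in range(0, n, b):
                    # Load three b x b tiles, then do all b^3 work on them.
                    C[i:i+b, j:j+b] += A[i:i+b, k:k+b] @ B[k:k+b, j:j+b]
                    words_moved += 3 * b * b
        return C, words_moved

    n, b = 256, 32
    A, B = np.random.rand(n, n), np.random.rand(n, n)
    C, words = blocked_matmul(A, B, b)
    assert np.allclose(C, A @ B)
    print(f"tile size {b}: ~{words:,} words moved vs ~{2 * n**3:,} untiled")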

    • Parallel-in-Time: a fundamental step forward in Exascale Simulations (disruptive approach).

The efficient exploitation of Exascale systems will require massive increases in the parallelism of simulation codes, yet today most time-stepping codes make little or no use of parallelism in the time domain. The time is right for a coordinated research programme exploring the huge potential of Parallel-in-Time methods across a wide range of application domains.
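
A minimal sketch of Parareal, the best-known Parallel-in-Time method, applied to a toy ODE du/dt = -u: a cheap coarse propagator G iteratively corrects an accurate fine propagator F. In production the independent fine solves run concurrently across time slices; here they are emulated sequentially.

    import numpy as np

    f = lambda u: -u    # toy ODE du/dt = -u, u(0) = 1, exact solution exp(-t)

    def propagate(u, t0, t1, n_sub):
        """Explicit-Euler propagator with n_sub substeps on [t0, t1]."""
        dt = (t1 - t0) / n_sub
        for _ in range(n_sub):
            u = u + dt * f(u)
        return u

    def parareal(u0, T=1.0, slices=10, iters=5, n_coarse=1, n_fine=100):
        ts = np.linspace(0.0, T, slices + 1)
        G = lambda u, n: propagate(u, ts[n], ts[n + 1], n_coarse)  # cheap
        F = lambda u, n: propagate(u, ts[n], ts[n + 1], n_fine)    # accurate

        U = [u0]                        # initial guess: sequential coarse sweep
        for n in range(slices):
            U.append(G(U[n], n))

        for _ in range(iters):
            # These F (and G) evaluations are independent: parallel in time.
            Fu = [F(U[n], n) for n in range(slices)]
            Gu = [G(U[n], n) for n in range(slices)]
            U_new = [u0]
            for n in range(slices):
                # Parareal update: U_{n+1} = G(U_new_n) + F(U_n) - G(U_n).
                U_new.append(G(U_new[n], n) + Fu[n] - Gu[n])
            U = U_new
        return U

    U = parareal(1.0)
    print(f"parareal u(1) = {U[-1]:.6f}, exact exp(-1) = {np.exp(-1.0):.6f}")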

This area links extreme computing and extreme data. For the transition to Exascale, current data life-cycle management techniques must be fully rethought, as described in the first joint document "Software for Data Centric Approaches to Extreme Computing", which is more a vision than a concrete recommendation. This pillar gathers together key strategic issues for Exascale applications that have not been sufficiently addressed in Europe until now.
Ensuing from the EESI holistic vision of "Software for Data Centric Approaches to Extreme Computing", this section describes the final roadmap and recommendations issued by the EESI2 experts for critical R&D challenges to be funded by the European Commission in order to develop efficient Exascale applications.

    • Towards flexible and efficient Exascale software couplers,

As stated by application experts in many recent reports, the rise of extreme computing with data-intensive capacities will allow only a few "hero" applications to scale out to such full systems in capability mode (one simulation scaling on billions of cores). The major potential of such upcoming architectures will rely on capacity simulations based on multi-scale and multi-physics scientific codes, each running individually on hundreds of thousands of cores and smartly coupled together with highly loaded models that exchange data at high frequency.
Such coupled models are challenging to develop due to the need to coordinate execution of the independently developed model components while resolving both scientific and technical heterogeneities. Despite some existing specific initiatives, there is a crucial need to develop new and common European-wide coupling methodologies and tools in order to support major scientific challenges in research (evolution of the climate, astrophysics and materials) and engineering (combustion, catalysis, energy, …).
To improve the performance of coupled applications in terms of usability and scalability on Exascale machines, the recommendations are to work on coupling libraries as well as on coupled models and their environment.
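
A minimal sketch, with two invented toy models, of the kind of field exchange a coupling library orchestrates: each code keeps its own grid and physics, and the coupler remaps fields between the non-matching grids at every exchange.

    import numpy as np

    class Coupler:
        """Toy coupler: remaps a 1-D field between two non-matching grids."""
        def __init__(self, grid_a, grid_b):
            self.grid_a, self.grid_b = grid_a, grid_b

        def a_to_b(self, field_a):
            return np.interp(self.grid_b, self.grid_a, field_a)

        def b_to_a(self, field_b):
            return np.interp(self.grid_a, self.grid_b, field_b)

    # Two independently developed solvers with different resolutions.
    x_ocean = np.linspace(0.0, 1.0, 50)
    x_atmos = np.linspace(0.0, 1.0, 80)
    sst = np.sin(2 * np.pi * x_ocean)      # toy sea-surface temperature
    coupler = Coupler(x_ocean, x_atmos)

    for step in range(100):
        # Each model advances on its own grid, exchanging fields every step.
        flux = 0.1 * coupler.a_to_b(sst)           # toy atmosphere physics
        sst = sst - 0.01 * coupler.b_to_a(flux)    # toy ocean responds to flux
    print(f"coupled 100 steps, mean SST = {sst.mean():.4f}")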

    • In Situ Extreme Data Processing and better science through I/O avoidance in High-Performance Computing systems,

The ultimate goal of in-situ extreme data processing is to promote new data transformations and compressions that drastically reduce the extreme volumes of raw data generated during HPC simulations, preserving the information required for a particular analysis while sacrificing almost everything else, and storing only the relevant data.
All these theoretical ideas must be aligned with the practical challenges of in-situ, in-transit and real-time high-performance computation, where extreme data must be processed under severe communication and memory constraints.
The goal of this recommendation is to fund R&D programs that move data-analysis frameworks from a post-processing-centric approach to a close-to-real-time concurrent approach, based on either in-situ or in-transit processing of the raw data of numerical simulations, processed as they are computed during massively parallel post-petascale and Exascale applications.
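
A minimal sketch of the in-situ idea, assuming a toy solver: each raw field is reduced to a compact statistical summary the moment it is produced, and only the summary is kept; the chosen reduction (moments plus a histogram) is just one invented example of such a transformation.

    import numpy as np

    rng = np.random.default_rng(0)

    def summarize(field, n_bins=32):
        """In-situ reduction: keep moments and a histogram, drop the raw field."""
        hist, edges = np.histogram(field, bins=n_bins)
        return {"mean": field.mean(), "std": field.std(),
                "max": field.max(), "hist": hist, "edges": edges}

    summaries = []
    field = rng.random((512, 512))       # toy simulation state (~2 MB per step)
    for step in range(100):
        field += 0.01 * rng.standard_normal(field.shape)   # toy timestep
        summaries.append(summarize(field))   # store ~0.5 kB instead of ~2 MB

    raw = 100 * field.nbytes
    kept = sum(s["hist"].nbytes + s["edges"].nbytes + 24 for s in summaries)
    print(f"raw output avoided: {raw / 1e6:.0f} MB, summaries kept: {kept / 1e3:.1f} kB")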

    • Declarative processing frameworks for big data analytics, extreme data fusion,

Exascale systems produce an incredibly huge amount of synthetic data that needs to be processed (e.g. for visualisation) in order to gain a full understanding of what they simulate; conversely, data-acquisition systems gather an incredible amount of diverse real data that needs to be processed to better understand the situations from which they were gathered. Computer scientists and statisticians used to manage and process these data on their own. The current variety, volume and velocity of data demand synergy and collaboration between different fields of science in order to extract full intelligence and knowledge from these data in close to real time.
The rise of multi-petascale and upcoming Exascale HPC facilities will allow turbulent simulations based on LES and DNS methods to address high-fidelity complex problems in climate, combustion, astrophysics or fusion. These massive simulations, performed on tens to hundreds of thousands of threads, will generate a huge volume of data that is difficult and inefficient to post-process asynchronously later by a single researcher. The proposed approach consists of post-processing these raw data on the fly with smart tools able to automatically extract pertinent turbulent-flow features, store only a reduced amount of information, or provide feedback to the application in order to steer its behaviour (see the sketch below).
As data mining in large-scale turbulent simulations applied to climate, combustion, traditional CFD, astrophysics, fusion … becomes more and more difficult because of the size and complexity of the data generated, it is mandatory to develop a complete toolbox of efficient parallel algorithms.
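
A minimal sketch of one such building block: extracting vortical regions from a 2-D velocity field by thresholding vorticity magnitude, so that only feature statistics are stored rather than the field itself; the synthetic vortex and the three-sigma threshold are invented for the example.

    import numpy as np

    rng = np.random.default_rng(1)

    def vorticity_z(u, v, dx=1.0):
        """z-component of vorticity, dv/dx - du/dy, on a uniform 2-D grid."""
        return np.gradient(v, dx, axis=1) - np.gradient(u, dx, axis=0)

    # Toy velocity field: background noise plus one synthetic Gaussian vortex.
    n = 256
    y, x = np.mgrid[0:n, 0:n]
    r2 = (x - n / 2) ** 2 + (y - n / 2) ** 2
    u = -(y - n / 2) * np.exp(-r2 / 400) + 0.05 * rng.standard_normal((n, n))
    v = (x - n / 2) * np.exp(-r2 / 400) + 0.05 * rng.standard_normal((n, n))

    omega = vorticity_z(u, v)
    threshold = np.abs(omega).mean() + 3 * np.abs(omega).std()  # invented criterion
    mask = np.abs(omega) > threshold

    # Keep a handful of numbers per step instead of the full flow field.
    print(f"vortical cells: {mask.sum()} / {mask.size}, "
          f"peak |omega| = {np.abs(omega).max():.3f}")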


Not all of the recommendations are at the same level of generalisation, but they are complementary and linked to each other by their global common objective: enabling the emergence of a new generation of data-intensive and extreme-computing applications. Some of them are fully disruptive; all need to go beyond known HPC technologies and methods.
All these recommendations should be supported and funded; some could be addressed in part as strategic themes for new Centres of Excellence (CoEs).
Read the full EESI 2014 recommendations.