Coordinator | BARCELONA SUPERCOMPUTING CENTER - CENTRO NACIONAL DE SUPERCOMPUTACION |
Coordinator address | Jordi Girona 29 |
Coordinator nationality | Spain [ES] |
Total cost | 3,459,528 € |
EC contribution | 2,470,000 € |
Programme | FP7-INFRASTRUCTURES (Specific Programme "Capacities": Research infrastructures) |
Call code | FP7-INFRASTRUCTURES-2010-2 |
Funding scheme | CPCSA |
Start year | 2010 |
Period (yyyy-mm-dd) | 2010-06-01 to 2012-08-31 |
| # | Organisation | Address | Country (City) | Role | Contribution |
|---|---|---|---|---|---|
| 1 | BARCELONA SUPERCOMPUTING CENTER - CENTRO NACIONAL DE SUPERCOMPUTACION | Jordi Girona 29 | ES (Barcelona) | coordinator | 0.00 |
| 2 | CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE | Rue Michel-Ange | FR (PARIS) | participant | 0.00 |
| 3 | FORSCHUNGSZENTRUM JUELICH GMBH | Leo-Brandt-Strasse | DE (JUELICH) | participant | 0.00 |
| 4 | FOUNDATION FOR RESEARCH AND TECHNOLOGY HELLAS | N Plastira Str | EL (HERAKLION) | participant | 0.00 |
| 5 | IBM RESEARCH GMBH | Saeumerstrasse | CH (RUESCHLIKON) | participant | 0.00 |
| 6 | THE UNIVERSITY OF EDINBURGH | Old College, South Bridge | UK (EDINBURGH) | participant | 0.00 |
| 7 | THE UNIVERSITY OF MANCHESTER | Oxford Road | UK (MANCHESTER) | participant | 0.00 |
| 8 | UNIVERSITAET STUTTGART | Keplerstrasse | DE (STUTTGART) | participant | 0.00 |
| 9 | UNIVERSITAT JAUME I DE CASTELLON | Avenida Vicent Sos Baynat | ES (CASTELLON DE LA PLANA) | participant | 0.00 |
| 10 | UNIVERSITE DE PAU ET DES PAYS DE L'ADOUR | Avenue de l'Universite | FR (PAU) | participant | 0.00 |
With top systems reaching the PFlop barrier, the next challenge is to understand how applications must be implemented to be prepared for the ExaFlop target. Multicore chips are already here and will grow to several hundreds of cores per chip over the next decade; hundreds of thousands of nodes based on them will constitute the future exascale systems.

TEXT is centered on the vision that the key component for supporting high productivity and efficient use of a system is the programming model, and we argue that MPI/SMPSs is a hybrid approach that can be demonstrated today and that shows the way forward on the path to exascale. The SMPSs model provides the necessary support for asynchrony and heterogeneity, and it enables incremental parallelization, modularity and portability of applications. By integrating it within MPI we can propagate these characteristics to the global application level. This also leverages, and provides a smooth migration path for, the huge number of applications written in MPI today.

The focus of the TEXT project is to install the MPI/SMPSs environment at several HPC facilities of the partners and to demonstrate how seven real and relevant applications/libraries can be improved with it. The codes fall into the areas of basic linear algebra libraries, geophysics, plasma physics, engineering and molecular dynamics, and were selected considering their impact in their respective scientific communities.

We will validate our claim by evaluating the ported applications with different end users and collecting their feedback to further improve the technology. We will also promote the use of the model among application developers beyond those that could be integrated as committed project partners.

In order to exploit this window of opportunity and achieve global impact, the project is planned for two years. The TEXT proposal comprises three networking activities, three service activities and two joint research activities.
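As an illustration of the hybrid approach described above, the minimal sketch below shows an MPI rank that wraps both local computation and a halo exchange in SMPSs-style tasks, so that the runtime can execute them asynchronously and overlap communication with computation. The pragma clauses (`input`, `output`, `inout`, `barrier`) follow the public SMPSs documentation of that period, while the function names, block sizes and data layout are invented for illustration; none of this is taken from the TEXT applications themselves, and in practice taskifying blocking MPI calls needs appropriate runtime support.

```c
/*
 * Illustrative hybrid MPI + SMPSs-style sketch (not from the TEXT deliverables).
 * Compiled by a plain C/MPI toolchain the pragmas are ignored and the code
 * simply runs the tasks sequentially.
 */
#include <mpi.h>
#include <stdlib.h>

#define N      1024   /* elements per block (illustrative size) */
#define BLOCKS 8      /* blocks owned by each MPI rank           */

/* Pure computation on one block: becomes an asynchronous task whose
 * execution order is constrained only by its data dependencies. */
#pragma css task inout(block[N])
void compute_block(double *block)
{
    for (int i = 0; i < N; i++)
        block[i] = 2.0 * block[i] + 1.0;
}

/* Communication wrapped in a task: the runtime may overlap this exchange
 * with compute_block tasks operating on other, independent blocks. */
#pragma css task input(send[N]) output(recv[N])
void exchange_halo(double *send, double *recv, int peer)
{
    MPI_Sendrecv(send, N, MPI_DOUBLE, peer, 0,
                 recv, N, MPI_DOUBLE, peer, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int peer = (rank + 1) % size;

    double *blocks = malloc((size_t)BLOCKS * N * sizeof(double));
    double *halo   = malloc((size_t)N * sizeof(double));
    for (int i = 0; i < BLOCKS * N; i++) blocks[i] = (double)rank;

    /* Each MPI process is a single control flow that generates tasks;
     * the runtime schedules them on the cores of the node according to
     * the declared input/output dependencies. */
    exchange_halo(&blocks[0], halo, peer);      /* overlapped with ... */
    for (int b = 1; b < BLOCKS; b++)
        compute_block(&blocks[(size_t)b * N]);  /* ... these tasks     */

    /* Wait for all outstanding tasks before using their results. */
    #pragma css barrier

    free(blocks);
    free(halo);
    MPI_Finalize();
    return 0;
}
```

The point of the sketch is the structure rather than the numerics: one control flow per MPI process generates tasks, and the declared data dependencies, not explicit synchronization, determine when the communication and compute tasks may run. This is the mechanism by which the asynchrony of SMPSs is propagated to the global, MPI-level behaviour of an application.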