
LHCBIGDATA (SIGNED)

Exploiting big data and machine learning techniques for LHC experiments


Project "LHCBIGDATA" data sheet

The following table provides information about the project.

Coordinator
ISTITUTO NAZIONALE DI FISICA NUCLEARE 

Organization address
address: Via Enrico Fermi 54
city: FRASCATI
postcode: 44
website: www.infn.it

contact info
title: n.a.
name: n.a.
surname: n.a.
function: n.a.
email: n.a.
telephone: n.a.
fax: n.a.

 Coordinator country Italy [IT]
 Total cost 180,277 €
 EC max contribution 180,277 € (100%)
 Programme 1. H2020-EU.1.3.2. (Nurturing excellence by means of cross-border and cross-sector mobility)
 Call code H2020-MSCA-IF-2017
 Funding scheme MSCA-IF-EF-ST
 Starting year 2018
 Duration (year-month-day) from 2018-07-02 to 2020-07-01

 Partnership

Take a look at the project's partnership.

#  participant  country  role  EC contrib. [€]
1  ISTITUTO NAZIONALE DI FISICA NUCLEARE  IT (FRASCATI)  coordinator  180,277.00


 Project objective

Large international scientific collaborations will face unprecedented computing and data challenges in the near future. The analysis of multi-petabyte datasets at CMS, ATLAS, LHCb and ALICE, the four experiments at the Large Hadron Collider (LHC), requires a global federated infrastructure of distributed computing resources. The HL-LHC, the High Luminosity upgrade of the LHC, is expected to deliver 100 times more data than the LHC, with a corresponding increase in event size, volume and complexity. Modern techniques for big data analytics and machine learning (ML) are needed to cope with such an unprecedented data stream. Critical areas that will strongly benefit from ML are data analysis, detector operation (including calibration and monitoring), and computing operations. The aim of this project is to provide the LHC community with the tools needed to deploy ML solutions through open cloud technologies such as the INDIGO-DataCloud services. Heterogeneous technologies (systems based on multi-core CPUs, GPUs, etc.) and opportunistic resources will be integrated. The developed tools will be experiment-independent, to promote the exchange of common solutions among the LHC experiments. The benefits of this approach will be demonstrated in a real-world use case: the optimization of computing operations for the CMS experiment. In addition, once available, the tools to deploy ML as a service can easily be transferred to other scientific domains that need to handle large data streams.
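To make the "experiment-independent ML as a service" idea concrete, here is a minimal sketch of a uniform scoring interface behind which any experiment could register its trained models. All names here (`MLService`, `register`, `score`, the toy anomaly model) are illustrative assumptions for this page, not part of the project's actual tooling or of the INDIGO-DataCloud APIs.

```python
# Illustrative sketch only: names and the toy model are assumptions,
# not the project's real deployment layer.
from typing import Callable, Dict, List


class MLService:
    """Registry exposing trained models behind one uniform scoring API,
    so different experiments can share the same deployment layer."""

    def __init__(self) -> None:
        self._models: Dict[str, Callable[[List[float]], float]] = {}

    def register(self, name: str, predictor: Callable[[List[float]], float]) -> None:
        # Any callable works: scikit-learn, TensorFlow, or a hand-rolled model.
        self._models[name] = predictor

    def score(self, name: str, features: List[float]) -> float:
        # Uniform entry point: callers only need a model name and features.
        return self._models[name](features)


# Example: a toy "anomaly score" for computing-operations monitoring
# (here simply the mean of the input features).
service = MLService()
service.register("transfer_anomaly", lambda x: sum(x) / len(x))
print(service.score("transfer_anomaly", [0.2, 0.4, 0.6]))
```

The design choice the abstract hints at is exactly this separation: experiments depend only on the generic interface, while the service layer decides where the model actually runs (multi-core CPUs, GPUs, opportunistic cloud resources).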

Are you the coordinator (or a participant) of this project? Please send me more information about the "LHCBIGDATA" project.

For instance: the website URL (it has not been provided by EU-opendata yet), the logo, a more detailed description of the project (in plain text, as an RTF or Word file), some pictures (as image files, not embedded in a Word file), the Twitter account, the LinkedIn page, etc.

Send me an email (fabio@fabiodisconzi.com) and I will put them on your project's page as soon as possible.

Thanks. And please add a link to this page on your project's website.

The information about "LHCBIGDATA" is provided by the European Opendata Portal: CORDIS opendata.

More projects from the same programme (H2020-EU.1.3.2.)

Migration Ethics (2019)

Migration Ethics

LiquidEff (2019)

LiquidEff: Algebraic Foundations for Liquid Effects

ROAR (2019)

Investigating the Role of Attention in Reading
