PAGALINNET

Parallel Grid-aware Library for Neural Networks Training

Coordinator: Ternopil National Economic University

Organization address: 11, Lvivska str
city: Ternopil
postcode: 46020

Contact info
Title: Prof.
First name: Anatoly
Last name: Sachenko
Telephone: -436010
Fax: -436326

Coordinator nationality: Ukraine [UA]
Total cost: €15,000
EC contribution: €15,000
Programme: FP7-PEOPLE
Specific programme "People" implementing the Seventh Framework Programme of the European Community for research, technological development and demonstration activities (2007 to 2013)
Call code: FP7-PEOPLE-2007-4-2-IIF
Funding scheme: MC-IIFR
Start year: 2011
Period (year-month-day): 2011-08-01 to 2012-07-31

Participants

# | Participant | Country | Role | EC contrib. [€]
1 | Ternopil National Economic University | UA (Ternopil) | coordinator | 15,000.00



 Word cloud


efficiency    grid    experimentally    network    networks    pattern    parallelization    heterogeneous    host    grids    parallel    computational    library    training    barrier    software    return    architecture    enhanced    algorithms    matching    single    batch    neural   

Project objective

The proposed research focuses on developing a software library for parallel neural network training on computational Grids. Its main scientific aim is to develop enhanced parallel neural network training algorithms that achieve better parallelization efficiency on heterogeneous computational Grids than existing algorithms. The objectives of the proposed research are:

1. to adapt the computational cost model of parallel neural network training algorithms, within the single pattern, batch pattern, and modular approaches, to the heterogeneous computational Grid resources of the host institution;
2. to develop enhanced single pattern and batch pattern parallel neural network training algorithms based on improved communication and barrier functions;
3. to develop a method for automatically matching the parallelization strategy to the architecture of the target parallel computing system;
4. to develop a parallel Grid-aware library for neural network training capable of using heterogeneous computational resources;
5. to test the library experimentally on the heterogeneous computational Grid system of the host institution, within the tasks of one of its active projects;
6. to deploy the library on the computational Grid of the return host;
7. to test the library experimentally on the computational systems of both the host institution and the return host.

The cost models of the algorithms will be developed using computational complexity approaches; improved barrier and reduce functions will be adapted to the neural network parallelization schemes; optimization strategies will be used to find the best match between the architecture of the parallel system and the neural network parallelization scheme; the software library will be implemented in the C programming language with MPI parallelization; and the efficiency of the parallel algorithms will be assessed in comparison with the sequential implementation.

Other projects in the same programme (FP7-PEOPLE)

IMPSCORE (2013)

Introducing stacking and halogen bonding effects into ligand-target interaction energy calculations


GRAND-CRU (2012)

Game-theoretic Resource Allocation for wireless Networks based on Distributed and Cooperative Relaying Units


COPRICOMP (2009)

Improving coherence between private law and competition law
