Coordinator | WEIZMANN INSTITUTE OF SCIENCE |
Organization address | HERZL STREET 234, REHOVOT |
Coordinator nationality | Israel (IL) |
Total cost | 100,000 € |
EC contribution | 100,000 € |
Programme | FP7-PEOPLE (Specific programme "People" implementing the Seventh Framework Programme of the European Community for research, technological development and demonstration activities, 2007 to 2013) |
Call code | FP7-PEOPLE-2013-CIG |
Funding scheme | MC-CIG |
Start year | 2013 |
Period (year-month-day) | 2013-10-01 to 2017-09-30 |
# | Participant | Country (City) | Role | EC contribution
---|---|---|---|---
1 | WEIZMANN INSTITUTE OF SCIENCE, HERZL STREET 234 | IL (REHOVOT) | coordinator | 100,000.00 €
'Machine learning was born in an era when most datasets were small, low-dimensional, and used carefully hand-crafted features. However, recent years have seen a dramatic change in the nature of typical machine learning tasks: these are now routinely performed on huge, web-scale datasets, with data quantity no longer a major bottleneck. On the flip side, the large-scale and automated data-gathering methods used to create such massive datasets often go hand-in-hand with mediocre quality of individual data items. This data quality problem can hamper standard learning algorithms, despite the availability of more data. A related issue is the quality of available features: with more data, we are in a position to tackle harder tasks, particularly in AI-related areas such as computer vision and natural language processing. However, it is also becoming increasingly hard to hand-craft good features for such tasks, and much recent research is devoted to automatically learning higher-quality, multi-level representations of the data.
The objective of the proposed research is to study how increasing data quantity can be used to improve or compensate for poor data quality, provably and efficiently. In particular, we wish to study how to use large-scale, low-quality datasets to achieve the same learning performance as if we had a high-quality, yet more moderately sized, dataset. We plan to explore several important settings where we believe such a trade-off can be obtained, using a theoretically principled approach. These include (1) learning deep data representations, which capture complex and high-level features; (2) learning from incomplete data, where some or even most of the data is missing; and (3) bandit learning and optimization, which capture learning and decision making under uncertainty. Our research plan builds on concrete preliminary results and several novel ideas, which are outlined as part of the proposal.'
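To give a concrete flavour of setting (3), the sketch below implements UCB1, a classical textbook algorithm for the multi-armed bandit problem. It is an illustrative example only, not code from the project, and the Bernoulli arm probabilities in `ARM_MEANS` are hypothetical values chosen just to make the demo runnable.

```python
import math
import random

# Hypothetical Bernoulli reward probabilities for three arms (for the demo only).
ARM_MEANS = [0.2, 0.5, 0.7]

def pull(arm: int) -> float:
    """Sample a 0/1 reward from the chosen arm's Bernoulli distribution."""
    return 1.0 if random.random() < ARM_MEANS[arm] else 0.0

def ucb1(horizon: int) -> float:
    """Run UCB1 for `horizon` rounds and return the average reward obtained."""
    n_arms = len(ARM_MEANS)
    counts = [0] * n_arms    # number of pulls per arm
    sums = [0.0] * n_arms    # cumulative reward per arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1      # play each arm once to initialise its estimate
        else:
            # Pick the arm maximising empirical mean plus an exploration bonus.
            arm = max(
                range(n_arms),
                key=lambda a: sums[a] / counts[a]
                + math.sqrt(2.0 * math.log(t) / counts[a]),
            )
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total / horizon

if __name__ == "__main__":
    random.seed(0)
    print(f"average reward over 10000 rounds: {ucb1(10_000):.3f}")
```

As the horizon grows, the average reward approaches the best arm's mean (0.7 here), which is the kind of quantity-helps-under-uncertainty behaviour the proposal aims to analyse in harder, low-quality-data regimes.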