Coordinatore | TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Sorry, there is no information about this coordinator. Contact Fabio for further information, thank you. |
Coordinator nationality | Israel [IL] |
Total cost | 1 500 000 € |
EC contribution | 1 500 000 € |
Programma | FP7-IDEAS-ERC
Specific programme: "Ideas" implementing the Seventh Framework Programme of the European Community for research, technological development and demonstration activities (2007 to 2013) |
Call code | ERC-2012-StG_20111012 |
Funding Scheme | ERC-SG |
Start year | 2013 |
Period (year-month-day) | 2013-01-01 - 2017-12-31 |
# | Organization | Country (city) | Role | EC contribution
---|---|---|---|---
1 | TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY (address: TECHNION CITY - SENATE BUILDING) | IL (HAIFA) | hostInstitution | 1 500 000.00
'Learning how to act optimally in high-dimensional stochastic dynamic environments is a fundamental problem in many areas of engineering and computer science. The basic setup is that of an agent who interacts with an environment, trying to maximize some long-term payoff while having access to observations of the state of the environment. A standard approach to this problem is the Reinforcement Learning (RL) paradigm, in which an agent improves its policy by interacting with the environment or, more generally, by using different sources of information, such as traces from an expert or interaction with a simulator. Despite several success stories of the RL paradigm, a unified methodology for scaling up RL has not emerged to date.

The goal of this research proposal is to create a methodology for learning and acting in high-dimensional stochastic dynamic environments that scales well to real-world applications and is useful across domains and engineering disciplines. We focus on three interrelated aspects of learning and optimization in high-dimensional stochastic dynamic environments that are essential to scaling up RL. First, we consider structure learning: identifying the key features and underlying structures in the environment that are most useful for optimization and learning. Second, we consider learning, defining, and optimizing skills. Skills are sub-policies whose goal is more focused than solving the whole optimization problem, and they can hence be more easily learned and optimized. Third, we consider changing the natural reward of the system to obtain desirable properties of the solution, such as robustness, aversion to risk, and smoothness of the control policy.

To validate our approach we study two challenging real-world domains: a jet fighter flight simulator and a smart-grid short-term control problem.'
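The RL setup the abstract describes — an agent improving its policy by interacting with an environment to maximize long-term payoff — can be sketched with a minimal tabular Q-learning loop on a toy chain environment. This is an illustrative sketch only, not the project's method; all names (the chain environment, `step`, `q_learning`) and the hyperparameters are assumptions chosen for clarity:

```python
import random

# Hypothetical toy environment: a 5-state chain where action 1 moves right,
# action 0 moves left, and reaching the last state yields reward 1.
N_STATES = 5
ACTIONS = (0, 1)  # 0 = left, 1 = right

def step(state, action):
    """One environment transition; returns (next_state, reward, done)."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def q_learning(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning: the agent improves its policy purely by
    interacting with the environment, as in the RL paradigm above."""
    rng = random.Random(seed)
    q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the current policy, sometimes explore.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[state][a])
            nxt, reward, done = step(state, action)
            # Update toward the one-step bootstrapped target.
            target = reward + (0.0 if done else gamma * max(q[nxt]))
            q[state][action] += alpha * (target - q[state][action])
            state = nxt
    return q

q = q_learning()
# The greedy policy learned should move right in every non-terminal state.
policy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES - 1)]
print(policy)
```

The abstract's three research aspects would sit on top of such a loop: structure learning would replace the raw state with learned features, skills would group action sequences into reusable sub-policies, and reward modification would alter the `reward` term to encourage robustness or risk aversion.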