
IMAGINE (project status: SIGNED)

IMAGINE – Informing Multi-modal lAnguage Generation wIth world kNowledgE


Project "IMAGINE" data sheet

The following table provides information about the project.

Coordinator: UNIVERSITEIT VAN AMSTERDAM

Organization address
  address: SPUI 21
  city: AMSTERDAM
  postcode: 1012WX
  website: www.uva.nl

Contact info: not available (title, name, surname, function, email, telephone and fax not provided)

 Coordinator country: Netherlands [NL]
 Total cost: 232,393 €
 EC max contribution: 232,393 € (100%)
 Programme: H2020-EU.1.3.2. (Nurturing excellence by means of cross-border and cross-sector mobility)
 Call code: H2020-MSCA-IF-2018
 Funding scheme: MSCA-IF-GF
 Starting year: 2019
 Duration: from 2019-06-14 to 2022-03-13

 Partnership

Take a look at the project's partnership.

#  Participant                  Country          Role         EC contrib. [€]
1  UNIVERSITEIT VAN AMSTERDAM   NL (AMSTERDAM)   coordinator  232,393.00
2  NEW YORK UNIVERSITY          US (NEW YORK)    partner      0.00


 Project objective

Deep neural networks have caused lasting change in the fields of natural language processing and computer vision. More recently, much effort has been directed towards devising machine learning models that bridge the gap between vision and language (V&L). In IMAGINE, I propose to take this even further and to integrate world knowledge into natural language generation models of V&L. Such knowledge is easily taken for granted and is necessary to perform even simple human-like reasoning tasks. For example, in order to properly answer the question “What are the children doing?” about an image which shows parents with children playing in a park, a model should be able to (a) tell children from parents (e.g. children are considerably shorter), and infer that (b) because they are in a park, laughing, and with other children, they are very likely playing. Much of this knowledge is presently available in large-scale machine-friendly multi-modal knowledge bases (KBs), and I will leverage these to improve multiple natural language generation (NLG) tasks that require human-like reasoning abilities.

I will investigate (i) methods to learn representations for KBs that incorporate text and images, as well as (ii) methods to incorporate these KB representations to improve multiple NLG tasks that reason upon V&L. In (i) I will research how to train a model that learns KB representations (e.g. learning that children are young adults and likely do not work) jointly with the component that understands the image content (e.g. identifies people, animals, objects and events in an image). In (ii) I will investigate how to jointly train NLG models for multiple tasks together with the KB entity linking, so that these models benefit from one another by sharing parameters (e.g. a model that answers questions about an image benefits from the training data of a model that describes the contents of an image), and also benefit from the world knowledge representations in the KB.
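To make the kind of architecture sketched in the objective more concrete, the snippet below is a minimal, hypothetical illustration (not taken from the project) of how jointly learned KB entity embeddings can be fused with image-region features in a shared encoder, with separate heads for two NLG-style tasks (captioning and visual question answering) that share parameters. The use of PyTorch, and all module names and dimensions, are assumptions for illustration only.

# Hypothetical sketch: fusing KB entity embeddings with image features
# in a shared encoder, with task-specific heads that share parameters.
import torch
import torch.nn as nn

class KBGroundedVLModel(nn.Module):
    def __init__(self, num_entities, vocab_size, dim=512):
        super().__init__()
        # KB entity embeddings, trained jointly with the rest of the model.
        self.kb_embeddings = nn.Embedding(num_entities, dim)
        # Project pre-extracted image-region features (e.g. 2048-d CNN features).
        self.image_proj = nn.Linear(2048, dim)
        # Shared encoder over the concatenated image-region and KB-entity sequences.
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Task-specific heads on top of the shared encoder.
        self.caption_head = nn.Linear(dim, vocab_size)   # captioning: next-token logits
        self.vqa_head = nn.Linear(dim, vocab_size)       # VQA: answer-token logits

    def forward(self, image_feats, entity_ids, task):
        regions = self.image_proj(image_feats)       # (batch, regions, dim)
        entities = self.kb_embeddings(entity_ids)    # (batch, entities, dim)
        fused = self.encoder(torch.cat([regions, entities], dim=1))
        pooled = fused.mean(dim=1)
        head = self.caption_head if task == "caption" else self.vqa_head
        return head(pooled)

# Toy usage: one image with 36 regions and 5 linked KB entities.
model = KBGroundedVLModel(num_entities=10_000, vocab_size=30_000)
image_feats = torch.randn(1, 36, 2048)
entity_ids = torch.randint(0, 10_000, (1, 5))
logits = model(image_feats, entity_ids, task="vqa")
print(logits.shape)  # torch.Size([1, 30000])

Because the encoder and KB embeddings are shared across tasks, training one head (e.g. captioning) also updates the representations used by the other (e.g. VQA), which is the parameter-sharing benefit described in the objective.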

Are you the coordinator (or a participant) of this project? Please send me more information about the "IMAGINE" project.

For instance: the website URL (it has not been provided by EU-opendata yet), the logo, a more detailed description of the project (in plain text, as an RTF or Word file), some pictures (as picture files, not embedded in a Word file), the Twitter account, the LinkedIn page, etc.

Send me an email (fabio@fabiodisconzi.com) and I will put them on your project's page as soon as possible.

Thanks. And then please add a link to this page on your project's website.

The information about "IMAGINE" is provided by the European Opendata Portal: CORDIS opendata.

More projects from the same programme (H2020-EU.1.3.2.)

Migration Ethics (2019)

Migration Ethics


EcoSpy (2018)

Leveraging the potential of historical spy satellite photography for ecology and conservation


SOUTHWEST (2020)

The politeness system and the emergence of a Sprachbund
