Explore the word cloud of the CompDB project. It gives a very rough idea of what the project "CompDB" is about.
The following table provides information about the project.
field | value |
---|---|
Coordinator | TECHNISCHE UNIVERSITAET MUENCHEN |
Organization address contact info | |
Coordinator country | Germany [DE] |
Project website | https://db.in.tum.de/research/projects/CompDB/index.shtml |
Total cost | 1 918 750 € |
EC max contribution | 1 918 750 € (100%) |
Programme | H2020-EU.1.1. (EXCELLENT SCIENCE - European Research Council (ERC)) |
Call code | ERC-2016-COG |
Funding scheme | ERC-COG |
Starting year | 2017 |
Duration (year-month-day) | from 2017-06-01 to 2022-05-31 |
Take a look at the project's partnership.
# | participant | country (city) | role | EC contribution (€) |
---|---|---|---|---|
1 | TECHNISCHE UNIVERSITAET MUENCHEN | DE (MUENCHEN) | coordinator | 1 918 750.00 |
Two major hardware trends have a significant impact on the architecture of database management systems (DBMSs). First, main memory sizes continue to grow significantly: machines with 1 TB of main memory and more are readily available at a relatively low price. Second, the number of cores in a system continues to grow, from 64 and more today to hundreds in the near future. This trend offers radically new opportunities for both business and science. It promises information at your fingertips, i.e., large volumes of data can be analyzed and deeply explored online, in parallel to regular transaction processing. Currently, deep data exploration is performed outside of the database system, which necessitates huge data transfers. This impedes processing to the point that real-time interactive exploration is impossible. These new hardware capabilities now make it possible to build a true computational database system that integrates deep exploration functionality at the source of the data. This will lead to a drastic shift in how users interact with data, as interactive data exploration becomes possible at a massive scale for the first time.
Unfortunately, traditional DBMSs are simply not capable of tackling these new challenges. Traditional techniques such as interpreted code execution for query processing become a severe bottleneck in the presence of such massive parallelism, causing poor utilization of the hardware. I pursue a radically different approach: instead of adapting the traditional, disk-based approaches, I am integrating a new just-in-time compilation framework into the in-memory database that directly exploits the abundant parallel hardware for large-scale data processing and exploration. By explicitly utilizing the cores, I will be able to build a powerful computational database engine that scales across the entire spectrum of data processing - from transactional to analytical to exploration workflows - far beyond traditional architectures.
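The contrast between interpreted and compiled query execution described above can be illustrated with a small sketch. This is purely hypothetical code (all names are illustrative, not part of CompDB or any real engine): a filter predicate is first evaluated by walking an expression tree for every row, and then translated once into specialized code that runs per row without the tree-walking overhead, which is the core idea behind just-in-time query compilation.

```python
def interpret(pred, row):
    """Evaluate a predicate AST for one row (the interpreted path).

    The tree is walked again for every single row, which is the
    overhead that compilation removes."""
    op = pred[0]
    if op == "gt":
        return row[pred[1]] > pred[2]
    if op == "lt":
        return row[pred[1]] < pred[2]
    if op == "and":
        return interpret(pred[1], row) and interpret(pred[2], row)
    raise ValueError(f"unknown operator: {op}")

def compile_pred(pred):
    """Translate the AST into Python source once and exec it, so the
    per-row evaluation is a plain specialized function call."""
    def emit(p):
        op = p[0]
        if op == "gt":
            return f"row[{p[1]!r}] > {p[2]!r}"
        if op == "lt":
            return f"row[{p[1]!r}] < {p[2]!r}"
        if op == "and":
            return f"({emit(p[1])}) and ({emit(p[2])})"
        raise ValueError(f"unknown operator: {op}")
    namespace = {}
    exec(f"def _f(row): return {emit(pred)}", namespace)
    return namespace["_f"]

# WHERE price > 10 AND qty < 5
pred = ("and", ("gt", "price", 10), ("lt", "qty", 5))
rows = [{"price": 12, "qty": 3}, {"price": 8, "qty": 2}]

fast = compile_pred(pred)  # compile once, reuse for every row
assert [interpret(pred, r) for r in rows] == [fast(r) for r in rows]
```

A production JIT engine would of course generate machine code (e.g., via LLVM) rather than Python source, and would compile whole operator pipelines rather than a single predicate, but the compile-once-run-per-row structure is the same.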
year | authors and title | journal | last update |
---|---|---|---|
2018 | Timo Kersten, Viktor Leis, Alfons Kemper, Thomas Neumann, Andrew Pavlo, Peter Boncz: "Everything you always wanted to know about compiled and vectorized queries but were afraid to ask", pages 2209-2222, ISSN: 2150-8097, DOI: 10.14778/3275366.3275370 | Proceedings of the VLDB Endowment 11/13 | 2019-04-18 |
Are you the coordinator (or a participant) of this project? Please send me more information about the "COMPDB" project.
For instance: the website URL (it has not been provided by EU open data yet), the logo, a more detailed description of the project (in plain text, as an RTF file or a Word file), some pictures (as image files, not embedded in any Word file), a Twitter account, a LinkedIn page, etc.
Send me an email (fabio@fabiodisconzi.com) and I will put them on your project's page as soon as possible.
Thanks. And please add a link to this page on your project's website.
The information about "COMPDB" is provided by the European Open Data Portal: CORDIS open data.