The technical objectives read:
• Advance methods to accelerate HOM for unsteady simulations of LES and future DNS on unstructured grids.
• Advance methods to accelerate LES and future DNS methodology by multilevel, adaptive, fractal and similar approaches on unstructured grids.
• Use existing HPC networks and research projects, targeting applications on several tens of thousands of cores, bringing industrial applications of LES/DNS close(r) to daily practice.
• Target a range of hardware platforms including clusters of multi-core CPUs, potentially combined with integrated many-core processors (e.g. Xeon Phi), GPGPUs, ARM or FPGA as co-processing or main-processing units.
• Get as close as possible to the current status of DNS runs for academic problems. Extend LES/DNS to industrially relevant applications using HOMs on unstructured grids for unsteady flows, aiming at a reduction of 2-3 orders of magnitude in CPU time.
• Provide grid generation methods for HOM on unstructured grids, with emphasis on valid curvilinear meshes for complex geometries including boundary layer and hybrid meshes.
• Provide suitable I/O and interactive co- and post-processing tools for large datasets ("datability").
• Demonstrate multi-disciplinary capabilities of HOM for LES in the area of aero-acoustics.
NUMECA:
Has invested its effort in this work package in improving the efficiency of the explicit Flux Reconstruction (FR) solver.
As a first step towards the development of a multi-level approach within the Flux Reconstruction solver for LES simulations, the explicit FR solver has been extended to handle non-conforming grids, in both the geometrical (h-) and spectral (p-) sense; the principle is sketched below.
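To illustrate the principle of a p-non-conforming interface, consider a minimal sketch under the assumption of an orthogonal modal (Legendre) basis; this is a generic illustration, not NUMECA's actual implementation. When two elements of different polynomial degree meet, the trace of the higher-order side can be transferred to the lower-order side by an L2 projection, which in an orthogonal basis reduces to truncating the modal coefficients:

    import numpy as np
    from numpy.polynomial import legendre

    def project_trace(coeffs_high, p_low):
        # L2 projection onto degree p_low: for an orthogonal Legendre
        # basis this is simply a truncation of the modal coefficients.
        return coeffs_high[:p_low + 1]

    # Example: a degree-4 interface trace seen by a degree-2 neighbour
    c_high = np.array([1.0, 0.5, 0.25, 0.1, 0.05])  # hypothetical modes
    c_low = project_trace(c_high, p_low=2)
    x = np.linspace(-1.0, 1.0, 5)
    print(legendre.legval(x, c_low))                # projected trace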
The Flux Reconstruction solver contains in-house tools to generate high-order elements near solid surfaces.
The parallel efficiency of the FR solver has been improved.
For I/O and post-processing, the first task was the parallelization of the output subroutines; a generic pattern is sketched below.
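As an illustration of the kind of parallel output this involves, the following is a minimal MPI-IO sketch using mpi4py; the report does not detail NUMECA's actual I/O layer, so the file name and data layout are assumptions. Each rank writes its contiguous block of the solution with a single collective call:

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    local = np.full(1000, comm.rank, dtype=np.float64)  # this rank's data
    fh = MPI.File.Open(comm, "solution.dat",            # assumed file name
                       MPI.MODE_WRONLY | MPI.MODE_CREATE)
    offset = comm.rank * local.nbytes                   # byte offset per rank
    fh.Write_at_all(offset, local)                      # collective write
    fh.Close()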
DLR:
Contributed to WP 1 (General management) of TILDA by setting up and administrating the public and internal TILDA websites and a TILDA-wide email address.
In Task 2.1 (Implicit methods), DLR implemented singly diagonally implicit Runge-Kutta (SDIRK) methods of order 2, 3, and 4.
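For reference, a classical two-stage, second-order SDIRK scheme of Alexander type has the Butcher tableau below; this is a generic textbook example, since the report does not specify the exact coefficient sets DLR implemented:

    \[
    \begin{array}{c|cc}
    \gamma & \gamma   & 0      \\
    1      & 1-\gamma & \gamma \\ \hline
           & 1-\gamma & \gamma
    \end{array}
    \qquad \gamma = 1 - \frac{\sqrt{2}}{2}
    \]

The single diagonal coefficient gamma means every stage requires an implicit solve of the same structure, which is what makes SDIRK schemes attractive for large unsteady systems.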
In Task 3.2, DLR extended its scale-resolving DG code to locally refined meshes with hanging nodes.
ONERA:
Has presented a synthesis of the TILDA contribution during the CEAA workshop organized in Svetlogorsk (Russia) in September 2016.
Time-implicit integration using inexact Newton methods has initially been developed in Aghora.
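The idea of an inexact Newton method is to solve each linear Newton system only approximately, to a loose forcing tolerance, since an exact solve is wasted effort far from the root. The following is a minimal generic sketch, not the Aghora implementation; the test function F and Jacobian action Jv are hypothetical:

    import numpy as np
    from scipy.sparse.linalg import gmres, LinearOperator

    def inexact_newton(F, Jv, u, tol=1e-8, eta=1e-2, max_it=50):
        for _ in range(max_it):
            r = F(u)
            if np.linalg.norm(r) < tol:
                break
            A = LinearOperator((u.size, u.size), matvec=lambda v: Jv(u, v))
            # Inexact linear solve to relative tolerance eta
            # ('rtol' in recent SciPy; older versions use 'tol')
            du, _ = gmres(A, -r, rtol=eta)
            u = u + du
        return u

    # Toy example: F(u) = u^3 - 1 with a matrix-free Jacobian action
    F  = lambda u: u**3 - 1.0
    Jv = lambda u, v: 3.0 * u**2 * v
    print(inexact_newton(F, Jv, np.full(4, 2.0)))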
An adaptive strategy for the explicit VMSDG discretization of the compressible Navier-Stokes equations is under development.
A locally p-adaptive method in space has been implemented.
The code Aghora is being optimized.
New computations of shock/boundary-layer interaction for the 2D laminar and 3D turbulent transonic cases have been analyzed.
DASSAV:
Began implementing a VMS subgrid-scale model based on explicit filtering at the element level.
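The underlying scale separation can be pictured as follows, in a minimal sketch assuming a hierarchical modal basis (the actual Dassault filter may differ): the element-local modal coefficients are split at a cutoff degree into resolved large scales and the small scales on which the subgrid model acts.

    import numpy as np

    def vms_split(u_modes, cutoff):
        # Keep modes up to 'cutoff' as large scales; the remainder are
        # the small scales targeted by the subgrid model.
        mask = np.arange(u_modes.size) <= cutoff
        large = np.where(mask, u_modes, 0.0)
        return large, u_modes - large

    u = np.array([1.0, 0.6, 0.3, 0.12, 0.05])  # hypothetical p=4 modes
    u_bar, u_prime = vms_split(u, cutoff=2)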
Started modifying the DES model to be used as a wall model in the VMS context.
Ran TC-F2 (Taylor-Green vortex) with DNS and existing LES models. Preliminary results were also obtained with the VMS model.
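For reference, the Taylor-Green vortex test case is commonly initialized with the standard velocity and pressure field on a periodic cube, in the form used by the international high-order CFD workshops:

    \[
    \begin{aligned}
    u &= V_0 \sin(x/L)\cos(y/L)\cos(z/L), \\
    v &= -V_0 \cos(x/L)\sin(y/L)\cos(z/L), \\
    w &= 0, \\
    p &= p_0 + \frac{\rho_0 V_0^2}{16}
         \bigl(\cos(2x/L) + \cos(2y/L)\bigr)\bigl(\cos(2z/L) + 2\bigr).
    \end{aligned}
    \]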
SAFRAN:
The work performed is based on ArgoDGM, a discontinuous Galerkin software developed at CENAERO.
In order to analyze the flow solutions and the causes of numerical instabilities, new co-processing capabilities have been implemented in ArgoDGM.
T106: Large Eddy Simulation of turbomachinery configurations with ArgoDGM started in 2016.
RO37: The meshing process has been applied to the fully three-dimensional NASA Rotor 37 test case.
CERFACS:
The current involvement reads:
• D2.2-12: Validation of the non-conformal hp-adaptation
• D3.2-24b: Definition and validation of sensors for hp-adaptation
• D4.3-18: Extension of Antares to handle high-order solutions
• D5-18-P1: Intermediate report on TC-P1 (Jet with/without micro-jets – fluidic injection)
• D5-36-P1: Final, consolidated report on TC-P1 (Jet with/without micro-jets – fluidic injection)
CENAERO:
Task 2.2: Cenaero's focus is on the capture of the wakes emanating from turbine blades.
Task 3.2: An indicator based on the balance of turbulent kinetic energy (TKE) has been devised.
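The quantity underlying such an indicator is the resolved turbulent kinetic energy k = (1/2)<u'_i u'_i>; its budget, stated here in its standard form (the precise terms monitored by Cenaero are not detailed in this report), balances production against dissipation and transport:

    \[
    \frac{Dk}{Dt}
    = \underbrace{-\langle u_i' u_j' \rangle
      \frac{\partial \langle u_i \rangle}{\partial x_j}}_{P}
      \;-\; \varepsilon \;+\; T,
    \]

where P is the production, epsilon the dissipation, and T the transport terms; an imbalance between these signals under-resolution and can drive refinement.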
Task 3.3: Application of wall-functions in the context of discontinuous Galerkin discretizations.
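A typical wall-function closure of this kind imposes the law of the wall instead of resolving the near-wall layer; in its classical form (given here as the standard relation, not necessarily Cenaero's exact formulation):

    \[
    u^+ = \begin{cases}
    y^+, & y^+ \lesssim 5, \\[2pt]
    \dfrac{1}{\kappa}\ln y^+ + B, & y^+ \gtrsim 30,
    \end{cases}
    \qquad u^+ = \frac{u}{u_\tau}, \quad y^+ = \frac{y\,u_\tau}{\nu},
    \]

with kappa approximately 0.41 and B approximately 5.2.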
Task 4.1: Extension of our open-source mesh database MAdLib to conformal grid-adaptation on curvilinear grids.
UCL:
Firstly, a new shape quality measure has been designed to quantify the error on the gradient of the finite element solution on linear simplices.
Secondly, three shape quality measures designed for linear elements have been generalized to curved elements; the first of these is the new measure introduced above.
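As a point of comparison, a classical shape quality measure for linear triangles (quoted here as a standard example; UCL's new gradient-error measure is not reproduced in this report) is the mean ratio,

    \[
    q = \frac{4\sqrt{3}\,A}{\ell_1^2 + \ell_2^2 + \ell_3^2},
    \]

where A is the element area and the l_i its edge lengths; q = 1 for an equilateral triangle and q tends to 0 as the element degenerates.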
In addition, developments have been made on curved mesh generation.
UNIBG:
Activities within Task 2.1 were mainly focused on a comprehensive assessment of the implicit time integration methods.
Investigated the impact of a low-Mach treatment on the accuracy and efficiency of a DNS simulation (TC-F2).
Developed and implemented strategies for the initialization of the linear and non-linear iterative solvers; a simple example is sketched below.
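One simple instance of such an initialization (an assumption for illustration, not necessarily UNIBG's strategy) is to start the nonlinear solve at the new time level from a linear extrapolation of the two previous solutions, rather than from the last solution alone:

    import numpy as np

    def initial_guess(u_n, u_nm1):
        # Second-order extrapolation in time: typically reduces the
        # number of Newton/linear iterations per time step.
        return 2.0 * u_n - u_nm1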
Developed and implemented a p-adaptation strategy suited for unsteady simulations.
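A widely used sensor for driving such p-adaptation is the modal-decay indicator of Persson-Peraire type, quoted here as a representative example (the report does not state which sensor UNIBG adopted):

    \[
    s_K = \log_{10}
    \frac{\| u_p - u_{p-1} \|^2_{L^2(K)}}{\| u_p \|^2_{L^2(K)}},
    \]

where u_{p-1} is the solution on element K truncated to degree p-1; a small s_K indicates a smooth, well-resolved solution and allows lowering p, while a large s_K triggers enrichment.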
As pointed out and listed above, new and advanced methods have been developed that are clearly beyond the state of the art.
Further validation will be carried out to substantiate the efficiency of the derived methods.
Besides the improvement of methods and approaches, and in relation to the main objective of targeting applications on several tens of thousands of cores, considerable progress has been achieved by ICL.
ICL has demonstrated peta-scale performance of PyFR on various large-scale GPU clusters, including Piz Daint at the Swiss National Supercomputing Centre and Titan at Oak Ridge National Laboratory, with sustained performance of up to 13.7 DP-PFLOP/s on 18,000 K20X GPUs of Titan (over 48 million CUDA cores).
More info: http://www.dlr.de/as/desktopdefault.aspx/tabid-5219/8763_read-15645/.