The eTRANSAFE project works to improve the efficiency of translational safety assessment during the drug discovery and development pipeline. The specific objectives are:
1. To identify and analyse differences in the mechanisms of toxicity between different species, increasing the ability of preclinical computational, in vitro and in vivo studies to predict clinical outcomes.
2. To use the knowledge gained from these analyses to redesign preclinical testing, improving animal welfare and reducing the time and cost of drug development.
3. To develop a sustainable business plan for promoting the wide adoption of the eTRANSAFE tools and methodologies, ensuring long-term success beyond the grant period.
In order to achieve these objectives, the eTRANSAFE project is focussing on four key areas: Data, Analysis, System Architecture and Influence/Integration.
Data:
1. To implement a toolkit for importing, managing and exporting preclinical data in SEND format, and to complement the standard SEND format with additional safety information not captured in current SEND files.
2. To apply technological and organisational procedures for optimising IPR protection for the data provided by EFPIA companies (including an experienced partner acting as an "honest broker").
3. To achieve seamless integration of information by federation of data repositories and interoperability services.
Analysis:
1. To build multi-stage and multi-scale models based on quantitative Adverse Outcome Pathway (AOP) networks for hazard prediction.
2. To integrate PBPK models to predict exposure and provide risk assessment.
3. To develop and implement tools supporting the identification and analysis of differences in the mechanisms of toxicity between different species, to increase the ability of preclinical studies to predict clinical outcomes.
System Architecture:
1. To establish strategies and tools (including ontologies), for the gathering, QC and integration of data (private and public; preclinical and clinical) relevant for translational safety assessment.
2. To implement a highly flexible and adaptive information technology architecture, suitable for running in internal and external computing environments, able to be easily adapted to evolving user requirements and technological progress.
3. To apply a modular software design supporting the independent development of computational tools as self-contained components and their delivery as integrated and customisable applications.
4. To develop an ecosystem of applications for exploiting the aforementioned data infrastructure, including methods and tools for read-across, visualisation and analysis, biomarkers discovery, translational analysis and predictive modelling.
5. To implement iterative software development strategies involving the end users in the design and testing of the applications from very early stages.
Influence:
1. To establish synergies with other key projects in the field.
2. To create and mobilise a network of stakeholders and contributors through the setup of forums for scientific discussion and clear mechanisms for channelling contributions.
3. To liaise with regulators and other authorities for the elaboration of data sharing guidelines.
Some highlights of the first two years of the project:
-Development of project guidelines for sharing of preclinical safety data between industry/academia and guidelines for safety model validation.
-Development of the strategy for EFPIA legacy preclinical data donation and the user requirements for the preclinical database platform
-Development of data transformation rules for non-SEND legacy preclinical study data to the SEND-like data format
-Collection of legacy data, SEND data and off-target data
-Development of an annotated corpus for text mining
-Alignment and harmonisation of different preclinical and clinical terminologies through the mapping of ontologies
-Development of a clinical repository
-Development of a modelling framework, Flame (https://github.com/phi-grib/flame), which allows models and tools to be implemented as self-contained modules that can be integrated within the software platform.
-Establishment of a bioinformatic pipeline to define target organ adversity based on off-target effects.
-Development of web-based tools to uncover co-regulated gene networks and gain insight into modes of action of toxicity, using large public-domain transcriptomics datasets.
-Development and advancements of the Knowledge Hub and all its software pieces, including visualisation.
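The modular design behind Flame can be illustrated with a minimal sketch. This is not Flame's actual API; all class and function names below are hypothetical, and the toy model is invented purely to show the idea of self-contained predictive modules registered behind a uniform interface.

```python
# Illustrative sketch of a plug-in model architecture (hypothetical names,
# NOT the real Flame API): each model is a self-contained module exposing a
# uniform interface, so the platform can run it without knowing its internals.
from abc import ABC, abstractmethod


class SafetyModel(ABC):
    """Hypothetical base class for a self-contained predictive module."""

    name: str = "base"

    @abstractmethod
    def predict(self, compounds: list) -> dict:
        """Return a hazard score per compound identifier."""


class LogPThresholdModel(SafetyModel):
    """Toy example: flags compounds whose (precomputed) logP exceeds a cutoff."""

    name = "logp-threshold"

    def __init__(self, logp_table: dict, cutoff: float = 5.0):
        self.logp_table = logp_table
        self.cutoff = cutoff

    def predict(self, compounds):
        # Score 1.0 when logP exceeds the cutoff, otherwise 0.0.
        return {c: float(self.logp_table.get(c, 0.0) > self.cutoff)
                for c in compounds}


# A registry lets the platform integrate independently developed modules.
REGISTRY: dict = {}


def register(model: SafetyModel) -> None:
    REGISTRY[model.name] = model


register(LogPThresholdModel({"CMP-001": 1.2, "CMP-002": 6.3}))
scores = REGISTRY["logp-threshold"].predict(["CMP-001", "CMP-002"])
```

Because every module implements the same `predict` contract, new models can be added to the registry and delivered as customisable components without changes to the surrounding platform.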
The eTRANSAFE actions aim to generate significant impact on how preclinical studies are run and how the industry designs these studies. This can only be achieved by developing an integrative data infrastructure and innovative computational methods (innovative read-across approaches and multi-level, multi-scale modelling methods) and tools that aim to drastically improve the analysis and reliability of translational safety assessment during the drug development process. This infrastructure will be underpinned by development of open standards and robust policies widely accepted by stakeholders, including regulatory agencies and international organisations.
The systematic and sustainable accumulation, analysis and exploitation of safety data are only possible if the legal, ethical and technical challenges are addressed simultaneously. During the second year of the project, a workshop on knowledge sharing and research reproducibility raised awareness among Consortium members and workshop participants, including guests from the regulatory domain, of the social, ethical and technical challenges surrounding sensitive and dynamic data sharing and knowledge sustainability. The dialogue at this workshop also informed consultations with regulatory stakeholders and emphasised the importance of extending communication beyond the eTRANSAFE Consortium for the global harmonisation of standards and policies around predictive drug safety research.
eTRANSAFE is not only a ‘Big Data’ project, but a ‘Big Analysis’ project. In this period, the ambition of creating the largest integrated preclinical data network has become more feasible: donations have increased by 82%, with a total of 13,312 data files, cumulatively containing hundreds of millions of individual datapoints from all five categories of data type, donated in the second year. The collation, storage, organisation and standardisation of a large body of preclinical data is central to the success of the eTRANSAFE project as a whole, and continued efforts towards this are of great importance. The development of data mapping rules and workflows to convert legacy data to a common SEND-like structure will greatly increase the value added by the project. The eTRANSAFE project has already extended the SEND standard to include ‘Study Results’. Finally, the eTRANSAFE Knowledge Hub and the ecosystem of services produced around it will be of high impact to the community.
No data integration and analysis of this type has ever been performed. We envisage the first gains for the scientific community in the late stages of the project, when advanced tools become available for correlating preclinical data with clinical safety predictions, identifying biomarkers, and querying on- versus off-target mechanisms of predictivity.
More info: http://etransafe.eu/.