In the European Union and worldwide, the number of people affected by diseases of modern civilization, such as cancer, increases continuously. As a consequence, treatment-related costs are rising sharply, and a specific work-up and care of patients is greatly needed. Unfortunately, early-stage diagnostics is hampered by significant inter- and intra-observer variability, which delays adequate treatment and the confirmation of the diagnosis. A possible solution is the emerging field of medical imaging.
Medical imaging aims at processing and analyzing medical scans to further support physicians in their daily routines. However, many obstacles still limit the wide usage of medical imaging, such as the low number of digitalized cases, the large size of the images, the low availability of annotated images (images accompanied by a description), and the complicated patterns in medical scans. Within the DeeBMED project, I proposed to tackle these problems by utilizing a probabilistic framework called Variational Auto-Encoders (VAEs). VAEs allow modeling relationships among quantities such as an image, x, a disease label, y, and latent factors, z, using probability distributions and statistical (Bayesian) inference. See the diagrams for details.
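For illustration only, the following is a minimal sketch of the VAE setup described above, written in PyTorch with illustrative layer sizes and names (not the architectures used in the project): an encoder produces the parameters of q(z|x), a sample of z is drawn with the reparameterization trick, a decoder models p(x|z), and the training objective is the negative evidence lower bound (ELBO).

import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=40, h_dim=300):  # illustrative sizes
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.enc_mu = nn.Linear(h_dim, z_dim)       # mean of q(z|x)
        self.enc_logvar = nn.Linear(h_dim, z_dim)   # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        logits = self.dec(z)                        # parameters of p(x|z)
        # Negative ELBO = reconstruction error + KL(q(z|x) || N(0, I)).
        rec = F.binary_cross_entropy_with_logits(logits, x, reduction='sum')
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return (rec + kl) / x.size(0)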
The DeeBMED project consisted of two main lines of research, namely, the development of the probabilistic framework and the development of deep learning techniques for medical imaging. Within the first research direction, I aimed at exploring possible extensions of the encoder and the prior in order to properly model the data representation. The second line of research focused on adapting deep learning methods to large images such as medical scans.
\"In order to increase flexibility of the VAE we proposed to utilize the idea of the normalizing flow. First, we used a series of Householder transformations to obtain a deep structure of latent variables. Next, we extended the linear Inverse Autoregressive flow by using a convex combination of lower-triangular matrices. Further, we proposed a non-linear normalizing flow by utilizing the Sylvester\'s theorem. The main idea of the approach was to parameterize weights of the transformations using orthogonal matrices by applying Householder matrices, a numerical procedure and a permutation matrices.
In the VAE framework, the Gaussian distribution is the default option for both the prior and the posterior. However, we hypothesized that this could fail for different latent topologies, especially for a latent hyperspherical structure. To address this issue, we proposed to use a von Mises-Fisher distribution instead. Through a series of experiments we showed that such a hyperspherical VAE is more suitable for discovering latent structure and is able to outperform a vanilla VAE on image and citation network datasets.
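For reference, the von Mises-Fisher density on the unit hypersphere S^{d-1} is p(x; mu, kappa) = C_d(kappa) exp(kappa mu^T x), with concentration kappa and mean direction mu. The sketch below evaluates its log-density using SciPy's scaled Bessel function for numerical stability; the function name and example values are illustrative, not part of the project's code.

import numpy as np
from scipy.special import ive   # exponentially scaled modified Bessel function I_nu(k) * exp(-k)

def vmf_log_prob(x, mu, kappa):
    """Log-density of the von Mises-Fisher distribution for unit vectors x, mu in R^d."""
    d = mu.shape[-1]
    nu = d / 2.0 - 1.0
    # Recover log I_nu(kappa) from the scaled Bessel function for numerical stability.
    log_bessel = np.log(ive(nu, kappa)) + kappa
    log_norm = nu * np.log(kappa) - (d / 2.0) * np.log(2.0 * np.pi) - log_bessel
    return kappa * np.dot(mu, x) + log_norm

# Example: a point on S^2 under a vMF centred at the north pole with concentration 10.
mu = np.array([0.0, 0.0, 1.0])
x = np.array([0.0, 0.6, 0.8])
print(vmf_log_prob(x, mu, kappa=10.0))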
Our next approach was to extend the VAE framework with a new type of prior, the "Variational Mixture of Posteriors" prior (VampPrior). The VampPrior consists of a mixture distribution with components given by variational posteriors conditioned on learnable pseudo-inputs. We further extended this prior to a two-layer hierarchical model and showed that this architecture learns significantly better models. We provided empirical studies on six image datasets and showed that our approach delivers either the best results or performs on par with the state of the art on all datasets. Next, we utilized the VampPrior in the fair classification setting. Fairness is a statistical property that is very important in many practical applications, including medicine. We proposed a two-level hierarchical VAE with a class label and a sensitive variable.
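A minimal sketch of the VampPrior idea follows: the prior is an equally weighted mixture, p(z) = (1/K) * sum_k q(z | u_k), where the u_k are learnable pseudo-inputs passed through the shared encoder. The class interface, the diagonal-Gaussian form of q, and the random initialization are assumptions made for illustration only.

import math
import torch
import torch.nn as nn

class VampPrior(nn.Module):
    """p(z) = (1/K) * sum_k q(z | u_k), with learnable pseudo-inputs u_k (illustrative)."""
    def __init__(self, encoder, n_pseudo=500, x_dim=784):
        super().__init__()
        self.encoder = encoder   # shared encoder: x -> (mu, logvar) of a diagonal Gaussian
        self.pseudo_inputs = nn.Parameter(torch.rand(n_pseudo, x_dim))  # learnable u_k

    def log_prob(self, z):
        mu, logvar = self.encoder(self.pseudo_inputs)    # (K, z_dim) each
        z = z.unsqueeze(1)                               # (batch, 1, z_dim), broadcasts over K
        # Diagonal-Gaussian log-density of z under each component q(z | u_k).
        log_comp = -0.5 * ((z - mu) ** 2 / logvar.exp() + logvar
                           + math.log(2.0 * math.pi)).sum(dim=-1)       # (batch, K)
        # Log of the equally weighted mixture over the K components.
        return torch.logsumexp(log_comp, dim=1) - math.log(self.pseudo_inputs.size(0))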
Training a whole-slide imaging tool requires a relatively large amount of computational resources, and providing pixel-level annotations is extremely time-consuming. In order to overcome these issues, we proposed to apply multi-instance learning combined with deep learning to histopathology classification. Our goal was to utilize weakly-labeled data to train deep learning models in an end-to-end fashion. We discussed different permutation-invariant operators and proposed a new one based on the attention mechanism. We applied the newly developed techniques to four cancer datasets, namely, breast cancer, colon cancer, esophagus cancer and prostate cancer. The obtained results were of great clinical potential.
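The attention-based pooling operator can be sketched as follows: each instance embedding h_k in a bag receives a weight a_k = softmax_k(w^T tanh(V h_k)), and the weighted sum forms a permutation-invariant bag representation that is passed to a bag-level classifier. The layer sizes and names below are illustrative assumptions for a PyTorch implementation, not the project's exact model.

import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Attention-based pooling over a bag of instance embeddings (illustrative sizes)."""
    def __init__(self, feat_dim=500, attn_dim=128, n_classes=1):
        super().__init__()
        self.V = nn.Linear(feat_dim, attn_dim)
        self.w = nn.Linear(attn_dim, 1)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, h):                                # h: (n_instances, feat_dim), one bag
        scores = self.w(torch.tanh(self.V(h)))           # (n_instances, 1)
        a = torch.softmax(scores, dim=0)                 # attention weights over instances
        z = (a * h).sum(dim=0)                           # permutation-invariant bag embedding
        return self.classifier(z), a.squeeze(-1)         # bag logit(s) and instance weights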
Besides the international conferences and two seminars at the University of Amsterdam, the project was disseminated through different channels and media. I gave invited talks at the Summer School on Data Science (Split, Croatia, 2017) and at multiple institutions (CERN, CWI in Amsterdam, TU/e in Eindhoven, MPI in Tuebingen). Moreover, I took part in the Open Day at Science Park (Amsterdam, 2017), one of the biggest outreach events for participants of all ages. I also gave three interviews (on Polish radio, on Polish TV, and in the Polish magazine "Pryzmat") and one short interview for a Dutch university magazine. Additionally, DeeBMED was described in the newsletter of the Wroclaw Center of Technology Transfer. Last but not least, I launched a Twitter account where I shared thoughts on and successes of the project (followed by >1000 users), a GitHub account (followed by >80 users) and a project website (https://jmtomczak.github.io/deebmed). The dataset developed and used during the project is publicly available (https://zenodo.org/record/1205024#.W6_oBnUzbCI) and was viewed ~280 times and downloaded >30 times.
The results obtained within the project set a new state of the art (SOTA) on many datasets, namely, benchmark image data and medical data (colon cancer, breast cancer, esophagus cancer and prostate cancer). Additionally, the scores on the medical data reached a level that could be treated as clinically significant. Our collaborators from the Academic Medical Center (AMC) in Amsterdam and the Cedars-Sinai Medical Center in Los Angeles aim at turning the developed methods into prototypes and ultimately real products. This established bridge between the medical group and the AI group gives the highest chance of developing medical imaging tools that will have a huge impact on society.
During the fellowship I published 4 conference papers and 7 short and workshop papers. All implementations of the developed methods are freely available online. As a result, I have impacted the scientific community with new methods. Moreover, I have been contacted by companies from the EU, the USA and Canada (by e-mail) that either informed me about the success of my methods in real-life applications or asked me about possible extensions of my approaches. Besides people from the UvA, I collaborated with the following academic institutes: the Univ. of Oxford, the Univ. of Cambridge, the Technical University of Eindhoven, the Cedars-Sinai Medical Center, the AMC, and the Max Planck Institute in Tuebingen, and with the following companies: Scyfer, Bosch and Philips. The time spent at Scyfer, where I had a chance to talk to experienced AI and software engineers working on medical imaging, gave me a better overview of how academic developments can be used in real-life applications. The fact that Scyfer was acquired by Qualcomm in October 2017 confirmed that it was a great place to learn about medical imaging tools.
More info: https://jmtomczak.github.io/deebmed.html.