Augmentation of medical images with unsymmetrical intensity distribution
Rajarajeswari Ganesan, Antonino Amedeo La Mattina, Frans van de Vosse, Wouter Huberts
Session: Poster Session 1 (Even numbers)
Session starts: Thursday 26 January, 16:00
Presentation starts: 16:00



Rajarajeswari Ganesan (Eindhoven University of Technology)
Antonino Amedeo La Mattina (University of Bologna)
Frans van de Vosse (Eindhoven University of Technology)
Wouter Huberts (Eindhoven University of Technology)


Abstract:
Background: In Silico Clinical Trials (ISCT) offer promising advances over human clinical trials for assessing the safety, efficacy and usability of new diagnostic/interventional procedures or medical devices. One reason is that ISCTs allow for data augmentation by scaling up both the number of patients and the number of patient phenotypes present in the population. In our work, we aim to augment hip fracture CT scans using a deep generative model.

Method: We propose a novel Generative Adversarial Network (GAN) architecture. In a GAN, the generator aims to produce images that resemble real images, and the discriminator labels them as realistic or unrealistic. In our architecture, we introduce Neural Conditional Random Fields (NCRF) within the Deep Convolutional Network (DCN) to capture the non-symmetric intensity relations present in hip fracture CT scans. These non-symmetric intensity variations partition the image into the different regions of the femur; the sharp variations determine the fine structure, and neighbouring intensities are related to that fine structure. In the DCN, the adversarial pair learns a hierarchy of representations, from main edges to fine structures, in an unsupervised manner. In our method, the generator with NCRF accounts for the spatial relations between neighbouring patches. The NCRF layer is directly integrated with each convolutional layer, and the consolidated feature map of each layer is fed as input to the next layer. This architecture helps the network learn spatial relations at every level, so the generator learns the features together with their spatial relations and thereby retains the intensity relations.

Results: Although the fine structure in the generated images is still indistinguishable, quantitative metrics such as the Fréchet Inception Distance (FID), Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR) meet the predefined standards for the generated images.

Conclusion: The network captures the spatial relations, but the fine structures it generates remain unclear. Therefore, we will work towards improving the network to obtain more realistic results.
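
Illustrative sketch (added for clarity, not part of the authors' abstract): a minimal PyTorch example of how an NCRF-style refinement step could be interleaved with each convolutional layer of a generator, with the consolidated feature map passed to the next layer. The mean-field-style message passing, module names, kernel sizes and channel counts are assumptions for illustration only, not the authors' implementation.

    import torch
    import torch.nn as nn

    class NeighbourhoodCRF(nn.Module):
        # Approximate NCRF step (assumed form): each feature is refined by messages
        # aggregated from its spatial neighbours (depthwise 3x3 convolution),
        # followed by a 1x1 compatibility transform, for a few mean-field iterations.
        def __init__(self, channels, iterations=2):
            super().__init__()
            self.iterations = iterations
            self.message = nn.Conv2d(channels, channels, kernel_size=3,
                                     padding=1, groups=channels, bias=False)
            self.compat = nn.Conv2d(channels, channels, kernel_size=1)

        def forward(self, x):
            q = x
            for _ in range(self.iterations):
                q = x - self.compat(self.message(torch.softmax(q, dim=1)))
            return q

    class GeneratorBlock(nn.Module):
        # Transposed convolution (upsampling) followed by the NCRF refinement;
        # the consolidated feature map is what the next block receives.
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4,
                                         stride=2, padding=1)
            self.ncrf = NeighbourhoodCRF(out_ch)
            self.act = nn.Sequential(nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

        def forward(self, z):
            return self.act(self.ncrf(self.up(z)))

    # Example: upsample a 128-channel 8x8 feature map to 64 channels at 16x16.
    block = GeneratorBlock(in_ch=128, out_ch=64)
    print(block(torch.randn(2, 128, 8, 8)).shape)  # torch.Size([2, 64, 16, 16])

Placing the refinement after every layer, rather than only at the output, is what would let spatial relations be learned at every level of the representation hierarchy described in the Method section.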