Brief Bio

I am a final-year undergraduate student majoring in Biomedical Engineering with a minor in Artificial Intelligence. My thesis proposed an interpretability method for deep learning models from a causal standpoint. Previously, I was a visiting student researcher at Stanford University and a research intern at Siemens. Most of my previous work revolves around applications of deep learning, meta-learning, knowledge graphs, and deep reinforcement learning to biomedical images. Recently I have been working on causation and interpretability of deep learning, which I find quite fascinating. I also write about my thoughts in my blog here.

I have previously submitted my work to the Frontiers, MedIA, and TMI journals, as well as the MICCAI, ICONIP, NeurIPS, and AAAI conferences. Please feel free to connect if you want to discuss AI, neuroscience, or anything in general.

Optimization methods

Blog

Blog about AI and Neuroscience Research.

Read More

Biomedical Image Analysis and Model Interpretability

Research

Deep learning for medical image analysis.

Read More

Projects

Deep-learning-based Debian (deb) and Python (pip) packages.

Read More

News

  • January 2020: [Segmentation and Classification in Digital Pathology for Glioma Research: Challenges and Deep Learning Approaches] Paper accepted in Frontiers in Neuroscience, Brain Imaging Methods

  • January 2020: [Demystifying Brain Tumour Segmentation Networks: Interpretability and Uncertainty Analysis] Paper accepted in Frontiers in Computational Neuroscience

  • January 2020: [PyPI Release] Released a Python package for Neural Field Modelling (nfm)

  • January 2020: [A Generalized Deep Learning Framework for Whole-Slide Image Segmentation and Analysis] Paper submitted to Medical Image Analysis (MedIA), Computational Pathology