Hello! I am Jay Patravali. I work at the Autonome Intelligente Systeme lab at the University of Freiburg, where I am fortunate to be supervised by Prof. Wolfram Burgard. Previously, I worked at Qure.ai
and the Robotics Institute
at Carnegie Mellon University. I hold a Bachelor's degree in Electronics and Communication Engineering from Vellore Institute of Technology.
My research interests are in Computer Vision, Robotics and Machine Learning. In the past, I have worked on Semantic Segmentation for AI-based Diagnosis, Stereo Visual Odometry and Robotic Exploration Planning.
The focus of my current work is on Perception and State Estimation for Self-Driving Vehicles.
Google Scholar
Landmarks based Localization using Deep Convolutional Neural Networks
I am working on a vision-based localization system for the Samsung self-driving car project. We use deep CNNs to detect pole-like features, which serve as landmarks for particle-filter-based localization. Ongoing research aims to develop better learned representations for computing full motion estimates.
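To make the idea concrete, here is a minimal sketch of the measurement-update step of such a particle filter. It is not the project's actual code: the function names, the Gaussian likelihood, and the 3x3-pose representation are all my own assumptions, chosen only to illustrate how detected poles can reweight pose hypotheses against a pole map.

```python
import numpy as np

def update_weights(particles, weights, detections, map_poles, sigma=0.5):
    """Hypothetical particle-filter measurement update using pole landmarks.

    particles  : (N, 3) array of (x, y, theta) pose hypotheses
    detections : list of (dx, dy) pole detections in the vehicle frame
    map_poles  : (M, 2) array of known pole positions in the map frame
    """
    new_w = np.copy(weights)
    for i, (x, y, theta) in enumerate(particles):
        c, s = np.cos(theta), np.sin(theta)
        like = 1.0
        for dx, dy in detections:
            # Transform the detection into the map frame for this particle.
            gx, gy = x + c * dx - s * dy, y + s * dx + c * dy
            # Distance to the nearest mapped pole.
            d = np.min(np.hypot(map_poles[:, 0] - gx, map_poles[:, 1] - gy))
            # Gaussian likelihood: close matches keep the particle's weight.
            like *= np.exp(-0.5 * (d / sigma) ** 2)
        new_w[i] *= like
    return new_w / new_w.sum()
```

A particle whose pose explains the detections (i.e., maps them onto known poles) retains almost all of the probability mass after normalization, while inconsistent poses are suppressed.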
2D-3D Fully Convolutional Neural Networks for Cardiac MR Segmentation
Conference Poster |
Jay Patravali, Shubham Jain, Sasank Chilamkurthy
IEEE Transactions on Medical Imaging
Medical Image Computing and Computer Assisted Intervention (MICCAI)-STACOM, 2017
In this work, I developed 2D and 3D segmentation pipelines for fully automated cardiac MR image segmentation using deep convolutional neural networks (CNNs). Our models are trained end-to-end from scratch on the ACDC Challenge 2017 dataset, which comprises 100 studies, each containing cardiac MR images in the end-diastole and end-systole phases. We show that both segmentation models achieve near state-of-the-art performance in terms of distance metrics and convincing accuracy in terms of clinical parameters.
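As background for readers unfamiliar with segmentation evaluation, the Dice overlap is the standard figure of merit in cardiac MR benchmarks such as ACDC (alongside distance metrics like Hausdorff distance). This is a generic sketch of the metric, not code from the paper:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice overlap between two binary masks; 1.0 means perfect agreement."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    # 2 * |A ∩ B| / (|A| + |B|), with eps guarding the empty-mask case.
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```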
Finding better Wide-baseline Stereo Solutions using Feature Quality.
Spotlight Presentation |
RISS 2016 Poster
Stephen Nuske, Jay Patravali
Field and Service Robotics, 2017
Robotics Institute Summer Scholars Symposium, Carnegie Mellon University 2016
Many robotic applications that involve relocalization or 3D scene reconstruction need to recover the geometry between camera images captured from widely different viewpoints. Computing epipolar geometry between wide-baseline image pairs is difficult because, at the feature-correspondence stage, outliers often far outnumber inliers. We present a new method called UNIQSAC that estimates quality weights for features and uses them to guide the random solutions toward high-quality features, helping find good solutions. We also present a new method for evaluating geometry solutions that is more likely to identify correct ones. We demonstrate, in a variety of outdoor environments and using both monocular and stereo image pairs, that our method produces better estimates than existing robust-estimation approaches.
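The guided-sampling idea can be illustrated on a toy problem. The sketch below is not UNIQSAC itself (which targets epipolar geometry); it applies the same principle, sampling RANSAC minimal sets in proportion to a per-feature quality weight, to simple 2D line fitting. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def weighted_ransac_line(points, quality, iters=200, thresh=0.1, rng=None):
    """Fit a 2D line with RANSAC, drawing minimal samples in proportion
    to per-point quality weights (the guided-sampling idea)."""
    rng = np.random.default_rng(rng)
    p = quality / quality.sum()
    best_inliers, best_model = 0, None
    for _ in range(iters):
        # High-quality points are sampled more often than low-quality ones.
        i, j = rng.choice(len(points), size=2, replace=False, p=p)
        (x1, y1), (x2, y2) = points[i], points[j]
        # Line ax + by + c = 0 through the two sampled points.
        a, b = y2 - y1, x1 - x2
        norm = np.hypot(a, b)
        if norm < 1e-12:
            continue  # degenerate (coincident) sample
        c = -(a * x1 + b * y1)
        d = np.abs(points @ np.array([a, b]) + c) / norm
        inliers = int((d < thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (a / norm, b / norm, c / norm)
    return best_model, best_inliers
```

With informative weights, the sampler concentrates on the inlier set and reaches a good hypothesis in far fewer iterations than uniform sampling would in heavily contaminated data.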
Hierarchical Information Theoretic Exploration Planning.
Project page |
Robotics Institute Technical Reports, Carnegie Mellon University, 2015
We consider the problem of robot exploration in unknown environments. We test current exploration strategies in different map geometries and identify the failure cases of these algorithms when subjected to different exploration and sensor parameters. Based on an evaluation of our results, we generalize the robot behaviour in each of these cases. In addition, we propose a novel exploration algorithm that achieves improved robot exploration in terms of significantly shorter path length and lower energy consumption.
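A common building block behind information-theoretic exploration is scoring candidate frontier cells by expected information gain traded off against travel cost. The sketch below is a generic illustration of that trade-off on an occupancy grid, not the algorithm from the report; the neighbourhood-based gain proxy, the straight-line cost, and the `alpha` weighting are all my own simplifications.

```python
import numpy as np

UNKNOWN, FREE, OCC = -1, 0, 1

def count_unknown_neighbors(grid, r, c):
    """Unknown cells in the 3x3 neighbourhood of (r, c) — a crude proxy
    for the information gained by sensing from that cell."""
    r0, r1 = max(r - 1, 0), min(r + 2, grid.shape[0])
    c0, c1 = max(c - 1, 0), min(c + 2, grid.shape[1])
    return int((grid[r0:r1, c0:c1] == UNKNOWN).sum())

def pick_frontier(grid, robot, alpha=0.5):
    """Choose the free cell bordering unknown space that maximizes
    information gain minus alpha * straight-line travel cost."""
    best, best_score = None, -np.inf
    for r in range(grid.shape[0]):
        for c in range(grid.shape[1]):
            if grid[r, c] != FREE:
                continue
            gain = count_unknown_neighbors(grid, r, c)
            if gain == 0:
                continue  # not a frontier cell
            cost = np.hypot(r - robot[0], c - robot[1])
            score = gain - alpha * cost
            if score > best_score:
                best, best_score = (r, c), score
    return best
```

Tuning `alpha` shifts the behaviour between greedy information gathering and energy-conscious short paths, which is exactly the trade-off evaluated in the exploration experiments.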