Ravi Teja Mullapudi

I am a third-year Ph.D. student working with Kayvon Fatahalian and Deva Ramanan. I am broadly interested in computer vision and high-performance computing. My current work focuses on techniques and models that enable efficient visual understanding.

I did my master's at the Indian Institute of Science, where I was advised by Uday Bondhugula. Before my master's, I worked at NVIDIA; I did my bachelor's at IIIT Hyderabad.

Email  /  CV  /  Google Scholar  

HydraNets: Specialized Dynamic Architectures for Efficient Inference

HydraNets explore semantic specialization as a mechanism for improving the computational efficiency (accuracy-per-unit-cost) of inference in the context of image classification. Specifically, we propose a network architecture template called HydraNet, which enables state-of-the-art architectures for image classification to be transformed into dynamic architectures that exploit conditional execution for efficient inference. HydraNets are wide networks containing distinct components specialized to compute features for visually similar classes, but they retain efficiency by dynamically selecting only a small number of components to evaluate for any one input image. On CIFAR, applying the HydraNet template to the ResNet and DenseNet families of models reduces inference cost by 2-4x while retaining the accuracy of the baseline architectures. On ImageNet, applying the HydraNet template improves accuracy by up to 2.5% compared to an efficient baseline architecture with similar inference cost.
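The branch-selection idea can be illustrated with a toy NumPy sketch. Everything here (the linear gate, the linear branches, the shapes, `TOP_K`) is a made-up stand-in for illustration, not the architecture from the paper; the point is only that a cheap gate picks a few specialized components and the rest are never evaluated:

```python
import numpy as np

rng = np.random.default_rng(0)

N_BRANCHES = 8   # specialized components ("heads"), one group of visually similar classes each
TOP_K = 2        # branches actually evaluated per input
FEAT_DIM = 16

# Illustrative parameters: each branch is a tiny linear feature extractor,
# and the gate is a single linear layer scoring the branches.
branch_weights = [rng.standard_normal((FEAT_DIM, FEAT_DIM)) for _ in range(N_BRANCHES)]
gate_weights = rng.standard_normal((FEAT_DIM, N_BRANCHES))

def hydranet_forward(x):
    """Evaluate only the TOP_K branches chosen by the lightweight gate."""
    scores = x @ gate_weights                 # one score per branch
    chosen = np.argsort(scores)[-TOP_K:]      # indices of the top-k branches
    # Only the selected branches run; the other N_BRANCHES - TOP_K are skipped entirely.
    feats = [branch_weights[i] @ x for i in chosen]
    return np.mean(feats, axis=0), sorted(chosen.tolist())

x = rng.standard_normal(FEAT_DIM)
out, used = hydranet_forward(x)
print(len(used))  # prints 2: only TOP_K branches were evaluated
```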

HydraNets: Specialized Dynamic Architectures for Efficient Inference
Ravi Teja Mullapudi, William R. Mark, Noam Shazeer, Kayvon Fatahalian
CVPR 2018

Automatic scheduling of Halide programs

The Halide image processing language has proven to be an effective system for authoring high-performance image processing code. Halide programmers need only provide a high-level strategy for mapping an image processing pipeline to a parallel machine (a schedule), and the Halide compiler carries out the mechanical task of generating platform-specific code that implements the schedule. Unfortunately, designing high-performance schedules for complex image processing pipelines requires substantial knowledge of modern hardware architecture and code-optimization techniques. In this paper we provide an algorithm for automatically generating high-performance schedules for Halide programs. Our solution extends the function bounds analysis already present in the Halide compiler to automatically perform locality and parallelism-enhancing global program transformations typical of those employed by expert Halide developers. The algorithm does not require costly (and often impractical) auto-tuning, and, in seconds, generates schedules for a broad set of image processing benchmarks that are performance-competitive with, and often better than, schedules manually authored by expert Halide developers on server and mobile CPUs, as well as GPUs.
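The flavor of locality-enhancing transformation the autoscheduler derives can be illustrated outside Halide with a toy Python version of a two-stage blur: instead of materializing the intermediate stage whole, the producer is recomputed per tile of the consumer, in the spirit of Halide's `compute_at`. This is a conceptual sketch of the transformation, not Halide code and not the scheduling algorithm itself:

```python
import numpy as np

def two_stage_naive(img):
    """Stage 1 (blur_x) fully materialized before stage 2 (blur_y): poor locality."""
    blur_x = (img[:, :-2] + img[:, 1:-1] + img[:, 2:]) / 3.0
    blur_y = (blur_x[:-2, :] + blur_x[1:-1, :] + blur_x[2:, :]) / 3.0
    return blur_y

def two_stage_tiled(img, tile=32):
    """Producer computed per consumer tile (akin to Halide's compute_at):
    the intermediate only ever exists in cache-sized chunks."""
    H, W = img.shape
    out = np.empty((H - 2, W - 2))
    for y0 in range(0, H - 2, tile):
        y1 = min(y0 + tile, H - 2)
        # Recompute just the rows of blur_x this output tile needs (+2 halo rows).
        bx = (img[y0:y1 + 2, :-2] + img[y0:y1 + 2, 1:-1] + img[y0:y1 + 2, 2:]) / 3.0
        out[y0:y1, :] = (bx[:-2, :] + bx[1:-1, :] + bx[2:, :]) / 3.0
    return out

img = np.random.default_rng(1).standard_normal((64, 64))
print(np.allclose(two_stage_naive(img), two_stage_tiled(img)))  # prints True
```

The two versions compute the same result; the tiled one trades a little redundant recomputation in the halo rows for much better producer-consumer locality, which is the kind of trade-off the autoscheduler reasons about using the compiler's bounds analysis.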

Automatically Scheduling Halide Image Processing Pipelines
Ravi Teja Mullapudi, Andrew Adams, Dillon Sharlet, Jonathan Ragan-Kelley, Kayvon Fatahalian
ACM Transactions on Graphics (SIGGRAPH), 2016

Automatic Optimization for Image Processing Pipelines

Image processing pipelines are ubiquitous and demand high-performance implementations on modern architectures. Manually implementing high-performance pipelines is tedious, error-prone, and not portable. For my master's thesis, I focused on the problem of automatically generating efficient multi-core implementations of image processing pipelines from a high-level description of the pipeline algorithm. I leveraged polyhedral representation and code generation techniques to achieve this goal. PolyMage is a domain-specific system built for evaluating and experimenting with the techniques developed during the course of my master's.

PolyMage: Automatic Optimization for Image Processing Pipelines
Ravi Teja Mullapudi, Vinay Vasista, Uday Bondhugula
Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2015

Compiling Affine Loop Nests for Dataflow Runtimes

We designed and evaluated a compiler and runtime that automatically extract coarse-grained dataflow parallelism from affine loop nests, targeting shared- and distributed-memory systems. As part of the evaluation, we implemented a set of benchmarks using the Intel Concurrent Collections (CnC) programming model to serve as a comparison to our system. Our implementation of the Floyd-Warshall all-pairs shortest paths algorithm used in the evaluation is now part of the Intel CnC samples.
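For reference, here is the Floyd-Warshall kernel mentioned above in minimal, untiled Python form (a textbook sketch, not the CnC implementation). The k-loop carries a dependence across iterations, while all (i, j) updates within one k iteration are independent; tiling the (i, j) space turns that independence into the coarse-grained tasks a dataflow runtime can schedule:

```python
import math

def floyd_warshall(dist):
    """All-pairs shortest paths on an adjacency matrix.
    dist[i][j] is the edge weight from i to j (math.inf if no edge).
    Updated in place and returned."""
    n = len(dist)
    for k in range(n):                # sequential: iteration k depends on k-1
        for i in range(n):            # the (i, j) updates below are independent
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

INF = math.inf
graph = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
print(floyd_warshall(graph)[0][2])  # prints 5 (the path 0 -> 1 -> 2)
```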

website template stolen from here