
Reinforcement Learning as a Parsimonious Alternative to Prediction Cascades: A Case Study on Image Segmentation...

by Bharat Srikshan, Anika Tabassum, Srikanth Allu, Ramakrishnan Kannan, Nikhil Muralidhar
Publication Type: Conference Paper
Book Title: Proceedings of the AAAI Conference on Artificial Intelligence
Publication Date:
Page Numbers: 15066–15074
Volume: 38
Issue: 13
Publisher Location: Washington, District of Columbia, United States of America
Conference Name: AAAI Conference on Artificial Intelligence
Conference Location: British Columbia, Canada
Conference Sponsor: AAAI
Conference Date: -

Deep learning architectures have achieved state-of-the-art (SOTA) performance on computer vision tasks such as object detection and image segmentation. This may be attributed to the use of over-parameterized, monolithic deep learning architectures executed on large datasets. Although such large architectures lead to increased accuracy, they typically incur a much larger increase in computation and memory requirements during inference. While this is a non-issue in traditional machine learning (ML) pipelines, the recent confluence of machine learning and fields like the Internet of Things (IoT) has rendered such large architectures infeasible for execution in low-resource settings. For some datasets, large monolithic pipelines may be overkill for simpler inputs. To address this problem, previous efforts have proposed decision cascades, where inputs are passed through models of increasing complexity until the desired performance is achieved. However, we argue that cascaded prediction leads to sub-optimal throughput and increased computational cost due to wasteful intermediate computations. To address this, we propose PaSeR (Parsimonious Segmentation with Reinforcement Learning), a non-cascading, cost-aware learning pipeline, as an efficient alternative to cascaded decision architectures. Through experimental evaluation on both real-world and standard datasets, we demonstrate that PaSeR achieves better accuracy while minimizing computational cost relative to cascaded models. Further, we introduce a new metric, IoU/GigaFlop, to evaluate the balance between cost and performance. On the real-world task of battery material phase segmentation, PaSeR yields a 179% improvement over the SOTA MatPhase model and a 196% improvement over IDK Cascades under the IoU/GigaFlop metric. We also demonstrate PaSeR's adaptability to complementary models trained on a noisy MNIST dataset, where it outperforms all baselines on IoU/GigaFlop by an average of 44%.
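The abstract's IoU/GigaFlop metric divides segmentation quality by inference cost. The paper's exact formulation is not reproduced on this page, so the sketch below is an illustrative reading of the name: Intersection-over-Union of a predicted binary mask against ground truth, divided by the model's inference cost expressed in GigaFLOPs. The function names and the toy masks are assumptions for illustration, not the authors' code.

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-Union of two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # Empty masks on both sides count as a perfect match.
    return inter / union if union else 1.0

def iou_per_gigaflop(pred, target, inference_flops):
    """Segmentation quality per unit of compute: IoU / (FLOPs in billions)."""
    return iou(pred, target) / (inference_flops / 1e9)

# Toy example: 2x2 masks and a hypothetical model costing 2 GFLOPs per inference.
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
score = iou_per_gigaflop(pred, target, inference_flops=2e9)  # IoU = 0.5, so 0.25
```

Under a metric like this, a lightweight model that loses a little IoU but saves an order of magnitude of compute can score far higher than a heavyweight one, which is the trade-off PaSeR is evaluated on.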