Tanushree Banerjee

I am a graduating senior studying Computer Science at Princeton University, where I work on explainable 3D perception via inverse rendering under Prof. Felix Heide in the Princeton Computational Imaging Lab.

Previously, I worked on research projects with Prof. Karthik Narasimhan in the Princeton Natural Language Processing Group and Prof. Olga Russakovsky in the Princeton Visual AI Lab.

Email  /  CV  /  GitHub  /  LinkedIn  /  Google Scholar

profile photo

News

Research

I am broadly interested in computer vision, inverse rendering, deep learning, and optimization. My current research focuses on explainable 3D perception via inverse rendering. Some research questions that inspire my work:

  • Can we leverage generative models to learn priors that can help interpret 3D information from 2D images?
  • How can we reformulate visual perception as an inverse rendering problem?

project image Inverse Neural Rendering for Explainable Multi-Object Tracking
Julian Ost*, Tanushree Banerjee*, Mario Bijelic, Felix Heide
arXiv preprint, 2024
[project page] [paper] [supplement] [arXiv]

We propose to recast 3D multi-object tracking from RGB cameras as an Inverse Rendering (IR) problem: we optimize over the latent space of pre-trained 3D object representations via a differentiable rendering pipeline, retrieving the latents that best represent the object instances in a given input image. Beyond offering an alternate take on tracking, our method makes it possible to examine the generated objects, reason about failure situations, and resolve ambiguous cases.
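The analysis-by-synthesis idea behind this approach can be sketched in a few lines. This is a toy illustration only: the fixed linear map `A` stands in for the pre-trained differentiable 3D renderer, and the latent dimensions, learning rate, and step count are all made-up placeholders, not values from the paper.

```python
import numpy as np

# Toy stand-in for a pretrained generator: "renders" a latent code z into an
# observation via a fixed linear map. (The actual method uses a differentiable
# 3D object renderer; everything here is a simplified illustration.)
rng = np.random.default_rng(0)
A = rng.standard_normal((16, 4))  # fake rendering operator

def render(z):
    return A @ z

def fit_latent(observation, steps=500, lr=0.01):
    """Analysis-by-synthesis: gradient descent on z to match the observation."""
    z = np.zeros(4)
    for _ in range(steps):
        residual = render(z) - observation
        grad = 2.0 * A.T @ residual  # gradient of the L2 photometric loss
        z -= lr * grad
    return z

z_true = np.array([1.0, -0.5, 2.0, 0.3])
z_hat = fit_latent(render(z_true))  # recovers a latent matching the image
```

In the real pipeline the recovered latent describes an object hypothesis that can be rendered, inspected, and compared across frames for tracking.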

Other Independent Research Projects

These are research projects I conducted earlier in my undergraduate studies.

project image LLMs are Superior Feedback Providers: Bootstrapping Reasoning for Lie Detection with Self-Generated Feedback
Tanushree Banerjee, Richard Zhu, Runzhe Yang, Denis Peskov, Brandon Stewart, Karthik Narasimhan
[paper] [slides]

We investigated a bootstrapping framework that leverages self-generated feedback to detect deception in Diplomacy games. We collected a novel dataset of human feedback on initial model predictions and compared performance at the modification stage when using human versus LLM-generated feedback. The LLM-generated feedback approach achieved superior performance, improving lying-F1 by 39% over the zero-shot baseline without any training.

project image Reducing Object Hallucination in Visual Question Answering
Tanushree Banerjee, Olga Russakovsky
Independent Work Project, Spring 2023
[paper] [code]

This paper proposes several approaches to identify questions unrelated to an image to prevent object hallucination in VQA models. The best approach achieved a 40% improvement over the random baseline.

project image Bias in Skin Lesion Classification
Tanushree Banerjee, Olga Russakovsky
Independent Work Project, Spring 2022
[paper] [slides]

This paper analyzes how a skin lesion classification model is biased against skin tones that are underrepresented in its training dataset.


Course Projects

These include unpublished research-related work done as part of semester-long course projects.

project image Counterfactual Analysis for Spoken Dialogue Summarization
Tanushree Banerjee*, Kiyosu Maeda*, Sanjeev Arora
Fundamentals of Deep Learning Course Project, Fall 2023 (Graduate Course)
[paper] [code]

We conduct counterfactual experiments to understand how errors in speaker diarization and speech recognition affect an LLM's summarization performance, automatically injecting these errors into spoken dialogue transcripts.
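The error-injection idea can be sketched as follows. This is a hypothetical helper, not code from the project: the function name, the turn format `(speaker, text)`, and the swap probability are all illustrative assumptions.

```python
import random

def inject_diarization_errors(turns, swap_prob, seed=0):
    """Counterfactually corrupt speaker labels: with probability `swap_prob`,
    reassign a turn to a randomly chosen other speaker. Text is untouched,
    isolating the effect of diarization errors on downstream summarization."""
    rng = random.Random(seed)
    speakers = sorted({s for s, _ in turns})
    corrupted = []
    for speaker, text in turns:
        if rng.random() < swap_prob:
            others = [s for s in speakers if s != speaker]
            speaker = rng.choice(others)
        corrupted.append((speaker, text))
    return corrupted

dialogue = [("A", "Hi there."), ("B", "Hello!"), ("A", "How are you?")]
noisy = inject_diarization_errors(dialogue, swap_prob=0.5)
```

Summaries of the clean and corrupted transcripts can then be compared to measure how sensitive the summarizer is to each error type.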

project image Towards Efficient Frame Sampling Strategies for Video Action Recognition
Tanushree Banerjee*, Ameya Vaidya*, Brian Lou*, Olga Russakovsky
Computer Vision Course Project, Spring 2023
[paper] [code]

We propose and evaluate two dataset- and model-agnostic frame sampling strategies for computationally efficient video action recognition: one based on the norm of the optical flow of frames and the other based on the number of objects in frames.

project image What Makes In-Context Learning Work On Generative QA Tasks?
Tanushree Banerjee*, Simon Park*, Beiqi Zou*, Danqi Chen
Understanding Large Language Models Course Project, Fall 2022 (Graduate Course)
[paper] [code]

We empirically analyze what aspects of the in-context demonstrations contribute to improvements in downstream task performance, extending the work of Min et al., 2022 to multiple choice and classification tasks.

project image [Re] Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation
Tanushree Banerjee*, Jessica Ereyi*, Kevin Castro*, Danqi Chen, Karthik Narasimhan
Natural Language Processing Course Project, Spring 2022
[paper] [poster]

We reproduce the results of “Double-Hard Debias: Tailoring Word Embeddings for Gender Bias” (Wang et al., 2020) on reducing the gender bias present in pre-trained word embeddings. Additionally, we evaluate the proposed technique on Spanish GloVe embeddings to assess whether these debiasing methods generalize to non-English languages.
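The core projection step that Double-Hard Debias builds on, hard debiasing, removes each embedding's component along a gender direction. A minimal sketch with toy 3-D "embeddings" (real embeddings are hundreds of dimensions, and the full method also removes a frequency direction before projecting):

```python
import numpy as np

def hard_debias(vectors, gender_direction):
    """Project out the gender direction from each embedding (the classic
    hard-debias step that the Double-Hard method extends)."""
    g = gender_direction / np.linalg.norm(gender_direction)
    return vectors - np.outer(vectors @ g, g)  # subtract each row's g-component

# Toy embeddings with the gender direction along the first axis.
words = np.array([[0.9, 0.2, 0.1],
                  [-0.7, 0.3, 0.5]])
g_dir = np.array([1.0, 0.0, 0.0])
debiased = hard_debias(words, g_dir)  # first coordinate becomes zero
```

After projection, every embedding is orthogonal to the gender direction while all other components are preserved.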

Teaching & Outreach

I have been incredibly fortunate to receive tremendous support from faculty mentors while at Princeton to pursue a research career in computer science. I am keen to use my position to help empower others to pursue careers in computer science. This section includes my teaching and outreach endeavors to help address the acute diversity crisis in this field.

project image Undergraduate Course Assistant (UCA): Independent Work Seminar on AI for Engineering and Physics
Spring 2024
[website]

As a UCA, I hold office hours for students in the seminar, helping them debug their code and advising them on their semester-long independent work projects.

project image Instructor: Princeton AI4ALL
Summer 2022
[website]

As an instructor for this ML summer camp for high school students from underrepresented backgrounds, I developed workshops and Colab tutorials leading up to an NLP-based capstone project on the potential for harm perpetuated by large language models. I also organized guest speaker talks to expose students to diverse applications of ML in non-traditional fields.


Source code adapted from Leonid Keselman's Jekyll fork of Jon Barron's public academic website.
Updated May 2024.