Tanushree Banerjee

Hi there! 👋 I'm a research assistant at the Princeton Computational Imaging Lab working on explainable 3D perception via inverse generation under Prof. Felix Heide.

I recently graduated from Princeton University with a BSE in Computer Science. I conducted my senior thesis research under Prof. Felix Heide, for which I received the Outstanding Computer Science Senior Thesis Prize.

Earlier during my undergraduate studies at Princeton, I worked under Prof. Karthik Narasimhan in the Princeton Natural Language Processing Group and Prof. Olga Russakovsky in the Princeton Visual AI Lab.

Email  |  CV  |  GitHub  |  LinkedIn  |  Google Scholar


Research

My research lies at the intersection of computer vision, computer graphics, machine learning, and optimization, focusing on explainable 3D perception via inverse generation. Some research questions that inspire my current work:

  • Can we leverage priors learned in generative models to interpret 3D information from everyday 2D videos and photographs?
  • How can we reformulate visual perception as an inverse generation problem?

* indicates equal contribution. Representative projects are highlighted.

Inverse Neural Rendering for Explainable Multi-Object Tracking
Julian Ost*, Tanushree Banerjee*, Mario Bijelic, Felix Heide
arXiv preprint (under review), 2024
[project page] [paper] [supplement] [arXiv]

We propose to recast 3D multi-object tracking from RGB cameras as an Inverse Rendering (IR) problem: we optimize, via a differentiable rendering pipeline, over the latent space of pre-trained 3D object representations to retrieve the latents that best represent the object instances in a given input image. Our method is not only an alternate take on tracking; it also enables examining the generated objects, reasoning about failure situations, and resolving ambiguous cases.
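
For intuition, here is a minimal sketch of the test-time latent optimization at the heart of this formulation, with a toy PyTorch decoder standing in for the pre-trained 3D object generator and a plain photometric loss standing in for the full differentiable rendering pipeline (all names are illustrative, not our released code):

```python
# Minimal sketch: test-time latent optimization for inverse rendering.
# `Decoder` is a toy stand-in for a pre-trained generative object model;
# the actual method renders differentiably w.r.t. 3D object latents.
import torch

class Decoder(torch.nn.Module):
    """Toy stand-in: maps a latent code to a rendered RGB crop."""
    def __init__(self, latent_dim=64, image_size=32):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(latent_dim, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, 3 * image_size * image_size),
        )
        self.image_size = image_size

    def forward(self, z):
        out = self.net(z)
        return out.view(-1, 3, self.image_size, self.image_size)

def fit_latent(decoder, observed, steps=200, lr=1e-2):
    """Retrieve the latent whose rendering best explains the observation."""
    z = torch.zeros(1, 64, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(decoder(z), observed)
        loss.backward()
        opt.step()
    return z.detach()

decoder = Decoder()
observed = torch.rand(1, 3, 32, 32)  # e.g., a detected object crop
z_star = fit_latent(decoder, observed)
```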

LLMs are Superior Feedback Providers: Bootstrapping Reasoning for Lie Detection with Self-Generated Feedback
Tanushree Banerjee, Richard Zhu, Runzhe Yang, Karthik Narasimhan
arXiv preprint, 2024
[paper] [arXiv] [slides]

We investigate a bootstrapping framework that leverages self-generated feedback to detect deception in Diplomacy games. We collect a novel dataset of human feedback on initial predictions and compare modification-stage performance when using human feedback versus LLM-generated feedback. Our LLM-generated feedback approach achieves superior performance, with a 39% improvement in lying-F1 over the zero-shot baseline, without any training required.
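
A schematic of the predict-feedback-refine loop (the `query_llm` stub and the prompts below are illustrative placeholders, not the prompts from the paper):

```python
# Sketch of the predict -> feedback -> refine loop. `query_llm` is a
# placeholder for any chat-completion API; prompts are illustrative.
def query_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API here")

def detect_lie(message: str) -> str:
    # Stage 1: initial zero-shot prediction with reasoning.
    prediction = query_llm(
        f"In this Diplomacy message, is the sender lying? "
        f"Explain your reasoning.\nMessage: {message}"
    )
    # Stage 2: the model critiques its own reasoning (self-generated
    # feedback, used in place of costly human feedback).
    feedback = query_llm(
        f"Critique the reasoning below. Point out unsupported "
        f"assumptions or missed cues.\n{prediction}"
    )
    # Stage 3: revise the prediction in light of the feedback.
    return query_llm(
        f"Original answer:\n{prediction}\n\nFeedback:\n{feedback}\n\n"
        f"Give a revised final answer: LIE or TRUTH."
    )
```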

Selected Independent Work Reports

Earlier during my undergraduate studies at Princeton, I worked on bias in visual understanding pipelines under Prof. Olga Russakovsky in the Princeton Visual AI Lab.

Reducing Object Hallucination in Visual Question Answering
Tanushree Banerjee. Advisor: Olga Russakovsky
Independent Work Project, Spring 2023
[paper] [code]

This paper proposes several approaches to identify questions unrelated to an image, preventing object hallucination in VQA models. The best approach achieves a 40% improvement over the random baseline.
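
One simple way to flag unrelated questions, shown here purely for illustration (not necessarily the approach taken in the paper), is to threshold an image-text similarity score, e.g. from CLIP:

```python
# Illustrative relevance filter: score question-image similarity with
# CLIP and treat low-similarity questions as unrelated to the image.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def is_question_related(image: Image.Image, question: str,
                        threshold: float = 0.2) -> bool:
    inputs = processor(text=[question], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # Cosine similarity between image and text embeddings.
    sim = torch.nn.functional.cosine_similarity(
        out.image_embeds, out.text_embeds
    ).item()
    return sim >= threshold  # below threshold: likely unrelated
```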

Bias in Skin Lesion Classification
Tanushree Banerjee. Advisor: Olga Russakovsky
Independent Work Project, Spring 2022
[paper]

This paper analyzes how a skin lesion classification model is biased against skin tones underrepresented in its training dataset.
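
The underlying measurement is straightforward: compare performance across skin-tone groups. A schematic version (the record format here is made up for illustration):

```python
# Schematic subgroup evaluation: accuracy per skin-tone group reveals
# gaps for tones underrepresented in the training data.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (skin_tone, y_true, y_pred) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for skin_tone, y_true, y_pred in records:
        total[skin_tone] += 1
        correct[skin_tone] += int(y_true == y_pred)
    return {g: correct[g] / total[g] for g in total}

records = [(1, "benign", "benign"), (6, "malignant", "benign"),
           (6, "benign", "benign"), (1, "malignant", "malignant")]
print(accuracy_by_group(records))  # e.g., {1: 1.0, 6: 0.5}
```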


Selected Course Projects

These include unpublished research-related work as part of semester-long course projects during my undergraduate studies at Princeton.

[Re] METRA: Scalable Unsupervised RL with Metric-Aware Abstraction
Tanushree Banerjee*, Tao Zhong*, Tyler Benson*. Advisors: Benjamin Eysenbach, Mengdi Wang
Reinforcement Learning Course Project, Spring 2024
[paper] [code]

We reproduce the results of “METRA: Scalable Unsupervised RL with Metric-Aware Abstraction” (Park et al., 2024), which proposes a novel unsupervised RL objective that learns diverse, useful behaviors as well as a compact latent space that can be used to solve various downstream tasks in a zero-shot manner. Our reproduction study validates the paper's claims and provides additional figures to confirm them.
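
At its core, METRA trains a representation φ to maximize (φ(s′) − φ(s))ᵀz for a skill vector z while constraining ‖φ(s) − φ(s′)‖ ≤ 1 for temporally adjacent states. A condensed, approximate sketch of the representation loss (the dual variable is treated as a fixed scalar here; the paper updates it by dual gradient descent):

```python
# Condensed, approximate sketch of METRA's representation loss:
# maximize (phi(s') - phi(s)) . z while enforcing the temporal
# constraint ||phi(s) - phi(s')|| <= 1 via a multiplier `lam`.
import torch

def metra_phi_loss(phi, s, s_next, z, lam=30.0, eps=1e-3):
    d = phi(s_next) - phi(s)                 # displacement in latent space
    align = (d * z).sum(dim=-1).mean()       # alignment with skill z
    # Positive residual means the constraint holds; capping at eps keeps
    # the penalty from dominating once the constraint is satisfied.
    residual = 1.0 - (d ** 2).sum(dim=-1)
    penalty = lam * torch.clamp(residual, max=eps).mean()
    return -(align + penalty)                # minimized by the optimizer
```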

Counterfactual Analysis for Spoken Dialogue Summarization
Tanushree Banerjee*, Kiyosu Maeda*. Advisor: Sanjeev Arora
Fundamentals of Deep Learning Course Project, Fall 2023 (Graduate Course)
[paper] [code]

We conduct counterfactual analysis to understand how speaker diarization and speech recognition errors affect an LLM's summarization performance, automatically injecting each type of error into spoken dialogue transcripts.
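
A simplified sketch of the injection step, shown purely for illustration (the error models here are deliberately crude):

```python
# Illustrative counterfactual error injection into a spoken-dialogue
# transcript: flip speaker labels (diarization errors) or corrupt
# words (ASR errors) at a controlled rate before summarization.
import random

def inject_diarization_errors(turns, rate=0.1, speakers=("A", "B")):
    """turns: list of (speaker, utterance). Randomly reassign speakers."""
    noisy = []
    for speaker, text in turns:
        if random.random() < rate:
            speaker = random.choice([s for s in speakers if s != speaker])
        noisy.append((speaker, text))
    return noisy

def inject_asr_errors(turns, rate=0.1):
    """Delete random words to mimic recognition errors (simplified)."""
    noisy = []
    for speaker, text in turns:
        words = [w for w in text.split() if random.random() >= rate]
        noisy.append((speaker, " ".join(words)))
    return noisy

dialogue = [("A", "did you send the report"), ("B", "yes this morning")]
print(inject_diarization_errors(dialogue, rate=0.5))
```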

Towards Efficient Frame Sampling Strategies for Video Action Recognition
Tanushree Banerjee*, Ameya Vaidya*, Brian Lou*. Advisor: Olga Russakovsky
Computer Vision Course Project, Spring 2023
[paper] [code]

We propose and evaluate two dataset- and model-agnostic frame sampling strategies for computationally efficient video action recognition: one based on the norm of the optical flow of frames and the other based on the number of objects in frames.
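
For intuition, a sketch of the optical-flow variant using OpenCV's Farneback flow (parameters are illustrative):

```python
# Illustrative optical-flow-based frame sampling: keep the k frames
# whose dense optical flow has the largest mean magnitude.
import cv2
import numpy as np

def sample_frames_by_flow(video_path: str, k: int = 8):
    cap = cv2.VideoCapture(video_path)
    frames, scores, prev_gray = [], [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            flow = cv2.calcOpticalFlowFarneback(
                prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
            scores.append(np.linalg.norm(flow, axis=-1).mean())
            frames.append(frame)
        prev_gray = gray
    cap.release()
    top = np.argsort(scores)[-k:]            # highest-motion frames
    return [frames[i] for i in sorted(top)]  # restore temporal order
```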

What Makes In-Context Learning Work On Generative QA Tasks?
Tanushree Banerjee*, Simon Park*, Beiqi Zou*. Advisor: Danqi Chen
Understanding Large Language Models Course Project, Fall 2022 (Graduate Course)
[paper] [code]

We empirically analyze what aspects of the in-context demonstrations contribute to improvements in downstream task performance, extending the work of Min et al. (2022) to multiple-choice and classification tasks.
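
A minimal sketch of one such ablation, in the spirit of Min et al. (2022): build prompts whose demonstrations carry gold versus random labels, then compare downstream accuracy (the prompt format is illustrative):

```python
# Illustrative demonstration ablation: swap gold labels for random
# ones in the in-context examples and measure the effect on accuracy.
import random

def build_prompt(demos, query, label_space, use_gold=True):
    lines = []
    for question, gold in demos:
        label = gold if use_gold else random.choice(label_space)
        lines.append(f"Q: {question}\nA: {label}")
    lines.append(f"Q: {query}\nA:")
    return "\n\n".join(lines)

demos = [("Is water wet?", "yes"), ("Can pigs fly?", "no")]
print(build_prompt(demos, "Is fire cold?", ["yes", "no"], use_gold=False))
```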

[Re] Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation
Tanushree Banerjee*, Jessica Ereyi*, Kevin Castro*. Advisors: Danqi Chen, Karthik Narasimhan
Natural Language Processing Course Project, Spring 2022
[paper] [poster]

We reproduce the results of “Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation” (Wang et al., 2020) to reduce the gender bias present in pre-trained word embeddings. Additionally, we evaluate the proposed technique on Spanish GloVe embeddings to assess whether these debiasing methods generalize to non-English languages.
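
For intuition, a sketch of the basic hard-debias projection step that Double-Hard Debias builds on (seed pairs and dimensions are illustrative; the full method additionally removes frequency-related directions first):

```python
# Illustrative core of (hard) debiasing: remove each embedding's
# component along a gender direction estimated from seed word pairs.
import numpy as np

def gender_direction(emb, pairs=(("he", "she"), ("man", "woman"))):
    diffs = np.stack([emb[a] - emb[b] for a, b in pairs])
    d = diffs.mean(axis=0)
    return d / np.linalg.norm(d)

def debias(vec, direction):
    return vec - np.dot(vec, direction) * direction  # project out bias

emb = {w: np.random.randn(50) for w in ["he", "she", "man", "woman", "doctor"]}
d = gender_direction(emb)
emb["doctor"] = debias(emb["doctor"], d)
```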

Teaching & Outreach

I have been incredibly fortunate to receive tremendous support from faculty mentors at Princeton in pursuing a research career in computer science. I am keen to use my position to help empower others to pursue careers in computer science. This section lists my teaching and outreach efforts to help address the acute diversity crisis in this field.

Undergraduate Course Assistant (UCA): Independent Work Seminar on AI for Engineering and Physics
Spring 2024
[website]

As a UCA, I held office hours for students in the seminar, helping them debug their code and advising them on their semester-long independent work projects.

Instructor: Princeton AI4ALL
Summer 2022
[website]

As an instructor for this ML summer camp for high school students from underrepresented backgrounds, I developed workshops and Colab tutorials leading up to an NLP-based capstone project focusing on the potential for harm perpetuated by large language models. I also organized guest speaker talks to expose students to diverse applications of ML in non-traditional fields.


Design and source code from Leonid Keselman's Jekyll fork of Jon Barron's website.
Updated June 2024.