Tanushree Banerjee

I'm Tanushree (she/her), a research assistant at the Princeton Computational Imaging Lab working on explainable 3D perception via inverse generation under Prof. Felix Heide.

I recently graduated from Princeton University with a BSE in Computer Science. I conducted my senior thesis research under Prof. Felix Heide, for which I received the Outstanding Computer Science Senior Thesis Prize.

Earlier in my undergraduate studies at Princeton, I worked under Prof. Karthik Narasimhan in the Princeton Natural Language Processing Group and Prof. Olga Russakovsky in the Princeton Visual AI Lab.

Research

My research lies at the intersection of computer vision, computer graphics, machine learning, and optimization, focusing on explainable 3D perception via inverse generation.

Some research questions that inspire my current work:

  • Can we leverage priors learned in generative models to interpret 3D information from everyday 2D videos and photographs?
  • How can we reformulate visual perception as an inverse generation problem?

* indicates equal contribution. Representative projects are highlighted.

Inverse Neural Rendering for Explainable Multi-Object Tracking

Julian Ost*, Tanushree Banerjee*, Mario Bijelic, Felix Heide
* denotes equal contribution

arXiv preprint (under review), 2024
3D Multi-Object Tracking · Explainability · Inverse Rendering

We recast 3D multi-object tracking from RGB cameras as an Inverse Rendering (IR) problem: by optimizing over the latent space of pre-trained 3D object representations through a differentiable rendering pipeline, we retrieve the latents that best represent the object instances in a given input image. Our method is not only an alternative take on tracking; it enables examining the generated objects, reasoning about failure situations, and resolving ambiguous cases.
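
The core idea can be pictured as test-time optimization through a differentiable renderer. Below is a minimal, hypothetical sketch in PyTorch, assuming a differentiable `render_fn` and a simple L1 photometric loss; the names and the loss are illustrative assumptions, not the paper's actual pipeline.

```python
import torch
import torch.nn.functional as F

def fit_object_latent(render_fn, target_crop, latent_dim=256, n_steps=200, lr=1e-2):
    """Recover the latent code and pose that best explain an image crop.

    render_fn(z, pose) -> rendered image tensor (must be differentiable);
    both the signature and the latent/pose sizes are illustrative.
    """
    z = torch.zeros(latent_dim, requires_grad=True)   # object latent code
    pose = torch.zeros(6, requires_grad=True)         # e.g. translation + rotation
    opt = torch.optim.Adam([z, pose], lr=lr)

    for _ in range(n_steps):
        opt.zero_grad()
        rendered = render_fn(z, pose)                  # differentiable rendering
        loss = F.l1_loss(rendered, target_crop)        # photometric fitting loss
        loss.backward()                                # gradients flow through the renderer
        opt.step()

    return z.detach(), pose.detach()
```

The recovered latents and poses are interpretable by construction: rendering them back out lets one inspect what the tracker "believes" about each object.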

LLMs are Superior Feedback Providers: Bootstrapping Reasoning for Lie Detection with Self-Generated Feedback

Tanushree Banerjee, Richard Zhu, Runzhe Yang, Karthik Narasimhan

arXiv preprint, 2024
LLM Self-Refinement · Human-in-the-Loop ML

We investigate a bootstrapping framework that leverages self-generated feedback to detect deception in Diplomacy games. We collect a novel dataset of human feedback on the model's initial predictions and compare performance at the modification stage when using human feedback versus LLM-generated feedback. The LLM-generated feedback approach achieves superior performance, improving lying-F1 by 39% over the zero-shot baseline without requiring any training.
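
The predict, critique, and revise loop can be sketched as follows. This is a hypothetical illustration: `query_llm` is a placeholder for any chat-completion call, and the prompts are illustrative rather than the ones used in the paper.

```python
def query_llm(prompt: str) -> str:
    # Placeholder: plug in your LLM API of choice.
    raise NotImplementedError

def detect_lie_with_feedback(message: str, game_context: str) -> str:
    # Stage 1: zero-shot prediction with reasoning.
    prediction = query_llm(
        f"Context: {game_context}\nMessage: {message}\n"
        "Is the sender lying? Answer TRUTH or LIE and explain your reasoning."
    )
    # Stage 2: the model critiques its own prediction (self-generated feedback).
    feedback = query_llm(
        f"Here is a lie-detection prediction and its reasoning:\n{prediction}\n"
        "Point out flaws or overlooked evidence in this reasoning."
    )
    # Stage 3: modification step conditioned on the feedback.
    revised = query_llm(
        f"Original prediction:\n{prediction}\nFeedback:\n{feedback}\n"
        "Revise the prediction. Answer TRUTH or LIE with a brief justification."
    )
    return revised
```

In the human-in-the-loop variant, the Stage 2 critique is replaced by feedback collected from human annotators, which is the comparison the paper studies.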

Teaching & Outreach

I have been incredibly fortunate to receive tremendous support from faculty mentors at Princeton in pursuing a research career in computer science, and I am keen to use my position to help empower others to do the same. This section lists my teaching and outreach efforts to help address the acute diversity crisis in this field.

Undergraduate Course Assistant (UCA): Independent Work Seminar on AI for Engineering and Physics, Spring 2024

As a UCA, I held office hours for students in the seminar, helping them debug their code and advising them on their semester-long independent work projects.

Instructor: Princeton AI4ALL, Summer 2022

As an instructor for this ML summer camp for high school students from underrepresented backgrounds, I developed workshops and Colab tutorials leading up to an NLP-based capstone project focused on the potential harms perpetuated by large language models. I also organized guest speaker talks to expose students to diverse applications of ML in non-traditional fields.