About Me

I am a Research Scientist at Google Research in Atlanta. I recently completed my PhD in Computer Science at the Georgia Institute of Technology, where I was advised by James M. Rehg. At Georgia Tech I also had the pleasure of working closely with Dhruv Batra and Devi Parikh, and collaborating with Peter Anderson and Stefan Lee. My research focuses on multi-modal modeling of vision and natural language for applications in artificial intelligence. My long-term goal is to develop multi-modal systems capable of supporting robotic or AR assistants that can seamlessly interact with humans. My current work revolves around training embodied agents (in simulation) to perform complex semantic grounding tasks.

In the summer of 2020, I was an intern at FAIR working with Abhinav Gupta. In the summer of 2019, I was a research intern at Facebook Reality Labs (FRL) working with James Hillis and Dhruv Batra. In the summer of 2018, I was a research intern at NEC Labs working with Asim Kadav and Hans Peter Graf.

As an undergraduate at Emory University, I worked in a Natural Language Processing lab for two years under Dr. Jinho Choi. I also spent a summer working with Dr. Mubarak Shah at the University of Central Florida.


Education

  • Ph.D. in Computer Science, Georgia Institute of Technology, July 2022 (Presidential PhD Fellowship, 2016-2020)
  • B.S. in Computer Science and Mathematics, Emory University, 2016

Publications

Which way is 'right'?: Uncovering Limitations of Vision-and-Language Navigation Models

Meera Hahn and James M. Rehg.
Submitted to AAMAS 2023 [Paper]

Transformer-based Localization from Embodied Dialog with Large-scale Pre-training

Meera Hahn and James M. Rehg.
AACL 2022 [Paper] [Video] [Code]

No RL, No Simulation: Learning to Navigate without Navigating

Meera Hahn, Devendra Chaplot, Shubham Tulsiani, Mustafa Mukadam, James M. Rehg, and Abhinav Gupta.
NeurIPS 2021 [Paper] [Website] [Data] [Code]

Where Are You? Localization from Embodied Dialog

Meera Hahn, Jacob Krantz, Dhruv Batra, Devi Parikh, James M. Rehg, Stefan Lee, and Peter Anderson.
EMNLP 2020 [Paper] [Website] [Data] [Code] [Leaderboard]

Learning a Visually Grounded Memory Assistant

Meera Hahn, Kevin Carlberg, Ruta Desai, and James Hillis.
arXiv 2020 [Paper] [Code]

Tripping through time: Efficient Localization of Activities in Videos

Meera Hahn, Asim Kadav, James M. Rehg, and Hans Peter Graf.
BMVC 2020 [Paper]

Action2Vec: A Crossmodal Embedding Approach to Action Learning

Meera Hahn, Andrew Silva, and James M. Rehg.
CVPR-W 2019 [Paper]

Localizing and Aligning Fine-Grained Actions to Sparse Instructions

Meera Hahn, Nataniel Ruiz, Jean-Baptiste Alayrac, Ivan Laptev, and James M. Rehg.
CVPR-W 2018 [Paper]

Situated Bayesian Reasoning Framework for Robots Operating in Diverse Everyday Environments

Sonia Chernova, Vivian Chu, Angel Daruna, Haley Garrison, Meera Hahn, Priyanka Khante, Weiyu Liu, and Andrea Thomaz.
Robotics Research 2020, AAAI 2018 [Paper]

Deep Tracking: Visual Tracking Using Deep Convolutional Networks

Meera Hahn, Si Chen, and Afshin Dehghan.
arXiv 2015 [Paper]

Advances in Methods and Evaluations for Distributional Semantic Models

Meera Hahn and Jinho Choi.
Emory Undergraduate Thesis 2016 [Paper]

Talks

No RL, No Simulation: Learning to Navigate without Navigating

NeurIPS 2021


Where Are You? Localization from Embodied Dialog

EMNLP 2020



Teaching

  • Spring 2020: CS 8803 Machine Learning with Limited Supervision, TA [Site]
  • Fall 2019: CS 6476 Computer Vision, Head TA [Site]