About Me
I am currently a Principal Research Scientist at Intel Labs.
Broadly, my research focuses on developing systems and AI that collect and reason about complex multimodal sensor data. Results from my research have been published in a number of top venues, including IMWUT, AAAI, UbiComp, MobiSys, ISWC, and SenSys. According to Google Scholar, my work has garnered more than 11,100 citations, and my h-index is 30.
With more than 10 years of experience, I have expertise spanning machine/deep learning, mobile computing, mobile sensing, context awareness, and AI for manufacturing.
My previous assignments at Intel involved developing sensing systems and machine learning techniques to reason about human context, activities, and social interactions in real-world environments.
My work has been productized in various Intel commercial products and deployed in Intel manufacturing facilities.
I received my Ph.D. from the Department of Computer Science at Dartmouth College, under the guidance of Prof. Andrew T. Campbell and Prof. Tanzeem Choudhury.
As part of my doctoral work, I created the SoundSense system, the first ubiquitous sound understanding system for smartphones.
StressSense, which uses cell phone audio to detect stress from the human voice, won the 2022 ACM UbiComp 10-year impact award.
Currently, I serve as Associate Editor for IMWUT.
I have served in various roles in the academic community, including Industrial Relationship Chair of UbiComp 2021, Registrations Chair of MobiSys 2017, Posters/Demos Chair of MobiSys 2014, PC member of IPSN 2017, UbiComp 2015, IEEE MASS 2013, Nokia Mobile Data Challenge (MDC), and MobiSense 2011.
Press & News
  StressSense won the 2022 ACM UbiComp 10-year impact award: "By convincingly showing how smartphone microphones could be used to unobtrusively recognize stress from the user's voice, this work paved the way for other numerous efforts in the area of stress detection from sensory data, a topic that, 10 years later, is still very relevant for both academia and industry".
  CenceMe received the 2019 ACM SIGMOBILE Test of Time Award for "inspiring a huge body of research and commercial endeavors that has continued to increase the breadth and depth of mobile sensing".
  CenceMe recognized for "pioneering machine learning across mobile phones and servers" with the 2018 ACM SenSys Test of Time Award.
  Your phone can recognize you by the way you walk, September 2013.
  StressSense is highlighted in The Economist, Microphones as sensors: Teaching old microphones new tricks, June 2013.
  StressSense is featured in the New Scientist, Smartphone that feels your strain, August 2012.
  StressSense and BeWell are covered in The Atlantic Cities article, 3 Next-Gen Apps for the Stressed-Out Urbanite, August 2012.
  Voice-Stress Software Is Put to the Test, on PhysOrg and ACM Tech, August 2012.
  Our work is featured in the New York Times Magazine article The Little Voice in Your Head, January 2012.
  Fast Company's Co.Exist reports on our work on mobile phone stress detection and the BeWell app: Get Some Therapy From An App That Reads Your Feelings Through Your Voice, November 2011.
  NeuroPhone is featured as part of the cover story on The next step in bionics, CBS News Sunday Morning, October 2011.
  NeuroPhone is featured as part of the New York Times Magazine article on The Cyborg in Us All, September 2011.
  Smartphone app monitors your every move, ACM TechNews, December 2010.
  Nokia toys with context-aware smartphone settings switch, an Engadget front-page story noting that Jigsaw provides better context for apps like this, November 2010.
  The Jigsaw continuous sensing engine is covered by the New Scientist, Smartphone app monitors your every move, November 2010.
  Write-up about the NeuroPhone project: Mobile Phone Mind Control, MIT Technology Review, April 2010.
  SoundSense featured in the SIGMOBILE Annual Report 2009.
  SoundSense featured on the Slashdot front page, Cell Phones That Learn the Sounds of Your Life, July 2009.
  SoundSense featured in MIT Technology Review, Cell phones that listen and learn, June 2009.