My research lies at the intersection of software engineering, big data, natural language processing, and human-machine interaction. I co-direct the Terascale Allsensing Research Studio (TARS) at Clarkson University with Dr. Natasha Banerjee. I am specifically interested in the following:
Understanding Human Behavior in Software Repositories
Open source projects, such as Mozilla, Red Hat, and Eclipse, maintain repositories where users and developers report failures they observe in the software. As software engineers, we can address questions such as: "Who should fix this problem?", "Is this problem new or a duplicate of an existing report?", and "Which existing report best describes this problem?" My own prior research demonstrated that there is no "silver bullet" approach to solving any of these problems. I am interested in focusing on the human users and extracting the social dynamics of repositories to understand how users, software, and the repositories themselves evolve over time. By understanding how humans interact with repositories, we can not only help users write informative problem reports but also help triagers process each new report efficiently.
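To make the duplicate-report question concrete, here is a minimal sketch of one common baseline: scoring a new report against existing ones with bag-of-words cosine similarity. This is an illustration only, not a method claimed in my research (which, as noted above, found no single approach to be a silver bullet); the report texts and tokenization are hypothetical simplifications.

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_similar_report(new_report, existing_reports):
    """Return (index, score) of the existing report most similar to the new one."""
    new_vec = Counter(new_report.lower().split())
    scored = [
        (i, cosine_similarity(new_vec, Counter(r.lower().split())))
        for i, r in enumerate(existing_reports)
    ]
    return max(scored, key=lambda s: s[1])

# Hypothetical example reports, for illustration only.
reports = [
    "browser crashes when opening a pdf attachment",
    "toolbar icons render incorrectly after update",
]
idx, score = most_similar_report("crash on opening pdf files", reports)
```

A triager could use such a score to rank candidate duplicates, though real systems combine textual similarity with metadata (component, version, reporter history) and more robust text representations.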
Visualizing Software Repositories
It is said that a picture is worth a thousand words, yet the average problem report in a repository contains only several hundred words. Moreover, a user's experience and native language can greatly affect how well they communicate an observed failure. Images, on the other hand, are universal: while I may not know the French word for "computer," I can show a person a picture of one and be understood. Can we extend this philosophy to software repositories?
Augmenting Behavioral Biometrics for Identification and Recognition
Mobile devices are ubiquitous. Behavioral biometrics, such as keystroke and gesture dynamics, can be used to differentiate between genuine and impostor users. Our prior work demonstrated that habituation plays a key role in classifier performance. I am interested in augmenting behavioral biometrics with other soft biometrics, such as posture, gait, and writing style.
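As a simplified illustration of keystroke dynamics, the sketch below enrolls a user from inter-key timing latencies and flags probes that stray too far from the enrolled template. All numbers and the distance threshold are hypothetical, and this template-matching scheme is a teaching simplification, not the classifiers from our prior work, which also must account for effects like habituation.

```python
import math

def digraph_latencies(timestamps):
    """Inter-key latencies (ms) from a sequence of key-press timestamps."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def enroll(samples):
    """Build a user template: mean latency at each position across samples."""
    n = len(samples[0])
    return [sum(s[i] for s in samples) / len(samples) for i in range(n)]

def distance(template, probe):
    """Euclidean distance between a probe and the template (lower = more genuine-like)."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(template, probe)))

def is_genuine(template, probe, threshold=50.0):
    # Threshold is hypothetical; real systems tune it per user from error rates.
    return distance(template, probe) <= threshold

# Hypothetical enrollment: two typings of the same short phrase.
samples = [
    digraph_latencies([0, 100, 220, 330]),
    digraph_latencies([0, 104, 222, 334]),
]
template = enroll(samples)
genuine = is_genuine(template, [101, 121, 109])   # timing close to the template
impostor = is_genuine(template, [180, 60, 200])   # timing far from the template
```

In practice, richer features (key hold times, pressure, gesture trajectories) and trained classifiers replace this fixed-threshold distance check, and soft biometrics such as posture or gait could supply additional evidence.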
Large Scale Multi-Sensor Multi-Modal Capture Systems
I am interested in designing multi-sensor, multi-modal systems to facilitate interdisciplinary work in understanding human behavior and interactions: for example, the effect of age and gender on cooperation versus competition, or the physical effects of task habituation (e.g., in keystroke dynamics).