Grace Hopper Annual Conference Schedule


CRA-W DREU Posters

October 14 6:30 pm-9:00 pm
Halls A-C Level One GRBCC
TRACK: Posters
An Improved Algorithm for a Relaxed Queue
6:30 PM - 9:00 PM

Cloud computing has increased interest in implementing shared data objects in message-passing distributed environments. An important feature of these systems is linearizability, which can be an expensive consistency condition to implement. Previously, linearizable message-passing implementations of relaxed queues have been proposed and proven to improve the average time complexity over that of a standard FIFO queue. This work expands on that idea, presenting an algorithm that implements a k-relaxed lateness queue in a message-passing system of n processes and tightens the upper bound on the average time complexity of operations in such a system.
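To make the relaxation concrete: in a k-relaxed queue, a dequeue is allowed to return any of the k oldest elements rather than strictly the oldest, which is what creates room for cheaper distributed implementations. The sketch below is a minimal sequential illustration of that semantics only; it is not the poster's message-passing algorithm, and the class and method names are our own.

```python
import random
from collections import deque


class KRelaxedQueue:
    """Sequential sketch of k-relaxed queue semantics: dequeue may
    return any of the k oldest elements (chosen at random here).
    Illustrative only -- not the distributed algorithm in the poster."""

    def __init__(self, k):
        self.k = k
        self.items = deque()

    def enqueue(self, x):
        self.items.append(x)

    def dequeue(self):
        if not self.items:
            raise IndexError("dequeue from empty queue")
        # Any of the (up to) k oldest elements is a legal return value.
        window = min(self.k, len(self.items))
        i = random.randrange(window)
        x = self.items[i]
        del self.items[i]
        return x
```

With k = 1 this degenerates to a standard FIFO queue; larger k trades ordering strictness for implementation flexibility.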

Identifying Patterns of Conformity on a Social Network for Children
6:30 PM - 9:00 PM

Cyberbullying can be greatly affected by groupthink. To study conformity effects in young children, we asked users on a children's social network to complete details of a story via sketching. By varying exposure, we found that access to peers' responses and identities revealed conformity factors influenced by social relationships. This work's analysis of children's patterns of online conformity will inform and inspire the development of algorithms to identify cyberbullying activity and make the Internet safer for children.

Engaging in Identity Exploration and Retaining Participation on an Online Social Network for Children
6:30 PM - 9:00 PM

Online social networks experience difficulty retaining an audience. We explored several activities on a children's social network site in order to gain and sustain longitudinal participation. The children in our social network are at the age of identity exploration (7-12); as such, we hypothesized that including personality quizzes would increase our retention rate. Thus far, children have responded positively to the quizzes, making remarks consistent with identity exploration, and both retention and activity have increased significantly.

Using Synthesized Speech to Improve Speech Recognition for Low-Resource Languages
6:30 PM - 9:00 PM

Building good automatic speech recognizers (ASRs) for low-resource languages (LRLs) such as Zulu or Amharic is difficult due to the limited transcribed training data available for creating acoustic models. We explore data augmentation by synthesizing web data using Text-to-Speech (TTS) synthesis to produce additional transcribed speech. Pilot studies on American English show that even small amounts of synthetic speech improve the ASR's word error rate by up to 0.93 absolute percentage points. We analyze differences in ASR performance by varying the amount of synthetic speech, the genres of web data, and the strategies for creating the TTS system.
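The word error rate used as the evaluation metric above is the standard word-level edit distance between a reference transcript and the recognizer's hypothesis, normalized by the reference length. As a minimal sketch (our own function name, not the poster's evaluation code):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference words,
    computed with a standard Levenshtein dynamic program over words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)
```

An "absolute percentage point" improvement is a direct subtraction on this quantity, e.g. moving from 20.0% to 19.07% WER is a 0.93-point absolute gain.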

Automatic Voice Activity Detection in the Multilingual UTEP-ICT Cross-Cultural Multi-party Multi-modal Dialog Corpus
6:30 PM - 9:00 PM

We present a novel approach to speech identification in group interactions with individual lapel microphones, extending Moattar and Homayounpour’s algorithm (2009) to identify individual speech even when background speakers are present. Using this approach, we annotated speech across the three language groups of the UTEP-ICT Corpus. Comparisons with human-coded ground truth using Cohen’s Kappa for inter-rater reliability resulted in a mean score of .77 (s=.23). This method enables individual speech detection in multi-party interactions without directional microphones.
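The Cohen's Kappa agreement score reported above corrects raw agreement between the automatic annotations and the human-coded ground truth for agreement expected by chance. A minimal two-rater sketch (our own function, not the corpus tooling):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    p_chance = sum((rater_a.count(lab) / n) * (rater_b.count(lab) / n)
                   for lab in labels)
    return (p_observed - p_chance) / (1 - p_chance)
```

A kappa of .77 is generally read as substantial agreement; 1.0 is perfect agreement and 0 is chance-level.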

BrainTrack: Concussion Monitoring and Recovery
6:30 PM - 9:00 PM

We are exploring the use of wearable technology to improve the recovery of children with concussions. Our formative work reveals issues of patient adherence to recovery regimens and communication with healthcare providers. To address these issues, we use sensors in the Microsoft Band to monitor patients' physical activity, and we offer instructive therapy through our mobile application. Future work will include working closely with the local children's hospital and testing our prototype in a clinical setting.

Computer Graphics for Connecting Facial Motion to Emotional Intent
6:30 PM - 9:00 PM

Despite advances in computer graphics, generating realistic, expressive, real-time facial animations remains a challenging problem. This is partly because there is little work analyzing how dynamic elements of facial motion connect to emotional intent. Here, we present a new approach for the creation and analysis of synthetic facial expressions directed at understanding the emotional content of various expressions. We discuss how the results can help improve children's emotional processing and assist in developing new facial reconstruction techniques that maximize functional expressiveness post-surgery.

TTS and Data Selection: Improving Systems for Low-Resource Languages
6:30 PM - 9:00 PM

We discuss our work on applying data selection techniques to improve text-to-speech systems for low-resource languages (LRLs). We hypothesize that quality synthetic voices can be created using subsets of a larger dataset as training material. To evaluate the intelligibility and naturalness of the synthesized voices, we use Amazon's Mechanical Turk. By comparing our subset voices with voices trained on all available data, we are able to identify key acoustic and prosodic features that produce quality synthetic speech.

Exploring Visuo-Haptic Illusions Using Virtual/Augmented Reality
6:30 PM - 9:00 PM

Haptic perception is a key factor in making virtual environments more immersive and useful. Our research explores the limits of visual dominance in multisensory virtual reality environments equipped with passive haptic props. In a between-subjects experiment, we ask participants to make judgments about the lengths of surreptitiously resized wooden blocks that they can see themselves touching, under three different conditions of bodily self-representation (invisible, generic computer-modeled avatar, or video-see-through self-avatar). We test the hypothesis that, with increasingly realistic self-representation, people will be more likely to believe that the size of the block they see accurately represents the size of the block they feel.