UCI Study Links Selfies, Happiness

Regularly snapping selfies with your smartphone and sharing photos with your friends can help make you a happier person, according to computer scientists at the University of California, Irvine. In a first-of-its-kind study published just before back-to-school season, the authors found that students can combat the blues with some simple, deliberate actions on their mobile devices.

By conducting exercises via smartphone photo technology and gauging users’ psychological and emotional states, the researchers found that the daily taking and sharing of certain types of images can positively affect people. The results of the study, out of UCI’s Donald Bren School of Information & Computer Sciences, were published recently in the journal Psychology of Well-Being.

“Our research showed that practicing exercises that can promote happiness via smartphone picture taking and sharing can lead to increased positive feelings for those who engage in it,” said lead author Yu Chen, a postdoctoral scholar in UCI’s Department of Informatics. “This is particularly useful information for returning college students to be aware of, since they face many sources of pressure.”

These stressors – financial difficulties, being away from home for the first time, feelings of loneliness and isolation, and the rigors of coursework – can negatively impact students’ academic performance and lead to depression.

“The good news is that despite their susceptibility to strain, most college students constantly carry around a mobile device, which can be used for stress relief,” Chen said. “Added to that are many applications and social media tools that make it easy to produce and send images.”

The goal of the study, she said, was to help researchers understand the effects of photo taking on well-being in three areas: self-perception, in which people manipulated positive facial expressions; self-efficacy, in which they did things to make themselves happy; and pro-social, in which people did things to make others happy.

Chen and her colleagues designed and conducted a four-week study involving 41 college students. The subjects – 28 female and 13 male – were instructed to continue their normal day-to-day activities (going to class, doing schoolwork, meeting with friends, etc.) while taking part in the research.

But first, each participant was invited to the informatics lab for an informal interview and to fill out a general questionnaire and consent form. The scientists helped students load a survey app onto their phones to document their moods during the first “control” week of the study. Participants used a different app to take photos and record their emotional states over the following three-week “intervention” phase.

Subjects reported their moods three times a day using the smartphone apps. In evening surveys, they were asked to provide details of any significant events that may have affected their emotions during the course of the day.

The project involved three types of photos to help the researchers determine how smiling, reflecting and giving to others might impact users’ moods. The first was a selfie, to be taken daily while smiling. The second was an image of something that made the photo taker happy. The third was a picture of something the photographer believed would bring happiness to another person (which was then sent to that person). Participants were randomly assigned to take photos of one type.

Researchers collected nearly 2,900 mood measurements during the study and found that subjects in all three groups experienced increased positive moods. Some participants in the selfie group reported becoming more confident and comfortable with their smiling photos over time. The students taking photos of objects that made them happy became more reflective and appreciative. And those who took photos to make others happy became calmer and said that the connection to their friends and family helped relieve stress.

“You see a lot of reports in the media about the negative impacts of technology use, and we look very carefully at these issues here at UCI,” said senior author Gloria Mark, a professor of informatics. “But there have been expanded efforts over the past decade to study what’s become known as ‘positive computing,’ and I think this study shows that sometimes our gadgets can offer benefits to users.”



MRI Scanner Sees Emotions Flickering Across An Idle Mind

As you relax and let your mind drift aimlessly, you might remember a pleasant vacation, an angry confrontation in traffic or maybe the loss of a loved one.

And now researchers at Duke University say they can see those various emotional states flickering across the human brain.

“It’s getting to be a bit like mind-reading,” said Kevin LaBar, a professor of psychology and neuroscience at Duke. “Earlier studies have shown that functional MRI can identify whether a person is thinking about a face or a house. Our study is the first to show that specific emotions like fear and anger can be decoded from these scans as well.”

The data produced by a functional MRI hasn’t changed, but the group is applying new multivariate statistics to the scans of brain activity to see different emotions as networks of activity distributed across areas of the conscious and unconscious brain.

These networks were first mapped by the team in a March 2015 paper in the journal Social Cognitive and Affective Neuroscience. They identified seven different patterns of brain activity reflecting contentment, amusement, surprise, fear, anger, sadness and neutrality.

To build these maps, they had put 32 research subjects into the scanner and exposed them to two music clips and two film clips that had been shown to induce each of the seven emotions. The subjects also completed self-report questionnaires on their mood states for further validation.

A machine-learning algorithm was then presented with a subset of the subjects’ data and tasked with finding the pattern of brain activity that corresponded to each emotional stimulus. Having learned what each of the seven states ought to look like, the algorithm was then given the scans of the rest of the study group and asked to identify their emotional states without knowing which emotion prompt each subject had received.
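The release does not describe the model itself, but the workflow is a standard train-then-decode pattern. The sketch below is a minimal illustration of that idea, with logistic regression standing in for whatever multivariate classifier the team actually used, and with all data, labels and array sizes invented for the example.

```python
# Minimal train-then-decode sketch. Placeholder data only: logistic regression
# stands in for the team's actual multivariate model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels = 500
emotions = ["contentment", "amusement", "surprise", "fear",
            "anger", "sadness", "neutral"]

# Training scans: one row per scan, one column per voxel (toy data).
X_train = rng.normal(size=(280, n_voxels))
y_train = rng.choice(emotions, size=280)      # emotion induced by the stimulus

# Held-out subjects: the classifier never sees their labels.
X_test = rng.normal(size=(70, n_voxels))

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
predicted = clf.predict(X_test)               # decoded emotional state per scan
print(predicted[:5])
```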

LaBar said the model performed better than chance at this task, despite differences in brain shapes and arousal levels between subjects. “And it proved fairly sensitive,” he said.

The latest study, appearing Sept. 14 in PLOS Biology, followed up by scanning 21 subjects who were not shown any stimuli but were encouraged to let their minds wander. Every 30 seconds, they responded to a questionnaire about their current emotional state.

“We tested whether these seven brain maps of emotions occurred spontaneously while participants were resting in the fMRI scanner without any emotional stimuli being presented,” LaBar said.

Data for the whole brain was collected every 2 seconds, and each of these individual scans was compared to the seven patterns. The team examined the scanner data for the 10 seconds preceding each self-report of mood and found that the algorithm accurately predicted the moods the subjects reported.
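One simple way to picture that comparison step is as a spatial correlation: score each 2-second scan against each of the seven emotion maps, then average the scores over the five scans preceding a self-report. The snippet below is an illustrative reconstruction on toy arrays, not the study’s actual scoring procedure.

```python
# Illustrative reconstruction: correlate each 2-second scan with each emotion
# map and average over the 10-second window before a mood self-report.
import numpy as np

def emotion_scores(window_scans, emotion_maps):
    """window_scans: (n_scans, n_voxels) array; emotion_maps: dict of name -> (n_voxels,) array."""
    scores = {}
    for name, pattern in emotion_maps.items():
        # Pearson correlation between each scan and the emotion map, averaged.
        r = [np.corrcoef(scan, pattern)[0, 1] for scan in window_scans]
        scores[name] = float(np.mean(r))
    return scores

# Toy data standing in for real fMRI volumes and the seven published maps.
rng = np.random.default_rng(1)
names = ["contentment", "amusement", "surprise", "fear", "anger", "sadness", "neutral"]
maps = {name: rng.normal(size=500) for name in names}
window = rng.normal(size=(5, 500))            # five scans = the 10 s before a self-report

scores = emotion_scores(window, maps)
best = max(scores, key=scores.get)            # predicted mood for this window
print(best, round(scores[best], 3))
```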

LaBar said another source of validation is the indication that there’s a significant signal of anxiety at the beginning of each subject’s data as they enter the confined, noisy MRI scanner for the first time. “That’s what you’d expect to see for most people when they first enter the machine.”

In a second group of 499 subjects being scanned for the Duke Neurogenetics Study, the researchers had them rest in the scanner for nearly 9 minutes, and then asked them how depressed and anxious they felt after the scanning session. “We found that the cumulative presence of our ‘sad’ emotion map, summed over time, predicted their depression scores, and the cumulative presence of our ‘fear’ emotion map predicted their anxiety scores,” LaBar said.

This larger group was also tested for personality measures of depression, anxiety, and angry hostility. Again, the ‘sad’ and ‘fear’ maps closely tracked the depression and anxiety measures. “We also showed that the cumulative presence of our ‘angry’ emotion map predicted individuals’ angry hostility traits,” LaBar said.

Aside from serving as an interesting proof of concept, these new maps of emotional states could, LaBar thinks, be useful in studying people who have poor insight into their emotional status, and might be used in clinical trials to test the effectiveness of treatments to regulate emotions.

LaBar says their conclusions about characteristic networks of brain areas governing emotional states also challenge prevailing theories about how emotions are formed. He adds that a Finnish team led by Lauri Nummenmaa has made similar maps of emotional networks from their brain scanning studies.

In further research, the Duke group will be pursuing a better understanding of the timing of emotional states and the transitions between them, as these may be relevant for understanding affective disorders.

The PLOS Biology study was funded by NIH grants R21 MH098149 to Kevin LaBar and R01 DA033369 to Ahmad Hariri. Philip Kragel, who was a graduate student at the time, was supported by an E. Bayard Halsted Fellowship from Duke.

CITATIONS: “Multivariate Neural Biomarkers of Emotional States Are Categorically Distinct,” Philip Kragel, Kevin LaBar. Social Cognitive and Affective Neuroscience, March 25, 2015. DOI: 10.1093/scan/nsv032

“Decoding the Nature of Emotion in the Brain,” Philip Kragel, Kevin LaBar. Trends in Cognitive Sciences, June 2016.

“Decoding Spontaneous Emotional States in the Human Brain,” Philip Kragel, Annchen Knodt, Ahmad Hariri, Kevin LaBar. PLOS Biology, Sept. 14, 2016. DOI: 10.1371/journal.pbio.2000106

Automated Screening For Childhood Communication Disorders

For children with speech and language disorders, early-childhood intervention can make a great difference in their later academic and social success. But many such children — one study estimates 60 percent — go undiagnosed until kindergarten or even later.

Researchers at the Computer Science and Artificial Intelligence Laboratory at MIT and Massachusetts General Hospital’s Institute of Health Professions hope to change that, with a computer system that can automatically screen young children for speech and language disorders and, potentially, even provide specific diagnoses.

This week, at the Interspeech conference on speech processing, the researchers reported on an initial set of experiments with their system, which yielded promising results. “We’re nowhere near finished with this work,” says John Guttag, the Dugald C. Jackson Professor in Electrical Engineering and senior author on the new paper. “This is sort of a preliminary study. But I think it’s a pretty convincing feasibility study.”

The system analyzes audio recordings of children’s performances on a standardized storytelling test, in which they are presented with a series of images and an accompanying narrative, and then asked to retell the story in their own words.

“The really exciting idea here is to be able to do screening in a fully automated way using very simplistic tools,” Guttag says. “You could imagine the storytelling task being totally done with a tablet or a phone. I think this opens up the possibility of low-cost screening for large numbers of children, and I think that if we could do that, it would be a great boon to society.”

Subtle signals

The researchers evaluated the system’s performance using a standard measure called area under the curve, which describes the tradeoff between exhaustively identifying members of a population who have a particular disorder, and limiting false positives. (Modifying the system to limit false positives generally results in limiting true positives, too.) In the medical literature, a diagnostic test with an area under the curve of about 0.7 is generally considered accurate enough to be useful; on three distinct clinically useful tasks, the researchers’ system ranged between 0.74 and 0.86.
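As a concrete illustration of the metric (using invented numbers, not the study’s data), area under the ROC curve can be computed directly from a classifier’s scores and the true labels: 1.0 means every affected child is ranked above every unaffected one, while 0.5 is chance.

```python
# Area under the ROC curve on toy data, to illustrate the metric the
# researchers report. Labels and scores here are invented.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]      # 1 = child has the disorder
scores = [0.10, 0.30, 0.35, 0.60, 0.20,       # classifier confidence per child
          0.40, 0.70, 0.80, 0.90, 0.55]
print(roc_auc_score(y_true, scores))          # 1.0 = perfect ranking, 0.5 = chance
```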

To build the new system, Guttag and Jen Gong, a graduate student in electrical engineering and computer science and first author on the new paper, used machine learning, in which a computer searches large sets of training data for patterns that correspond to particular classifications — in this case, diagnoses of speech and language disorders.

The training data had been amassed by Jordan Green and Tiffany Hogan, researchers at the MGH Institute of Health Professions, who were interested in developing more objective methods for assessing results of the storytelling test. “Better diagnostic tools are needed to help clinicians with their assessments,” says Green, himself a speech-language pathologist. “Assessing children’s speech is particularly challenging because of high levels of variation even among typically developing children. You get five clinicians in the room and you might get five different answers.”

Unlike speech impediments that result from anatomical characteristics such as cleft palates, speech disorders and language disorders both have neurological bases. But, Green explains, they affect different neural pathways: Speech disorders affect the motor pathways, while language disorders affect the cognitive and linguistic pathways.

Telltale pauses

Green and Hogan had hypothesized that pauses in children’s speech, as they struggled to either find a word or string together the motor controls required to produce it, were a source of useful diagnostic data. So that’s what Gong and Guttag concentrated on. They identified a set of 13 acoustic features of children’s speech that their machine-learning system could search, seeking patterns that correlated with particular diagnoses. These were things like the number of short and long pauses, the average length of the pauses, the variability of their length, and similar statistics on uninterrupted utterances.
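The article does not list the 13 features themselves, but a rough sketch of how such pause statistics could be computed from the start and end times of a child’s speech segments follows; the 1-second cutoff between short and long pauses and the exact feature set are placeholders, not the study’s values.

```python
# Illustrative pause-feature extraction from speech-segment timestamps.
# The long/short cutoff and feature list are assumptions for this sketch.
import numpy as np

def pause_features(segments, long_pause=1.0):
    """segments: list of (start, end) times, in seconds, of uninterrupted speech."""
    pauses = [segments[i + 1][0] - segments[i][1] for i in range(len(segments) - 1)]
    utterances = [end - start for start, end in segments]
    return {
        "n_short_pauses": sum(p < long_pause for p in pauses),
        "n_long_pauses": sum(p >= long_pause for p in pauses),
        "mean_pause": float(np.mean(pauses)),
        "std_pause": float(np.std(pauses)),
        "mean_utterance": float(np.mean(utterances)),
        "std_utterance": float(np.std(utterances)),
    }

print(pause_features([(0.0, 2.1), (2.9, 5.0), (6.8, 9.3)]))
```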

The children whose performances on the storytelling task were recorded in the data set had been classified as typically developing, as suffering from a language impairment, or as suffering from a speech impairment. The machine-learning system was trained on three different tasks: identifying any impairment, whether speech or language; identifying language impairments; and identifying speech impairments.

One obstacle the researchers had to confront was that the age range of the typically developing children in the data set was narrower than that of the children with impairments: Because impairments are comparatively rare, the researchers had to venture outside their target age range to collect data.

Gong addressed this problem using a statistical technique called residual analysis. First, she identified correlations between subjects’ age and gender and the acoustic features of their speech; then, for every feature, she corrected for those correlations before feeding the data to the machine-learning algorithm.
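In outline, that correction amounts to regressing each acoustic feature on age and gender and keeping only the residuals, so the classifier cannot simply learn that, say, older children pause less. The sketch below shows the idea on placeholder data; the study’s actual covariates and regression model may differ.

```python
# Residual-analysis sketch: remove the part of each acoustic feature that is
# predictable from age and gender, then train on the residuals.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n_children = 100
age = rng.uniform(4, 10, size=n_children)           # years (toy values)
gender = rng.integers(0, 2, size=n_children)        # 0/1 coding
covariates = np.column_stack([age, gender])

# Placeholder acoustic features with a deliberate age confound built in.
features = rng.normal(size=(n_children, 13)) + 0.3 * age[:, None]

residuals = np.empty_like(features)
for j in range(features.shape[1]):
    fit = LinearRegression().fit(covariates, features[:, j])
    residuals[:, j] = features[:, j] - fit.predict(covariates)

# `residuals` is what would then be fed to the machine-learning classifier.
```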

“The need for reliable measures for screening young children at high risk for speech and language disorders has been discussed by early educators for decades,” says Thomas Campbell, a professor of behavioral and brain sciences at the University of Texas at Dallas and executive director of the university’s Callier Center for Communication Disorders. “The researchers’ automated approach to screening provides an exciting technological advancement that could prove to be a breakthrough in speech and language screening of thousands of young children across the United States.”

New Online Game Invites Public To Help Fight Alzheimer’s

A new online science game allows the general public to directly contribute to Alzheimer’s disease research and help scientists search for a cure.

The game, called Stall Catchers, was developed by the Human Computation Institute, in collaboration with UC Berkeley and other institutions, as part of the EyesOnALZ citizen science project. Stall Catchers will allow participants to look at movies of real blood vessels in mouse brains and search for clogged capillaries, or stalls, where blood is no longer flowing. Previous research suggests that capillary stalls could be a key culprit in Alzheimer’s disease.

The citizen science approach for Stall Catchers was developed by physicist Andrew Westphal, a senior fellow at the UC Berkeley Space Sciences Laboratory. The approach was first used in a project called Stardust@home, in which more than 30,000 amateur scientists have carried out more than 100 million searches to identify interstellar dust in collectors returned by the NASA Stardust comet sampling mission. Stardust@home led to the discovery of seven particles of likely interstellar origin, reported in the journal Science in 2014.

“We are optimistic that this citizen science approach will be similarly successful in accelerating research aimed at finding a cure for Alzheimer’s disease,” Westphal said.

Data analysis in Alzheimer’s disease research, such as searching for stalls, is a time-consuming task that can cause a single research question to take up to a year to answer in the lab. The EyesOnALZ project aims to accelerate the analysis of these data with the help of citizen scientists playing Stall Catchers so that scientists can find targets for treatment of Alzheimer’s faster.

“Today, we have a handful of lab experts putting their eyes on the research data,” said Pietro Michelucci, principal investigator for EyesOnALZ. “If we can enlist thousands of people to do that same analysis by playing an online game, then we have created a huge force multiplier in our fight against this dreadful disease.”

Stall Catchers will be open to everyone and is free to play on a laptop, tablet or smartphone. The game has already been tested by more than 100 people, including volunteers from well-known citizen science projects such as Stardust@home. It has also been demonstrated at science and citizen science events, such as the USA Science and Engineering Festival, the European Citizen Science Conference 2016 and smaller community events.

“Stall Catchers will not only be a breakthrough in how we do Alzheimer’s research, but it will also empower anyone to directly contribute to fighting a disease that affects them or their loved ones,” Michelucci said.

Stall Catchers is funded by a grant from the BrightFocus Foundation. Scientists from Cornell and Princeton universities are also involved with the project.

Mobile Eye-Tracking System Used To Study Anxiety In Children

Every child experiences anxiety and fear at one time or another, but some children seem to experience fear more frequently than others. As part of a new project being funded by the National Institutes of Health, Penn State researchers are looking into emerging evidence of a link between fearfulness and anxiety, or lingering apprehension, in young children.

Koraly Pérez-Edgar, associate professor of psychology and principal investigator on the project, and her research team will conduct studies on children ages five to six using cutting-edge mobile eye-tracking technology.

The project is a continuation of previous research funded by Penn State’s Center for Online Innovation in Learning (COIL). Results from that research indicated that early fearfulness, when accompanied by an elevated attention to threat, can lead to subsequent social withdrawal or anxiety symptoms as the child grows.

“The initial COIL-funded study showed that we can capture children’s attention as they navigate their environments,” said Pérez-Edgar, who also runs the University’s Cognition, Affect, and Temperament Lab. “For our new study, we’ll be testing a larger sample utilizing a more advanced mobile eye-tracking system to capture eye-gaze information.”

The research team will measure attention bias to threat, which causes people to focus on the thing they fear while ignoring other details. “It is a reliable risk factor for anxiety, particularly social anxiety,” Pérez-Edgar explained. “We know that anxiety is a developmental disorder that typically emerges in early childhood and becomes less amenable to intervention with age. The hope is that the earlier we can predict, recognize and diagnose anxiety in children, the earlier we can help parents, teachers and doctors assist them in overcoming it.”

The mobile eye-tracking system, which resembles a visor with a camera attached, records eye tracking information, such as where the person is looking and for how long, via an infrared light and optical sensors. The researchers, including graduate student Xiaoxue Fu and research technologist Phillip Galinsky, both in the department of psychology, worked with German technology firm Pupil Labs to develop and modify the visor before it was created via a 3D printer.

The eye-tracker is connected to a wireless tablet computer worn in a backpack by the child, which sends eye-tracking information to a computer so researchers can view live video footage of what the child is seeing. A red dot indicates the exact point of the child’s gaze, so researchers can see what the child is fixated on and for how long.
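Rendering that kind of overlay is straightforward once gaze coordinates and scene-camera frames are available. The snippet below is a generic sketch using OpenCV, not Pupil Labs’ own software, and it assumes gaze positions arrive normalized to the range 0–1.

```python
# Generic gaze-overlay sketch: draw a red dot on a scene-camera frame at the
# reported gaze position. Not the Pupil Labs software; coordinate format is
# an assumption (normalized x, y in [0, 1]).
import cv2

def draw_gaze(frame, gaze_xy):
    """frame: BGR image array; gaze_xy: (x, y) gaze position normalized to [0, 1]."""
    h, w = frame.shape[:2]
    x, y = int(gaze_xy[0] * w), int(gaze_xy[1] * h)
    cv2.circle(frame, (x, y), radius=12, color=(0, 0, 255), thickness=-1)  # filled red dot
    return frame
```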

The eye-tracking system will be used during episodes designed to elicit fearful behavior in social and non-social settings. The project will focus on kindergarten-aged children because this is the age window in which anxiety disorders typically first emerge and in which social withdrawal is more apparent in fearful children.

“The children will come into the lab for two short visits, once alone and once with an unfamiliar peer,” said Pérez-Edgar. “We will use episodes from the Laboratory Temperament Assessment Battery, a well-established, standardized protocol tailored for assessing fearful temperament in preschool children.”

According to Pérez-Edgar, the mobile eye-tracking technology has broad implications, as it may be adaptable to other socioemotional impairments such as autism and used for real-time monitoring of social interaction in intervention settings. “We are pushing for transparency and keeping our research data accessible. We believe our work has the potential to impact a broad range of research across disciplines and is part of the process of keeping science moving forward.”