The Difference Between Meeting Someone in Person and on Screen

Recognizing the face of a family member or close friend happens almost instantly and without effort. Years of exposure to loved ones’ features have trained the brain to pick them out of a crowd despite differences in haircut and emotional expression. Things can be different, however, when we see a picture of a person on a screen.

But what about someone you don’t know well, like the guy you sat next to on a plane or the grocery store cashier? How well does the brain do at recognizing a stranger’s face? The short answer is: not well.

“We aren’t very good at recognizing unfamiliar faces,” said Géza Gergely Ambrus, a postdoctoral researcher at Friedrich Schiller University Jena in Germany. “It’s very hard to recognize someone we don’t know on screen when the two images were taken with different cameras, or the person has shaved or isn’t wearing glasses in one of them.”

About the Research

The huge gap between our recognition of unfamiliar and familiar faces prompted Ambrus and his colleagues to ask how the brain’s response changes as a person goes from unknown to known. Their research, published in The Journal of Neuroscience, showed that getting to know strangers by sorting on-screen images of two unfamiliar people into separate identities did not produce significant signals of familiarity in brain recordings. Watching videos of new people or interacting with them in person, on the other hand, led to increased brain activation when those people’s faces were later seen, and in-person engagement produced significantly stronger familiarity signals than watching videos of them.

The findings highlight the importance of face-to-face encounters: virtual, one-way observation has a weaker effect on face recognition.

A previous study that used electrodes attached to the scalp to measure the brain’s electrical activity, a method known as EEG, revealed signals that appear consistently while viewing any human face. The pattern and strength of those signals, however, vary with the familiarity of the face, with stronger signals associated with well-known faces.
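As a rough illustration of this kind of measurement (not the study’s own pipeline), the sketch below uses the open-source MNE-Python library to average EEG epochs time-locked to face onsets and compare evoked responses to familiar and unfamiliar faces. The file name, event codes, and filter settings are assumptions made for the example.

```python
# Minimal sketch: comparing evoked EEG responses to familiar vs. unfamiliar faces.
# File name and event codes are hypothetical; this is not the study's actual pipeline.
import mne

# Load a raw EEG recording (placeholder file name).
raw = mne.io.read_raw_fif("face_session_raw.fif", preload=True)
raw.filter(l_freq=0.1, h_freq=40.0)  # typical band-pass for ERP analysis

# Extract stimulus-onset events; codes 1 and 2 are assumed labels.
events = mne.find_events(raw)
event_id = {"familiar_face": 1, "unfamiliar_face": 2}

# Cut the continuous data into epochs around each face onset.
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=0.8, baseline=(None, 0), preload=True)

# Average over trials to obtain the evoked response for each condition.
evoked_familiar = epochs["familiar_face"].average()
evoked_unfamiliar = epochs["unfamiliar_face"].average()

# Plot both conditions on the same axes to compare signal strength over time.
mne.viz.plot_compare_evokeds(
    {"familiar": evoked_familiar, "unfamiliar": evoked_unfamiliar}
)
```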


Ambrus and his colleagues compiled EEG recordings from 42 participants for the photo sorting task, 24 for the video exposure task, and 23 for the in-person contact task. The researchers then used a machine-learning algorithm to identify the EEG patterns that correspond to different levels of facial familiarity, and applied this model to analyze their experimental results.
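The article does not describe the specific classifier, but the general idea of decoding familiarity from EEG patterns can be sketched as follows. The example assumes each trial’s EEG has already been flattened into a feature vector (channels × time points) with binary familiar/unfamiliar labels; the array names and the choice of a regularized linear classifier with cross-validation are illustrative, not the authors’ actual method.

```python
# Minimal sketch: cross-validated decoding of face familiarity from EEG features.
# X and y are simulated placeholders; real analyses use recorded epochs.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_channels, n_times = 200, 64, 100
# Simulated epochs: trials x (channels * time points), flattened into feature vectors.
X = rng.standard_normal((n_trials, n_channels * n_times))
# Binary labels: 1 = familiar face, 0 = unfamiliar face.
y = rng.integers(0, 2, size=n_trials)

# Standardize features, then fit a regularized linear classifier.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# 5-fold cross-validated accuracy; scores reliably above chance (0.5 here)
# would indicate that familiarity information is present in the EEG patterns.
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean decoding accuracy: {scores.mean():.2f}")
```

With the random labels used here the accuracy hovers around chance; the point of the sketch is only the structure of such a decoding analysis, not its result.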

Photo Sorting Test on a Screen

In the photo sorting test, subjects were asked to sort 30 on-screen images of two previously unknown women into two piles, one for each identity. Shortly afterward, they viewed images of these women, as well as of similar-looking strangers, while their EEG was recorded. The EEG pattern did not change significantly when the sorted women were viewed versus the strangers, implying that simply studying images in this way does not establish familiarity.

In the video exposure trial, subjects were shown on-screen images of actors they had never seen before during a baseline EEG test. They then watched an entire season of a TV show in which those actors played leading roles. After they finished the season, a follow-up EEG test revealed a robust familiarity signal, implying that video exposure, unlike the photo sorting task, builds reliable representations for face recognition.

In the in-person contact trial, participants were shown on-screen images of four unfamiliar women during a baseline EEG test. Then, over the next few days, they met two of these women in person to get to know them better.

“For three days in a row, they spent an hour in our lab, drinking coffee and playing a quiz game. They met in person and were free to discuss whatever they wanted,” Ambrus said. “All of the participants met at the same location, with the same people, for the same amount of time.”

An EEG test the next day, which included photographs of all four women, revealed a significant difference from the baseline. The participants’ brains registered the two women they had met as familiar, and the signals were substantially stronger than those seen in the video exposure task. When it comes to the brain and face recognition, in-person connection clearly matters.

By having people meet and interact in person, the researchers went beyond the conventional approaches used in face perception research, according to Acunzo, who studies sensory perception and visual attention.