ResearchPad - face https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks
<![CDATA[Effect of aging and body characteristics on facial sexual dimorphism in the Caucasian Population]]> https://www.researchpad.co/article/elastic_article_14542

The aim of this study was to quantify gender-specific facial characteristics in younger and older adults and to determine how aging and body characteristics, such as height and body-mass index (BMI), influence facial sexual dimorphism.

Methods

The cohort study included 90 younger adults of Caucasian origin (45 females, average age 23.2 ± 1.9 years, and 45 males, 23.7 ± 2.4 years) and 90 older adults (49 females, average age 78.1 ± 8.1 years, and 41 males, 74.5 ± 7.7 years). Three-dimensional facial scans were performed with an Artec MHT 3D scanner, and the data were analyzed using the software package Rapidform®. Parameters evaluating facial symmetry, height, width, profile, facial shape, and nose, eye and mouth characteristics were determined from 39 facial landmarks. Student’s t-test was used to test for differences between the genders in the younger and older adults, and multiple linear regression was used to evaluate the impact of gender, age, body-mass index and body height.

Results

We found that the female faces were more symmetrical than the male faces; this difference was statistically significant in the older adults. The female facial shape was more rounded, and female faces were smaller after normalizing for body size. The males had wider mouths, longer upper lips, larger noses and more prominent lower foreheads. Surprisingly, all the gender-dependent characteristics were even more pronounced in the older adults. Increased facial asymmetry, decreased facial convexity, an increased forehead angle, narrower vermilions and longer inter-eye distances occurred in both genders during aging. An increased BMI was associated with wider faces, more concave facial profiles and wider noses, while greater body height correlated with increased facial height and wider mouths.

Conclusion

Facial sexual dimorphism was confirmed by multiple parameters in our study, and the differences between the genders were more pronounced in the older adults.

]]>
<![CDATA[The effects of age and sex on cognitive impairment in schizophrenia: Findings from the Consortium on the Genetics of Schizophrenia (COGS) study]]> https://www.researchpad.co/article/elastic_article_13860

Recently emerging evidence indicates accelerated age-related changes in the structure and function of the brain in schizophrenia, raising questions about the potential consequences for cognitive function. Using a large sample of schizophrenia patients and controls and a battery of tasks across multiple cognitive domains, we examined whether patients show accelerated age-related decline in cognition and whether age-related effects differ between females and males. We utilized data from 1,415 schizophrenia patients and 1,062 healthy community participants collected in the second phase of the Consortium on the Genetics of Schizophrenia (COGS-2). The battery of cognitive tasks included the Letter-Number Span Task, two forms of the Continuous Performance Test, the California Verbal Learning Test, Second Edition, the Penn Emotion Identification Test and the Penn Facial Memory Test. The effects of age and gender on cognitive performance were examined with a general linear model. We observed age-related changes on most cognitive measures, which were similar between males and females. Compared to controls, patients showed greater deterioration in performance on attention/vigilance and greater slowing in processing social information with increasing age. However, controls showed greater age-related changes in working memory and verbal memory compared to patients. Age-related changes (ηp² of 0.001 to 0.008) were much smaller than between-group differences (ηp² of 0.005 to 0.037). This study found that patients showed continued cognitive decline in some domains but stable impairment, or even less decline, in other domains with increasing age. These findings indicate that age-related changes in cognition in schizophrenia are subtle and not uniform across cognitive domains.

]]>
<![CDATA[Emotional facial perception development in 7, 9 and 11 year-old children: The emergence of a silent eye-tracked emotional other-race effect]]> https://www.researchpad.co/article/elastic_article_7635 The present study examined emotional facial perception (happy and angry) in 7, 9 and 11-year-old children from Caucasian and multicultural environments with an offset task for two ethnic groups of faces (Asian and Caucasian). In this task, participants were required to respond to a dynamic facial expression video when they believed that the first emotion presented had disappeared. Moreover, using an eye-tracker, we evaluated the ocular behavior patterns used to process these different faces. The analysis of reaction times showed no emotional other-race effect (i.e., greater facility in discriminating own-race faces than other-race ones) in Caucasian children for Caucasian vs. Asian faces in offset times, but an effect of the emotional expression appeared in the oldest children. Furthermore, eye tracking revealed emotion- and race-related differences in ocular processing strategies that evolve between ages 7 and 11. This study strengthens the case for eye tracking in developmental and emotional processing research, showing that even a “silent” effect can be detected and rigorously analyzed through objective means.

]]>
<![CDATA[Tsinghua facial expression database – A database of facial expressions in Chinese young and older women and men: Development and validation]]> https://www.researchpad.co/article/Nf679a1e8-67cb-47b3-95b4-f3d293b80761

Perception of facial identity and emotional expressions is fundamental to social interactions. Recently, interest in age-associated changes in the processing of faces has grown rapidly. Due to the lack of older face stimuli, most previous age-comparative studies used only young face stimuli, which might introduce an own-age advantage. None of the existing Eastern face-stimulus databases contains face images of different age groups (e.g., older adult faces). In this study, a database comprising images of 110 Chinese young and older adults displaying eight facial emotional expressions (Neutral, Happiness, Anger, Disgust, Surprise, Fear, Content, and Sadness) was constructed. To validate the database, each image was rated for perceived facial expression, perceived emotional intensity, and perceived age by two different age groups. Validation showed an overall correct identification rate of 79.08%. Access to the freely available database can be requested by emailing the corresponding authors.

]]>
<![CDATA[Towards a fully automated surveillance of well-being status in laboratory mice using deep learning: Starting with facial expression analysis]]> https://www.researchpad.co/article/N201121b9-bfe0-423d-91d1-e349ea424365

Assessing the well-being of an animal is hindered by the limitations of efficient communication between humans and animals. Instead of direct communication, a variety of parameters are employed to evaluate the well-being of an animal. Especially in the field of biomedical research, scientifically sound tools to assess pain, suffering, and distress in experimental animals are in high demand for ethical and legal reasons. For mice, the most commonly used laboratory animals, a valuable tool is the Mouse Grimace Scale (MGS), a coding system for facial expressions of pain in mice. We aim to develop a fully automated system for the surveillance of post-surgical and post-anesthetic effects in mice. Our work introduces a semi-automated pipeline as a first step towards this goal. A new data set of images of freely moving black-furred laboratory mice is used and provided. Images were obtained after anesthesia (with isoflurane or a ketamine/xylazine combination) and surgery (castration). We deploy two pre-trained state-of-the-art deep convolutional neural network (CNN) architectures (ResNet50 and InceptionV3) and compare them to a third CNN architecture without pre-training. Depending on the particular treatment, we achieve an accuracy of up to 99% for recognizing the absence or presence of post-surgical and/or post-anesthetic effects on the facial expression.

]]>
<![CDATA[Cutaneous leishmaniasis and co-morbid major depressive disorder: A systematic review with burden estimates]]> https://www.researchpad.co/article/5c7d95d9d5eed0c484734dd0

Background

Major depressive disorder (MDD) associated with chronic neglected tropical diseases (NTDs) has been identified as a significant and overlooked contributor to overall disease burden. Cutaneous leishmaniasis (CL) is one of the most prevalent and stigmatising NTDs, with an incidence of around 1 million new cases of active CL infection annually. However, the characteristic residual scarring (inactive CL) following almost all cases of active CL has only recently been recognised as part of the CL disease spectrum due to its lasting psychosocial impact.

Methods and findings

We performed a multi-language systematic review of the psychosocial impact of active and inactive CL. We estimated inactive CL (iCL) prevalence for the first time using reported WHO active CL (aCL) incidence data that were adjusted for life expectancy and underreporting. We then quantified the disability (YLD) burden of co-morbid MDD in CL using MDD disability weights at three severity levels. Overall, we identified 29 studies of CL psychological impact from 5 WHO regions, representing 11 of the 50 highest burden countries for CL. We conservatively calculated the disability burden of co-morbid MDD in CL to be 1.9 million YLDs, which equalled the overall (DALY) disease burden (assuming no excess mortality in depressed CL patients). Thus, upon inclusion of co-morbid MDD alone in both active and inactive CL, the DALY burden was seven times higher than the latest 2016 Global Burden of Disease study estimates, which notably omitted both psychological impact and inactive CL.
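The burden arithmetic described above follows the standard GBD convention YLD = prevalence × disability weight, summed over severity levels. A minimal sketch (the prevalence figures and weights below are illustrative placeholders, not the values estimated in this review):

```python
# Sketch of a YLD (years lived with disability) calculation for co-morbid
# MDD, summed over severity levels. All numbers here are illustrative
# placeholders, not the values estimated in the study.

def yld(prevalence: float, disability_weight: float) -> float:
    """YLD contributed by one severity level: prevalence x disability weight."""
    return prevalence * disability_weight

# Hypothetical prevalent cases of co-morbid MDD at three severity levels,
# paired with GBD-style disability weights (weights are illustrative).
severity_levels = [
    (2_000_000, 0.145),  # mild MDD
    (1_500_000, 0.396),  # moderate MDD
    (500_000, 0.658),    # severe MDD
]

total_yld = sum(yld(cases, weight) for cases, weight in severity_levels)
print(f"Total YLDs: {total_yld:,.0f}")  # Total YLDs: 1,213,000
```

Under the review's assumption of no excess mortality in depressed CL patients, the DALY contribution of co-morbid MDD equals this YLD total, since DALY = YLD + YLL and YLL = 0.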

Conclusions

Failure to include co-morbid MDD and the lasting sequelae of chronic NTDs, as exemplified by CL, leads to large underestimates of overall disease burden.

]]>
<![CDATA[Cyborg groups enhance face recognition in crowded environments]]> https://www.researchpad.co/article/5c89773bd5eed0c4847d2790

Recognizing a person in a crowded environment is a challenging, yet critical, visual-search task for both humans and machine-vision algorithms. This paper explores the possibility of combining a residual neural network (ResNet), brain-computer interfaces (BCIs) and human participants to create “cyborgs” that improve decision making. Human participants and a ResNet undertook the same face-recognition experiment. BCIs were used to decode the decision confidence of humans from their EEG signals. Different types of cyborg groups were created, comprising either only humans (with or without the BCI) or humans and the ResNet. Cyborg group decisions were obtained by weighting individual decisions by confidence estimates. Results show that groups of cyborgs are significantly more accurate (by up to 35%) than the ResNet, the average participant, and equally-sized groups of humans not assisted by technology. These results suggest that melding humans, BCIs, and machine-vision technology could significantly improve decision-making in realistic scenarios.
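A confidence-weighted group decision of the kind described above can be sketched as follows. This is a minimal illustration, not the authors' exact aggregation rule; the votes and confidence values are hypothetical:

```python
# Minimal sketch of a confidence-weighted group decision: each group member
# (human or ResNet) votes +1 ("same person") or -1 ("different person"),
# and votes are weighted by a decision-confidence estimate in [0, 1]
# (e.g. decoded from EEG via a BCI, or from the network's output score).

def group_decision(votes, confidences):
    """Return the group decision: sign of the confidence-weighted vote sum."""
    weighted_sum = sum(v * c for v, c in zip(votes, confidences))
    return 1 if weighted_sum >= 0 else -1

# One confident dissenter can outweigh two low-confidence members:
votes = [1, 1, -1]
confidences = [0.2, 0.3, 0.9]
print(group_decision(votes, confidences))  # -1: the confident vote prevails
```

The design choice here is that confidence acts as a soft weight rather than a veto, so a well-calibrated but occasionally wrong member (such as the ResNet) still contributes without dominating the group.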

]]>
<![CDATA[Do professional facial image comparison training courses work?]]> https://www.researchpad.co/article/5c6dca04d5eed0c48452a6c3

Facial image comparison practitioners compare images of unfamiliar faces and decide whether or not they show the same person. Given the importance of these decisions for national security and criminal investigations, practitioners attend training courses to improve their face identification ability. However, these courses have not been empirically validated so it is unknown if they improve accuracy. Here, we review the content of eleven professional training courses offered to staff at national security, police, intelligence, passport issuance, immigration and border control agencies around the world. All reviewed courses include basic training in facial anatomy and prescribe facial feature (or ‘morphological’) comparison. Next, we evaluate the effectiveness of four representative courses by comparing face identification accuracy before and after training in novices (n = 152) and practitioners (n = 236). We find very strong evidence that short (1-hour and half-day) professional training courses do not improve identification accuracy, despite 93% of trainees believing their performance had improved. We find some evidence of improvement in a 3-day training course designed to introduce trainees to the unique feature-by-feature comparison strategy used by facial examiners in forensic settings. However, observed improvements are small, inconsistent across tests, and training did not produce the qualitative changes associated with examiners’ expertise. Future research should test the benefits of longer examination-focussed training courses and incorporate longitudinal approaches to track improvements caused by mentoring and deliberate practice. In the absence of evidence that training is effective, we advise agencies to explore alternative evidence-based strategies for improving the accuracy of face identification decisions.

]]>
<![CDATA[Does picture background matter? Peopleʼs evaluation of pigs in different farm settings]]> https://www.researchpad.co/article/5c6c75c8d5eed0c4843d0185

Pictures of farm animals and their husbandry systems are frequently presented in the media, mostly in connection with discussions surrounding farm animal welfare. How such pictures are perceived by the broader public is not yet fully understood. It is plausible that the animals’ expressions and body language, as well as their depicted environment or husbandry system, affect public perception. Therefore, the aim of this study is to test how the evaluation of a picture showing a farmed pig is influenced by the portrayed attributes, by participants’ perceptions of pigs’ abilities in general, and by any connection participants have to agriculture. In an online survey, 1,019 German residents were shown four modified pictures of a pig in a pen. The pictures varied with regard to the facial expression and body language of the pig (‘happy’ versus ‘unhappy’ pig) and the barn setting (straw versus slatted-floor pen). Respondents were asked to evaluate both the pen and the welfare of the pig. Two linear mixed models were calculated to analyze effects on pig and pen evaluation. For the pictures, the pen had the largest influence on both pig and pen evaluation, followed by the pig’s appearance and participants’ beliefs in pigs’ mental and emotional abilities, as well as their connection to agriculture. The welfare of both the ‘happy’ and the ‘unhappy’ pig was assessed to be higher in the straw setting than in the slatted-floor setting, and even the ‘unhappy’ pig on straw was perceived more positively than the ‘happy’ pig on the slatted floor. The straw pen was evaluated as better than the slatted-floor pen in the pictures we presented, but the pens also differed in the level of dirt on the walls (more dirt in the slatted-floor pen), which might have influenced the results.
Nevertheless, the results suggest that enduring aspects of pictures, such as the husbandry system, influence perceptions more than the momentary body expression of the pig, at least in the settings tested here.

]]>
<![CDATA[Facial cues to age perception using three-dimensional analysis]]> https://www.researchpad.co/article/5c6dc9add5eed0c484529fdc

To clarify cues for age perception, the three-dimensional head and face forms of Japanese women were analyzed. Age-related transformations are known to be caused mainly by changes in soft tissue during adulthood. A homologous polygon model was created by fitting template meshes to each study participant to obtain three-dimensional data for analyzing whole head and face forms. Using principal component analysis of the vertex coordinates of these models, 26 principal components were extracted (contribution ratios >0.5%), which together accounted for more than 90% of the total variance. Among these, five principal components had a significant correlation with the perceived ages of the participants (p < 0.05). Transforming faces along these principal components in the age-related direction produced aged faces. Moreover, the older the perceived age, the larger the proportion of age-manifesting participants, namely participants who had one or more age-related principal component scores greater than +1.0 σ in the age-related direction. Therefore, these five principal components were regarded as aging factors. A cluster analysis of the five aging factors revealed that all of the participants fell into one of four groups, meaning that specific combinations of factors could be used as cues for age perception in each group. These results suggest that Japanese women can be classified into four groups according to age-related transformations of the facial soft tissue.
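The analysis pipeline above (PCA of homologous-model vertex coordinates, with contribution ratios as the selection criterion) can be sketched with NumPy. Random data stand in for real 3D scans, and the participant and vertex counts are arbitrary:

```python
import numpy as np

# Sketch: PCA of flattened vertex coordinates from homologous face models.
# Each row is one participant; columns are the (x, y, z) coordinates of the
# model's vertices, flattened. Random data stand in for real scans.
rng = np.random.default_rng(0)
n_participants, n_vertices = 50, 200
X = rng.normal(size=(n_participants, n_vertices * 3))

# Center and decompose with SVD (equivalent to PCA of the covariance matrix).
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)

# Contribution ratio of each principal component = its share of the total
# variance (squared singular values normalized to sum to 1).
contribution = s**2 / np.sum(s**2)

# Keep components whose contribution ratio exceeds 0.5%, as in the study.
kept = contribution > 0.005
print(kept.sum(), round(contribution[kept].sum(), 3))
```

With real scan data the leading components would capture correlated soft-tissue variation, so far fewer components would pass the 0.5% threshold than with the unstructured random data used here.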

]]>
<![CDATA[Visual cues that predict intuitive risk perception in the case of HIV]]> https://www.researchpad.co/article/5c76fe53d5eed0c484e5b8b2

Field studies indicate that people may form impressions about potential partners’ HIV risk, yet lack insight into what underlies such intuitions. The present study examined which cues may give rise to the perception of riskiness. Towards this end, portrait pictures of persons, representative of the kinds of images found on social media, were evaluated by independent raters on two sets of data: first, sixty visible cues deemed relevant to person perception, and second, perceived HIV risk, trustworthiness, health, and attractiveness. We first report correlations between cues and perceived HIV risk, exposing cue-criterion associations that may be used to intuitively infer HIV risk. Second, we trained a multiple-cue-based model to forecast perceived HIV risk through cross-validated predictive modelling. Trained models accurately predicted how ‘risky’ a person was perceived to be (r = 0.75) in a novel sample of portraits. Findings are discussed with respect to HIV risk stereotypes and implications for how to foster effective protective behaviors.
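The cross-validated predictive-modelling step can be sketched as follows: a leave-one-out linear model predicts a perceived-risk rating from cue ratings, and accuracy is summarized as the correlation r between held-out predictions and observed ratings. The data here are synthetic; the study's actual model class and cue set may differ:

```python
import numpy as np

# Sketch: leave-one-out cross-validated prediction of a perceived-risk
# rating from visible-cue ratings, scored by the correlation r between
# held-out predictions and observed ratings. Synthetic data stand in for
# the real cue and rating sets.
rng = np.random.default_rng(1)
n_faces, n_cues = 60, 5
cues = rng.normal(size=(n_faces, n_cues))
true_weights = rng.normal(size=n_cues)
ratings = cues @ true_weights + 0.1 * rng.normal(size=n_faces)

predictions = np.empty(n_faces)
for i in range(n_faces):
    train = np.delete(np.arange(n_faces), i)
    # Ordinary least squares fit on all faces except the held-out one.
    w, *_ = np.linalg.lstsq(cues[train], ratings[train], rcond=None)
    predictions[i] = cues[i] @ w

r = np.corrcoef(predictions, ratings)[0, 1]
print(r > 0.9)  # True for this low-noise synthetic data
```

Because every prediction is made for a portrait the model never saw during fitting, r measures out-of-sample forecasting accuracy rather than in-sample fit, which is the point of the cross-validation.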

]]>
<![CDATA[The effects of heated humidification to nasopharynx on nasal resistance and breathing pattern]]> https://www.researchpad.co/article/5c648cc9d5eed0c484c817b1

Background

Mouth breathing can induce not only dry throat and, eventually, upper respiratory tract infection, but also snoring and obstructive sleep apnea, whereas nasal breathing protects against these problems. Thus, an approach that converts habitual mouth breathing to the preferable nasal breathing is worth exploring. The aim of this study was to investigate the physiological effects of our newly developed mask on the facilitation of nasal breathing.

Methods

Thirty-seven healthy male volunteers were enrolled in a double-blind, randomized, placebo-controlled crossover trial. Participants wore a newly developed heated-humidification mask or a non-heated-humidification mask (placebo) for 10 min each. Subjective feelings, including dry nose, dry throat, nasal obstruction, ease of breathing, relaxation, calmness, and good feeling, were assessed before and after wearing each mask. In addition, the effects of the masks on nasal resistance, breathing pattern, and heart rate variability were assessed.

Results

Compared with the placebo mask, the heated-humidification mask improved all components of subjective feelings except ease of breathing; moreover, decreased nasal resistance and respiratory frequency accompanied a simultaneous increase in a surrogate marker for tidal volume. However, use of the heated-humidification mask did not affect heart rate variability.

Conclusion

Adding heated humidification to the nasopharynx could modulate breathing patterns while improving subjective experience and objective nasal resistance.

]]>
<![CDATA[Abstract social categories facilitate access to socially skewed words]]> https://www.researchpad.co/article/5c61e920d5eed0c48496f83a

Recent work has shown that listeners process words faster when they are said by a member of the group that typically uses them. This paper further explores how the social distributions of words affect lexical access, asking whether access is facilitated by invoking more abstract social categories. We conducted four experiments, all of which combined an Implicit Association Task with a Lexical Decision Task. Participants sorted real and nonsense words while at the same time sorting older and younger faces (Exp. 1), male and female faces (Exp. 2), stereotypically male and female objects (Exp. 3), and framed and unframed objects that were always stereotypically male or female (Exp. 4). Across the experiments, lexical decisions to socially skewed words were facilitated when the socially congruent category was sorted with the same hand. This suggests that the lexicon contains social detail from which individuals make social abstractions that can influence lexical access.

]]>
<![CDATA['Nasal mask’ in comparison with ‘nasal prongs’ or ‘rotation of nasal mask with nasal prongs’ reduce the incidence of nasal injury in preterm neonates supported on nasal continuous positive airway pressure (nCPAP): A randomized controlled trial]]> https://www.researchpad.co/article/5c5ca2add5eed0c48441e87c

Background

With the increasing use of nCPAP, the safety and comfort associated with nCPAP have come to the forefront. The reported incidence of nasal injuries associated with the use of nCPAP is 20% to 60%. A recent meta-analysis concluded that the use of nasal masks significantly decreases CPAP failure and the incidence of moderate to severe nasal injury, and stressed the need for a well-powered RCT to confirm these findings.

Methods

In this open-label, three-arm, sequential, stratified randomized controlled trial, we evaluated the incidence and severity of nasal injury at removal of nCPAP using two different nasal interfaces in three groups (rotation group, mask-continue group, prong-continue group). Preterm infants with gestation ≤ 30 weeks, respiratory distress within the first 6 hours of birth, and a need for CPAP were eligible for the study.

Results

Among the 175 newborns included in the study, the incidence of nasal injury in the mask-continue group [n = 19/57 (33.3%)] was significantly lower than in the prong-continue group [n = 55/60 (91.6%)] and the rotation group [n = 33/58 (56.9%); p < 0.0001]. The median maximum nasal-injury score was also significantly lower in the mask-continue group than in the prong-continue group and the rotation group [injury score 0 (IQR 0–1) vs. 3 (IQR 2–5) vs. 1 (IQR 0–2), respectively; p < 0.0001]. The proportion of infants failing nCPAP was similar across the three groups.

Conclusion

nCPAP with nasal masks significantly reduces nasal injury in comparison with nasal prongs or rotation of nasal prongs and nasal masks. However, the type of interface did not affect the nCPAP failure rates.

]]>
<![CDATA[Using computer-vision and machine learning to automate facial coding of positive and negative affect intensity]]> https://www.researchpad.co/article/5c633970d5eed0c484ae6711

Facial expressions are fundamental to interpersonal communication, including social interaction, and allow people of different ages, cultures, and languages to quickly and reliably convey emotional information. Historically, facial expression research has followed from discrete emotion theories, which posit a limited number of distinct affective states that are represented with specific patterns of facial action. Much less work has focused on dimensional features of emotion, particularly positive and negative affect intensity. This is likely, in part, because achieving inter-rater reliability for facial action and affect intensity ratings is painstaking and labor-intensive. We use computer-vision and machine learning (CVML) to identify patterns of facial actions in 4,648 video recordings of 125 human participants, which show strong correspondences to positive and negative affect intensity ratings obtained from highly trained coders. Our results show that CVML can both (1) determine the importance of different facial actions that human coders use to derive positive and negative affective ratings when combined with interpretable machine learning methods, and (2) efficiently automate positive and negative affect intensity coding on large facial expression databases. Further, we show that CVML can be applied to individual human judges to infer which facial actions they use to generate perceptual emotion ratings from facial expressions.

]]>
<![CDATA[Automatic classification of human facial features based on their appearance]]> https://www.researchpad.co/article/5c59ff05d5eed0c484135990

Classification or typology systems used to categorize different human body parts have existed for many years. Nevertheless, there are very few taxonomies of facial features. Ergonomics, forensic anthropology, crime prevention or new human-machine interaction systems and online activities, like e-commerce, e-learning, games, dating or social networks, are fields in which classifications of facial features are useful, for example, to create digital interlocutors that optimize the interactions between human and machines. However, classifying isolated facial features is difficult for human observers. Previous works reported low inter-observer and intra-observer agreement in the evaluation of facial features. This work presents a computer-based procedure to automatically classify facial features based on their global appearance. This procedure deals with the difficulties associated with classifying features using judgements from human observers, and facilitates the development of taxonomies of facial features. Taxonomies obtained through this procedure are presented for eyes, mouths and noses.

]]>
<![CDATA[A method for automatic forensic facial reconstruction based on dense statistics of soft tissue thickness]]> https://www.researchpad.co/article/5c521825d5eed0c484797560

In this paper, we present a method for the automated estimation of a human face from skull remains. Our method is based on three statistical models: a volumetric (tetrahedral) skull model encoding the variations of different skulls, a surface head model encoding head variations, and a dense statistic of facial soft-tissue thickness (FSTT). All data are automatically derived from computed tomography (CT) head scans and optical face scans. To obtain a proper dense FSTT statistic, we register a skull model to each skull extracted from a CT scan and determine the FSTT value for each vertex of the skull model towards the associated extracted skin surface. The FSTT values at predefined landmarks from our statistic agree well with data from the literature. To recover a face from skull remains, we first fit our skull model to the given skull. Next, we generate spheres at each vertex of the registered skull, with radii equal to the respective FSTT values obtained from our statistic. Finally, we fit a head model to the union of all spheres. The proposed automated method enables a probabilistic face estimation that facilitates forensic recovery even from incomplete skull remains. The FSTT statistic allows the generation of plausible head variants, which can be adjusted intuitively using principal component analysis. We validate our face-recovery process using an anonymized head CT scan. The estimation generated from the given skull compares well visually with the skin surface extracted from the CT scan itself.
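The role of the per-vertex FSTT values can be illustrated with a much simpler variant than the paper's sphere-union fitting: offsetting each skull vertex outward along its normal by the statistic's thickness value to approximate the skin surface. This is a sketch of the underlying idea only, with random stand-in geometry:

```python
import numpy as np

# Simplified sketch of using a dense FSTT statistic: offset each skull
# vertex outward along its (unit) normal by that vertex's soft-tissue
# thickness. The paper instead fits a head model to the union of FSTT
# spheres; this direct offset only illustrates the per-vertex FSTT idea.
rng = np.random.default_rng(2)
n_vertices = 1000

skull_vertices = rng.normal(size=(n_vertices, 3))
# For this toy geometry, use the direction from the origin as the normal.
normals = skull_vertices / np.linalg.norm(skull_vertices, axis=1, keepdims=True)
fstt = rng.uniform(2.0, 8.0, size=n_vertices)  # thickness in mm (illustrative)

skin_estimate = skull_vertices + fstt[:, None] * normals

# Each estimated skin point lies exactly its FSTT value away from the skull.
d = np.linalg.norm(skin_estimate - skull_vertices, axis=1)
print(np.allclose(d, fstt))  # True
```

The sphere-union formulation in the paper avoids the main weakness of this direct offset, namely its sensitivity to noisy vertex normals, because fitting a statistical head model to the envelope of the spheres regularizes the surface.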

]]>
<![CDATA[Using deep-learning algorithms to derive basic characteristics of social media users: The Brexit campaign as a case study]]> https://www.researchpad.co/article/5c6448e7d5eed0c484c2f17d

A recurrent criticism concerning the use of online social media data in political science research is the lack of demographic information about social media users. By applying a face-recognition algorithm to the profile pictures of Facebook users, the paper derives two fundamental demographic characteristics (age and gender) for a sample of Facebook users who interacted with the most relevant British parties in the two weeks before the Brexit referendum of 23 June 2016. The article achieves the goals of (i) testing the precision of the algorithm, (ii) testing its validity, (iii) inferring new evidence on digital mobilisation, and (iv) tracing the path for future developments and applications of the algorithm. The findings show that the algorithm is reliable and that it can be fruitfully used in the political and social sciences, both to confirm the validity of survey data and to obtain information about populations that are generally unavailable through traditional surveys.

]]>
<![CDATA[Compassionate faces: Evidence for distinctive facial expressions associated with specific prosocial motivations]]> https://www.researchpad.co/article/5c5217edd5eed0c484795a2a

Compassion is a complex cognitive, emotional and behavioural process that has important real-world consequences for the self and others. It is therefore important to understand how compassion is communicated. The current research investigated the expression and perception of compassion via the face. We generated exemplar images of two compassionate facial expressions induced by two mental-imagery tasks with different compassionate motivations (Study 1). Our kind- and empathic-compassion faces were perceived differently, and the empathic-compassion expression was perceived as best depicting the general definition of compassion (Study 2). The two composite faces differed in their perceived happiness, kindness, sadness, fear and concern, which speaks to their underlying motivations and emotional resonance. Finally, both faces were accurately discriminated when presented along a compassion continuum (Study 3). Our results demonstrate two perceptually and functionally distinct facial expressions of compassion, with potentially different consequences for the suffering of others.

]]>
<![CDATA[Fixed or flexible? Orientation preference in identity and gaze processing in humans]]> https://www.researchpad.co/article/5c64495ed5eed0c484c2faf2

Vision begins with the encoding of contrast at specific orientations. Several studies have shown that humans identify their conspecifics best based on the horizontally-oriented information contained in the face image; this range conveys the main morphological features of the face. In contrast, the vertical structure of the eye region seems to deliver optimal cues to gaze direction. The present work investigates whether the human face-processing system flexibly tunes to the vertical information contained in the eye region when processing gaze direction. Alternatively, face processing may invariantly rely on the horizontal range, supporting the domain specificity of orientation tuning for faces and the gateway role of horizontal content in accessing any type of facial information. Participants judged the gaze direction of faces staring at a range of lateral positions. They additionally performed an identification task with upright and inverted face stimuli. Across tasks, stimuli were filtered to selectively reveal horizontal (H), vertical (V), or combined (HV) information. Most participants identified faces better based on horizontal than on vertical information, confirming the horizontal tuning of face identification. In contrast, they showed a vertically-tuned sensitivity to gaze direction: the logistic functions fitting the proportions of “left” and “right” responses as a function of gaze direction were steeper when based on vertical than on horizontal information. The finding of vertically-tuned processing of gaze direction favours the hypothesis that the visual encoding of face information flexibly switches to the orientation channel carrying the cues most relevant to the task at hand. It suggests that horizontal structure, though predominant in the face stimulus, is not a mandatory gateway for efficient face processing.
This evidence may help us better understand how visual signals travel through the visual system to enable rich and complex representations of naturalistic stimuli such as faces.
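One simple way to estimate the steepness of such psychometric functions (a sketch, not the authors' exact fitting procedure) is a logit-linear fit of the "right"-response proportion against gaze direction, where a steeper slope indicates finer sensitivity. The response proportions below are hypothetical:

```python
import numpy as np

# Sketch: estimate psychometric-function steepness by fitting a line to the
# log-odds of "right" responses as a function of gaze direction. Synthetic
# proportions stand in for real data; a steeper slope = finer sensitivity.

def logistic_slope(gaze_deg, p_right):
    """Slope of logit(p_right) vs. gaze direction, via a linear fit."""
    logits = np.log(p_right / (1 - p_right))
    slope, _ = np.polyfit(gaze_deg, logits, 1)
    return slope

gaze = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])  # gaze direction in degrees

# Hypothetical response proportions: a sharper left/right transition for
# vertically filtered faces than for horizontally filtered ones.
p_vertical = np.array([0.02, 0.10, 0.50, 0.90, 0.98])
p_horizontal = np.array([0.20, 0.35, 0.50, 0.65, 0.80])

print(logistic_slope(gaze, p_vertical) > logistic_slope(gaze, p_horizontal))
# True: the vertical-information curve is steeper
```

The logit transform turns a logistic psychometric function into a straight line, so its fitted slope directly indexes the steepness the abstract compares between the V and H conditions.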

]]>