ResearchPad - face-recognition https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks <![CDATA[The effects of age and sex on cognitive impairment in schizophrenia: Findings from the Consortium on the Genetics of Schizophrenia (COGS) study]]> https://www.researchpad.co/article/elastic_article_13860 Recently emerging evidence indicates accelerated age-related changes in the structure and function of the brain in schizophrenia, raising questions about the potential consequences for cognitive function. Using a large sample of schizophrenia patients and controls and a battery of tasks across multiple cognitive domains, we examined whether patients show accelerated age-related decline in cognition and whether age-related effects differ between females and males. We utilized data from 1,415 schizophrenia patients and 1,062 healthy community participants collected during the second phase of the Consortium on the Genetics of Schizophrenia (COGS-2). The battery of cognitive tasks included the Letter-Number Span Task, two forms of the Continuous Performance Test, the California Verbal Learning Test, Second Edition, the Penn Emotion Identification Test, and the Penn Facial Memory Test. The effects of age and gender on cognitive performance were examined with a general linear model. We observed age-related changes on most cognitive measures, which were similar between males and females. Compared to controls, patients showed greater deterioration in performance on attention/vigilance and greater slowing in the processing of social information with increasing age. However, controls showed greater age-related changes in working memory and verbal memory compared to patients. Age-related changes (η²p of 0.001 to 0.008) were much smaller than between-group differences (η²p of 0.005 to 0.037). This study found that patients showed continued decline of cognition in some domains but stable impairment, or even less decline, in other domains with increasing age.
These findings indicate that age-related changes in cognition in schizophrenia are subtle and not uniform across multiple cognitive domains.

]]>
<![CDATA[Emotional facial perception development in 7, 9 and 11 year-old children: The emergence of a silent eye-tracked emotional other-race effect]]> https://www.researchpad.co/article/elastic_article_7635 The present study examined emotional facial perception (happy and angry) in 7-, 9- and 11-year-old children from Caucasian and multicultural environments, using an offset task with two ethnic groups of faces (Asian and Caucasian). In this task, participants were required to respond to a dynamic facial expression video when they believed that the first emotion presented had disappeared. Moreover, using an eye-tracker, we evaluated the ocular behavior pattern used to process these different faces. The analyses of reaction times do not show an emotional other-race effect (i.e., a greater facility in discriminating own-race faces than other-race ones) in Caucasian children for Caucasian vs. Asian faces through offset times, but an effect of emotional expression appeared in the oldest children. Furthermore, an eye-tracked effect of emotion and race on ocular processing strategies is observed and evolves between ages 7 and 11. This study strengthens the case for including eye tracking in developmental and emotional processing studies, showing that even a “silent” effect can be detected and shrewdly analyzed through an objective means.

]]>
<![CDATA[Tsinghua facial expression database – A database of facial expressions in Chinese young and older women and men: Development and validation]]> https://www.researchpad.co/article/Nf679a1e8-67cb-47b3-95b4-f3d293b80761

Perception of facial identity and emotional expressions is fundamental to social interactions. Recently, interest in age-associated changes in the processing of faces has grown rapidly. Due to the lack of older face stimuli, most previous age-comparative studies used only young face stimuli, which might introduce an own-age advantage. None of the existing Eastern face stimulus databases contain face images of different age groups (e.g. older adult faces). In this study, a database comprising images of 110 Chinese young and older adults displaying eight facial emotional expressions (Neutral, Happiness, Anger, Disgust, Surprise, Fear, Content, and Sadness) was constructed. To validate this database, each image was rated on perceived facial expression, perceived emotional intensity, and perceived age by two different age groups. Results showed an overall 79.08% correct identification rate in the validation. Access to the freely available database can be requested by emailing the corresponding authors.

]]>
<![CDATA[Towards a fully automated surveillance of well-being status in laboratory mice using deep learning: Starting with facial expression analysis]]> https://www.researchpad.co/article/N201121b9-bfe0-423d-91d1-e349ea424365

Assessing the well-being of an animal is hindered by the limitations of efficient communication between humans and animals. Instead of direct communication, a variety of parameters are employed to evaluate the well-being of an animal. Especially in the field of biomedical research, scientifically sound tools to assess pain, suffering, and distress in experimental animals are in high demand for ethical and legal reasons. For mice, the most commonly used laboratory animals, a valuable tool is the Mouse Grimace Scale (MGS), a coding system for facial expressions of pain in mice. We aim to develop a fully automated system for the surveillance of post-surgical and post-anesthetic effects in mice. Our work introduces a semi-automated pipeline as a first step towards this goal. A new data set of images of freely moving black-furred laboratory mice is used and provided. Images were obtained after anesthesia (with isoflurane or a ketamine/xylazine combination) and surgery (castration). We deploy two pre-trained state-of-the-art deep convolutional neural network (CNN) architectures (ResNet50 and InceptionV3) and compare them to a third CNN architecture without pre-training. Depending on the particular treatment, we achieve an accuracy of up to 99% for the recognition of the absence or presence of post-surgical and/or post-anesthetic effects on the facial expression.
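The transfer-learning idea behind such a pipeline — a frozen pre-trained feature extractor with a small trainable classification head — can be sketched as follows. This is a minimal numpy stand-in on synthetic data, not the authors' ResNet50/InceptionV3 code; the fixed random projection merely plays the role of the frozen convolutional base.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pre-trained feature extractor (the role played by
# ResNet50's or InceptionV3's convolutional base): a fixed, never-updated
# projection from flattened "pixels" to a feature vector.
W_frozen = rng.normal(size=(256, 32))

def extract_features(images):
    return np.tanh(images @ W_frozen)

# Hypothetical data: 200 flattened images; label 1 = effect present.
X = rng.normal(size=(200, 256))
y = (extract_features(X) @ rng.normal(size=32) > 0).astype(float)

# Trainable head: logistic regression on the frozen features, which is the
# only part of this transfer-learning sketch that is actually fit to data.
feats = extract_features(X)
w, b = np.zeros(32), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))   # predicted probabilities
    w -= 0.5 * feats.T @ (p - y) / len(y)        # gradient step on weights
    b -= 0.5 * np.mean(p - y)                    # gradient step on bias

accuracy = np.mean(((feats @ w + b) > 0) == y)
```

In a real pipeline the head would be trained on held-out validation splits; here only the frozen-base/trainable-head division is illustrated.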

]]>
<![CDATA[Cyborg groups enhance face recognition in crowded environments]]> https://www.researchpad.co/article/5c89773bd5eed0c4847d2790

Recognizing a person in a crowded environment is a challenging, yet critical, visual-search task for both humans and machine-vision algorithms. This paper explores the possibility of combining a residual neural network (ResNet), brain-computer interfaces (BCIs) and human participants to create “cyborgs” that improve decision making. Human participants and a ResNet undertook the same face-recognition experiment. BCIs were used to decode the decision confidence of humans from their EEG signals. Different types of cyborg groups were created, including either only humans (with or without the BCI) or groups of humans and the ResNet. Cyborg group decisions were obtained by weighting individual decisions by their confidence estimates. Results show that groups of cyborgs are significantly more accurate (up to 35%) than the ResNet, the average participant, and equally-sized groups of humans not assisted by technology. These results suggest that melding humans, BCI, and machine-vision technology could significantly improve decision-making in realistic scenarios.
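The confidence-weighted group rule described above can be sketched in a few lines. The ±1 decision encoding and the confidence values are illustrative assumptions, not taken from the study:

```python
# Each group member (human or ResNet) contributes a binary decision
# (+1 = "target present", -1 = "absent") and a confidence in [0, 1]
# (for humans, the study decodes confidence from EEG via a BCI).
def group_decision(decisions, confidences):
    """Confidence-weighted vote: sum each decision times its confidence."""
    score = sum(d * c for d, c in zip(decisions, confidences))
    return 1 if score > 0 else -1

# A single highly confident member can overrule a low-confidence majority.
vote = group_decision([1, 1, -1], [0.2, 0.1, 0.9])
```

Here the weighted score is 0.2 + 0.1 − 0.9 < 0, so the group answers −1 even though two of three members voted +1, which is exactly how confidence weighting differs from a plain majority vote.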

]]>
<![CDATA[Do professional facial image comparison training courses work?]]> https://www.researchpad.co/article/5c6dca04d5eed0c48452a6c3

Facial image comparison practitioners compare images of unfamiliar faces and decide whether or not they show the same person. Given the importance of these decisions for national security and criminal investigations, practitioners attend training courses to improve their face identification ability. However, these courses have not been empirically validated so it is unknown if they improve accuracy. Here, we review the content of eleven professional training courses offered to staff at national security, police, intelligence, passport issuance, immigration and border control agencies around the world. All reviewed courses include basic training in facial anatomy and prescribe facial feature (or ‘morphological’) comparison. Next, we evaluate the effectiveness of four representative courses by comparing face identification accuracy before and after training in novices (n = 152) and practitioners (n = 236). We find very strong evidence that short (1-hour and half-day) professional training courses do not improve identification accuracy, despite 93% of trainees believing their performance had improved. We find some evidence of improvement in a 3-day training course designed to introduce trainees to the unique feature-by-feature comparison strategy used by facial examiners in forensic settings. However, observed improvements are small, inconsistent across tests, and training did not produce the qualitative changes associated with examiners’ expertise. Future research should test the benefits of longer examination-focussed training courses and incorporate longitudinal approaches to track improvements caused by mentoring and deliberate practice. In the absence of evidence that training is effective, we advise agencies to explore alternative evidence-based strategies for improving the accuracy of face identification decisions.

]]>
<![CDATA[Abstract social categories facilitate access to socially skewed words]]> https://www.researchpad.co/article/5c61e920d5eed0c48496f83a

Recent work has shown that listeners process words faster if said by a member of the group that typically uses the word. This paper further explores how the social distributions of words affect lexical access by exploring whether access is facilitated by invoking more abstract social categories. We conduct four experiments, all of which combine an Implicit Association Task with a Lexical Decision Task. Participants sorted real and nonsense words while at the same time sorting older and younger faces (exp. 1), male and female faces (exp. 2), stereotypically male and female objects (exp. 3), and framed and unframed objects, which were always stereotypically male or female (exp. 4). Across the experiments, lexical decision to socially skewed words is facilitated when the socially congruent category is sorted with the same hand. This suggests that the lexicon contains social detail from which individuals make social abstractions that can influence lexical access.

]]>
<![CDATA[Using computer-vision and machine learning to automate facial coding of positive and negative affect intensity]]> https://www.researchpad.co/article/5c633970d5eed0c484ae6711

Facial expressions are fundamental to interpersonal communication, including social interaction, and allow people of different ages, cultures, and languages to quickly and reliably convey emotional information. Historically, facial expression research has followed from discrete emotion theories, which posit a limited number of distinct affective states that are represented with specific patterns of facial action. Much less work has focused on dimensional features of emotion, particularly positive and negative affect intensity. This is likely, in part, because achieving inter-rater reliability for facial action and affect intensity ratings is painstaking and labor-intensive. We use computer-vision and machine learning (CVML) to identify patterns of facial actions in 4,648 video recordings of 125 human participants, which show strong correspondences to positive and negative affect intensity ratings obtained from highly trained coders. Our results show that CVML can both (1) determine the importance of different facial actions that human coders use to derive positive and negative affective ratings when combined with interpretable machine learning methods, and (2) efficiently automate positive and negative affect intensity coding on large facial expression databases. Further, we show that CVML can be applied to individual human judges to infer which facial actions they use to generate perceptual emotion ratings from facial expressions.
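As a rough illustration of how an interpretable model can recover which facial actions drive intensity ratings, here is a least-squares sketch on synthetic action-unit data. The action-unit count, intensity ranges, and weights are hypothetical; the paper's CVML pipeline is far richer than this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: per-frame action-unit (AU) intensities and the
# affect-intensity rating a trained coder assigned to each frame.
aus = rng.uniform(0.0, 5.0, size=(300, 6))          # 300 frames, 6 AUs
true_w = np.array([0.8, 1.2, 0.0, 0.0, -0.5, 0.0])  # only some AUs matter
ratings = aus @ true_w + rng.normal(0.0, 0.1, 300)  # noisy coder ratings

# Least-squares fit: the recovered weights indicate which facial actions
# the coders' ratings actually track (the "importance" idea, simplified).
w_hat, *_ = np.linalg.lstsq(aus, ratings, rcond=None)
```

The fitted weights flag the second AU as the strongest positive contributor and the fifth as a negative one, mirroring how the paper uses interpretable models to rank facial actions by their contribution to affect ratings.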

]]>
<![CDATA[Automatic classification of human facial features based on their appearance]]> https://www.researchpad.co/article/5c59ff05d5eed0c484135990

Classification or typology systems used to categorize different human body parts have existed for many years. Nevertheless, there are very few taxonomies of facial features. Ergonomics, forensic anthropology, crime prevention or new human-machine interaction systems and online activities, like e-commerce, e-learning, games, dating or social networks, are fields in which classifications of facial features are useful, for example, to create digital interlocutors that optimize the interactions between humans and machines. However, classifying isolated facial features is difficult for human observers. Previous works reported low inter-observer and intra-observer agreement in the evaluation of facial features. This work presents a computer-based procedure to automatically classify facial features based on their global appearance. This procedure deals with the difficulties associated with classifying features using judgements from human observers, and facilitates the development of taxonomies of facial features. Taxonomies obtained through this procedure are presented for eyes, mouths and noses.

]]>
<![CDATA[Using deep-learning algorithms to derive basic characteristics of social media users: The Brexit campaign as a case study]]> https://www.researchpad.co/article/5c6448e7d5eed0c484c2f17d

A recurrent criticism concerning the use of online social media data in political science research is the lack of demographic information about social media users. By applying a face-recognition algorithm to the profile pictures of Facebook users, the paper derives two fundamental demographic characteristics (age and gender) of a sample of Facebook users who interacted with the most relevant British parties in the two weeks before the Brexit referendum of 23 June 2016. The article achieves the goals of (i) testing the precision of the algorithm, (ii) testing its validity, (iii) inferring new evidence on digital mobilisation, and (iv) tracing the path for future developments and application of the algorithm. The findings show that the algorithm is reliable and that it can be fruitfully used in political and social sciences both to confirm the validity of survey data and to obtain information from populations that are generally unavailable within traditional surveys.

]]>
<![CDATA[Compassionate faces: Evidence for distinctive facial expressions associated with specific prosocial motivations]]> https://www.researchpad.co/article/5c5217edd5eed0c484795a2a

Compassion is a complex cognitive, emotional and behavioural process that has important real-world consequences for the self and others. Considering this, it is important to understand how compassion is communicated. The current research investigated the expression and perception of compassion via the face. We generated exemplar images of two compassionate facial expressions induced from two mental imagery tasks with different compassionate motivations (Study 1). Our kind- and empathic-compassion faces were perceived differently, and the empathic-compassion expression was perceived as best depicting the general definition of compassion (Study 2). Our two composite faces differed in their perceived happiness, kindness, sadness, fear and concern, which speak to their underlying motivation and emotional resonance. Finally, both faces were accurately discriminated when presented along a compassion continuum (Study 3). Our results demonstrate two perceptually and functionally distinct facial expressions of compassion, with potentially different consequences for the suffering of others.

]]>
<![CDATA[Fixed or flexible? Orientation preference in identity and gaze processing in humans]]> https://www.researchpad.co/article/5c64495ed5eed0c484c2faf2

Vision begins with the encoding of contrast at specific orientations. Several works showed that humans identify their conspecifics best based on the horizontally-oriented information contained in the face image; this range conveys the main morphological features of the face. In contrast, the vertical structure of the eye region seems to deliver optimal cues to gaze direction. The present work investigates whether the human face processing system flexibly tunes to vertical information contained in the eye region when processing gaze direction. Alternatively, face processing may invariantly rely on the horizontal range, supporting the domain specificity of orientation tuning for faces and the gateway role of horizontal content to access any type of facial information. Participants judged the gaze direction of faces staring at a range of lateral positions. They additionally performed an identification task with upright and inverted face stimuli. Across tasks, stimuli were filtered to selectively reveal horizontal (H), vertical (V), or combined (HV) information. Most participants identified faces better based on horizontal than vertical information confirming the horizontal tuning of face identification. In contrast, they showed a vertically-tuned sensitivity to gaze direction. The logistic functions fitting the “left” and “right” response proportion as a function of gaze direction were indeed steeper when based on vertical than on horizontal information. The finding of a vertically-tuned processing of gaze direction favours the hypothesis that visual encoding of face information flexibly switches to the orientation channel carrying the cues most relevant to the task at hand. It suggests that horizontal structure, though predominant in the face stimulus, is not a mandatory gateway for efficient face processing. 
The present evidence may help better understand how visual signals travel the visual system to enable rich and complex representations of naturalistic stimuli such as faces.
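Orientation filtering of the kind used to isolate horizontal (H) and vertical (V) face information is typically done in the Fourier domain. A minimal sketch follows; the bandwidth and filter shape are assumptions for illustration, not the authors' exact stimuli-generation parameters:

```python
import numpy as np

def orientation_filter(img, keep="horizontal", bandwidth_deg=20.0):
    """Keep only Fourier components within bandwidth_deg of one orientation.

    Horizontal image structure (e.g. brows, mouth) varies along y, so its
    energy lies along the vertical frequency axis; 'horizontal' therefore
    keeps components whose frequency-space angle is near 90 degrees.
    """
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    FY, FX = np.meshgrid(np.arange(h) - h // 2,
                         np.arange(w) - w // 2, indexing="ij")
    angle = np.degrees(np.arctan2(FY, FX)) % 180.0
    target = 90.0 if keep == "horizontal" else 0.0
    dist = np.minimum(np.abs(angle - target), 180.0 - np.abs(angle - target))
    mask = dist <= bandwidth_deg
    mask[h // 2, w // 2] = True          # always keep the mean luminance
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# A horizontal grating survives the horizontal filter, not the vertical one.
grating = np.tile(np.sin(2 * np.pi * 4 * np.arange(64) / 64)[:, None], (1, 64))
kept = orientation_filter(grating, "horizontal")
removed = orientation_filter(grating, "vertical")
```

Applying both filters to the same face image yields the H and V stimulus versions; combining the two masks gives the HV condition.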

]]>
<![CDATA[Illusory face detection in pure noise images: The role of interindividual variability in fMRI activation patterns]]> https://www.researchpad.co/article/5c466591d5eed0c484519d0d

Illusory face detection tasks can be used to study the neural correlates of top-down influences on face perception. In a typical functional magnetic resonance imaging (fMRI) study design, subjects are presented with pure noise images, but are told that half of the stimuli contain a face. The illusory face perception network is assessed by comparing blood oxygenation level dependent (BOLD) responses to images in which a face has been detected against BOLD activity related to images in which no face has been detected. In the present study, we highlight the existence of strong interindividual differences of BOLD activation patterns associated with illusory face perception. In the core system of face perception, 4 of 9 subjects had highly significant (p<0.05, corrected for multiple comparisons) activity in the bilateral occipital face area (OFA) and fusiform face area (FFA). In contrast, 5 of 9 subjects did not show any activity in these regions, even at statistical thresholds as liberal as p = 0.05, uncorrected. At the group level, this variability is reflected by non-significant activity in all regions of the core system. We argue that these differences might be related to individual differences in task execution: only some participants really detected faces in the noise images, while the other subjects simply responded in the desired way. This has several implications for future studies on illusory face detection. First, future studies should not only analyze results at the group level, but also for single subjects. Second, subjects should be explicitly queried after the fMRI experiment about whether they really detected faces or not. Third, if possible, not only the overt response of the subject, but also additional parameters that might indicate the perception of a noise stimulus as face should be collected (e.g., behavioral classification images).

]]>
<![CDATA[A data-driven study of Chinese participants' social judgments of Chinese faces]]> https://www.researchpad.co/article/5c390b95d5eed0c48491d5c3

Social judgments of faces made by Western participants are thought to be underpinned by two dimensions: valence and dominance. Because some research suggests that Western and Eastern participants process faces differently, the two-dimensional model of face evaluation may not necessarily apply to judgments of faces by Eastern participants. Here we used a data-driven approach to investigate the components underlying social judgments of Chinese faces by Chinese participants. Analyses showed that social judgments of Chinese faces by Chinese participants are partly underpinned by a general approachability dimension similar to the valence dimension previously found to underpin Western participants’ evaluations of White faces. However, we found that a general capability dimension, rather than a dominance dimension, contributed to Chinese participants’ evaluations of Chinese faces. Thus, our findings present evidence for both cultural similarities and cultural differences in social evaluations of faces. Importantly, the dimension that explained most of the variance in Chinese participants’ social judgments of faces was strikingly similar to the valence dimension previously reported for Western participants.
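Data-driven dimensions of this kind are commonly extracted by principal component analysis of the face-by-trait rating matrix. Here is a minimal sketch on synthetic ratings; the two latent dimensions are hypothetical stand-ins, and this is a generic PCA illustration rather than the authors' exact analysis:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ratings: 100 faces rated on 8 traits, generated from two
# latent dimensions (stand-ins for e.g. approachability and capability).
latent = rng.normal(size=(100, 2))
loadings = rng.normal(size=(2, 8))
ratings = latent @ loadings + rng.normal(0.0, 0.1, size=(100, 8))

# PCA via SVD of the mean-centred matrix: the leading right singular
# vectors (rows of Vt) are the data-driven dimensions underpinning the
# judgments, and S**2 gives the variance each dimension explains.
X = ratings - ratings.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = S**2 / np.sum(S**2)
```

With two latent dimensions generating the data, the first two components absorb nearly all the variance, which is the pattern such studies look for when arguing that a small number of dimensions underpin social judgments.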

]]>
<![CDATA[The relationship between behavioral language laterality, face laterality and language performance in left-handers]]> https://www.researchpad.co/article/5c26976bd5eed0c48470f7a6

Left-handers provide unique information about the relationship between cognitive functions because of their larger variability in hemispheric dominance. This study presents the laterality distribution of, correlations between, and test-retest reliability of behavioral lateralized language tasks (speech production, reading and speech perception), face recognition tasks, handedness measures and language performance tests based on data from 98 left-handers. The results show that a behavioral test battery leads to percentages of (a)typical dominance that are similar to those found in neuropsychological studies, even though the incidence of clear atypical lateralization (about 20%) may be overestimated at the group level. Significant correlations were found between the language tasks for both reaction time and accuracy lateralization indices. The degree of language laterality could, however, not be linked to face laterality, handedness or language performance. Finally, individuals were classified less consistently than expected as being typical, bilateral or atypical across all tasks. This may be due to the often good (speech production and perception tasks) but sometimes weak (reading and face tasks) test-retest reliabilities. The lack of highly reliable and valid test protocols for functions unrelated to speech remains one of the largest impediments for individual analysis and cross-task investigations in laterality research.
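Behavioral lateralization indices of the sort analyzed here are conventionally computed as a normalized left-right difference. A minimal sketch follows; the sign convention is an assumption (conventions vary across labs), and the abstract does not specify the exact formula used in this study:

```python
def laterality_index(left_score, right_score):
    """Normalized laterality index in percent.

    Under the (assumed) sign convention used here, positive values indicate
    a right-side advantage (e.g. right visual field, hence typically
    left-hemisphere dominance for language). Scores must be non-negative
    and not both zero.
    """
    return 100.0 * (right_score - left_score) / (right_score + left_score)

# e.g. 40 correct left-side trials vs. 60 correct right-side trials
li = laterality_index(40, 60)
```

The same formula can be applied to per-side reaction times or accuracies, which is how both "reaction time and accuracy lateralization indices" can be derived from one task.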

]]>
<![CDATA[Nonnegative/Binary matrix factorization with a D-Wave quantum annealer]]> https://www.researchpad.co/article/5c1813bfd5eed0c484775b8d

D-Wave quantum annealers represent a novel computational architecture and have attracted significant interest. Much of this interest has focused on the quantum behavior of D-Wave machines, and there have been few practical algorithms that use the D-Wave. Machine learning has been identified as an area where quantum annealing may be useful. Here, we show that the D-Wave 2X can be effectively used as part of an unsupervised machine learning method. This method takes a matrix as input and produces two low-rank matrices as output—one containing latent features in the data and another matrix describing how the features can be combined to approximately reproduce the input matrix. Despite the limited number of bits in the D-Wave hardware, this method is capable of handling a large input matrix. The D-Wave only limits the rank of the two output matrices. We apply this method to learn the features from a set of facial images and compare the performance of the D-Wave to two classical tools. This method is able to learn facial features and accurately reproduce the set of facial images. The performance of the D-Wave shows some promise, but has some limitations. It outperforms the two classical codes in a benchmark when only a short amount of computational time is allowed (200-20,000 microseconds), but these results suggest heuristics that would likely outperform the D-Wave in this benchmark.
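The alternating scheme behind nonnegative/binary matrix factorization can be sketched classically. In this hedged sketch the binary subproblem the paper maps onto the D-Wave is simply brute-forced column by column, which is only feasible for small ranks; the data and update details are illustrative, not the paper's implementation:

```python
import numpy as np

def nbmf(V, rank, n_iter=30, seed=0):
    """Factor V ~= W @ B with W real-valued nonnegative and B binary (0/1).

    Classical alternating heuristic: the B-step (the part the paper hands
    to the quantum annealer) is solved by exhaustive search over all
    2**rank binary columns, so this only scales to small ranks.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    B = rng.integers(0, 2, size=(rank, n))
    # Every possible binary column of length `rank`.
    codes = np.array([[(k >> r) & 1 for r in range(rank)]
                      for k in range(2 ** rank)], dtype=float)
    for _ in range(n_iter):
        # W-step: least squares for W given B, clipped to stay nonnegative.
        W = np.maximum(V @ np.linalg.pinv(B.astype(float)), 0.0)
        # B-step: pick the best binary column independently for each j.
        recon = W @ codes.T                      # m x 2**rank candidates
        for j in range(n):
            errs = np.sum((recon - V[:, [j]]) ** 2, axis=0)
            B[:, j] = codes[np.argmin(errs)]
    return W, B

# Synthetic check: an exactly factorable matrix should be recovered well.
rng = np.random.default_rng(1)
V = rng.uniform(0, 1, size=(20, 3)) @ rng.integers(0, 2, size=(3, 15))
W, B = nbmf(V, rank=3)
rel_err = np.linalg.norm(V - W @ B) / np.linalg.norm(V)
```

In the facial-image application, the columns of W are the learned facial features and the binary columns of B say which features combine to approximate each image.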

]]>
<![CDATA[Showup identification decisions for multiple perpetrator crimes: Testing for sequential dependencies]]> https://www.researchpad.co/article/5c12cefad5eed0c484913c61

Research in perception and recognition demonstrates that a current decision (i) can be influenced by previous ones (i–j), meaning that subsequent responses are not always independent. Experiments 1 and 2 tested whether initial showup identification decisions impact choosing behavior for subsequent showup identification responses. Participants watched a mock crime film involving three perpetrators and later made three showup identification decisions, one showup for each perpetrator. Across both experiments, evidence for sequential dependencies in choosing behavior was not consistently predictable. In Experiment 1, responses on the third, target-present showup assimilated towards previous choosing. In Experiment 2, responses on the second showup contrasted with previous choosing regardless of target-presence. Experiment 3 examined whether differences in the number of test trials in the eyewitness (vs. basic recognition) paradigm could account for the failure of the hypothesized patterns of sequential dependencies to emerge in Experiments 1 and 2. Sequential dependencies were detected in recognition decisions over many trials, including recognition for faces: the probability of a yes response on the current trial increased if the previous response was also yes (vs. no). However, choosing behavior on previous trials did not predict individual recognition decisions on the current trial. Thus, while sequential dependencies did arise to some extent, results suggest that the integrity of identification and recognition decisions is not likely to be impacted by making multiple decisions in a row.
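The sequential-dependency measure described here — whether a yes response is more likely after a previous yes than after a previous no — can be computed from a response sequence as follows. The 1/0 encoding and the example sequence are illustrative assumptions:

```python
def sequential_dependency(responses):
    """P(yes | previous yes) - P(yes | previous no) for a 1/0 (yes/no)
    response sequence. Positive values indicate assimilation to the
    previous response; negative values indicate a contrast effect.
    """
    after_yes = [r for prev, r in zip(responses, responses[1:]) if prev == 1]
    after_no = [r for prev, r in zip(responses, responses[1:]) if prev == 0]
    p_after_yes = sum(after_yes) / len(after_yes) if after_yes else 0.0
    p_after_no = sum(after_no) / len(after_no) if after_no else 0.0
    return p_after_yes - p_after_no

# Illustrative sequence with runs of repeated responses (assimilation).
effect = sequential_dependency([1, 1, 1, 0, 0, 0, 1, 1, 0, 0])
```

For this sequence the conditional yes-rates are 0.6 after a yes and 0.25 after a no, giving a positive dependency of 0.35 — the kind of signature the recognition analyses in Experiment 3 detected over many trials.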

]]>
<![CDATA[The eyes know it: Toddlers' visual scanning of sad faces is predicted by their theory of mind skills]]> https://www.researchpad.co/article/5c12cfa3d5eed0c484914af9

The current research explored toddlers’ gaze fixation during a scene showing a person expressing sadness after a ball is stolen from her. The relation between the duration of gaze fixation on different parts of the person’s sad face (e.g., eyes, mouth) and theory of mind skills was examined. Eye tracking data indicated that before the actor experienced the negative event, toddlers divided their fixation equally between the actor’s happy face and other distracting objects, but looked longer at the face after the ball was stolen and she expressed sadness. The strongest predictor of increased focus on the sad face versus other elements of the scene was toddlers’ ability to predict others’ emotional reactions when outcomes fulfilled (happiness) or failed to fulfill (sadness) desires, whereas toddlers’ visual perspective-taking skills predicted their more specific focusing on the actor’s eyes and, for boys only, mouth. Furthermore, gender differences emerged in toddlers’ fixation on parts of the scene. Taken together, these findings suggest that top-down processes are involved in the scanning of emotional facial expressions in toddlers.

]]>
<![CDATA[Revision of Varanus marathonensis (Squamata, Varanidae) based on historical and new material: morphology, systematics, and paleobiogeography of the European monitor lizards]]> https://www.researchpad.co/article/5c117b75d5eed0c4846994b2

Monitor lizards (genus Varanus) inhabited Europe at least from the early Miocene to the Pleistocene. Their fossil record is limited to about 40 localities that have provided mostly isolated vertebrae. Due to the poor diagnostic value of these fossils, it was recently claimed that all the European species described prior to the 21st century are not taxonomically valid, and a new species, Varanus amnhophilis, was erected on the basis of fragmentary material, including cranial elements, from the late Miocene of Samos (Greece). We re-examined the type material of Varanus marathonensis Weithofer, 1888, based on material from the late Miocene of Pikermi (Greece), and concluded that it is a valid, diagnosable species. Previously unpublished Iberian material from the Aragonian (middle Miocene) of Abocador de Can Mata (Vallès-Penedès Basin, Barcelona) and the Vallesian (late Miocene) of Batallones (Madrid Basin) is clearly referable to the same species on a morphological basis, further enabling us to provide an emended diagnosis for this species. Varanus amnhophilis appears to be a junior subjective synonym of V. marathonensis. On the basis of the most complete fossil Varanus skeleton ever described, it has been possible to further resolve the internal phylogeny of this genus by cladistically analyzing 80 taxa coded for 495 morphological and 5729 molecular characters. Varanus marathonensis was a large-sized species distributed at relatively low latitudes in both southwestern and southeastern Europe from at least MN7+8 to MN12. Our cladistic analysis nests V. marathonensis into an eastern clade of Varanus instead of the African clade comprising Varanus griseus, to which it had been related in the past. At least two different Varanus lineages were present in Europe during the Neogene, represented by Varanus mokrensis (early Miocene) and V. marathonensis (middle to late Miocene), respectively.

]]>
<![CDATA[Oxytocin receptor gene variations and socio-emotional effects of MDMA: A pooled analysis of controlled studies in healthy subjects]]> https://www.researchpad.co/article/5b498f9a463d7e0897c6e016

Methylenedioxymethamphetamine (MDMA) increases oxytocin, empathy, and prosociality. Oxytocin plays a critical role in emotion processing and social behavior and has been shown to mediate the prosocial effects of MDMA in animals. Genetic variants, such as single-nucleotide polymorphisms (SNPs), of the oxytocin receptor (OXTR) may influence the emotional and social effects of MDMA in humans. The effects of common genetic variants of the OXTR (rs53576, rs1042778, and rs2254298 SNPs) on the emotional, empathogenic, and prosocial effects of MDMA were characterized in up to 132 healthy subjects in a pooled analysis of eight double-blind, placebo-controlled studies. In a subset of 53 subjects, MDMA produced significantly greater feelings of trust in rs1042778 TT genotypes compared with G allele carriers. The rs53576 and rs2254298 SNPs did not moderate the subjective effects of MDMA in up to 132 subjects. None of the SNPs moderated MDMA-induced impairments in negative facial emotion recognition or enhancements in emotional empathy in the Multifaceted Empathy Test in 69 subjects. MDMA significantly increased plasma oxytocin concentrations. MDMA and oxytocin concentrations did not differ between OXTR gene variants. The present results provide preliminary evidence that OXTR gene variations may modulate aspects of the prosocial subjective effects of MDMA in humans. However, interpretation should be cautious due to the small sample size. Additionally, OXTR SNPs did not moderate the subjective overall effect of MDMA (any drug effect) or feelings of “closeness to others”.

Trial registration: ClinicalTrials.gov: http://www.clinicaltrials.gov, No: NCT00886886, NCT00990067, NCT01136278, NCT01270672, NCT01386177, NCT01465685, NCT01771874, and NCT01951508.

]]>