ResearchPad - acoustic-signals https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks

<![CDATA[Adaptation to unstable coordination patterns in individual and joint actions]]> https://www.researchpad.co/article/elastic_article_7665

Previous research on interlimb coordination has shown that some coordination patterns are more stable than others and function as attractors in the space of possible phase relations between different rhythmic movements. The canonical coordination patterns, i.e. the two most stable phase relations, are in-phase (0 degrees) and anti-phase (180 degrees). Yet musicians are able to perform other coordination patterns, in intrapersonal as well as interpersonal coordination, with remarkable precision. This raises the question of how music experts manage to produce these unstable patterns of movement coordination. In the current study, we invited participants with at least five years of training on a musical instrument. We used an adaptation paradigm to address two factors that may facilitate producing unstable coordination patterns. First, we investigated adaptation in different coordination settings to test the hypothesis that the weaker coupling between individuals during joint performance makes it easier to achieve stability outside the canonical patterns than the stronger coupling during individual bimanual performance. Second, we investigated whether adding structure to the action effects may support achieving unstable coordination patterns, both intra- and inter-individually. The structure of action effects was strengthened by adding a melodic contour to them, a manipulation that has been shown to improve the acquisition of bimanual coordination skills. Adaptation performance was measured in terms of both asynchrony and its variability. As predicted, we found that producing unstable patterns benefited from the weaker coupling during joint performance. Surprisingly, the structure of action effects did not help with achieving unstable coordination patterns.
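
A minimal sketch of the two adaptation measures named above, assuming note onsets (in seconds) have already been matched one-to-one between the two effectors or performers; the matching procedure itself is omitted:

```python
import numpy as np

def asynchrony_stats(onsets_a, onsets_b):
    """Mean signed asynchrony between matched onsets, and its variability (SD)."""
    asynchronies = np.asarray(onsets_a) - np.asarray(onsets_b)
    return asynchronies.mean(), asynchronies.std(ddof=1)

mean_async, sd_async = asynchrony_stats([0.00, 0.51, 1.02], [0.02, 0.50, 1.05])
```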

]]>
<![CDATA[Communication is key: Mother-offspring signaling can affect behavioral responses and offspring survival in feral horses (Equus caballus)]]> https://www.researchpad.co/article/Nfc9766a8-2564-4088-9a49-707302d05531

Acoustic signaling plays an important role in mother-offspring recognition and subsequent bond formation. It remains unclear, however, whether mothers and offspring use acoustic signaling in the same ways and for the same reasons throughout the juvenile stage, particularly after mutual recognition has been adequately established. Moreover, despite its critical role in mother-offspring bond formation, research explicitly linking mother-infant communication strategies to offspring survival is lacking. We examined the communicative patterns of mothers and offspring in the feral horse (Equus caballus) to better understand 1) the nature of mother-offspring communication throughout the first year of development; 2) the function(s) of mother- vs. offspring-initiated communication; and 3) the importance of mare and foal communication to offspring survival. We found that 1) mares and foals differ in when and how they initiate communication; 2) the outcomes of mare- vs. foal-initiated communication events consistently differ; and 3) the communicative patterns between mares and their foals can be important for offspring survival to one year of age. Moreover, given the importance of maternal activity to offspring behavior and subsequent survival, we submit that our data are uniquely positioned to address the long-debated question: do the behaviors exhibited during the juvenile stage (by both mothers and their young) confer delayed or immediate benefits to offspring? In summary, we aimed to better understand 1) the dynamics of mother-offspring communication, 2) whether mother-offspring communicative patterns are important to offspring survival, and 3) the implications of our research regarding the function of the mammalian juvenile stage. Our results address each of these aims.

]]>
<![CDATA[New physiological bench test reproducing nocturnal breathing pattern of patients with sleep disordered breathing]]> https://www.researchpad.co/article/N13bd4ad3-60c6-4376-997d-f10f1c975c0e

Previous studies have shown that Automatic Positive Airway Pressure (APAP) devices display different behaviors when connected to a bench using theoretical respiratory cycle scripts. However, these scripts are limited and do not simulate physiological behavior during the night. Our aim was to develop a physiological bench able to simulate patient breathing airflow by integrating polygraph data. We developed an algorithm that analyzes polygraph data and transforms this information into the digital inputs required by the bench hardware to reproduce a patient's breathing profile on the bench. These inputs are, respectively, the simulated respiratory muscular effort pressure for an artificial lung and the sealed-chamber pressure regulating the Starling resistor. We ran simulations on our bench for a total of 8 hours and 59 minutes using a breathing profile from the demonstration recording of a Nox T3 Sleep Monitor. The simulation performance results showed that, in terms of the relative peak-to-valley amplitude of each breathing cycle, simulated bench airflow was biased by only 1.48% ± 6.80% compared to estimated polygraph nasal airflow over a total of 6,479 breathing cycles. For total respiratory cycle time, the average bias ± one standard deviation was 0.000 ± 0.288 seconds. For patient apnea events, our bench simulation had a sensitivity of 84.7% and a positive predictive value of 90.3%, based on 149 apneas detected in both the polygraph nasal airflow and the simulated bench airflow. Our new physiological bench would allow APAP device selection to be personalized to each patient by taking into account the individual characteristics of a sleep breathing profile.
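
The two detection metrics reported above follow from standard event counts; a small sketch, with the false-negative and false-positive counts back-solved from the reported percentages rather than taken from the paper:

```python
def sensitivity(tp, fn):
    return tp / (tp + fn)

def positive_predictive_value(tp, fp):
    return tp / (tp + fp)

tp = 149                      # apneas detected in both polygraph and bench airflow
fn = round(tp / 0.847) - tp   # back-solved from the reported 84.7% sensitivity
fp = round(tp / 0.903) - tp   # back-solved from the reported 90.3% PPV
print(sensitivity(tp, fn), positive_predictive_value(tp, fp))
```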

]]>
<![CDATA[An efficient resource utilization scheme within PMIPv6 protocol for urban vehicular networks]]> https://www.researchpad.co/article/5c8acc80d5eed0c48498f8b1

Recently, mobility management in urban vehicular networks has become a great challenge for researchers due to the unique mobility requirements imposed by mobile users accessing different services in a random fashion. To provide ubiquitous Internet access and seamless connectivity, the Internet Engineering Task Force (IETF) has proposed the Proxy Mobile IPv6 (PMIPv6) protocol. This protocol handles mobility signaling transparently to the Mobile Node (MN) and guarantees session continuity while the MN is in motion. However, handoffs performed by tens of thousands of MNs may harm system performance significantly due to the high signaling overhead and the insufficient utilization of the so-called Binding Cache Entry (BCE) at the Local Mobility Anchor (LMA). To address these issues, we propose an efficient scheme within the PMIPv6 protocol, named the AE-PMIPv6 scheme, to effectively utilize the BCEs at the LMA. This is primarily achieved by merging the BCEs of the MNs, thus reducing the signaling overhead. Better utilization of the BCEs is attained by employing virtual addresses and an address pool mechanism to bind the information of MNs that are moving together towards the same network at a specific time during their handoff process. Results obtained from our simulation demonstrate the superiority of the AE-PMIPv6 scheme over the E-PMIPv6 scheme: AE-PMIPv6 minimizes the signaling overhead, reduces the handover time, and at the same time efficiently utilizes the buffer resources.
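
A conceptual sketch of the BCE-merging idea, assuming a simple in-memory cache; the names, types, and pool mechanism here are illustrative, not the protocol's actual message formats or data layout:

```python
from dataclasses import dataclass, field

@dataclass
class MergedBCE:
    virtual_address: str                       # drawn from the LMA's address pool
    target_network: str                        # common handoff destination
    mobile_nodes: set = field(default_factory=set)

def merge_bindings(cache, mn_id, target_network, address_pool):
    """Attach an MN to an existing merged BCE, or open a new one."""
    entry = cache.get(target_network)
    if entry is None:
        entry = MergedBCE(address_pool.pop(), target_network)
        cache[target_network] = entry
    entry.mobile_nodes.add(mn_id)              # one entry now covers the whole group
    return entry
```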

]]>
<![CDATA[Deer browsing alters sound propagation in temperate deciduous forests]]> https://www.researchpad.co/article/5c6dc993d5eed0c484529e60

The efficacy of animal signals is strongly influenced by the structure of the habitat in which they are propagating. In recent years, the habitat structure of temperate forests has been increasingly subject to modifications from foraging by white-tailed deer (Odocoileus virginianus). Increasing deer numbers and the accompanying browsing have been shown to alter vegetation structure and thus the foraging, roosting, and breeding habitats of many species. However, despite a large body of literature on the effects of vegetation structure on sound propagation, we do not yet know what impact deer browsing may have on acoustic communication. Here we used playback experiments to determine whether the sound fidelity and amplitude of white noise, pure tones, and trills differed between deer-browsed and deer-excluded plots. We found that sound fidelity, but not amplitude, differed between habitats, with deer-browsed habitats having greater sound fidelity than deer-excluded habitats. Differences in sound propagation characteristics between the two habitats could alter the efficacy of acoustic communication through plasticity, cultural evolution, or local adaptation, in turn influencing vocally mediated behaviors (e.g. agonistic, parent-offspring, mate selection). Reduced signal degradation suggests vocalizations may retain more information, improving the transfer of information to both intended and unintended receivers. Overall, our results suggest that deer browsing impacts sound propagation in temperate deciduous forests, although much work remains to be done on the potential impacts on communication.
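
One common way to operationalize sound fidelity, offered here as an assumption rather than the authors' exact metric, is the peak normalized cross-correlation between the broadcast signal and its re-recorded, propagated copy:

```python
import numpy as np

def fidelity(broadcast, rerecorded):
    b = np.asarray(broadcast, dtype=float)
    r = np.asarray(rerecorded, dtype=float)
    b = (b - b.mean()) / b.std()
    r = (r - r.mean()) / r.std()
    # Peak of the normalized cross-correlation: 1.0 = perfect fidelity.
    return np.correlate(r, b, mode="full").max() / len(b)
```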

]]>
<![CDATA[Implicit measurement of emotional experience and its dynamics]]> https://www.researchpad.co/article/5c63394ad5eed0c484ae6422

Although many studies have revealed that emotions and their dynamics have a profound impact on cognition and behavior, it has proven difficult to measure emotions unobtrusively. In the current study, our objective was to distinguish the different experiences elicited by audiovisual stimuli designed to evoke happiness, sadness, fear, and disgust, using electroencephalography (EEG) and a multivariate approach. We show that we were able to classify these emotional experiences well above chance level. Importantly, we retained all the information (frequency and topography) present in the data. This allowed us to interpret the differences between emotional experiences in terms of component psychological processes, such as attention and arousal, that are known to be associated with the observed activation patterns. In addition, we illustrate how this method of classifying emotional experiences can be applied on a moment-by-moment basis to track dynamic changes in the emotional response over time. Our approach may be of value in many contexts in which the experience of a given stimulus or situation changes over time, ranging from clinical to consumption settings.
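
A hedged sketch of the multivariate approach: train a classifier on trial-level EEG features (channels x frequency bands) for the four emotion categories, then score held-out data. The feature extraction and classifier choice here are assumptions, not the authors' exact pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64 * 5))  # 200 epochs, 64 channels x 5 bands (dummy)
y = rng.integers(0, 4, 200)             # happy / sad / fear / disgust labels

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())  # chance level here would be ~0.25
```

Applying the trained model to successive short windows of the recording gives the moment-by-moment tracking described above.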

]]>
<![CDATA[A Gestalt inference model for auditory scene segregation]]> https://www.researchpad.co/article/5c50c46dd5eed0c4845e874d

Our current understanding of how the brain segregates auditory scenes into meaningful objects is in line with a Gestalt framework. Gestalt principles suggest a theory of how different attributes of the soundscape are extracted and then bound together into separate groups that reflect different objects or streams present in the scene. These cues are thought to reflect the underlying statistical structure of natural sounds, much as the statistics of natural images are closely linked to the principles that guide figure-ground segregation and object segmentation in vision. In the present study, we leverage inference in stochastic neural networks to learn emergent grouping cues directly from natural soundscapes, including speech, music, and sounds in nature. The model learns a hierarchy of local and global spectro-temporal attributes reminiscent of the simultaneous and sequential Gestalt cues that underlie the organization of auditory scenes. These mappings operate at multiple time scales to analyze an incoming complex scene and are then fused using a Hebbian network that binds coherent features together into perceptually segregated auditory objects. The proposed architecture successfully emulates a wide range of well-established auditory scene segregation phenomena and quantifies the complementary roles of segregation and binding cues in driving auditory scene segregation.
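
A minimal illustration of the Hebbian binding step named above: units representing features that are co-active across time strengthen their pairwise coupling, so coherent features end up grouped into the same object. This is the textbook rule only, not the paper's full network:

```python
import numpy as np

def hebbian_update(W, x, lr=0.01):
    """W: feature-coupling matrix; x: feature activations at one time step."""
    W = W + lr * np.outer(x, x)   # co-active features strengthen their coupling
    np.fill_diagonal(W, 0.0)      # no self-coupling
    return W
```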

]]>
<![CDATA[Effect of F0 contour on perception of Mandarin Chinese speech against masking]]> https://www.researchpad.co/article/5c37b7a9d5eed0c48449083a

Intonation has many perceptually significant functions in language that contribute to speech recognition. This study aims to investigate whether intonation cues affect the unmasking of Mandarin Chinese speech in the presence of interfering sounds. Specifically, the intelligibility of multi-tone Mandarin Chinese sentences with maskers consisting of either two-talker speech or steady-state noise was measured under three intonation conditions (flattened, typical, and exaggerated). Unlike most previous studies, the present study manipulated and modified only the intonation information while preserving the tone information. The results showed that recognition of the final keywords in multi-tone Mandarin Chinese sentences was much better under the typical (original) F0 contour condition than under the flattened or exaggerated F0 contour conditions with either a noise or a speech masker, and that an exaggerated F0 contour reduced the intelligibility of Mandarin Chinese more under the speech masker than under the noise masker. These results suggest that speech in a tone language (Mandarin Chinese) is harder to understand when the intonation is unnatural, even if the tone information is preserved, and that an unnatural intonation contour reduces the release of Mandarin Chinese speech from masking, especially in a multi-talker environment.
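
A sketch of the contour manipulation, under the simplifying assumption that the sentence-level F0 contour can be rescaled about its mean while the syllable-level tone shapes are preserved at resynthesis (the study's actual tone-preserving procedure is more involved):

```python
import numpy as np

def rescale_f0(f0_contour, k):
    """k = 0 flattens, k = 1 preserves, k > 1 exaggerates the intonation."""
    f0 = np.asarray(f0_contour, dtype=float)
    return f0.mean() + k * (f0 - f0.mean())
```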

]]>
<![CDATA[Ambulatory assessment of phonotraumatic vocal hyperfunction using glottal airflow measures estimated from neck-surface acceleration]]> https://www.researchpad.co/article/5c254519d5eed0c48442be9a

Phonotraumatic vocal hyperfunction (PVH) is associated with chronic misuse and/or abuse of voice that can result in lesions such as vocal fold nodules. Clinical aerodynamic assessment of vocal function has recently been shown to differentiate between patients with PVH and healthy controls and to provide meaningful insight into the pathophysiological mechanisms associated with these disorders. However, current clinical assessment of PVH remains incomplete because it cannot objectively identify the type and extent of detrimental phonatory function associated with PVH during daily voice use. The current study sought to address this issue by incorporating, for the first time in a comprehensive ambulatory assessment, glottal airflow parameters estimated from a neck-mounted accelerometer and recorded to a smartphone-based voice monitor. We tested this approach on 48 patients with vocal fold nodules and 48 matched healthy-control subjects, each of whom wore the voice monitor for a week. Seven glottal airflow features were estimated every 50 ms using an impedance-based inverse filtering scheme, and seven high-order summary statistics of each feature were computed every 5 minutes over voiced segments. Based on univariate hypothesis testing, eight glottal airflow summary statistics were found to be statistically different between the patient and healthy-control groups. L1-regularized logistic regression for a supervised classification task yielded a mean (standard deviation) area under the ROC curve of 0.82 (0.25) and an accuracy of 0.83 (0.14). These results outperform the state of the art for the same classification task and provide a new avenue for improving the assessment and treatment of hyperfunctional voice disorders.
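
A sketch of the reported classification step, with the library choice and dummy data as assumptions; only the model family (L1-regularized logistic regression) and the evaluation metric (area under the ROC curve) come from the abstract:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((96, 49))  # 96 subjects, 7 features x 7 statistics (dummy)
y = np.repeat([0, 1], 48)          # 48 healthy controls, 48 patients with nodules

clf = LogisticRegression(penalty="l1", solver="liblinear")
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```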

]]>
<![CDATA[The influence of pitch feedback on learning of motor timing and sequencing: A piano study with novices]]> https://www.researchpad.co/article/5c084228d5eed0c484fcc01b

Audio-motor coordination is a fundamental requirement in the learning and execution of sequential actions such as music performance. Predictive motor control mechanisms determine the sequential content and timing of upcoming tones and thereby facilitate accurate performance. To study the role of auditory-motor predictions at early stages of acquiring piano performance skills, we conducted an experiment in which non-musicians learned to play a musical sequence on the piano in synchrony with a metronome. Errors and timing were compared across three experimental conditions. The first consisted of normal auditory feedback using conventional piano key-to-tone mappings. The second employed fixed-pitch auditory feedback, consisting of a single tone given with each key stroke. In the third condition, a tone was randomly drawn for each key stroke from the set of tones associated with the normal sequence. The results showed that when auditory feedback tones were randomly assigned, participants produced more sequencing errors (i.e., a higher percentage of incorrect key strokes) than when auditory feedback was normal or consisted of a single tone of fixed pitch. Furthermore, synchronization with the metronome was most accurate in the fixed-pitch single-tone condition. These findings suggest that predictive motor control mechanisms support sequencing and timing, and that these sensorimotor processes are dissociable even at early stages of acquiring complex motor skills such as music performance.

]]>
<![CDATA[Sex differences in vocal communication of freely interacting adult mice depend upon behavioral context]]> https://www.researchpad.co/article/5bae98eb40307c0c23a1c14e

Ultrasonic vocalizations (USVs) are believed to play a critical role in mouse communication. Although mice produce USVs in multiple contexts, signals emitted in reproductive contexts are typically attributed solely to the male mouse. Only recently has evidence emerged showing that female mice are also vocally active during mixed-sex interactions. Therefore, this study aimed to systematically quantify and compare vocalizations emitted by female and male mice as the animals freely interacted. Using an eight-channel microphone array to determine which mouse emitted specific vocalizations during unrestrained social interaction, we recorded 13 mixed-sex pairs of mice. We report here that females vocalized significantly less often than males during dyadic interactions, with females accounting for approximately one sixth of all emitted signals. Moreover, the acoustic features of female and male signals differed. We found that the bandwidths (i.e., the range of frequencies that a signal spanned) of female-emitted signals were smaller than signals produced by males. When examining how the frequency of each signal changed over time, the slopes of male-emitted signals decreased more rapidly than female signals. Further, we revealed notable differences between male and female vocal signals when the animals were performing the same behaviors. Our study provides evidence that a female mouse does in fact vocalize during interactions with a male and that the acoustic features of female and male vocalizations differ during specific behavioral contexts.
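
The two acoustic features compared above have simple definitions once a per-signal frequency contour (Hz over time) has been extracted; the extraction itself, done here from the microphone-array recordings, is omitted from this sketch:

```python
import numpy as np

def bandwidth(freq_contour):
    f = np.asarray(freq_contour, dtype=float)
    return f.max() - f.min()          # range of frequencies the signal spans (Hz)

def frequency_slope(times, freq_contour):
    return np.polyfit(times, freq_contour, 1)[0]   # overall slope in Hz per second
```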

]]>
<![CDATA[Response to Infant Cry in Clinically Depressed and Non-Depressed Mothers]]> https://www.researchpad.co/article/5989da83ab0ee8fa60b9b4d9

Background

Bowlby and Ainsworth hypothesized that maternal responsiveness is displayed in the context of infant distress. Depressed mothers are less responsive to infant distress vocalizations (cry) than non-depressed mothers. The present study focuses on acoustical components of infant cry that give rise to responsive caregiving in clinically depressed (n = 30) compared with non-depressed mothers (n = 30) in the natural setting of the home.

Methods

Analyses of infant and mother behaviors followed three paths: (1) tests of group differences in acoustic characteristics of infant cry, (2) tests of group differences of mothers’ behaviors during their infant’s crying, and (3) tree-based modeling to ascertain which variable(s) best predict maternal behaviors during infant cry.
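
A hedged sketch of the tree-based modeling step; the predictor variables and the specific tree implementation are illustrative assumptions:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.random((60, 3))     # columns: cry f0, cry duration, maternal age (dummy)
y = rng.integers(0, 2, 60)  # mother responded to the cry vs. did not (dummy)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(tree.feature_importances_)  # which variable best predicts responding
```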

Results

(1) Infants of depressed mothers cried as frequently and for as long as infants of non-depressed mothers; however, infants of depressed mothers cried with a higher fundamental frequency (f0) and in a more restricted range of f0. (2) Depressed mothers fed, rocked, and touched their crying infants less than non-depressed mothers, and depressed mothers were less responsive to their infants overall. (3) Novel tree-based analyses confirmed that depressed mothers engaged in less caregiving during their infants’ cry and indicated that depressed mothers responded only to cries at higher f0s and shorter durations. Older non-depressed mothers were the most interactive with infants.

Conclusions

Clinical depression affects maternal responsiveness during infant cry, leading to patterns of action that appear poorly attuned to infant needs.

]]>
<![CDATA[Development of a Genomic Resource and Quantitative Trait Loci Mapping of Male Calling Traits in the Lesser Wax Moth, Achroia grisella]]> https://www.researchpad.co/article/5989da5fab0ee8fa60b90b49

In the study of sexual selection among insects, the Lesser Wax Moth, Achroia grisella (Lepidoptera: Pyralidae), has been one of the more intensively studied species over the past 20 years. Studies have focused on how the male calling song functions in pair formation and on the quantitative genetics of male song characters and female preference for the song. Recent QTL studies have attempted to elucidate the genetic architecture of male song and female preference traits using AFLP markers. We continued these QTL studies using SNP markers derived from an EST library, which allowed us both to measure DNA sequence variation and to map loci with respect to the lepidopteran genome. We report that the level of sequence variation within A. grisella is typical of other Lepidoptera that have been examined, and that comparison with the Bombyx mori genome shows that macrosynteny is conserved. Our QTL map shows that a QTL for a male song trait, pulse-pair rate, is situated on the Z chromosome, as predicted for sexually selected traits in Lepidoptera. Our findings will be useful for future studies of the genetic architecture of this model species and may help identify the genetics associated with the evolution of its novel acoustic communication.

]]>
<![CDATA[Genre Complexes in Popular Music]]> https://www.researchpad.co/article/5989da99ab0ee8fa60ba2d24

Recent work in the sociology of music suggests a declining importance of genre categories. Yet other work in this research stream and in the sociology of classification argues for the continued prevalence of genres as a meaningful tool through which creators, critics and consumers focus their attention in the topology of available works. Building from work in the study of categories and categorization we examine how boundary strength and internal differentiation structure the genre pairings of some 3 million musicians and groups. Using a range of network-based and statistical techniques, we uncover three musical “complexes,” which are collectively constituted by 16 smaller genre communities. Our analysis shows that the musical universe is not monolithically organized but rather composed of multiple worlds that are differently structured—i.e., uncentered, single-centered, and multi-centered.
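
A hedged sketch of the network construction: build a weighted genre co-occurrence graph from artists' genre pairings, then extract communities. The modularity-based algorithm is an assumption; the paper combines several network-based and statistical techniques:

```python
import itertools
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

artist_genres = [["jazz", "funk"], ["jazz", "blues"], ["metal", "punk"]]  # toy data

G = nx.Graph()
for genres in artist_genres:
    for a, b in itertools.combinations(sorted(set(genres)), 2):
        weight = G.edges[a, b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=weight)       # edge weight = co-listing count

communities = greedy_modularity_communities(G, weight="weight")
```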

]]>
<![CDATA[Vowel production of Mandarin-speaking hearing aid users with different types of hearing loss]]> https://www.researchpad.co/article/5989db5cab0ee8fa60be0272

In contrast with previous research focusing on cochlear implants, this study examined the speech performance of hearing aid users with conductive (n = 11), mixed (n = 10), and sensorineural (n = 7) hearing loss and compared it with the speech of normal-hearing controls. Speech intelligibility was evaluated by computing the vowel space area defined by the Mandarin Chinese corner vowels /a, u, i/. The acoustic differences between the vowels were assessed using the Euclidean distance. The results revealed that both the conductive and mixed hearing loss groups exhibited a reduced vowel space area, but no significant difference was found between the sensorineural hearing loss and normal-hearing groups. An analysis using the Euclidean distance further showed that the compression of vowel space area in conductive hearing loss can be attributed to the substantial lowering of the second formant of /i/. The differences in vowel production between groups are discussed in terms of the occlusion effect and the signal transmission media of the various hearing devices.
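
Both measures above have closed forms in (F1, F2) space; a short sketch with illustrative placeholder formant values:

```python
import math

def euclidean(v1, v2):
    return math.dist(v1, v2)                  # acoustic distance between two vowels

def vowel_space_area(a, u, i):
    (x1, y1), (x2, y2), (x3, y3) = a, u, i    # (F1, F2) of each corner vowel, in Hz
    return abs(x1*(y2 - y3) + x2*(y3 - y1) + x3*(y1 - y2)) / 2

area = vowel_space_area((850, 1220), (350, 800), (300, 2300))  # placeholder values
```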

]]>
<![CDATA[Spatio-Temporal Progression of Cortical Activity Related to Continuous Overt and Covert Speech Production in a Reading Task]]> https://www.researchpad.co/article/5989d9e5ab0ee8fa60b6af40

How the human brain plans, executes, and monitors continuous and fluent speech has remained largely elusive. For example, previous research has defined the cortical locations most important for different aspects of speech function, but has not yet yielded a definition of the temporal progression of involvement of those locations as speech progresses either overtly or covertly. In this paper, we uncovered the spatio-temporal evolution of neuronal population-level activity related to continuous overt speech, and identified those locations that shared activity characteristics across overt and covert speech. Specifically, we asked subjects to repeat continuous sentences aloud or silently while we recorded electrical signals directly from the surface of the brain using electrocorticography (ECoG). We then determined the relationship between cortical activity and speech output across different areas of cortex and at sub-second timescales. The results highlight a spatio-temporal progression of cortical involvement in the continuous speech process that initiates utterances in frontal-motor areas and ends with the monitoring of auditory feedback in superior temporal gyrus. Direct comparison of cortical activity related to overt versus covert conditions revealed a common network of brain regions involved in speech that may implement orthographic and phonological processing. Our results provide one of the first characterizations of the spatio-temporal electrophysiological representations of the continuous speech process, and also highlight the common neural substrate of overt and covert speech. These results thereby contribute to a refined understanding of speech functions in the human brain.

]]>
<![CDATA[Voice disorder in systemic lupus erythematosus]]> https://www.researchpad.co/article/5989db52ab0ee8fa60bdc88e

Systemic lupus erythematosus (SLE) is a chronic disease characterized by progressive tissue damage. In recent decades, novel treatments have greatly extended the life span of SLE patients. This creates a high demand for identifying the overarching symptoms associated with SLE and developing therapies that improve quality of life under chronic care. We hypothesized that SLE patients would present dysphonic symptoms. Given that voice disorders can reduce quality of life, identifying a potential SLE-related dysphonia could be relevant to the appraisal and management of this disease. We measured objective vocal parameters and perceived vocal quality with the GRBAS (Grade, Roughness, Breathiness, Asthenia, Strain) scale in SLE patients and compared them to matched healthy controls. SLE patients also filled out a questionnaire reporting perceived vocal deficits. SLE patients had significantly lower vocal intensity and harmonics-to-noise ratio, as well as increased jitter and shimmer. All subjective parameters of the GRBAS scale were significantly abnormal in SLE patients. Additionally, the vast majority of SLE patients (29/36) reported at least one perceived vocal deficit, with the most prevalent deficits being vocal fatigue (19/36) and hoarseness (17/36). Self-reported voice deficits were highly correlated with altered GRBAS scores. Additionally, tissue damage scores in different organ systems correlated with dysphonic symptoms, suggesting that some features of SLE-related dysphonia are due to tissue damage. Our results show that a large fraction of SLE patients suffers from perceivable dysphonia and may benefit from voice therapy in order to improve quality of life.
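
For reference, the perturbation measures reported above have standard local definitions over consecutive glottal periods and peak amplitudes; this sketch assumes those sequences have already been extracted from a sustained vowel:

```python
import numpy as np

def jitter_percent(periods):
    p = np.asarray(periods, dtype=float)
    return 100 * np.mean(np.abs(np.diff(p))) / p.mean()   # cycle-to-cycle F0 perturbation

def shimmer_percent(amplitudes):
    a = np.asarray(amplitudes, dtype=float)
    return 100 * np.mean(np.abs(np.diff(a))) / a.mean()   # cycle-to-cycle amplitude perturbation
```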

]]>
<![CDATA[Spatio-Temporal Dynamics of Field Cricket Calling Behaviour: Implications for Female Mate Search and Mate Choice]]> https://www.researchpad.co/article/5989da0aab0ee8fa60b77269

The amount of calling activity (calling effort) is a strong determinant of male mating success in species such as orthopterans and anurans that use acoustic communication in the context of mating behaviour. While many studies in crickets have investigated the determinants of calling effort, patterns of variability in male calling effort in natural choruses remain largely unexplored. Within-individual variability in calling activity across multiple nights of calling can influence female mate search and mate choice strategies. Moreover, calling site fidelity across multiple nights of calling can also affect the female mate sampling strategy. We therefore investigated the spatio-temporal dynamics of acoustic signaling behaviour in a wild population of the field cricket species Plebeiogryllus guttiventris. We first studied the consistency of calling activity by quantifying variation in male calling effort across multiple nights of calling using repeatability analysis. Callers were inconsistent in their calling effort across nights and did not optimize nightly calling effort to increase their total number of nights spent calling. We also estimated calling site fidelity of males across multiple nights by quantifying movement of callers. Callers frequently changed their calling sites across calling nights, with substantial displacement but without any significant directionality. Finally, we investigated trade-offs between within-night calling effort and energetically expensive calling song features such as call intensity and chirp rate. Calling effort was not correlated with any of the calling song features, suggesting that energetically expensive song features do not constrain male calling effort. The two key features of signaling behaviour, calling effort and call intensity, which determine the duration and spatial coverage of the sexual signal, are therefore uncorrelated and function independently.
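
A simplified sketch of the repeatability analysis: the intraclass correlation of nightly calling effort, computed from a balanced one-way ANOVA (males as groups, nights as repeated measures); the original analysis may well have used mixed models instead:

```python
import numpy as np

def repeatability(measures):
    """measures: 2-D array, rows = males, columns = nights of calling effort."""
    m = np.asarray(measures, dtype=float)
    k = m.shape[1]                              # nights per male (balanced design)
    ms_among = k * m.mean(axis=1).var(ddof=1)   # among-male mean square
    ms_within = m.var(axis=1, ddof=1).mean()    # within-male mean square
    s2_among = (ms_among - ms_within) / k
    return s2_among / (s2_among + ms_within)    # share of variance among males
```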

]]>
<![CDATA[Calibration Method of an Ultrasonic System for Temperature Measurement]]> https://www.researchpad.co/article/5989d9e4ab0ee8fa60b6a923

System calibration is fundamental to the overall accuracy of ultrasonic temperature measurement; it essentially consists of accurately measuring the path length and the system latency of the ultrasonic system. This paper proposes a high-accuracy system calibration method. By estimating the time delay between the transmitted signal and the received signal at several different temperatures, the calibration equations are constructed, and the calibrated results are determined using the least-squares algorithm. Formulas are derived for calculating the calibration uncertainties, and the possible influential factors are analyzed. The experimental results in distilled water show that the calibrated path length and system latency achieve uncertainties of 0.058 mm and 0.038 μs, respectively, and that temperature accuracy is significantly improved by using the calibrated results. The temperature error remains within ±0.04°C consistently, and the percentage error is less than 0.15%.
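
A hedged reconstruction of the calibration model implied above: each measured delay obeys t_i = L / c(T_i) + tau, where c(T_i) is the known speed of sound in distilled water at temperature T_i, L is the path length, and tau is the system latency. Measurements at several temperatures then give an overdetermined linear system:

```python
import numpy as np

def calibrate(delays, sound_speeds):
    """delays: measured time delays (s); sound_speeds: c(T_i) in water (m/s)."""
    A = np.column_stack([1.0 / np.asarray(sound_speeds, dtype=float),
                         np.ones(len(delays))])
    (L, tau), *_ = np.linalg.lstsq(A, np.asarray(delays, dtype=float), rcond=None)
    return L, tau      # calibrated path length (m) and system latency (s)
```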

]]>
<![CDATA[Strategic Use of Affiliative Vocalizations by Wild Female Baboons]]> https://www.researchpad.co/article/5989d9ffab0ee8fa60b737b7

Although vocal production in non-human primates is highly constrained, individuals appear to have some control over whether to call or remain silent. We investigated how contextual factors affect the production of grunts given by wild female chacma baboons, Papio ursinus, during social interactions. Females grunted as they approached other adult females 28% of the time. Supporting previous research, females were much more likely to grunt to mothers with young infants than to females without infants. Grunts also significantly increased the likelihood of affiliative interactions among all partners. Notably, however, grunts did not simply mirror existing social bonds. Instead, they appeared to perform a very different function: namely, to serve as signals of benign intent between partners whose relationship is not necessarily close or predictable. Females were less likely to grunt to their mothers or adult daughters—the individuals with whom they shared the closest and least aggressive bonds—than to other females. In contrast, patterns of grunting between sisters were similar to those between nonkin, perhaps reflecting sisters’ more ambivalent relationships. Females grunted at higher rates to lower-ranking, than to higher-ranking, females, supporting the hypothesis that grunts do not simply signal the signaler’s level of arousal or anxiety about receiving aggression, but instead function as signals of benign intent. Taken together, results suggest that the grunts given by female baboons serve to reduce uncertainty about the likely outcome of an interaction between partners whose relationship is not predictably affiliative. Despite their limited vocal repertoire, baboons appear to be skilled at modifying call production in different social contexts and for different audiences.

]]>