ResearchPad - acoustics https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks <![CDATA[Adaptation to unstable coordination patterns in individual and joint actions]]> https://www.researchpad.co/article/elastic_article_7665 Previous research on interlimb coordination has shown that some coordination patterns are more stable than others and function as attractors in the space of possible phase relations between different rhythmic movements. The canonical coordination patterns, i.e. the two most stable phase relations, are in-phase (0 degrees) and anti-phase (180 degrees). Yet musicians are able to perform other coordination patterns in intrapersonal as well as interpersonal coordination with remarkable precision. This raises the question of how music experts manage to produce these unstable patterns of movement coordination. In the current study, we invited participants with at least five years of training on a musical instrument. We used an adaptation paradigm to address two factors that may facilitate producing unstable coordination patterns. First, we investigated adaptation in different coordination settings to test the hypothesis that the weaker coupling between individuals during joint performance makes it easier to achieve stability outside of the canonical patterns than the stronger coupling during individual bimanual performance. Second, we investigated whether strengthening the structure of action effects supports achieving unstable coordination patterns, both intra- and inter-individually. The structure of action effects was strengthened by adding a melodic contour to the action effects, a measure that has been shown to improve the acquisition of bimanual coordination skills. Adaptation performance was measured both in terms of asynchrony and its variability. As predicted, we found that producing unstable patterns benefitted from the weaker coupling during joint performance. Surprisingly, the structure of action effects did not help with achieving unstable coordination patterns.
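As an illustration of how such coordination patterns can be quantified, here is a minimal Python sketch (hypothetical tap times, not the study's data) that computes the relative phase between two rhythmic movement series and a circular stability index:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tap onset times (s) for two effectors attempting a 90-degree pattern
taps_a = np.arange(0, 10, 0.5)                              # one tap every 500 ms
taps_b = taps_a + 0.125 + rng.normal(0, 0.01, taps_a.size)  # ~90-degree lag plus jitter

period = 0.5
rel_phase = 2 * np.pi * ((taps_b - taps_a) % period) / period  # relative phase (rad)

# Circular statistics: R close to 1 = stable pattern, close to 0 = unstable
mean_vector = np.mean(np.exp(1j * rel_phase))
R = np.abs(mean_vector)
mean_phase_deg = np.degrees(np.angle(mean_vector))
print(f"mean relative phase: {mean_phase_deg:.1f} deg, stability R = {R:.3f}")
```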

]]>
<![CDATA[A new simple brain segmentation method for extracerebral intracranial tumors]]> https://www.researchpad.co/article/Nb837d809-9647-425d-8dfd-2c3174a6dd80

Normal brain segmentation is available via FreeSurfer, VBM, and IBASPM software. However, these software packages cannot segment the brains of patients with brain tumors. Extracerebral tumors damage the brain mainly by pushing and compressing it while leaving its structure intact. Three-dimensional (3D) imaging, augmented reality (AR), and virtual reality (VR) technology have begun to be applied in clinical practice. The free medical open-source software 3D Slicer allows us to perform 3D simulations on a computer and requires little user interaction. Moreover, 3D Slicer can integrate with the third-party software mentioned above. The relationship between the tumor and the surrounding brain tissue can be judged using 3D Slicer, but accurate brain segmentation cannot be performed with it alone. In this study, we combine 3D Slicer and FreeSurfer to provide a novel brain segmentation method for extracerebral tumors. This method can help surgeons identify the “real” relationship between the lesion and adjacent brain tissue before surgery and improve preoperative planning.
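For orientation, the FreeSurfer half of such a workflow is typically driven by its recon-all pipeline; a minimal, heavily hedged sketch of invoking it from Python (paths and subject name are hypothetical; -i, -s and -all are FreeSurfer's commonly documented flags):

```python
import subprocess

# Hypothetical inputs for one patient
subject_id = "tumor_patient_01"
t1_volume = "/data/tumor_patient_01/T1.nii.gz"

# Run FreeSurfer's standard cortical/subcortical reconstruction
subprocess.run(
    ["recon-all", "-i", t1_volume, "-s", subject_id, "-all"],
    check=True,
)
# The resulting segmentations (e.g. aseg) can then be loaded into 3D Slicer
# alongside the tumor model segmented there, to inspect their relationship.
```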

]]>
<![CDATA[Communication is key: Mother-offspring signaling can affect behavioral responses and offspring survival in feral horses (Equus caballus)]]> https://www.researchpad.co/article/Nfc9766a8-2564-4088-9a49-707302d05531

Acoustic signaling plays an important role in mother-offspring recognition and subsequent bond formation. It remains unclear, however, whether mothers and offspring use acoustic signaling in the same ways and for the same reasons throughout the juvenile stage, particularly after mutual recognition has been adequately established. Moreover, despite its critical role in mother-offspring bond formation, research explicitly linking mother-infant communication strategies to offspring survival is lacking. We examined the communicative patterns of mothers and offspring in the feral horse (Equus caballus) to better understand 1) the nature of mother-offspring communication throughout the first year of development; 2) the function(s) of mother- vs. offspring-initiated communication; and 3) the importance of mare and foal communication to offspring survival. We found that 1) mares and foals differ in when and how they initiate communication; 2) the outcomes of mare- vs. foal-initiated communication events consistently differ; and 3) the communicative patterns between mares and their foals can be important for offspring survival to one year of age. Moreover, given the importance of maternal activity to offspring behavior and subsequent survival, we submit that our data are uniquely positioned to address the long-debated question: do the behaviors exhibited during the juvenile stage (by both mothers and their young) confer delayed or immediate benefits to offspring? In summary, we aimed to better understand 1) the dynamics of mother-offspring communication, 2) whether mother-offspring communicative patterns are important to offspring survival, and 3) the implications of our research regarding the function of the mammalian juvenile stage; our results address each of these aims.

]]>
<![CDATA[Noise edge pitch and models of pitch perception]]> https://www.researchpad.co/article/N0a57a937-2640-4428-b605-3264fbc5748a

Monaural noise edge pitch (NEP) is evoked by a broadband noise with a sharp falling edge in the power spectrum. The pitch is heard near the spectral edge frequency but shifted slightly into the frequency region of the noise. Thus, the pitch of a lowpass (LP) noise is matched by a pure tone typically 2%–5% below the edge, whereas the pitch of a highpass (HP) noise is matched by a comparable amount above the edge. Musically trained listeners can recognize musical intervals between NEPs. The pitches can be understood from a temporal pattern-matching model of pitch perception based on the peaks of a simplified autocorrelation function. The pitch shifts arise from limits on the autocorrelation window duration. An alternative place-theory approach explains the pitch shifts as the result of lateral inhibition. Psychophysical experiments using edge frequencies of 100 Hz and below find that LP-noise pitches exist but HP-noise pitches do not. This result is consistent with a temporal analysis in tonotopic regions outside the noise band. LP and HP experiments with high-frequency edges find that the pitch tends to disappear as the edge frequency approaches 5000 Hz, as expected from a timing theory, though exceptional listeners can go an octave higher.
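Both the stimulus and the autocorrelation analysis it feeds are easy to sketch; here is a minimal Python example (the 1000 Hz edge is an illustrative assumption) that synthesizes an LP-noise NEP stimulus and computes a simplified autocorrelation function whose peak lags a temporal model would read pitch from:

```python
import numpy as np

fs = 44100
rng = np.random.default_rng(0)

edge = 1000.0                        # spectral edge frequency (Hz), assumed
noise = rng.normal(size=fs)          # 1 s of broadband Gaussian noise

# Impose a sharp falling edge in the power spectrum -> lowpass NEP stimulus
spec = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(noise.size, 1 / fs)
spec[freqs > edge] = 0.0
lp_noise = np.fft.irfft(spec, n=noise.size)

# Simplified autocorrelation via Wiener-Khinchin (zero-padded to avoid wrap-around)
power = np.abs(np.fft.rfft(lp_noise, 2 * lp_noise.size)) ** 2
acf = np.fft.irfft(power)[: lp_noise.size]
acf /= acf[0]                        # normalized; peak lags carry the pitch cue
```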

]]>
<![CDATA[New physiological bench test reproducing nocturnal breathing pattern of patients with sleep disordered breathing]]> https://www.researchpad.co/article/N13bd4ad3-60c6-4376-997d-f10f1c975c0e

Previous studies have shown that Automatic Positive Airway Pressure (APAP) devices display different behaviors when connected to a bench using theoretical respiratory cycle scripts. However, these scripts are limited and do not simulate physiological behavior over the course of a night. Our aim was to develop a physiological bench able to simulate patient breathing airflow by integrating polygraph data. We developed an algorithm that analyzes polygraph data and transforms this information into the digital inputs required by the bench hardware to reproduce a patient's breathing profile on the bench. These inputs are, respectively, the simulated respiratory muscular effort pressure for an artificial lung and the sealed-chamber pressure regulating a Starling resistor. We ran simulations on our bench for a total of 8 hours and 59 minutes using a breathing profile from the demonstration recording of a Nox T3 Sleep Monitor. The simulation performance results showed that, in terms of the relative peak-valley amplitude of each breathing cycle, simulated bench airflow was biased by only 1.48% ± 6.80% compared to estimated polygraph nasal airflow over a total of 6,479 breathing cycles. For total respiratory cycle time, the average bias ± one standard deviation was 0.000 ± 0.288 seconds. For patient apnea events, our bench simulation had a sensitivity of 84.7% and a positive predictive value of 90.3%, considering the 149 apneas detected in both the polygraph nasal airflow and the simulated bench airflow. Our new physiological bench would allow APAP device selection to be personalized to each patient by taking into account the individual characteristics of their sleep breathing profile.
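For reference, sensitivity and positive predictive value in this kind of event-matching evaluation reduce to simple count ratios; a minimal sketch (the reference and detected totals below are back-calculated assumptions consistent with the reported 84.7% and 90.3%, not figures from the paper):

```python
def detection_metrics(true_positives: int, reference_total: int, detected_total: int):
    """Sensitivity = TP / all reference events; PPV = TP / all detected events."""
    sensitivity = true_positives / reference_total
    ppv = true_positives / detected_total
    return sensitivity, ppv

# 149 matched apneas; 176 and 165 are hypothetical totals chosen to match the paper
sens, ppv = detection_metrics(149, 176, 165)
print(f"sensitivity = {sens:.1%}, PPV = {ppv:.1%}")  # -> 84.7%, 90.3%
```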

]]>
<![CDATA[An efficient resource utilization scheme within PMIPv6 protocol for urban vehicular networks]]> https://www.researchpad.co/article/5c8acc80d5eed0c48498f8b1

Recently, the mobility management of urban vehicular networks has become a great challenge for researchers due to the unique mobility requirements imposed by mobile users accessing different services in a random fashion. To provide ubiquitous Internet and seamless connectivity, the Internet Engineering Task Force (IETF) has proposed the Proxy Mobile IPv6 (PMIPv6) protocol. PMIPv6 handles mobility signaling transparently to the Mobile Node (MN) and also guarantees session continuity while the MN is in motion. However, handoffs performed by tens of thousands of MNs may harm system performance significantly due to the high signaling overhead and the insufficient utilization of the so-called Binding Cache Entry (BCE) at the Local Mobility Anchor (LMA). To address these issues, we propose an efficient scheme within the PMIPv6 protocol, named AE-PMIPv6, to effectively utilize the BCEs at the LMA. This is primarily achieved by merging the BCEs of MNs, thus reducing the signaling overhead. Better utilization of the BCEs is attained by employing virtual addresses and an address pool mechanism to bind the information of MNs that are moving together towards the same network at a specific time during their handoff process. Results obtained from our simulation demonstrate the superiority of the AE-PMIPv6 scheme over the E-PMIPv6 scheme: AE-PMIPv6 minimizes signaling overhead, reduces handover time, and efficiently utilizes buffer resources.
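To make the merging idea concrete, here is a hypothetical sketch of an LMA binding cache in which MNs handing off to the same target network share one merged BCE under a virtual address (all names and structures are illustrative assumptions, not the paper's specification):

```python
from dataclasses import dataclass, field

@dataclass
class MergedBCE:
    """Hypothetical merged Binding Cache Entry: one virtual address
    represents a group of MNs handing off to the same target network."""
    virtual_address: str
    target_network: str
    mn_ids: set = field(default_factory=set)

class LMACache:
    def __init__(self):
        self.entries = {}  # target_network -> MergedBCE

    def register_handoff(self, mn_id: str, target_network: str, address_pool: list) -> str:
        entry = self.entries.get(target_network)
        if entry is None:
            # First MN heading to this network: allocate one virtual address
            entry = MergedBCE(address_pool.pop(), target_network)
            self.entries[target_network] = entry
        entry.mn_ids.add(mn_id)  # one entry, many MNs -> less signaling
        return entry.virtual_address

cache = LMACache()
pool = ["VA::2", "VA::1"]
print(cache.register_handoff("MN1", "netA", pool))  # VA::1
print(cache.register_handoff("MN2", "netA", pool))  # same VA::1, merged BCE
```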

]]>
<![CDATA[Deer browsing alters sound propagation in temperate deciduous forests]]> https://www.researchpad.co/article/5c6dc993d5eed0c484529e60

The efficacy of animal signals is strongly influenced by the structure of the habitat in which they propagate. In recent years, the habitat structure of temperate forests has been increasingly subject to modification by foraging white-tailed deer (Odocoileus virginianus). Increasing deer numbers and the accompanying browsing have been shown to alter vegetation structure and thus the foraging, roosting, and breeding habitats of many species. However, despite a large body of literature on the effects of vegetation structure on sound propagation, we do not yet know what impact deer browsing may have on acoustic communication. Here we used playback experiments to determine whether the sound fidelity and amplitude of white noise, pure tones, and trills differed between deer-browsed and deer-excluded plots. We found that sound fidelity, but not amplitude, differed between habitats, with deer-browsed habitats having greater sound fidelity than deer-excluded habitats. Differences in sound propagation characteristics between the two habitats could alter the efficacy of acoustic communication through plasticity, cultural evolution or local adaptation, in turn influencing vocally mediated behaviors (e.g. agonistic, parent-offspring, mate selection). Reduced signal degradation suggests vocalizations may retain more information, improving the transfer of information to both intended and unintended receivers. Overall, our results suggest that deer browsing impacts sound propagation in temperate deciduous forests, although much work remains to be done on the potential impacts on communication.
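Sound fidelity in playback studies is commonly operationalized as the peak normalized cross-correlation between the broadcast and re-recorded signals; a minimal sketch of that measure (an assumption about the metric, not the paper's exact analysis):

```python
import numpy as np

def sound_fidelity(sent: np.ndarray, received: np.ndarray) -> float:
    """Peak normalized cross-correlation between the broadcast signal and
    its re-recording; 1.0 = perfect fidelity, lower = more degradation."""
    sent = (sent - sent.mean()) / sent.std()
    received = (received - received.mean()) / received.std()
    xcorr = np.correlate(received, sent, mode="full") / sent.size
    return float(np.max(np.abs(xcorr)))

# Toy check: a noisy, attenuated copy still correlates with the original
rng = np.random.default_rng(0)
tone = np.sin(2 * np.pi * 2000 * np.arange(4410) / 44100)   # 0.1 s, 2 kHz
print(sound_fidelity(tone, 0.2 * tone + rng.normal(0, 0.1, tone.size)))
```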

]]>
<![CDATA[STRFs in primary auditory cortex emerge from masking-based statistics of natural sounds]]> https://www.researchpad.co/article/5c4a3057d5eed0c4844bfd7a

We investigate how the neural processing in auditory cortex is shaped by the statistics of natural sounds. Hypothesising that auditory cortex (A1) represents the structural primitives out of which sounds are composed, we employ a statistical model to extract such components. The inputs to the model are cochleagrams, which approximate the non-linear transformations a sound undergoes from the outer ear, through the cochlea, to the auditory nerve. Cochleagram components do not superimpose linearly, but rather according to a rule that can be approximated by the max function; this is a consequence of the compression inherent in the cochleagram and the sparsity of natural sounds. Furthermore, cochleagrams do not have negative values. Cochleagrams are therefore not matched well by the assumptions of standard linear approaches such as sparse coding or ICA. We therefore consider a new encoding approach for natural sounds, which combines a model of early auditory processing with maximal causes analysis (MCA), a sparse coding model which captures both the non-linear combination rule and the non-negativity of the data. An efficient truncated EM algorithm is used to fit the MCA model to cochleagram data. We characterize the generative fields (GFs) inferred by MCA with respect to in vivo neural responses in A1 by applying reverse correlation to estimate the spectro-temporal receptive fields (STRFs) implied by the learned GFs. Despite the GFs being non-negative, the STRF estimates are found to contain both positive and negative subfields, where the negative subfields can be attributed to explaining-away effects as captured by the applied inference method. A direct comparison with ferret A1 shows many similar forms, and the spectral and temporal modulation tuning of both ferret and model STRFs show similar ranges over the population. In summary, our model represents an alternative to linear approaches for biological auditory encoding, capturing salient data properties and linking inhibitory subfields to explaining-away effects.
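The max-combination rule can be checked numerically: for sparse, non-negative, log-compressed components, compressing the sum of the uncompressed components is close to the pointwise max of the compressed ones. A small sketch under those assumptions (synthetic components, not cochleagram data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two sparse, non-negative "log-compressed" components (freq x time)
a = rng.gamma(0.3, 1.0, size=(64, 100))
b = rng.gamma(0.3, 1.0, size=(64, 100))

# Exact combination: undo the compression, add powers, re-compress
exact = np.log1p(np.expm1(a) + np.expm1(b))
# MCA's approximation: pointwise max of the compressed components
max_rule = np.maximum(a, b)

err = np.mean(np.abs(exact - max_rule)) / np.mean(exact)
print(f"mean relative error of the max approximation: {err:.2%}")
```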

]]>
<![CDATA[Complex tone stimulation may induce binaural diplacusis with low-tone hearing loss]]> https://www.researchpad.co/article/5c57e67ad5eed0c484ef339d

To clarify the possible mechanism causing binaural diplacusis with low-tone hearing loss, two psychoacoustic experiments were performed with 20 healthy subjects, using harmonic complex tones. In the first experiment, two tones were presented unilaterally, either from the right or the left side. One of the tones presented was higher in frequency in terms of the fundamental component, but lower than or equal to the other tone in frequency in terms of the highest component. The subjects were asked which tone was higher in pitch after listening to both tones. They were also asked to compare tones from which the low-tone components had been eliminated. In the second experiment, the subjects heard these complex tones binaurally, with the low-tone components eliminated in one ear. In the first experiment, most subjects perceived the direction of pitch, that is, higher or lower, in reverse when the low-tone components were eliminated from the complex tones. In the second experiment, approximately half of all subjects heard the tones at different pitches in the two ears. Under certain conditions, complex tone stimulation may induce binaural diplacusis when low-tone hearing is lost in one ear.
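Stimuli of this kind are easy to synthesize; a minimal sketch (the example f0 and harmonic counts are illustrative assumptions):

```python
import numpy as np

fs = 44100
dur = 0.5
t = np.linspace(0, dur, int(fs * dur), endpoint=False)

def complex_tone(f0: float, n_harmonics: int, first: int = 1) -> np.ndarray:
    """Harmonic complex tone; set first > 1 to eliminate low-tone components."""
    return sum(np.sin(2 * np.pi * f0 * k * t) for k in range(first, first + n_harmonics))

full_tone = complex_tone(220.0, 6)            # harmonics 1-6 of 220 Hz
highpassed = complex_tone(220.0, 4, first=3)  # low components removed (3-6 only)
```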

]]>
<![CDATA[Habituation of the electrodermal response – A biological correlate of resilience?]]> https://www.researchpad.co/article/5c57e673d5eed0c484ef3263

Current approaches to quantifying resilience make extensive use of self-reported data. Problematically, such scales are plagued by response distortions, both deliberate and unintentional, particularly in occupational populations. The aim of the current study was to develop an objective index of resilience. The study was conducted in 30 young healthy adults. Following completion of the Connor-Davidson Resilience Scale (CD-RISC) and the Depression/Anxiety/Stress Scale (DASS), they were subjected to a series of 15 acoustic startle stimuli (95 dB, 50 ms) presented at random intervals, with respiration, skin conductance and ECG recorded. As expected, resilience (CD-RISC) significantly and negatively correlated with all three DASS subscales: Depression (r = -0.66, p<0.0001), Anxiety (r = -0.50, p<0.005) and Stress (r = -0.48, p<0.005). Acoustic stimuli consistently provoked transient skin conductance (SC) responses, with SC slopes indexing response habituation. This slope significantly and positively correlated with DASS-Depression (r = 0.59, p<0.005), DASS-Anxiety (r = 0.35, p<0.05) and DASS-Total (r = 0.50, p<0.005) scores, and negatively with the resilience score (r = -0.47; p = 0.006), indicating that high-resilience individuals are characterized by steeper habituation slopes than low-resilience individuals. Our key finding of a connection between habituation of the skin conductance response to repeated acoustic startle stimuli and resilience-related psychometric constructs suggests that the response habituation paradigm has the potential to characterize important attributes of cognitive fitness and well-being, such as depression, anxiety and resilience. With steep negative slopes reflecting faster habituation, lower depression/anxiety and higher resilience, and slower or no habituation characterizing less resilient individuals, this protocol may offer a distortion-free method for objective assessment and monitoring of psychological resilience.
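The habituation index here is just the slope of peak SC responses across trials; a minimal sketch with hypothetical response amplitudes (not the study's recordings):

```python
import numpy as np
from scipy import stats

# Hypothetical peak skin-conductance responses (muS) to 15 startle stimuli
sc_peaks = np.array([1.8, 1.5, 1.3, 1.1, 0.9, 0.8, 0.7, 0.6, 0.55,
                     0.5, 0.45, 0.42, 0.4, 0.38, 0.36])
trials = np.arange(1, sc_peaks.size + 1)

# Habituation slope: a steeper negative slope means faster habituation
slope, intercept, r, p, se = stats.linregress(trials, sc_peaks)
print(f"habituation slope = {slope:.3f} muS/trial (p = {p:.4f})")

# Across participants, these slopes can then be correlated with CD-RISC scores,
# e.g. stats.pearsonr(slopes, cd_risc_scores)
```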

]]>
<![CDATA[Implicit measurement of emotional experience and its dynamics]]> https://www.researchpad.co/article/5c63394ad5eed0c484ae6422

Although many studies have revealed that emotions and their dynamics have a profound impact on cognition and behavior, it has proven difficult to measure emotions unobtrusively. In the current study, our objective was to distinguish the different experiences elicited by audiovisual stimuli designed to evoke happiness, sadness, fear and disgust, using electroencephalography (EEG) and a multivariate approach. We show that we were able to classify these emotional experiences well above chance level. Importantly, we retained all the information (frequency and topography) present in the data. This allowed us to interpret the differences between emotional experiences in terms of component psychological processes, such as attention and arousal, that are known to be associated with the observed activation patterns. In addition, we illustrate how this method of classifying emotional experiences can be applied on a moment-by-moment basis in order to track dynamic changes in the emotional response over time. The application of our approach may be of value in many contexts in which the experience of a given stimulus or situation changes over time, ranging from clinical to consumption settings.
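A minimal sketch of such a multivariate decoding pipeline, with random stand-in features (the feature dimensions and classifier choice are assumptions; real frequency-by-topography EEG features would be expected to classify above the 25% chance level):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical features: trials x (channels * frequency bins), 4 emotion labels
X = rng.normal(size=(200, 64 * 30))
y = rng.integers(0, 4, size=200)  # happy / sad / fear / disgust

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f} (chance = 0.25)")
# For moment-by-moment tracking, the same classifier is fit per time window.
```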

]]>
<![CDATA[A Gestalt inference model for auditory scene segregation]]> https://www.researchpad.co/article/5c50c46dd5eed0c4845e874d

Our current understanding of how the brain segregates auditory scenes into meaningful objects is in line with a Gestalt framework. These Gestalt principles suggest a theory of how different attributes of the soundscape are extracted and then bound together into separate groups that reflect different objects or streams present in the scene. These cues are thought to reflect the underlying statistical structure of natural sounds, in a similar way that statistics of natural images are closely linked to the principles that guide figure-ground segregation and object segmentation in vision. In the present study, we leverage inference in stochastic neural networks to learn emergent grouping cues directly from natural soundscapes, including speech, music and sounds in nature. The model learns a hierarchy of local and global spectro-temporal attributes reminiscent of the simultaneous and sequential Gestalt cues that underlie the organization of auditory scenes. These mappings operate at multiple time scales to analyze an incoming complex scene and are then fused using a Hebbian network that binds coherent features together into perceptually segregated auditory objects. The proposed architecture successfully emulates a wide range of well-established auditory scene segregation phenomena and quantifies the complementary roles of segregation and binding cues in driving auditory scene segregation.
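The binding step rests on Hebbian coincidence learning; a minimal sketch of that update rule (a generic formulation, not the paper's exact network):

```python
import numpy as np

def hebbian_update(W: np.ndarray, pre: np.ndarray, post: np.ndarray, lr: float = 0.01):
    """Hebbian coincidence rule: feature units that are repeatedly active
    together strengthen their connection and thus bind into one stream."""
    return W + lr * np.outer(post, pre)

W = np.zeros((3, 3))
features = np.array([1.0, 0.0, 1.0])   # co-active spectro-temporal features
W = hebbian_update(W, features, features)  # weights grow between co-active units
```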

]]>
<![CDATA[Song variation of the South Eastern Indian Ocean pygmy blue whale population in the Perth Canyon, Western Australia]]> https://www.researchpad.co/article/5c50c450d5eed0c4845e850b

Sea noise collected from 2003 to 2017 in the Perth Canyon, Western Australia, was analysed for variation in the song structure of the South Eastern Indian Ocean pygmy blue whale population. The primary song-types were: P3, a three-unit phrase (I, II and III) repeated with an inter-song interval (ISI) of 170–194 s; P2, a phrase consisting of only units II & III repeated every 84–96 s; and P1, a phrase consisting of only unit II repeated every 45–49 s. The different ISI values were approximate multiples of each other within a season. Across seasons, the ISI value for each song type increased significantly through time (all fits had p << 0.001), at 0.30 s/year (95% CI 0.217–0.383), 0.8 s/year (95% CI 0.655–1.025) and 1.73 s/year (95% CI 1.264–2.196) for the P1, P2 and P3 songs respectively. The proportions of each song-type averaged 21.5%, 24.2% and 56% for P1, P2 and P3 respectively, and these ratios could vary by up to ±8% (95% CI) among years. On some occasions animals changed the P3 ISI to be significantly shorter (120–160 s) or longer (220–280 s). Hybrid song patterns occurred in which animals combined multiple phrase types into a repeated song. In recent years whales introduced further complexity by splitting song units. This variability of song types and proportions implies that abundance measures for this whale subpopulation based on song detection need to factor in trends in song variability to make data comparable between seasons. Further, such variability in song production by a subpopulation of pygmy blue whales raises questions as to the stability of the song types that are used to delineate populations. The high level of song variability may be driven by an increasing number of background whale callers creating ‘noise’ and thus forcing animals to alter their songs in order to ‘stand out’ amongst the crowd.
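The reported drifts are simple linear trends in ISI over time; a minimal sketch of such a fit on hypothetical yearly P1 values generated to mimic the reported ~0.30 s/year drift (not the study's measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical yearly mean inter-song intervals (s) for the P1 song type
years = np.arange(2003, 2018)
isi_p1 = 45.0 + 0.30 * (years - 2003) + rng.normal(0, 0.2, years.size)

slope, intercept, r, p, se = stats.linregress(years, isi_p1)
ci95 = 1.96 * se
print(f"ISI drift: {slope:.2f} s/year "
      f"(95% CI {slope - ci95:.2f}-{slope + ci95:.2f}, p = {p:.2g})")
```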

]]>
<![CDATA[Estimation of auditory steady-state responses based on the averaging of independent EEG epochs]]> https://www.researchpad.co/article/5c5369bcd5eed0c484a465a1

The amplitude of auditory steady-state responses (ASSRs) generated in the brainstem of rats decreases exponentially over the sequential averaging of EEG epochs. This behavior is partially due to adaptation of the ASSR induced by the continuous and monotonous stimulation. In this study, we analyzed the potential clinical relevance of ASSR adaptation. ASSRs were elicited in eight anesthetized adult rats by 8-kHz tones, modulated in amplitude at 115 Hz. We defined independent epochs as those EEG epochs acquired with a sufficiently long inter-stimulus interval that the ASSR contained in any given epoch is not affected by the previous stimulation. We tested whether the detection of ASSRs is improved when the response is computed by averaging independent EEG epochs, containing only unadapted auditory responses. The improvements in ASSR detection obtained with standard, weighted and sorted averaging were compared. In the absence of artifacts, when the ASSR was elicited by continuous acoustic stimulation, the computed ASSR amplitude depended on the averaging method: while the adaptive behavior of the ASSR was still evident after the weighting of epochs, sorted averaging resulted in underestimation of the ASSR amplitude. In the absence of artifacts, the ASSR amplitudes computed by averaging independent epochs did not depend on the averaging procedure. Averaging independent epochs resulted in higher ASSR amplitudes and halved the number of EEG epochs that needed to be acquired to achieve the maximum detection rate of the ASSR. Acquisition protocols based on averaging independent EEG epochs, in combination with appropriate averaging methods for artifact reduction, might contribute to developing more accurate hearing assessments based on ASSRs.
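Standard (coherent) averaging and the spectral amplitude estimate it feeds can be sketched in a few lines; the epoch count, noise level and sampling rate below are illustrative assumptions, with the response fixed at the 115 Hz modulation rate used in the study:

```python
import numpy as np

fs = 1000.0
n = int(fs * 1.0)                      # 1 s epochs -> 1 Hz frequency resolution
t = np.arange(n) / fs
rng = np.random.default_rng(2)

# Hypothetical independent epochs: a 115 Hz ASSR buried in EEG noise
epochs = [0.5 * np.sin(2 * np.pi * 115 * t) + rng.normal(0, 5, n) for _ in range(64)]

avg = np.mean(epochs, axis=0)          # standard (coherent) averaging
spec = np.abs(np.fft.rfft(avg)) * 2 / n
freqs = np.fft.rfftfreq(n, 1 / fs)
bin_115 = np.argmin(np.abs(freqs - 115))
print(f"ASSR amplitude at 115 Hz: {spec[bin_115]:.3f} (true value 0.5)")
```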

]]>
<![CDATA[Subarctic singers: Humpback whale (Megaptera novaeangliae) song structure and progression from an Icelandic feeding ground during winter]]> https://www.researchpad.co/article/5c52181ad5eed0c48479736e

Humpback whale songs, associated with breeding behaviors, are increasingly reported outside of traditional low-latitude breeding grounds. Songs recorded on a subarctic feeding ground during the winter were quantitatively characterized to investigate the structure and temporal changes of songs at such an atypical location. Recordings were collected from 26 January to 12 March 2011 using bottom-mounted recorders. Humpback songs were detected on 91% of the recording days, with peak singing activity during 9–26 February. The majority of the recordings included multiple chorusing singers. The songs were characterized by a) common static themes, which transitioned consistently to predictable themes, b) shifting themes, which occurred less predictably, and c) rare themes. A set median sequence was found for each of four different periods (sets) of recordings (approximately one week each). The set medians were highly similar and formed a single cluster, indicating that the sequences of themes sung in this area belonged to a single cluster of songs despite the variation caused by the shifting themes. These subarctic winter songs could thus represent a characteristic song type for this area, comparable to the extensively studied songs from traditional low-latitude breeding grounds. An increase in the number of themes per sequence was observed throughout the recording period, including minor changes in the application of themes in the songs, indicating gradual song progression. The results confirm that continual singing of sophisticated songs occurs during the breeding season in the subarctic. In addition to being a well-established summer feeding ground, the study area appears to be an important overwintering site for humpback whales that delay or cancel their migration, and where males engage in active sexual displays, i.e. singing. Importantly, such singing activity on a shared feeding ground likely aids the cultural transmission of songs in the North Atlantic.
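A set median sequence is the sequence minimizing the total distance to all others; with themes coded as characters, a minimal sketch using edit distance (the theme strings are hypothetical):

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two theme sequences (e.g. 'ABCD' vs 'ABCCD')."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def set_median(sequences):
    """The sequence with the smallest total edit distance to all others."""
    return min(sequences, key=lambda s: sum(levenshtein(s, t) for t in sequences))

songs = ["ABCD", "ABCCD", "ABD", "ABCD", "AABCD"]  # hypothetical theme strings
print(set_median(songs))  # -> 'ABCD'
```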

]]>
<![CDATA[The sensation of groove is affected by the interaction of rhythmic and harmonic complexity]]> https://www.researchpad.co/article/5c40f78ed5eed0c48438638f

The pleasurable desire to move to music, also known as groove, is modulated by rhythmic complexity. How the sensation of groove is influenced by other musical features, such as the harmonic complexity of individual chords, is less clear. To address this, we asked people with a range of musical experience to rate stimuli that varied in both rhythmic and harmonic complexity. Rhythm showed an inverted U-shaped relationship with ratings of pleasure and wanting to move, whereas medium- and low-complexity chords were rated similarly. Pleasure mediated the effect of harmony on wanting to move, and high-complexity chords attenuated the effect of rhythm on pleasure. We suggest that while rhythmic complexity is the primary driver, harmony, by altering emotional valence, modulates the attentional and temporal prediction processes that underlie rhythm perception. Investigating the effects of musical training with regression and group comparison showed that training increased the inverted-U effect for harmony and rhythm, respectively. Taken together, this work provides important new information about how the prediction and entrainment processes involved in rhythm perception interact with musical pleasure.
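An inverted-U relationship is conventionally tested by the sign and significance of a quadratic term; a minimal sketch on hypothetical mean ratings (not the study's data):

```python
import numpy as np

# Hypothetical mean "wanting to move" ratings across rhythmic complexity levels
complexity = np.array([1, 2, 3, 4, 5, 6, 7], dtype=float)
ratings = np.array([2.1, 3.0, 3.8, 4.2, 3.9, 3.1, 2.3])

# Quadratic fit; a negative x^2 coefficient indicates an inverted U
coeffs = np.polyfit(complexity, ratings, deg=2)
print(f"quadratic coefficient: {coeffs[0]:.3f} (negative -> inverted U)")
```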

]]>
<![CDATA[Aggregation process of drifting fish aggregating devices (DFADs) in the Western Indian Ocean: Who arrives first, tuna or non-tuna species?]]> https://www.researchpad.co/article/5c478cabd5eed0c484bd3cb0

Floating objects drifting at the surface of tropical waters, also known as drifting fish aggregating devices (DFADs), attract hundreds of marine species, including tuna and non-tuna species. Industrial tropical purse seiners have increasingly been deploying man-made DFADs equipped with satellite-linked echo-sounder buoys, which provide fishers with the accurate geolocation of the object and rough estimates of the biomass aggregated underneath, to facilitate the catch of tuna. Although several hypotheses are under consideration to explain the aggregation and retention processes of pelagic species around DFADs, the reasons driving this associative behavior are uncertain. This study uses information from 962 echo-sounder buoys attached to virgin (i.e. newly deployed) DFADs deployed in the Western Indian Ocean between 2012 and 2015 by the Spanish fleet (42,322 observation days) to determine the day of first detection of tuna and non-tuna species at DFADs and to model the aggregation processes of both species groups using Generalized Additive Mixed Models. Moreover, different seasons, areas and depths of the DFAD underwater structure were considered in the analysis to account for potential spatio-temporal and structural differences. Results show that tuna species arrive at DFADs before non-tuna species (13.5±8.4 and 21.7±15.1 days, respectively), and provide evidence of a significant relationship between DFAD depth and detection time for tuna, suggesting faster tuna colonization of deeper objects. For non-tuna species, this relationship was not significant. The study also reveals both seasonal and spatial differences in the aggregation patterns of the different species groups, suggesting that tuna and non-tuna species may have different aggregative behaviors depending on the spatio-temporal dynamics of DFADs. This work will contribute to the understanding of the fine- and mesoscale ecology and behavior of target and non-target species around DFADs and will assist managers in ensuring the sustainability of exploited resources, helping to design spatio-temporal conservation and management measures for tuna and non-tuna species.
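A simple way to probe the arrival-order result is a one-sided comparison of first-detection days; the sketch below draws hypothetical gamma-distributed detection days matched to the reported means and spreads (an assumption, not the study's GAMM analysis):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical first-detection days at virgin DFADs, shaped to match the
# reported tuna (13.5 +/- 8.4 d) and non-tuna (21.7 +/- 15.1 d) figures
tuna_days = rng.gamma(shape=2.6, scale=5.2, size=300)      # mean ~13.5, sd ~8.4
nontuna_days = rng.gamma(shape=2.1, scale=10.3, size=300)  # mean ~21.7, sd ~14.9

u, p = stats.mannwhitneyu(tuna_days, nontuna_days, alternative="less")
print(f"tuna arrive earlier: U = {u:.0f}, p = {p:.2g}")
```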

]]>
<![CDATA[Doppler sonography enhances rtPA-induced fibrinolysis in an in vitro clot model of spontaneous intracerebral hemorrhages]]> https://www.researchpad.co/article/5c605aabd5eed0c4847cd3f5

Background

Transcranial Doppler (TCD) has been shown to enhance intravascular fibrinolysis by rtPA in ischemic stroke. Studies have revealed that catheter-based administration of rtPA induces lysis of intracerebral hemorrhages (ICH). However, it is unknown whether TCD is suitable for enhancing rtPA-induced fibrinolysis in patients with ICH. The aim of this study was to assess the potential of TCD to enhance rtPA-induced fibrinolysis in an in vitro clot system.

Methods

Reproducible human blood clots of 25 ml were incubated in a water bath at 37°C during treatment. They were weighed before and after six different treatments: (I) control (incubation only), (II) rtPA only, (III) one Doppler probe, (IV) two Doppler probes placed vis-à-vis, (V) one probe and rtPA, and (VI) two probes and rtPA. To quantify lysis of the blood clots and attenuation of the Doppler beam through the temporal squama, acoustic peak rarefaction pressure (APRP) was measured in the field of the probes. Temperature was assessed to evaluate possible side effects.

Results

Clot weight was reduced in all groups. The control group had the highest relative end weight, 70.2%±7.2%, compared to all other groups (p<0.0001). The most efficient lysis was achieved using (VI) two probes and rtPA (relative end weight 36.3%±4.4%), compared to groups II, III and IV (p<0.0001; p = 0.0002; p = 0.048). APRP was above the lysis threshold (535.5±7.2 kPa) using two probes even through the temporal squama (731.6±32.5 kPa) (p = 0.0043). There was a maximal temperature elevation of 0.17±0.07°C using both probes.
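Relative end weight, the outcome measure above, is simply the post-treatment weight as a percentage of the initial weight; a trivial sketch with hypothetical weights:

```python
def relative_end_weight(before_g: float, after_g: float) -> float:
    """Clot weight after treatment as a percentage of its initial weight."""
    return 100.0 * after_g / before_g

# Hypothetical clot weights (g); ~36.3% would correspond to efficient lysis
print(f"{relative_end_weight(25.0, 9.1):.1f}%")
```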

Conclusions

TCD significantly enhances rtPA-induced lysis of blood clots, and the effect is amplified by using multiple probes. Our results indicate that bitemporal TCD insonation of hematomas could be a new and safe approach to enhancing fibrinolysis of ICHs treated with an intralesional catheter and rtPA.

]]>
<![CDATA[Effect of F0 contour on perception of Mandarin Chinese speech against masking]]> https://www.researchpad.co/article/5c37b7a9d5eed0c48449083a

Intonation has many perceptually significant functions in language that contribute to speech recognition. This study aims to investigate whether intonation cues affect the unmasking of Mandarin Chinese speech in the presence of interfering sounds. Specifically, the intelligibility of multi-tone Mandarin Chinese sentences with maskers consisting of either two-talker speech or steady-state noise was measured under three intonation conditions (flattened, typical, and exaggerated). Unlike most previous studies, the present study manipulated only the intonation information while preserving the tone information. The results showed that recognition of the final keywords in multi-tone Mandarin Chinese sentences was much better under the typical F0 contour condition than under the flattened or exaggerated F0 contour conditions with either a noise or a speech masker, and that an exaggerated F0 contour reduced the intelligibility of Mandarin Chinese more under the speech masker condition than under the noise masker condition. These results suggest that speech in a tone language (Mandarin Chinese) is harder to understand when the intonation is unnatural, even if the tone information is preserved, and that an unnatural intonation contour reduces the release of Mandarin Chinese speech from masking, especially in a multi-talker environment.
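Flattening or exaggerating a sentence-level F0 contour can be expressed as scaling its excursions around the mean; a minimal sketch (the F0 track is hypothetical, and in practice the sentence-level trend would be separated from the lexical-tone contours before scaling):

```python
import numpy as np

def scale_f0_contour(f0: np.ndarray, factor: float) -> np.ndarray:
    """Flatten (factor < 1) or exaggerate (factor > 1) an F0 contour
    by scaling its excursions around the mean F0."""
    return f0.mean() + factor * (f0 - f0.mean())

# Hypothetical sentence-level F0 trend (Hz)
f0_track = np.array([220, 230, 225, 210, 205, 200, 190, 185], dtype=float)
flattened = scale_f0_contour(f0_track, 0.0)    # monotone intonation
exaggerated = scale_f0_contour(f0_track, 2.0)  # doubled excursions
```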

]]>
<![CDATA[Implicit acoustic sequence learning recruits the hippocampus]]> https://www.researchpad.co/article/5c26977ed5eed0c48470fb1a

The exclusive role of the medial temporal lobe in explicit memory has been questioned by several studies reporting medial temporal lobe involvement during implicit learning. Prior studies have demonstrated that hippocampal engagement is present during the implicit learning of perceptual associations; however, it is absent during the learning of response-related associations. It was therefore hypothesized that the function of the medial temporal lobe during implicit learning is related to the extraction of perceptual associations in general. While most implicit learning tasks have used visual stimuli, the aim of the current functional magnetic resonance imaging (fMRI) study was to determine whether activations within medial temporal lobe structures are also found during implicit learning of auditory associations. In a modified version of the classical serial reaction time task, participants reacted to the presentation of five different tones. Unbeknownst to the participants, the tones were presented with an underlying sequential regularity that could be learned. To avoid an influence of response learning on acoustic associative learning, the response buttons were remapped in every trial. After learning, two different tests were used to measure participants’ conscious knowledge of the underlying sequence, in order to assess the amount of implicit memory and to exclude participants who had acquired explicit knowledge during learning. The fMRI results revealed hippocampal activations during implicit learning of the acoustic sequence. By linking implicit learning of acoustic associations to hippocampal activations, this study indicates a relation between hippocampal activity and the formation of perceptually based relational representations regardless of explicit knowledge. Thus, the present findings suggest a general functional role of the hippocampus in the formation of sequenced perceptual associations, independent of the involvement of awareness.
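The button-remapping logic can be sketched in a few lines: the tone order carries the hidden regularity while the tone-to-button mapping changes every trial (the sequence and mapping below are illustrative assumptions):

```python
import random

tones = ["A", "B", "C", "D", "E"]          # five different tones
sequence = ["A", "C", "B", "E", "D", "C"]  # hypothetical underlying regularity

def trial_stream(n_trials: int):
    """Yield (tone, response_button) pairs: the tone order follows the hidden
    sequence, but the tone->button mapping is remapped on every trial, so any
    learning must be perceptual rather than response-based."""
    for i in range(n_trials):
        tone = sequence[i % len(sequence)]
        mapping = dict(zip(tones, random.sample(range(5), 5)))
        yield tone, mapping[tone]

for tone, button in trial_stream(6):
    print(tone, button)
```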

]]>