ResearchPad - language-acquisition Default RSS Feed en-us © 2020 Newgen KnowledgeWorks <![CDATA[Disentangling sequential from hierarchical learning in Artificial Grammar Learning: Evidence from a modified Simon Task]]> In this paper we probe the interaction between sequential and hierarchical learning by investigating implicit learning in a group of school-aged children. We administered a serial reaction time task, in the form of a modified Simon Task in which the stimuli were organised following the rules of two distinct artificial grammars, specifically Lindenmayer systems: the Fibonacci grammar (Fib) and the Skip grammar (a modification of the former). The choice of grammars is determined by the goal of this study, which is to investigate how sensitivity to structure emerges in the course of exposure to an input whose surface transitional properties (by hypothesis) bootstrap structure. The studies conducted to date have mainly been designed to investigate low-level superficial regularities, learnable in purely statistical terms, whereas hierarchical learning has not yet been effectively investigated. The possibility of directly pinpointing the interplay between sequential and hierarchical learning is instead at the core of our study: we presented children with two grammars, Fib and Skip, which share the same transitional regularities, thus providing identical opportunities for sequential learning, while crucially differing in their hierarchical structure. Specifically, there are points in the sequence (k-points) which, despite giving rise to the same transitional regularities in the two grammars, support hierarchical reconstruction in Fib but not in Skip. In our protocol, children were simply asked to perform a traditional Simon Task and were completely unaware of the real purposes of the task.
Results indicate that sequential learning occurred in both grammars, as shown by the decrease in reaction times throughout the task, while differences were found in the sensitivity to k-points: these, we contend, play a role in hierarchical reconstruction in Fib, whereas they are devoid of structural significance in Skip. In particular, we found that children were faster at k-points in sequences produced by Fib, providing an entirely new kind of evidence for the hypothesis that implicit learning involves an early activation of strategies of hierarchical reconstruction, interacting straightforwardly with the statistical computation of transitional regularities over sequences of symbols.
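The Fibonacci grammar referred to above is the classic Lindenmayer system with rewrite rules 0 → 01 and 1 → 0, applied in parallel at each generation; the Skip grammar is the authors' own modification and is not specified in this abstract, so it is not reproduced here. A minimal sketch of how such sequences are generated (function and variable names are illustrative):

```python
def expand(s, rules, n):
    """Apply L-system rewrite rules to string s, n generations in parallel."""
    for _ in range(n):
        s = "".join(rules[ch] for ch in s)
    return s

# Fibonacci grammar: 0 -> 01, 1 -> 0
FIB = {"0": "01", "1": "0"}

seq = expand("0", FIB, 7)  # "0100101001001010010100100101..."
```

Successive generations have Fibonacci lengths (1, 2, 3, 5, 8, ...), and the derivation tree induced by these rewrites is the hierarchical structure that k-points are hypothesized to make recoverable from the surface string.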

<![CDATA[The Language of Innovation]]> Predicting innovation is a peculiar problem in data science. By definition, an innovation is a never-seen-before event, leaving no room for traditional supervised learning approaches. Here we propose a strategy to address the problem in the context of innovative patents, by defining innovations as never-seen-before associations of technologies and exploiting self-supervised learning techniques. We think of the technological codes present in patents as a vocabulary and of the whole technological corpus as written in a specific, evolving language. We leverage this structure with techniques borrowed from Natural Language Processing, embedding technologies in a high-dimensional Euclidean space where relative positions represent learned semantics. Proximity in this space is an effective predictor of specific innovation events, outperforming a wide range of standard link-prediction metrics. The success of patented innovations follows complex dynamics characterized by distinct patterns, which we analyze in detail with specific examples. The methods proposed in this paper provide a completely new way of understanding and forecasting innovation, tackling it from a revealing perspective and opening interesting scenarios for a number of applications and further analytic approaches.
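The core prediction step, ranking never-seen-before pairs of technology codes by embedding proximity, can be sketched as follows. The code labels and vectors below are illustrative placeholders; in the paper the embeddings would be learned by a self-supervised model trained on the patent corpus.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical embeddings for technology codes (in practice, learned).
emb = {
    "H04L": [0.9, 0.1, 0.0],   # e.g., digital transmission
    "G06N": [0.8, 0.3, 0.1],   # e.g., machine-learning methods
    "A61K": [0.0, 0.2, 0.9],   # e.g., medicinal preparations
}

# Rank candidate never-seen-before code pairs by proximity: closer pairs
# are predicted to be more likely future innovations (first co-occurrence).
pairs = [("H04L", "G06N"), ("H04L", "A61K"), ("G06N", "A61K")]
ranked = sorted(pairs, key=lambda p: cosine(emb[p[0]], emb[p[1]]),
                reverse=True)
```

The design choice mirrors NLP word embeddings: codes that occur in similar patent contexts end up nearby in the space, so proximity serves as a learned link-prediction score.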

<![CDATA[Online comprehension across different semantic categories in preschool children with autism spectrum disorder]]>


Word comprehension across semantic categories is a key area of language development. Using online automated eye-tracking technology to reduce response demands during a word comprehension test may be advantageous in children with autism spectrum disorder (ASD).


To measure online accuracy of word recognition across eleven semantic categories in preschool children with ASD and in typically developing (TD) children matched for gender and developmental age.


Using eye-tracking methodology, we measured the relative number of fixations on a target image as compared to a foil of the same category shown simultaneously on screen. This online accuracy measure was considered a measure of word understanding. We tested the relationship between online accuracy and offline word recognition, and the effects of clinical variables on online accuracy. Twenty-four children with ASD and 21 TD control children underwent the eye-tracking task.


On average, children with ASD were significantly less accurate at fixating on the target image than the TD children. After multiple comparison correction, no significant differences were found across the eleven semantic categories of the experiment between preschool children with ASD and younger TD children matched for developmental age. The ASD group showed higher intragroup variability consistent with greater variation in vocabulary growth rates. Direct effects of non-verbal cognitive levels, vocabulary levels and gesture productions on online word recognition in both groups support a dimensional view of language abilities in ASD.


Online measures of word comprehension across different semantic categories show higher interindividual variability in children with ASD and may be useful for objectively monitoring gains from targeted language interventions.

<![CDATA[Effect of language proficiency on proactive oculo-motor control among bilinguals]]>

We examined the effect of language proficiency on the status and dynamics of proactive inhibitory control in an oculo-motor cued go/no-go task. The first experiment was designed to demonstrate the effect of second-language proficiency on proactive inhibitory cost and on adjustments in control, by evaluating previous-trial effects. This was achieved by introducing uncertainty about the upcoming event (go or no-go stimulus). High- and low-proficiency Hindi-English bilingual adults participated in the study. Saccadic latencies and errors were taken as the measures of performance. The results demonstrate a significantly lower proactive inhibitory cost and better up-regulation of proactive control under uncertainty among high-proficiency bilinguals. An analysis of previous-trial effects suggests that high-proficiency bilinguals were better at releasing inhibition and adjusting control in an ongoing response activity under uncertainty. To further understand the dynamics of proactive inhibitory control as a function of proficiency, the second experiment was designed to test the default versus temporary state hypothesis of proactive inhibitory control. Manipulations were introduced in the cued go/no-go task to make the upcoming go or no-go trial difficult to predict, which increased the demands on the implementation and maintenance of proactive control. High-proficiency bilinguals were found to rely on a default state of proactive inhibitory control, whereas low-proficiency bilinguals relied on temporary/transient proactive inhibition. Language proficiency, as one measure of bilingualism, thus influences both the status and the dynamics of proactive inhibitory control.

<![CDATA[Emergence of linguistic conventions in multi-agent reinforcement learning]]>

Recently, the emergence of signaling conventions, of which language is a prime example, has drawn considerable interdisciplinary interest, ranging from game theory to robotics to evolutionary linguistics. Such a wide spectrum of research rests on very different assumptions and methodologies, and the complexity of the problem has so far precluded a unifying, commonly accepted explanation. We examine the formation of signaling conventions in the framework of a multi-agent reinforcement learning model. When the network of interactions between agents is a complete graph or a sufficiently dense random graph, a global consensus is typically reached, with the emerging language being a nearly unique object-word mapping or containing some synonyms and homonyms. On finite-dimensional lattices, the model gets trapped in disordered configurations with only a local consensus. Such trapping can be avoided by introducing population renewal, which, in the presence of superlinear reinforcement, restores ordinary surface-tension-driven coarsening and considerably enhances the formation of efficient signaling.
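A minimal sketch in the spirit of such models (not the authors' exact model): agents hold reinforcement weights over object-word associations, pairs of agents meet on a complete graph, and successful communication reinforces the association both used. All parameter values below are illustrative.

```python
import random

random.seed(1)
N_AGENTS, N_OBJECTS, N_WORDS = 10, 2, 5

# Each agent keeps reinforcement weights over (object, word) associations.
agents = [[[1.0] * N_WORDS for _ in range(N_OBJECTS)]
          for _ in range(N_AGENTS)]

def pick(weights):
    """Sample an index with probability proportional to its weight."""
    r = random.uniform(0, sum(weights))
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

def play_round(delta=1.0):
    """One interaction on a complete graph: any two agents can meet."""
    s, h = random.sample(range(N_AGENTS), 2)
    obj = random.randrange(N_OBJECTS)
    word = pick(agents[s][obj])
    # Hearer interprets the word as the object it links it to most strongly.
    guess = max(range(N_OBJECTS), key=lambda o: agents[h][o][word])
    if guess == obj:  # success: both agents reinforce the association
        agents[s][obj][word] += delta
        agents[h][obj][word] += delta

for _ in range(20000):
    play_round()

# Each agent's preferred word per object; under consensus most agents agree.
prefs = [[max(range(N_WORDS), key=lambda w: a[o][w])
          for o in range(N_OBJECTS)] for a in agents]
```

Restricting who can meet whom to a lattice neighborhood, rather than the complete graph used here, is what traps such models in local consensus.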

<![CDATA[Call combinations in birds and the evolution of compositional syntax]]>

Syntax is the set of rules for combining words into phrases, providing the basis for the generative power of linguistic expressions. In human language, the principle of compositionality governs how words are combined into a larger unit, the meaning of which depends on both the meanings of the words and the way in which they are combined. This linguistic capability, i.e., compositional syntax, has long been considered a trait unique to human language. Here, we review recent studies on call combinations in a passerine bird, the Japanese tit (Parus minor), that provide the first firm evidence for compositional syntax in a nonhuman animal. While it has been suggested that the findings of these studies fail to provide evidence for compositionality in Japanese tits, this criticism rests on a misunderstanding of the experimental design, a misrepresentation of the importance of word order in human syntax, and a demand for linguistic capabilities beyond those given by the standard definition of compositionality. We argue that research on avian call combinations has provided the first steps in elucidating how compositional expressions could have emerged in animal communication systems.

<![CDATA[Vowel reduction in word-final position by early and late Spanish-English bilinguals]]>

Vowel reduction is a prominent feature of American English, as well as of other stress-timed languages. As a phonological process, vowel reduction neutralizes multiple vowel quality contrasts in unstressed syllables. For bilinguals whose native language is not characterized by large spectral and durational differences between tonic and atonic vowels, systematically reducing unstressed vowels to the central vowel space can be problematic. Failure to maintain this pattern of stressed-unstressed syllables in American English is one key element that contributes to a “foreign accent” in second-language speakers. Reduced vowels, or “schwas,” have also been identified as particularly vulnerable to the co-articulatory effects of adjacent consonants. The current study examined the effects of adjacent sounds on the spectral and temporal qualities of schwa in word-final position. Three groups of English-speaking adults were tested: Miami-based monolingual English speakers, early Spanish-English bilinguals, and late Spanish-English bilinguals. Subjects performed a reading task to examine their schwa productions in fluent speech when schwas were preceded by consonants from various points of articulation. Results indicated that the monolingual English and late Spanish-English bilingual groups produced targeted vowel qualities for schwa, whereas early Spanish-English bilinguals lacked homogeneity in their vowel productions. This extends prior claims that schwa is targetless for F2 position from native speakers to highly proficient bilingual speakers. Though spectral qualities lacked homogeneity for early Spanish-English bilinguals, they produced schwas with near native-like vowel duration. In contrast, late bilinguals produced schwas with significantly longer durations than English monolinguals or early Spanish-English bilinguals. Our results suggest that the temporal properties of a language are better integrated into second-language phonologies than spectral qualities.
Finally, we examined the role of nonstructural variables (e.g., language-history measures) in predicting native-like vowel duration. These factors included age of L2 learning, amount of L1 use, and self-reported bilingual dominance. Our results suggest that the sociolinguistic factors that predict native-like reduced vowel duration differ from those that predict native-like vowel qualities across multiple phonetic environments.

<![CDATA[What Constitutes a Phrase in Sound-Based Music? A Mixed-Methods Investigation of Perception and Acoustics]]>

Phrasing facilitates the organization of auditory information and is central to speech and music. Not surprisingly, aspects of changing intensity, rhythm, and pitch are key determinants of musical phrases and their boundaries in instrumental note-based music. Different kinds of speech (such as tone- vs. stress-languages) share these features in different proportions and form an instructive comparison. However, little is known about whether or how musical phrasing is perceived in sound-based music, where the basic musical unit from which a piece is created is commonly non-instrumental continuous sounds, rather than instrumental discontinuous notes. This issue forms the target of the present paper. Twenty participants (17 untrained in music) were presented with six stimuli derived from sound-based music, note-based music, and environmental sound. Their task was to indicate each occurrence of a perceived phrase and qualitatively describe key characteristics of the stimulus associated with each phrase response. It was hypothesized that sound-based music does elicit phrase perception, and that this is primarily associated with temporal changes in intensity and timbre, rather than rhythm and pitch. Results supported this hypothesis. Qualitative analysis of participant descriptions showed that for sound-based music, the majority of perceived phrases were associated with intensity or timbral change. For the note-based piano piece, rhythm was the main theme associated with perceived musical phrasing. We modeled the occurrence in time of perceived musical phrases with recurrent event ‘hazard’ analyses using time-series data representing acoustic predictors associated with intensity, spectral flatness, and rhythmic density. Acoustic intensity and timbre (represented here by spectral flatness) were strong predictors of perceived musical phrasing in sound-based music, and rhythm was only predictive for the piano piece. 
A further analysis including five additional spectral measures linked to timbre strengthened the models. Overall, results show that even when little of the pitch and rhythm information important for phrasing in note-based music is available, phrasing is still perceived, primarily in response to changes of intensity and timbre. Implications for electroacoustic music composition and music recommender systems are discussed.
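Spectral flatness, one of the acoustic predictors above, is a standard timbre descriptor: the ratio of the geometric mean to the arithmetic mean of the power spectrum, near 1 for noise-like (flat) spectra and near 0 for tonal spectra dominated by a few partials. A minimal sketch of the computation (the study's exact feature extraction may differ):

```python
import math

def spectral_flatness(power):
    """Geometric mean over arithmetic mean of a power spectrum.

    Computed in the log domain for numerical stability; all power
    values must be strictly positive.
    """
    assert all(p > 0 for p in power)
    log_gm = sum(math.log(p) for p in power) / len(power)
    return math.exp(log_gm) / (sum(power) / len(power))

flat = spectral_flatness([1.0, 1.0, 1.0, 1.0])        # noise-like: 1.0
tonal = spectral_flatness([100.0, 0.01, 0.01, 0.01])  # one dominant peak
```

A time series of this measure over successive analysis frames is the kind of timbre predictor that could enter a recurrent-event hazard model alongside intensity and rhythmic density.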

<![CDATA[Novelty, Challenge, and Practice: The Impact of Intensive Language Learning on Attentional Functions]]>

We investigated the impact of a short intensive language course on attentional functions.

We examined 33 participants of a one-week Scottish Gaelic course and compared them to 34 controls: 16 active controls who participated in courses of comparable duration and intensity but not involving foreign language learning and 18 passive controls who followed their usual routines. Participants completed auditory tests of attentional inhibition and switching. There was no difference between the groups in any measures at the beginning of the course. At the end of the course, a significant improvement in attention switching was observed in the language group (p < .001) but not the control group (p = .127), independent of the age of participants (18–78 years). Half of the language participants (n = 17) were retested nine months after their course. All those who practiced Gaelic 5 hours or more per week improved from their baseline performance. In contrast, those who practiced 4 hours or fewer showed an inconsistent pattern: some improved while others stayed the same or deteriorated. Our results suggest that even a short period of intensive language learning can modulate attentional functions and that all age groups can benefit from this effect. Moreover, these short-term effects can be maintained through continuous practice.

<![CDATA[Sequence Memory Constraints Give Rise to Language-Like Structure through Iterated Learning]]>

Human language is composed of sequences of reusable elements. The origins of the sequential structure of language are a hotly debated topic in evolutionary linguistics. In this paper, we show that sets of sequences with language-like statistical properties can emerge from a process of cultural evolution under pressure from chunk-based memory constraints. We employ a novel experimental task that is non-linguistic and non-communicative in nature, in which participants are trained on and later asked to recall a set of sequences one-by-one. Recalled sequences from one participant become training data for the next participant. In this way, we simulate cultural evolution in the laboratory. Our results show a cumulative increase in structure, and by comparing this structure to data from existing linguistic corpora, we demonstrate a close parallel between the sets of sequences that emerge in our experiment and those seen in natural language.

<![CDATA[Infant Directed Speech Enhances Statistical Learning in Newborn Infants: An ERP Study]]>

Statistical learning and the social contexts of language addressed to infants are hypothesized to play important roles in early language development. Previous behavioral work has found that the exaggerated prosodic contours of infant-directed speech (IDS) facilitate statistical learning in 8-month-old infants. Here we examined the neural processes involved in on-line statistical learning and investigated whether the use of IDS facilitates statistical learning in sleeping newborns. Event-related potentials (ERPs) were recorded while newborns were exposed to 12 pseudo-words, six spoken with the exaggerated pitch contours of IDS and six spoken without exaggerated pitch contours (adult-directed speech, ADS), in ten alternating blocks. We examined whether ERP amplitudes for syllable position within a pseudo-word (word-initial vs. word-medial vs. word-final, indicating statistical word learning) and speech register (ADS vs. IDS) would interact. The ADS and IDS registers elicited similar ERP patterns for syllable position in an early 0–100 ms component but elicited ERP effects differing in both polarity and topographical distribution at 200–400 ms and 450–650 ms. These results provide the first evidence that the exaggerated pitch contours of IDS result in differences in brain activity linked to on-line statistical learning in sleeping newborns.

<![CDATA[Does Grammatical Structure Accelerate Number Word Learning? Evidence from Learners of Dual and Non-Dual Dialects of Slovenian]]>

How does linguistic structure affect children’s acquisition of early number word meanings? Previous studies have tested this question by comparing how children learning languages with different grammatical representations of number learn the meanings of labels for small numbers, like 1, 2, and 3. For example, children who acquire a language with singular-plural marking, like English, are faster to learn the word for 1 than children learning a language that lacks the singular-plural distinction, perhaps because the word for 1 is always used in singular contexts, highlighting its meaning. These studies are problematic, however, because reported differences in number word learning may be due to unmeasured cross-cultural differences rather than specific linguistic differences. To address this problem, we investigated number word learning in four groups of children from a single culture who spoke different dialects of the same language that differed chiefly with respect to how they grammatically mark number. We found that learning a dialect which features “dual” morphology (marking of pairs) accelerated children’s acquisition of the number word two relative to learning a “non-dual” dialect of the same language.

<![CDATA[Can Learning a Foreign Language Foster Analytic Thinking?—Evidence from Chinese EFL Learners' Writings]]>

Language is not only the representation of thinking, but also shapes thinking. Studies on bilinguals suggest that a foreign language plays an important and unconscious role in thinking. In this study, the software Linguistic Inquiry and Word Count 2007 was used to investigate whether learning English as a foreign language (EFL) can foster Chinese high school students’ English analytic thinking (EAT), through the analysis of their English writings in our self-built corpus. It was found that: (1) learning English can foster Chinese learners’ EAT: Chinese EFL learners’ ability to make distinctions, degree of cognitive complexity and degree of thinking activeness all improved along with the increase of their English proficiency and their age; (2) differences remain between Chinese EFL learners’ EAT and that of English native speakers, i.e., English native speakers are better in the ability to make distinctions and in degree of thinking activeness. These findings suggest that the best EFL learners in high schools have gained native-like analytic thinking through six years’ English learning and are able to switch their cognitive styles as needed.

<![CDATA[Mandarin-English Bilinguals Process Lexical Tones in Newly Learned Words in Accordance with the Language Context]]>

Previous research has mainly considered the impact of tone-language experience on ability to discriminate linguistic pitch, but proficient bilingual listening requires differential processing of sound variation in each language context. Here, we ask whether Mandarin-English bilinguals, for whom pitch indicates word distinctions in one language but not the other, can process pitch differently in a Mandarin context vs. an English context. Across three eye-tracked word-learning experiments, results indicated that tone-intonation bilinguals process tone in accordance with the language context. In Experiment 1, 51 Mandarin-English bilinguals and 26 English speakers without tone experience were taught Mandarin-compatible novel words with tones. Mandarin-English bilinguals out-performed English speakers, and, for bilinguals, overall accuracy was correlated with Mandarin dominance. Experiment 2 taught 24 Mandarin-English bilinguals and 25 English speakers novel words with Mandarin-like tones, but English-like phonemes and phonotactics. The Mandarin-dominance advantages observed in Experiment 1 disappeared when words were English-like. Experiment 3 contrasted Mandarin-like vs. English-like words in a within-subjects design, providing even stronger evidence that bilinguals can process tone language-specifically. Bilinguals (N = 58), regardless of language dominance, attended more to tone than English speakers without Mandarin experience (N = 28), but only when words were Mandarin-like—not when they were English-like. Mandarin-English bilinguals thus tailor tone processing to the within-word language context.

<![CDATA[Unconscious improvement in foreign language learning using mismatch negativity neurofeedback: A preliminary study]]>

When people learn foreign languages, they find it difficult to perceive speech sounds that are nonexistent in their native language, and extensive training is consequently necessary. Our previous studies have shown that by using neurofeedback based on the mismatch negativity event-related brain potential, participants could unconsciously achieve learning in the auditory discrimination of pure tones that could not be consciously discriminated without the neurofeedback. Here, we examined whether mismatch negativity neurofeedback is effective for helping someone to perceive new speech sounds in foreign language learning. We developed a task for training native Japanese speakers to discriminate between ‘l’ and ‘r’ sounds in English, as they usually cannot discriminate between these two sounds. Without participants attending to auditory stimuli or being aware of the nature of the experiment, neurofeedback training helped them to achieve significant improvement in unconscious auditory discrimination and recognition of the target words ‘light’ and ‘right’. There was also improvement in the recognition of other words containing ‘l’ and ‘r’ (e.g., ‘blight’ and ‘bright’), even though these words had not been presented during training. This method could be used to facilitate foreign language learning and can be extended to other fields of auditory and clinical research and even other senses.

<![CDATA[What Homophones Say about Words]]>

The number of potential meanings for a new word is astronomic. To make the word-learning problem tractable, one must restrict the hypothesis space. To do so, current word-learning accounts often incorporate constraints about cognition or about the mature lexicon directly into the learning device. We are concerned with the convexity constraint, which holds that concepts (privileged sets of entities that we think of as “coherent”) do not have gaps (if A and B belong to a concept, so does any entity “between” A and B). To leverage it as a linguistic constraint, learning algorithms have percolated this constraint from concepts to word forms: some algorithms rely on the possibility that word forms are associated with convex sets of objects. Yet this does not have to be the case: homophones are word forms associated with two separate words and meanings. Two sets of experiments show that when evidence suggests that a novel label is associated with a disjoint (non-convex) set of objects, either (a) because there is a gap in conceptual space between the learning exemplars for a given word or (b) because of the intervention of other lexical items in that gap, adults prefer to postulate homophony, where a single word form is associated with two separate words and meanings, rather than inferring that the word could have a disjunctive, discontinuous meaning. These results about homophony must be integrated into current word-learning algorithms. We conclude by arguing for a weaker specialization of word-learning algorithms, which too often could miss important constraints by focusing on a restricted empirical basis (e.g., non-homophonous content words).
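The convexity constraint can be illustrated in a one-dimensional conceptual space: a labeled set of exemplars is convex iff every entity lying between two exemplars also carries the label, and a gap, on the account above, is the learner's cue to posit homophony. A toy sketch (the domain and exemplar sets are illustrative):

```python
def is_convex(exemplars, domain):
    """In a 1-D conceptual space, a labeled set is convex iff every
    entity of the domain lying between two exemplars is also labeled."""
    lo, hi = min(exemplars), max(exemplars)
    return all(x in exemplars for x in domain if lo <= x <= hi)

# Entities ordered along one conceptual dimension (e.g., size).
domain = set(range(10))

# A label seen on {2, 3, 4}: convex, so one word meaning is plausible.
assert is_convex({2, 3, 4}, domain)

# A label seen on {1, 2, 7, 8}: the gap at 3-6 makes the set non-convex;
# on the account above, learners should prefer positing two homophonous
# words over one discontinuous meaning.
assert not is_convex({1, 2, 7, 8}, domain)
```

Real conceptual spaces are of course multidimensional, where convexity generalizes to line segments between exemplars, but the gap-detection logic is the same.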

<![CDATA[Compositional Reasoning in Early Childhood]]>

Compositional “language of thought” models have recently been proposed to account for a wide range of children’s conceptual and linguistic learning. The present work aims to evaluate one of the most basic assumptions of these models: that children have an ability to represent and compose functions. We show that 3.5- to 4.5-year-olds are able to predictively compose two novel functions at significantly above-chance levels, even without any explicit training or feedback on the composition itself. We take this as evidence that children at this age possess some capacity for compositionality, consistent with models that make this ability explicit, and providing an empirical challenge to those that do not.

<![CDATA[Motherese by Eye and Ear: Infants Perceive Visual Prosody in Point-Line Displays of Talking Heads]]>

Infant-directed (ID) speech provides exaggerated auditory and visual prosodic cues. Here we investigated whether infants are sensitive to the match between the auditory and visual correlates of ID speech prosody. We presented 8-month-old infants with two silent line-joined point-light displays of faces speaking different ID sentences, and a single vocal-only sentence matched to one of the displays. Infants looked longer at the matched than the mismatched visual signal when full-spectrum speech was presented, and when the vocal signals contained speech low-pass filtered at 400 Hz. When the visual display was separated into rigid (head only) and non-rigid (face only) motion, the infants looked longer at the visual match in the rigid condition and at the visual mismatch in the non-rigid condition. Overall, the results suggest that 8-month-olds can extract information about the prosodic structure of speech from voice and head kinematics and are sensitive to their match, and that they are less sensitive to the match between lip and voice information in connected speech.

<![CDATA[The Impact of Clickers Instruction on Cognitive Loads and Listening and Speaking Skills in College English Class]]>

Clickers might have a bright future in China if properly introduced, although they have not yet been widely acknowledged as an effective tool to facilitate English learning and teaching in Chinese contexts. By randomly selecting participants from undergraduates in a university in China over four academic years, this study aims to identify the impact of clickers on college English listening and speaking skills, and differences in cognitive loads between clicker-based and traditional multimedia-assisted instruction modes. It was concluded that in China's college English class, compared with multimedia-assisted instruction, (1) clickers could improve college English listening skills; (2) clickers could improve college English speaking skills; and (3) clickers could reduce undergraduates' cognitive loads. Reasons for these results, and limitations of this study, were also explored and discussed, based on learning, teaching and cognitive-load theories. Suggestions for future research were also raised.

<![CDATA[Children Use Statistics and Semantics in the Retreat from Overgeneralization]]>

How do children learn to restrict their productivity and avoid ungrammatical utterances? The present study addresses this question by examining why some verbs are used with un- prefixation (e.g., unwrap) and others are not (e.g., *unsqueeze). Experiment 1 used a priming methodology to examine children's (ages 3–4 and 5–6) grammatical restrictions on verbal un- prefixation. To elicit production of un- prefixed verbs, test trials were preceded by a prime sentence, which described reversal actions with grammatical un- prefixed verbs (e.g., Marge folded her arms and then she unfolded them). Children then completed target sentences by describing cartoon reversal actions corresponding to (potentially) un- prefixed verbs. The younger age group's production probability of verbs in un- form was negatively related to the frequency of the target verb in bare form (e.g., squeez/e/ed/es/ing), while the production probability of verbs in un- form for both age groups was negatively predicted by the frequency of synonyms to a verb's un- form (e.g., release/*unsqueeze). In Experiment 2, the same children rated the grammaticality of all verbs in un- form. The older age group's grammaticality judgments were (a) positively predicted by the extent to which each verb was semantically consistent with a semantic “cryptotype” of meanings (a covert category of overlapping, probabilistic meanings that are difficult to access) hypothesised to be shared by verbs which take un-, and (b) negatively predicted by the frequency of synonyms to a verb's un- form. Taken together, these experiments demonstrate that children as young as 4;0 employ pre-emption and entrenchment to restrict generalizations, and that the use of a semantic cryptotype to guide judgments of overgeneralizations is also evident by age 6;0. Thus, even early developmental accounts of children's restriction of productivity must encompass a mechanism in which a verb's semantic and statistical properties interact.