ResearchPad - gaussian-noise https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks

<![CDATA[Robust pollution source parameter identification based on the artificial bee colony algorithm using a wireless sensor network]]> https://www.researchpad.co/article/elastic_article_14751

Pollution source parameter identification (PSPI) is significant for pollution control, since it provides important information and saves considerable time for subsequent pollution elimination work. For solving the PSPI problem, a large number of pollution sensor nodes can be rapidly deployed to cover a large area and form a wireless sensor network (WSN). Based on the measurements of the WSN, least-squares estimation methods can solve the PSPI problem by searching for the solution that minimizes the sum of squared measurement noises. Such methods are independent of, and hence robust to, the measurement noise distribution. When searching for the least-squares solution, population-based parallel search techniques can usually overcome the premature convergence that stalls single-point search algorithms. In this paper, we adapt the relatively recent artificial bee colony (ABC) algorithm to the WSN-based PSPI problem and verify its feasibility and robustness. Extensive simulation results show that the ABC and the particle swarm optimization (PSO) algorithm obtain similar identification results in the same simulation scenario. Moreover, both achieve much better performance than a traditionally used single-point search algorithm, the trust-region reflective algorithm.
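
A minimal sketch of the approach, assuming a box-bounded parameter space and a user-supplied least-squares objective built from a pollutant forward model (the objective below is a stand-in; names and settings are illustrative, not from the paper):

import numpy as np

rng = np.random.default_rng(0)

def abc_minimize(objective, bounds, n_food=20, limit=30, n_iter=300):
    """Minimal artificial bee colony search over a box-bounded domain.
    objective: maps a parameter vector theta to the least-squares cost,
    e.g. sum((forward_model(theta, sensor_xy) - measurements)**2)."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    foods = lo + rng.random((n_food, dim)) * (hi - lo)  # food sources
    costs = np.array([objective(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)                # stagnation counters

    def local_search(i):
        k = rng.integers(n_food - 1)
        k += k >= i                                     # random partner != i
        j = rng.integers(dim)                           # perturb one coordinate
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        cand[j] = np.clip(cand[j], lo[j], hi[j])
        c = objective(cand)
        if c < costs[i]:                                # greedy selection
            foods[i], costs[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(n_iter):
        for i in range(n_food):                         # employed-bee phase
            local_search(i)
        fit = 1.0 / (1.0 + costs)                       # fitness for roulette
        for i in rng.choice(n_food, n_food, p=fit / fit.sum()):
            local_search(i)                             # onlooker-bee phase
        i = int(np.argmax(trials))                      # scout-bee phase
        if trials[i] > limit:
            foods[i] = lo + rng.random(dim) * (hi - lo)
            costs[i], trials[i] = objective(foods[i]), 0
    best = int(np.argmin(costs))
    return foods[best], costs[best]

# Toy usage: recover a 2-D "source location" from a quadratic residual.
true = np.array([3.0, -1.0])
obj = lambda th: np.sum((th - true) ** 2)
print(abc_minimize(obj, np.array([[-10.0, 10.0], [-10.0, 10.0]])))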

]]>
<![CDATA[Evaluation of functional methods of joint centre determination for quasi-planar movement]]> https://www.researchpad.co/article/5c605a9ed5eed0c4847cd32a

Functional methods identify joint centres as the centre of rotation (CoR) between two adjacent segments during an ad-hoc movement. These methods have been used to functionally determine the hip joint centre in gait analysis and have shown advantages over predictive regression techniques. However, the current implementation of functional methods hinders their clinical application when subjects have difficulty performing multi-plane movements over the required range. In this study, we systematically investigated whether functional methods can be used to localise the CoR during a quasi-planar movement. The effects of the following factors were analysed: the algorithm, the range and speed of the movement, marker cluster location, marker cluster size, and distance to the joint centre. A mechanical linkage was used to isolate the factors of interest and give insight into variation in the implementation of functional methods. Our results showed that the algorithms and cluster locations significantly affected the estimation results. For all algorithms, CoR error increased significantly with the medial-lateral distance of the proximal marker cluster from the joint centre, while the distal marker clusters were best placed as close to the joint centre as possible. By optimising the analytical and experimental factors, the transformation algorithms achieved a root mean square error (RMSE) of 5.3 mm, while the sphere-fitting methods yielded the best estimation with an RMSE of 2.6 mm. The transformation algorithms performed better in the presence of random noise and simulated soft tissue artefacts.
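
One member of the sphere-fitting family can be sketched as the algebraic (Kasa) fit of a sphere to marker trajectories expressed in the reference-segment frame; this is an illustrative choice, and the specific algorithms compared in the study may differ:

import numpy as np

def fit_sphere_center(points):
    """Algebraic (Kasa) sphere fit: estimate the CoR from marker positions.
    Solves |p|^2 = 2 p.c + (r^2 - |c|^2) in the least-squares sense."""
    A = np.hstack([2 * points, np.ones((len(points), 1))])
    b = np.sum(points ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = sol[:3], sol[3]
    radius = np.sqrt(d + center @ center)
    return center, radius

# Toy check: noisy points on an arc (quasi-planar, limited range of motion)
rng = np.random.default_rng(1)
ang = rng.uniform(0, np.pi / 2, 200)
pts = np.c_[1 + 0.3 * np.cos(ang), 2 + 0.3 * np.sin(ang), np.zeros_like(ang)]
pts += 0.001 * rng.standard_normal(pts.shape)   # measurement noise
print(fit_sphere_center(pts))                   # centre near (1, 2, 0), r near 0.3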

]]>
<![CDATA[Deepbinner: Demultiplexing barcoded Oxford Nanopore reads with deep convolutional neural networks]]> https://www.researchpad.co/article/5bfdb372d5eed0c4845c990b

Multiplexing, the simultaneous sequencing of multiple barcoded DNA samples on a single flow cell, has made Oxford Nanopore sequencing cost-effective for small genomes. However, it depends on the ability to sort the resulting sequencing reads by barcode, and current demultiplexing tools fail to classify many reads. Here we present Deepbinner, a tool for Oxford Nanopore demultiplexing that uses a deep neural network to classify reads based on the raw electrical read signal. This ‘signal-space’ approach allows for greater accuracy than existing ‘base-space’ tools (Albacore and Porechop) for which signals must first be converted to DNA base calls, itself a complex problem that can introduce noise into the barcode sequence. To assess Deepbinner and existing tools, we performed multiplex sequencing on 12 amplicons chosen for their distinguishability. This allowed us to establish a ground truth classification for each read based on internal sequence alone. Deepbinner had the lowest rate of unclassified reads (7.8%) and the highest demultiplexing precision (98.5% of classified reads were correctly assigned). It can be used alone (to maximise the number of classified reads) or in conjunction with other demultiplexers (to maximise precision and minimise false positive classifications). We also found cross-sample chimeric reads (0.3%) and evidence of barcode switching (0.3%) in our dataset, which likely arise during library preparation and may be detrimental for quantitative studies that use multiplexing. Deepbinner is open source (GPLv3) and available at https://github.com/rrwick/Deepbinner.
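
A hedged sketch of the signal-space idea: classifying fixed-length windows of raw signal with a small 1-D convolutional network. The architecture below is illustrative only (and written in PyTorch for brevity); Deepbinner's actual, deeper network is documented in the repository linked above:

import torch
import torch.nn as nn

class SignalClassifier(nn.Module):
    """Toy 1-D CNN over raw signal windows; not Deepbinner's architecture."""
    def __init__(self, n_barcodes=12, window=1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        # +1 output class for "no confident barcode" (unclassified)
        self.head = nn.Linear(32 * (window // 16), n_barcodes + 1)

    def forward(self, x):                  # x: (batch, 1, window), normalised
        h = self.features(x)
        return self.head(h.flatten(1))     # logits over barcode classes

model = SignalClassifier()
logits = model(torch.randn(8, 1, 1024))    # 8 random signal windows
print(logits.shape)                        # torch.Size([8, 13])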

]]>
<![CDATA[Effects of Physiological Internal Noise on Model Predictions of Concurrent Vowel Identification for Normal-Hearing Listeners]]> https://www.researchpad.co/article/5989dad7ab0ee8fa60bb84df

Previous studies have shown that concurrent vowel identification improves with increasing temporal onset asynchrony of the vowels, even if the vowels have the same fundamental frequency. The current study investigated the possible underlying neural processing involved in concurrent vowel perception. The individual vowel stimuli from a previously published study were used as inputs for a phenomenological auditory-nerve (AN) model. Spectrotemporal representations of simulated neural excitation patterns were constructed (i.e., neurograms) and then matched quantitatively with the neurograms of the single vowels using the Neurogram Similarity Index Measure (NSIM). A novel computational decision model was used to predict concurrent vowel identification. To facilitate optimum matches between the model predictions and the behavioral human data, internal noise was added at either neurogram generation or neurogram matching using the NSIM procedure. The best fit to the behavioral data was achieved with a signal-to-noise ratio (SNR) of 8 dB for internal noise added at the neurogram but with a much smaller amount of internal noise (SNR of 60 dB) for internal noise added at the level of the NSIM computations. The results suggest that accurate modeling of concurrent vowel data from listeners with normal hearing may partly depend on internal noise and where internal noise is hypothesized to occur during the concurrent vowel identification process.
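
A minimal sketch of calibrating injected internal noise to a target SNR, assuming SNR is defined as signal power over noise power in dB (the study's exact noise model may differ):

import numpy as np

def add_internal_noise(neurogram, snr_db, rng=np.random.default_rng(0)):
    """Add Gaussian 'internal noise' to a neurogram at a given SNR (dB)."""
    sig_power = np.mean(neurogram ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    return neurogram + rng.standard_normal(neurogram.shape) * np.sqrt(noise_power)

ng = np.random.rand(64, 100)                        # stand-in neurogram (freq x time)
noisy_generation = add_internal_noise(ng, snr_db=8)   # heavy corruption
noisy_matching = add_internal_noise(ng, snr_db=60)    # much gentler perturbation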

]]>
<![CDATA[A Method for Non-Rigid Face Alignment via Combining Local and Holistic Matching]]> https://www.researchpad.co/article/5989daf1ab0ee8fa60bc12bb

We propose a method for non-rigid face alignment that needs only a single template, for example matching a person’s smiling face to the same person’s surprised face. First, to be robust to outliers caused by complex geometric deformations, we propose a new local feature matching method called K Patch Pairs (K-PP). Specifically, inspired by the state-of-the-art similarity measures used in template matching, K-PP finds the mutual K nearest neighbors between two images; a weight matrix is then introduced to balance the similarity against the number of local matches. Second, we propose a modified Lucas-Kanade algorithm combined with a local matching constraint to solve the non-rigid face alignment, so that a holistic face representation and local features can be jointly modeled in the objective function. Our method thus combines the flexibility of local matching with the robustness of holistic fitting. Furthermore, we show that the optimization problem can be efficiently solved by the inverse compositional algorithm. Comparison with conventional methods demonstrates our superiority in terms of both accuracy and robustness.
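
A minimal sketch of the mutual-K-nearest-neighbour idea behind K-PP, using Euclidean distance on hypothetical patch descriptors (the paper's similarity measure and weight matrix are not reproduced here):

import numpy as np

def mutual_knn_pairs(desc_a, desc_b, k=5):
    """Return pairs (i, j) where j is among the k nearest neighbours of
    a_i in B *and* i is among the k nearest neighbours of b_j in A."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nn_ab = np.argsort(d, axis=1)[:, :k]       # A -> B neighbours
    nn_ba = np.argsort(d, axis=0)[:k, :].T     # B -> A neighbours, per column
    return [(i, int(j)) for i in range(len(desc_a)) for j in nn_ab[i]
            if i in nn_ba[j]]

# Toy usage on random 32-D patch descriptors from two face images
rng = np.random.default_rng(0)
A, B = rng.random((50, 32)), rng.random((60, 32))
print(len(mutual_knn_pairs(A, B, k=5)))        # number of mutual matches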

]]>
<![CDATA[Robust information propagation through noisy neural circuits]]> https://www.researchpad.co/article/5989db5aab0ee8fa60bdf260

Sensory neurons give highly variable responses to stimulation, which can limit the amount of stimulus information available to downstream circuits. Much work has investigated the factors that affect the amount of information encoded in these population responses, leading to insights about the role of covariability among neurons, tuning curve shape, etc. However, the informativeness of neural responses is not the only relevant feature of population codes; of potentially equal importance is how robustly that information propagates to downstream structures. For instance, to quantify the retina’s performance, one must consider not only the informativeness of the optic nerve responses, but also the amount of information that survives the spike-generating nonlinearity and noise corruption in the next stage of processing, the lateral geniculate nucleus. Our study identifies the set of covariance structures for the upstream cells that optimize the ability of information to propagate through noisy, nonlinear circuits. Within this optimal family are covariances with “differential correlations”, which are known to reduce the information encoded in neural population activities. Thus, covariance structures that maximize information in neural population codes, and those that maximize the ability of this information to propagate, can be very different. Moreover, redundancy is neither necessary nor sufficient to make population codes robust against corruption by noise: redundant codes can be very fragile, and synergistic codes can—in some cases—optimize robustness against noise.
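
The effect of differential correlations on encoded information can be pictured with the standard linear Fisher information I = f'ᵀ Σ⁻¹ f'; the numbers below are illustrative, not the paper's model:

import numpy as np

rng = np.random.default_rng(0)
n = 1000                                      # population size
fprime = rng.standard_normal(n)               # tuning-curve derivatives f'
sigma0 = np.eye(n)                            # independent-noise baseline

def linear_fisher(fp, sigma):
    """Linear Fisher information I = f'^T Sigma^{-1} f'."""
    return fp @ np.linalg.solve(sigma, fp)

eps = 0.01                                    # differential-correlation strength
sigma_diff = sigma0 + eps * np.outer(fprime, fprime)
print(linear_fisher(fprime, sigma0))          # ~n: grows with population size
print(linear_fisher(fprime, sigma_diff))      # bounded above by 1/eps = 100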

]]>
<![CDATA[A New Variational Approach for Multiplicative Noise and Blur Removal]]> https://www.researchpad.co/article/5989db53ab0ee8fa60bdcb12

This paper proposes a new variational model for joint multiplicative denoising and deblurring. It combines a total generalized variation filter (which provably reduces blocky effects by accounting for higher-order smoothness) with a shearlet transform (which effectively preserves anisotropic image features such as sharp edges and curves). The new model takes advantage of both regularizers: it minimizes staircase effects while preserving sharp edges, textures and other fine image details. The existence and uniqueness of a solution to the proposed variational model are also discussed. The resulting energy functional is solved using the alternating direction method of multipliers (ADMM). Numerical experiments show that the proposed model achieves satisfactory restoration results, both visually and quantitatively, in removing blur (motion, Gaussian, disk, and Moffat) and multiplicative noise (Gaussian, Gamma, or Rayleigh). A comparison with other recent methods in this field is provided as well. The proposed model can also restore both single- and multi-channel images contaminated with multiplicative noise, and permits cross-channel blurs when the underlying image has more than one channel. Numerical tests on color images demonstrate the effectiveness of the proposed model.
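
As a hedged illustration of the solver pattern only, here is ADMM applied to a much simpler l1-regularized denoising problem, min_x 0.5*||x - y||^2 + lam*||x||_1; this is not the paper's TGV + shearlet model:

import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1_denoise(y, lam=0.5, rho=1.0, n_iter=100):
    """ADMM via the splitting x = z."""
    x = z = u = np.zeros_like(y)
    for _ in range(n_iter):
        x = (y + rho * (z - u)) / (1.0 + rho)   # quadratic subproblem
        z = soft(x + u, lam / rho)              # l1 proximal step
        u = u + x - z                           # dual (multiplier) update
    return z

y = np.array([3.0, 0.2, -1.5, 0.05])
print(admm_l1_denoise(y))   # small entries shrink to exactly zero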

]]>
<![CDATA[How to Distinguish Conformational Selection and Induced Fit Based on Chemical Relaxation Rates]]> https://www.researchpad.co/article/5989da50ab0ee8fa60b8da57

Protein binding often involves conformational changes. Important questions are whether a conformational change occurs prior to a binding event (‘conformational selection’) or after a binding event (‘induced fit’), and how conformational transition rates can be obtained from experiments. In this article, we present general results for the chemical relaxation rates of conformational-selection and induced-fit binding processes that hold for all concentrations of proteins and ligands and, thus, go beyond the standard pseudo-first-order approximation of large ligand concentration. These results make it possible to distinguish conformational-selection from induced-fit processes—also in cases in which such a distinction is not possible under pseudo-first-order conditions—and to extract conformational transition rates of proteins from chemical relaxation data.
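
A hedged toy of the two kinetic schemes, integrating their rate equations at several ligand concentrations. Ligand is held constant here for simplicity, i.e. exactly the pseudo-first-order regime that the paper's general results go beyond, and the rate constants are illustrative:

import numpy as np
from scipy.integrate import solve_ivp

kon, koff, kf, kb = 10.0, 1.0, 2.0, 0.5   # illustrative rate constants

def induced_fit(t, s, L):
    P, PL, PsL = s                         # P + L <-> PL <-> P*L
    return [-kon * L * P + koff * PL,
            kon * L * P - koff * PL - kf * PL + kb * PsL,
            kf * PL - kb * PsL]

def conf_selection(t, s, L):
    P, Ps, PsL = s                         # P <-> P*,  P* + L <-> P*L
    return [-kf * P + kb * Ps,
            kf * P - kb * Ps - kon * L * Ps + koff * PsL,
            kon * L * Ps - koff * PsL]

for rhs in (induced_fit, conf_selection):
    for L in (0.1, 1.0, 10.0):             # relaxation depends on [L] differently
        sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0], args=(L,))
        print(rhs.__name__, L, round(float(sol.y[2, -1]), 3))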

]]>
<![CDATA[A No-Reference Adaptive Blockiness Measure for JPEG Compressed Images]]> https://www.researchpad.co/article/5989da71ab0ee8fa60b9516c

Digital images are extensively used in education, research, and entertainment. Many of these images, taken by consumer cameras, are compressed by the JPEG algorithm for efficient storage and transmission. Blocking artifacts are a well-known problem caused by this algorithm, and their effective measurement plays an important role in the design, optimization, and evaluation of image compression algorithms. In this paper, we propose a no-reference objective blockiness measure that is adaptive to the high-frequency content of an image. The difference of entropies across blocks and the variation of block-boundary pixel values in edge images are used to calculate the blockiness level in areas with low and high frequency content, respectively. Extensive experimental results show that the proposed measure is effective and stable across a wide variety of images. It is robust to image noise and can be used for real-world image quality monitoring and control.
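
One ingredient of such measures, sketched under assumptions: the luminance jump across 8x8 JPEG block boundaries relative to jumps inside blocks. The published measure additionally uses entropy differences and edge images, which this sketch omits:

import numpy as np

def boundary_blockiness(gray):
    """Toy blockiness score for an 8x8-coded grayscale image: mean absolute
    jump across block boundaries, normalised by the mean jump inside blocks.
    Values well above 1 indicate visible blocking artifacts."""
    dh = np.abs(np.diff(gray.astype(float), axis=1))   # horizontal jumps
    at_boundary = dh[:, 7::8]                          # columns 7|8, 15|16, ...
    inside = np.delete(dh, np.s_[7::8], axis=1)
    return at_boundary.mean() / (inside.mean() + 1e-9)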

]]>
<![CDATA[Criticality meets learning: Criticality signatures in a self-organizing recurrent neural network]]> https://www.researchpad.co/article/5989db5cab0ee8fa60be009c

Many experiments have suggested that the brain operates close to a critical state, based on signatures of criticality such as power-law distributed neuronal avalanches. In neural network models, criticality is a dynamical state that maximizes information processing capacities, e.g. sensitivity to input, dynamical range and storage capacity, which makes it a favorable candidate state for brain function. Although models that self-organize towards a critical state have been proposed, the relation between criticality signatures and learning is still unclear. Here, we investigate signatures of criticality in a self-organizing recurrent neural network (SORN). Investigating criticality in the SORN is of particular interest because it was not designed to exhibit criticality. Instead, the SORN has been shown to exhibit spatio-temporal pattern learning through a combination of neural plasticity mechanisms, and it reproduces a number of biological findings on neural variability and the statistics and fluctuations of synaptic efficacies. We show that, after a transient, the SORN spontaneously self-organizes into a dynamical state that shows criticality signatures comparable to those found in experiments. The plasticity mechanisms are necessary to attain that dynamical state, but not to maintain it. Furthermore, the onset of external input transiently changes the slope of the avalanche distributions, matching recent experimental findings. Interestingly, the membrane noise level necessary for the occurrence of the criticality signatures reduces the model’s performance in simple learning tasks. Overall, our work shows that the biologically inspired plasticity and homeostasis mechanisms responsible for the SORN’s spatio-temporal learning abilities can give rise to criticality signatures in its activity when driven by random input, but these break down under the structured input of short repeating sequences.
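
A minimal sketch of extracting one such criticality signature, neuronal avalanche sizes, from binarised network activity (binning conventions vary across studies):

import numpy as np

def avalanche_sizes(spikes):
    """Sizes of neuronal avalanches: total spike count in each maximal
    run of time bins with non-zero network activity.
    spikes: binary array of shape (time, neurons)."""
    active = spikes.sum(axis=1)
    sizes, cur = [], 0
    for a in active:
        if a > 0:
            cur += a
        elif cur > 0:
            sizes.append(cur)
            cur = 0
    if cur > 0:
        sizes.append(cur)
    return np.array(sizes)

# Criticality signature: a size histogram that is approximately power-law
# (slope near -1.5 on a log-log plot) rather than exponential.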

]]>
<![CDATA[A Secure and Efficient Scalable Secret Image Sharing Scheme with Flexible Shadow Sizes]]> https://www.researchpad.co/article/5989db28ab0ee8fa60bd0a50

In a general (k, n) scalable secret image sharing (SSIS) scheme, the secret image is shared among n participants and any k or more participants can reconstruct it. Scalability means that the amount of information in the reconstructed image scales in proportion to the number of participants. In most existing SSIS schemes, the size of each image shadow is relatively large and the dealer has no flexible control strategy to adjust it to the demands of different applications. Moreover, almost all existing SSIS schemes are not applicable under noisy conditions. To address these deficiencies, in this paper we present a novel SSIS scheme based on compressed sensing, a technique that has been widely used in fields such as image processing, wireless communication and medical imaging. Our scheme is flexible: the dealer can strike a compromise between the size of each shadow and the quality of the reconstructed image. In addition, our scheme has many other advantages, including smooth scalability, noise resilience, and high security. The experimental results and the comparison with similar works demonstrate the feasibility and superiority of our scheme.
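
A toy illustration of the compressed-sensing principle underlying the shadows, not of the sharing scheme itself: a sparse signal is recovered from far fewer random measurements than its length, here via orthogonal matching pursuit (one of several standard recovery algorithms):

import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal matching pursuit: greedy recovery of a sparse x from y = Phi x."""
    support, resid = [], y.copy()
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(Phi.T @ resid))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        resid = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 80, 5                              # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix
y = Phi @ x                                       # "shadow" far shorter than x
print(np.linalg.norm(omp(Phi, y, k) - x))         # ~0: exact recovery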

]]>
<![CDATA[Let’s Not Waste Time: Using Temporal Information in Clustered Activity Estimation with Spatial Adjacency Restrictions (CAESAR) for Parcellating FMRI Data]]> https://www.researchpad.co/article/5989da2fab0ee8fa60b83e08

We have proposed a Bayesian approach for functional parcellation of whole-brain FMRI measurements which we call Clustered Activity Estimation with Spatial Adjacency Restrictions (CAESAR). We use distance-dependent Chinese restaurant processes (dd-CRPs) to define a flexible prior which partitions the voxel measurements into clusters whose number and shapes are unknown a priori. With dd-CRPs we can conveniently implement spatial constraints to ensure that our parcellations remain spatially contiguous and thereby physiologically meaningful. In the present work, we extend CAESAR by using Gaussian process (GP) priors to model the temporally smooth haemodynamic signals that give rise to the measured FMRI data. A challenge for GP inference in our setting is the cubic scaling with respect to the number of time points, which can become computationally prohibitive with FMRI measurements, potentially consisting of long time series. As a solution we describe an efficient implementation that is practically as fast as the corresponding time-independent non-GP model with typically-sized FMRI data sets. We also employ a population Monte-Carlo algorithm that can significantly speed up convergence compared to traditional single-chain methods. First we illustrate the benefits of CAESAR and the GP priors with simulated experiments. Next, we demonstrate our approach by parcellating resting state FMRI data measured from twenty participants as taken from the Human Connectome Project data repository. Results show that CAESAR affords highly robust and scalable whole-brain clustering of FMRI timecourses.
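
The temporal smoothness prior can be pictured with a squared-exponential GP kernel (an assumed kernel choice for illustration; names are not from CAESAR). The naive operations below cost O(T^3) in the number of time points T, exactly the scaling the paper's implementation works around:

import numpy as np

def rbf_kernel(t, ell=2.0, sigma2=1.0, jitter=1e-6):
    """Squared-exponential covariance over time points: encodes a
    temporally smooth prior on haemodynamic time courses."""
    d = t[:, None] - t[None, :]
    return sigma2 * np.exp(-0.5 * (d / ell) ** 2) + jitter * np.eye(len(t))

T = np.arange(200.0)                   # 200 time points (e.g. TRs)
K = rbf_kernel(T)
L = np.linalg.cholesky(K)              # this factorisation is the O(T^3) step
sample = L @ np.random.default_rng(0).standard_normal(len(T))
# 'sample' is one smooth draw from the prior over cluster time courses.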

]]>
<![CDATA[Systematic Design of a Metal Ion Biosensor: A Multi-Objective Optimization Approach]]> https://www.researchpad.co/article/5989da77ab0ee8fa60b971c2

With recent industrial expansion, heavy metals and other pollutants increasingly contaminate our living surroundings. Heavy metals, being non-degradable, tend to accumulate in the food chain, resulting in potentially damaging toxicity to organisms. Techniques to detect metal ions have therefore begun to receive attention. Recent progress in synthetic biology offers an alternative means of metal ion detection, with the help of promoter elements derived from microorganisms. To ease the design, it is necessary to develop a systematic design method for evaluating and selecting adequate components to achieve a desired detection performance. A multi-objective (MO) H2/H∞ performance criterion is derived here for the design specifications of a metal ion biosensor, to achieve H2 optimal matching of a desired input/output (I/O) response and simultaneous H∞ optimal filtering of intrinsic parameter fluctuations and external cellular noise. According to the two design specifications, a Takagi-Sugeno (T-S) fuzzy model is employed to interpolate several local linear stochastic systems to approximate the nonlinear stochastic metal ion biosensor system, so that the multi-objective H2/H∞ design of the metal ion biosensor can be solved as an associated linear matrix inequality (LMI)-constrained multi-objective (MO) design problem. The analysis and design of a metal ion biosensor with optimal I/O response matching and optimal noise filtering ability can then be achieved by solving the multi-objective problem under a set of LMIs. Moreover, a multi-objective evolutionary algorithm (MOEA)-based library search method is employed to find adequate components from the corresponding libraries by solving the LMI-constrained MO H2/H∞ design problem. It is a useful tool for the design of metal ion biosensors, particularly regarding the tradeoffs between the design factors under consideration.
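
A toy LMI feasibility problem of the kind that arises once the T-S fuzzy model provides local linear systems: find a common P > 0 with A_iᵀP + PA_i < 0 for every local model A_i. The matrices are illustrative, not a biosensor design, and cvxpy is just one convenient SDP front end:

import cvxpy as cp
import numpy as np

A1 = np.array([[-1.0, 0.5], [0.0, -2.0]])   # illustrative local models
A2 = np.array([[-1.5, 0.2], [0.3, -1.0]])

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
cons = [P >> eps * np.eye(2)]
for A in (A1, A2):
    cons.append(A.T @ P + P @ A << -eps * np.eye(2))
cp.Problem(cp.Minimize(0), cons).solve()
print(P.value)   # a common Lyapunov matrix certifying both local models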

]]>
<![CDATA[Elucidation of molecular kinetic schemes from macroscopic traces using system identification]]> https://www.researchpad.co/article/5989db54ab0ee8fa60bdd040

Overall cellular responses to biologically-relevant stimuli are mediated by networks of simpler lower-level processes. Although information about some of these processes can now be obtained by visualizing and recording events at the molecular level, this is still possible only in especially favorable cases. Therefore the development of methods to extract the dynamics and relationships between the different lower-level (microscopic) processes from the overall (macroscopic) response remains a crucial challenge in the understanding of many aspects of physiology. Here we have devised a hybrid computational-analytical method to accomplish this task, the SYStems-based MOLecular kinetic scheme Extractor (SYSMOLE). SYSMOLE utilizes system-identification input-output analysis to obtain a transfer function between the stimulus and the overall cellular response in the Laplace-transformed domain. It then derives a Markov-chain state molecular kinetic scheme uniquely associated with the transfer function by means of a classification procedure and an analytical step that imposes general biological constraints. We first tested SYSMOLE with synthetic data and evaluated its performance in terms of its rate of convergence to the correct molecular kinetic scheme and its robustness to noise. We then examined its performance on real experimental traces by analyzing macroscopic calcium-current traces elicited by membrane depolarization. SYSMOLE derived the correct, previously known molecular kinetic scheme describing the activation and inactivation of the underlying calcium channels and correctly identified the accepted mechanism of action of nifedipine, a calcium-channel blocker clinically used in patients with cardiovascular disease. Finally, we applied SYSMOLE to study the pharmacology of a new class of glutamate antipsychotic drugs and their crosstalk mechanism through a heteromeric complex of G protein-coupled receptors. Our results indicate that our methodology can be successfully applied to accurately derive molecular kinetic schemes from experimental macroscopic traces, and we anticipate that it may be useful in the study of a wide variety of biological systems.
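
As a hedged, much-simplified instance of the inverse problem SYSMOLE automates, the sketch below recovers the two rate constants of a toy two-state scheme C <-> O from a simulated macroscopic activation trace; the actual pipeline (Laplace-domain system identification plus scheme classification under biological constraints) is far richer:

import numpy as np
from scipy.optimize import curve_fit

alpha_true, beta_true = 4.0, 1.0
t = np.linspace(0, 2, 200)

def macro_current(t, alpha, beta):
    # Open probability after a step from P_open = 0 for C <-> O:
    # P(t) = alpha/(alpha+beta) * (1 - exp(-(alpha+beta) t))
    return alpha / (alpha + beta) * (1 - np.exp(-(alpha + beta) * t))

trace = macro_current(t, alpha_true, beta_true)
trace += 0.01 * np.random.default_rng(0).standard_normal(t.size)  # recording noise
(alpha_hat, beta_hat), _ = curve_fit(macro_current, t, trace, p0=[1.0, 1.0])
print(alpha_hat, beta_hat)   # close to (4.0, 1.0)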

]]>
<![CDATA[Two-Level Scheduling for Video Transmission over Downlink OFDMA Networks]]> https://www.researchpad.co/article/5989da13ab0ee8fa60b7a3ca

This paper presents a two-level scheduling scheme for video transmission over downlink orthogonal frequency-division multiple access (OFDMA) networks. It aims to maximize the aggregate quality of the video users subject to playback delay and resource constraints, by exploiting multiuser diversity and video characteristics. The upper level schedules the transmission of video packets among multiple users based on an overall target bit-error rate (BER), the importance level of each packet, and a resource-consumption efficiency factor. The lower level then provides unequal error protection (UEP), in terms of target BER, among the scheduled packets by solving a weighted sum-distortion minimization problem, where each user's weight reflects the total importance level of the packets that have been scheduled for that user. Frequency-selective power is then water-filled over all assigned subcarriers to leverage the potential channel coding gain. Realistic simulation results demonstrate that the proposed scheme outperforms the state-of-the-art scheduling scheme by up to 6.8 dB in terms of peak signal-to-noise ratio (PSNR). A further test evaluates the suitability of equal power allocation, which is the common assumption in the literature.
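
A minimal sketch of the water-filling step, assuming normalised channel gains and a total power budget (values are illustrative, not from the paper's simulations):

import numpy as np

def waterfill(gains, total_power, tol=1e-9):
    """Water-filling power allocation: p_i = max(0, mu - 1/g_i), with the
    water level mu found by bisection so that sum(p_i) = total_power."""
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = inv.min(), inv.max() + total_power
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - inv, 0.0)

# Stronger subcarriers receive more power; weak ones may get none.
p = waterfill(gains=[2.0, 1.0, 0.25, 0.1], total_power=2.0)
print(p, p.sum())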

]]>
<![CDATA[Towards a Video Passive Content Fingerprinting Method for Partial-Copy Detection Robust against Non-Simulated Attacks]]> https://www.researchpad.co/article/5989d9dbab0ee8fa60b67851

Passive content fingerprinting is widely used for video content identification and monitoring. However, many challenges remain unsolved, especially for partial-copy detection. The main challenge is to find the right balance between the computational cost of fingerprint extraction and the fingerprint dimension, without compromising detection performance against various attacks (robustness). Fast video detection is desirable in several modern applications, for instance those involving large video databases or requiring real-time detection of partial copies, a task whose difficulty increases when videos undergo severe transformations. In this context, conventional fingerprinting methods are not fully suitable to cope with such attacks and transformations, either because their robustness is insufficient or because their execution time is very high, with the time bottleneck commonly found in the fingerprint extraction and matching operations. Motivated by these issues, in this work we propose a content fingerprinting method based on the extraction of a set of independent binary global and local fingerprints. Although these features are robust against common video transformations, their combination is more discriminant against severe video transformations such as signal-processing attacks, geometric transformations, and temporal and spatial desynchronization. Additionally, we use an efficient multilevel filtering system that accelerates fingerprint extraction and matching by rapidly identifying potentially similar video copies, on which alone the full fingerprint comparison is carried out, thus saving computational time. We tested our method on datasets of real copied videos, and the results show that it outperforms state-of-the-art methods in detection scores. Furthermore, the granularity of our method, which processes segments of only one second in length, makes it suitable for partial-copy detection.
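
A loose, simplified stand-in for the multilevel filtering idea: a cheap comparison on a fingerprint prefix discards most database candidates before the full Hamming distance is computed. Binary fingerprints are assumed to be uint8 arrays, and the thresholds are illustrative:

import numpy as np

def hamming_search(query_fp, db_fps, coarse_bytes=8, max_dist=48):
    """Two-stage matching of binary fingerprints."""
    hits = []
    for idx, fp in enumerate(db_fps):
        # Level 1: coarse filter on a short fingerprint prefix
        coarse = np.unpackbits(query_fp[:coarse_bytes] ^ fp[:coarse_bytes]).sum()
        if coarse > max_dist // 4:
            continue
        # Level 2: exact Hamming distance on the full fingerprint
        d = np.unpackbits(query_fp ^ fp).sum()
        if d <= max_dist:
            hits.append((idx, int(d)))
    return sorted(hits, key=lambda h: h[1])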

]]>
<![CDATA[Bearing-based localization for leader-follower formation control]]> https://www.researchpad.co/article/5989db51ab0ee8fa60bdc4ba

The observability of the leader robot system and leader-follower formation control are studied. First, nonlinear observability is analysed for the case in which the leader robot observes landmarks. Second, the system is shown to be completely observable when the leader robot observes two different landmarks. When the leader robot system is observable, the robots can rapidly form and maintain a formation using only the bearing information that the follower robots obtain from the leader robot. Finally, simulations confirm the effectiveness of the proposed formation control.

]]>
<![CDATA[Stabilizing patterns in time: Neural network approach]]> https://www.researchpad.co/article/5ab4e878463d7e0cbd0422e2

Recurrent and feedback networks are capable of holding dynamic memories. Nonetheless, training a network for that task is challenging, because one must contend with the non-linear propagation of errors through the system: small deviations from the desired dynamics due to error or inherent noise can have a dramatic effect later on. A method to cope with these difficulties is thus needed. In this work we focus on recurrent networks with linear activation functions and a binary output unit, and characterize their ability to reproduce a temporal sequence of actions over the output unit. We suggest casting the temporal learning problem as a perceptron problem. In the discrete case a finite margin appears, providing the network some robustness to noise, under which it performs perfectly (i.e., it produces the desired sequence for an arbitrary number of cycles flawlessly). In the continuous case the margin approaches zero when the output unit changes its state, so the network can only reproduce the sequence with slight jitters. Numerical simulations suggest that in the discrete-time case, the longest sequence that can be learned scales, at best, as the square root of the network size. A dramatic effect occurs when learning several short sequences in parallel: their total length can substantially exceed the length of the longest single sequence the network can learn. The model easily generalizes to an arbitrary number of output units, which boosts its performance; we demonstrate this effect with two practical examples of sequence learning. This work suggests a way to overcome stability problems in training recurrent networks and further quantifies the performance of a network under the specific learning scheme.
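
A hedged toy of the central idea, casting sequence learning as a perceptron problem on the states of a linear recurrent network. This is an open-loop simplification with illustrative sizes; the paper's full model feeds the output back and treats noise and margins explicitly:

import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 8                                    # network size, sequence length
W = rng.standard_normal((N, N)) / np.sqrt(N)     # fixed random connectivity
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep the dynamics stable

x = rng.standard_normal(N)
states = []
for _ in range(T):                               # linear dynamics x <- Wx
    x = W @ x
    states.append(x.copy())
X = np.array(states)
target = np.sign(rng.standard_normal(T))         # desired binary output sequence

w = np.zeros(N)                                  # perceptron learning on (X, target)
for _ in range(1000):
    errors = 0
    for xt, st in zip(X, target):
        if np.sign(w @ xt) != st:
            w += st * xt                         # classic perceptron update
            errors += 1
    if errors == 0:
        break
print(np.all(np.sign(X @ w) == target))          # True once the sequence is learned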

]]>
<![CDATA[How Do Efficient Coding Strategies Depend on Origins of Noise in Neural Circuits?]]> https://www.researchpad.co/article/5989db0dab0ee8fa60bcab02

Neural circuits reliably encode and transmit signals despite the presence of noise at multiple stages of processing. The efficient coding hypothesis, a guiding principle in computational neuroscience, suggests that a neuron or population of neurons allocates its limited range of responses as efficiently as possible to best encode inputs while mitigating the effects of noise. Previous work on this question relies on specific assumptions about where noise enters a circuit, limiting the generality of the resulting conclusions. Here we systematically investigate how noise introduced at different stages of neural processing impacts optimal coding strategies. Using simulations and a flexible analytical approach, we show how these strategies depend on the strength of each noise source, revealing under what conditions the different noise sources have competing or complementary effects. We draw two primary conclusions: (1) differences in encoding strategies between sensory systems—or even adaptational changes in encoding properties within a given system—may be produced by changes in the structure or location of neural noise, and (2) characterization of both circuit nonlinearities as well as noise are necessary to evaluate whether a circuit is performing efficiently.
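
A toy illustration of why the noise location matters: the same noise power injected before versus after a saturating nonlinearity leaves different amounts of stimulus information in the response (linear correlation is used here as a crude proxy for information):

import numpy as np

rng = np.random.default_rng(0)
s = rng.standard_normal(100_000)                 # stimulus
sigma = 0.5                                      # noise standard deviation
g = np.tanh                                      # saturating response nonlinearity

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

r_input_noise = g(s + sigma * rng.standard_normal(s.size))    # upstream noise
r_output_noise = g(s) + sigma * rng.standard_normal(s.size)   # downstream noise
print(corr(s, r_input_noise), corr(s, r_output_noise))
# The two values differ, so the optimal encoder (e.g. the gain of g)
# depends on where the noise enters.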

]]>
<![CDATA[Fast and Accurate Learning When Making Discrete Numerical Estimates]]> https://www.researchpad.co/article/5989d9deab0ee8fa60b68ad1

Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates.
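
A minimal sketch of the two decision functions over a discrete posterior, with an illustrative bimodal prior (the numbers are not from the experiments):

import numpy as np

rng = np.random.default_rng(0)
values = np.arange(1, 11)                    # possible counts 1..10
prior = np.ones(10)
prior[[2, 7]] = 5.0                          # a discrete bimodal prior
prior /= prior.sum()

def posterior(observation, noise_sd=1.5):
    """Discrete posterior for a noisy observation of a count."""
    like = np.exp(-0.5 * ((values - observation) / noise_sd) ** 2)
    p = prior * like
    return p / p.sum()

post = posterior(observation=5.2)
sample_estimate = rng.choice(values, p=post)     # draw a sample from the posterior
map_estimate = values[np.argmax(post)]           # take the posterior maximum
print(post.round(3), sample_estimate, map_estimate)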

]]>