ResearchPad - tangents https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks
<![CDATA[Multipurpose chemical liquid sensing applications by microwave approach]]> https://www.researchpad.co/article/elastic_article_7700

In this work, a novel sensor based on a printed circuit board (PCB) microstrip rectangular patch antenna is proposed to detect different ratios of ethanol in wines and of isopropyl alcohol in disinfectants. The proposed sensor was designed with a finite integration technique (FIT) based high-frequency electromagnetic solver (CST) and was fabricated on a ProtoMat E33 machine. For the numerical investigations, the dielectric properties of the samples were first measured with a dielectric probe kit and then imported into the simulation program. The results showed a linear shift in the resonant frequency of the sensor as the dielectric constant of the samples changed with the concentration of ethanol or isopropyl alcohol. Good agreement was observed between the calculated and measured results, emphasizing the usability of dielectric behavior as an input sensing agent. It was concluded that the proposed sensor is viable for multipurpose chemical sensing applications.
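As an illustration of how such a linear frequency shift could be turned into a permittivity (and hence concentration) readout, the sketch below fits a straight-line calibration and inverts it; all numbers are hypothetical, and the linear model is only what the abstract reports, not the authors' code.

```python
import numpy as np

# Hypothetical calibration data: dielectric constant of reference mixtures
# versus the measured resonant frequency of the patch sensor (GHz).
eps_r = np.array([20.0, 30.0, 40.0, 50.0, 60.0])   # assumed values
f_res = np.array([2.45, 2.41, 2.37, 2.33, 2.29])   # assumed values

# Fit the linear shift reported in the abstract: f_res = a * eps_r + b
a, b = np.polyfit(eps_r, f_res, 1)

def estimate_permittivity(f_measured_ghz):
    """Invert the linear calibration to recover the dielectric constant."""
    return (f_measured_ghz - b) / a

print(f"slope = {a:.4f} GHz per unit permittivity")
print(f"eps_r for a 2.35 GHz resonance ~ {estimate_permittivity(2.35):.1f}")
```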

]]>
<![CDATA[Adaptive multi-degree of freedom Brain Computer Interface using online feedback: Towards novel methods and metrics of mutual adaptation between humans and machines for BCI]]> https://www.researchpad.co/article/5c89771ad5eed0c4847d2469

This paper proposes a novel adaptive online-feedback methodology for Brain Computer Interfaces (BCI). The method uses ElectroEncephaloGraphic (EEG) signals and combines motor with speech imagery to allow for tasks that involve multiple degrees of freedom (DoF). The main approach uses the covariance matrix descriptor as the feature and the Relevance Vector Machines (RVM) classifier. The novel contributions are (1) a new method to select representative data to update the RVM model, and (2) an online classifier that is an adaptively-weighted mixture of RVM models, which accounts for the users' exploration and exploitation processes during the learning phase. Instead of evaluating the subjects' performance solely on the conventional metric of accuracy, we analyze the improvement of their skill based on three other criteria, namely the quality of the confusion matrix, the separability of the data, and their instability. After collecting calibration data for 8 minutes in the first run, 8 participants were able to control the system while receiving visual feedback in the subsequent runs. We observed significant improvement in all subjects, including two who fell into the BCI illiteracy category. Our proposed BCI system complements existing approaches in several respects. First, the co-adaptation paradigm not only adapts the classifiers but also allows the users to actively discover their own way to use the BCI through their exploration and exploitation processes. Furthermore, the auto-calibrating system can be used immediately with minimal calibration time. Finally, this is the first work to combine motor and speech imagery in an online feedback experiment to provide multiple DoF for BCI control applications.
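To make the "adaptively-weighted mixture" idea concrete, here is a minimal sketch under several assumptions: logistic-regression models stand in for the RVMs, the features and labels are random placeholders, and the weight-update rule (upweight models that were correct on the latest trial) is hypothetical rather than the paper's actual scheme.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for RVM models trained on different calibration batches
# (the actual system uses Relevance Vector Machines on covariance features).
X1, y1 = rng.normal(0.0, 1.0, (200, 4)), rng.integers(0, 2, 200)
X2, y2 = rng.normal(0.3, 1.0, (200, 4)), rng.integers(0, 2, 200)
models = [LogisticRegression().fit(X1, y1), LogisticRegression().fit(X2, y2)]
weights = np.ones(len(models)) / len(models)

def predict_mixture(x):
    """Class decision from the weighted average of the models' probabilities."""
    probs = np.array([m.predict_proba(x.reshape(1, -1))[0] for m in models])
    return int(np.argmax(weights @ probs))

def update_weights(x, y_true, lr=0.1):
    """Hypothetical rule: upweight models that were correct on the last trial."""
    global weights
    correct = np.array([m.predict(x.reshape(1, -1))[0] == y_true for m in models])
    weights = (1 - lr) * weights + lr * correct
    weights = weights / weights.sum()

x_new, y_new = rng.normal(0.0, 1.0, 4), 1
print("prediction:", predict_mixture(x_new))
update_weights(x_new, y_new)
print("updated weights:", weights.round(3))
```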

]]>
<![CDATA[Dynamical analogues of rank distributions]]> https://www.researchpad.co/article/5c61e933d5eed0c48496f97e

We present an equivalence between stochastic and deterministic variable approaches to represent ranked data and find the expressions obtained to be suggestive of statistical-mechanical meanings. We first reproduce size-rank distributions N(k) from real data sets by straightforward considerations based on the assumed knowledge of the background probability distribution P(N) that generates samples of random variable values similar to real data. The choice of different functional expressions for P(N): power law, exponential, Gaussian, etc., leads to different classes of distributions N(k) for which we find examples in nature. Then we show that all of these types of functions can be alternatively obtained from deterministic dynamical systems. These correspond to one-dimensional nonlinear iterated maps near a tangent bifurcation whose trajectories are proved to be precise analogues of the N(k). We provide explicit expressions for the maps and their trajectories and find that they operate under conditions of vanishing or small Lyapunov exponent, therefore at or near a transition to or out of chaos. We give explicit examples ranging from exponential to logarithmic behavior, including Zipf’s law. Adopting the nonlinear map as the central object of the formalism is a useful viewpoint, as variation of its few parameters, which modify its tangency property, translates into the different classes of N(k).
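The stochastic side of the construction is straightforward to reproduce numerically: draw samples from an assumed background distribution P(N) and sort them by size to obtain the size-rank distribution N(k). A minimal sketch with arbitrary sample sizes and parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 10_000

# Three choices for the background distribution P(N) that generates the data.
samples = {
    "power law": 1.0 + rng.pareto(a=1.5, size=n_samples),   # Zipf-like tail
    "exponential": rng.exponential(scale=10.0, size=n_samples),
    "gaussian": np.abs(rng.normal(loc=50.0, scale=10.0, size=n_samples)),
}

for name, values in samples.items():
    N_of_k = np.sort(values)[::-1]          # size-rank distribution N(k)
    # Report a few ranks; a log-log plot of k versus N(k) would reveal the class.
    print(name, "N(1), N(10), N(100):", N_of_k[[0, 9, 99]].round(2))
```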

]]>
<![CDATA[Ecology of trading strategies in a forex market for limit and market orders]]> https://www.researchpad.co/article/5c2151c2d5eed0c4843fbc76

There is growing interest in understanding financial markets as ecological systems, where the variety of trading strategies corresponds to that of biological species. For this purpose, transaction data for individual traders have recently been studied empirically. However, there are few empirical studies addressing how traders submit limit and market orders at the level of the individual trader. Since limit and market orders are the key ingredients that ultimately lead to transactions, it is necessary to understand what kinds of strategies traders actually employ before transactions occur. Here we demonstrate the variety of limit-order and market-order strategies and show their roles in financial markets from an ecological perspective. We find that these trading strategies can be well characterized by their response pattern to historical price changes. By applying a clustering analysis, we provide an overall picture of trading strategies as an ecological matrix, illustrating that liquidity consumers are likely to exhibit high trading performance compared with liquidity providers. Furthermore, we reveal that both high-frequency traders (HFTs) and low-frequency traders (LFTs) exhibit high trading performance despite the difference in their trading styles; HFTs attempt to maximize their trading efficiency by reducing risk, whereas LFTs make their profit by taking risk.
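A minimal sketch of the clustering step, under the assumption that each trader is summarized by a "response pattern" vector (for example, regression coefficients of order flow against recent price changes); the data below are synthetic stand-ins, not the transaction data used in the study.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Hypothetical response patterns: for each trader, coefficients describing how
# order submission reacts to the past five price changes (trend vs contrarian).
trend_followers = rng.normal(loc=+0.5, scale=0.2, size=(40, 5))
contrarians = rng.normal(loc=-0.5, scale=0.2, size=(40, 5))
responses = np.vstack([trend_followers, contrarians])

# Group traders into strategy clusters, in the spirit of an ecological matrix.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(responses)
print("cluster sizes:", np.bincount(labels))
```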

]]>
<![CDATA[Using data from the Microsoft Kinect 2 to determine postural stability in healthy subjects: A feasibility trial]]> https://www.researchpad.co/article/5989db53ab0ee8fa60bdca3c

The objective of this study was to determine whether kinematic data collected by the Microsoft Kinect 2 (MK2) could be used to quantify postural stability in healthy subjects. Twelve subjects were recruited for the project, and were instructed to perform a sequence of simple postural stability tasks. The movement sequence was performed as subjects were seated on top of a force platform, and the MK2 was positioned in front of them. This sequence of tasks was performed by each subject under three different postural conditions: “both feet on the ground” (1), “One foot off the ground” (2), and “both feet off the ground” (3). We compared force platform and MK2 data to quantify the degree to which the MK2 was returning reliable data across subjects. We then applied a novel machine-learning paradigm to the MK2 data in order to determine the extent to which data from the MK2 could be used to reliably classify different postural conditions. Our initial comparison of force plate and MK2 data showed a strong agreement between the two devices, with strong Pearson correlations between the trunk centroids “Spine_Mid” (0.85 ± 0.06), “Neck” (0.86 ± 0.07) and “Head” (0.87 ± 0.07), and the center of pressure centroid inferred by the force platform. Mean accuracy for the machine learning classifier from MK2 was 97.0%, with a specific classification accuracy breakdown of 90.9%, 100%, and 100% for conditions 1 through 3, respectively. Mean accuracy for the machine learning classifier derived from the force platform data was lower at 84.4%. We conclude that data from the MK2 has sufficient information content to allow us to classify sequences of tasks being performed under different levels of postural stability. Future studies will focus on validating this protocol on large populations of individuals with actual balance impairments in order to create a toolkit that is clinically validated and available to the medical community.
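A toy sketch of the two analyses described above, with synthetic signals in place of the real recordings and a random forest standing in for the study's machine-learning paradigm (which is not specified in the abstract):

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Hypothetical synchronized time series: force-plate centre-of-pressure sway and
# a Kinect trunk-centroid trajectory that tracks it with added sensor noise.
cop = np.cumsum(rng.normal(0, 1, 1000))
kinect_spine_mid = cop + rng.normal(0, 2, 1000)
r, _ = pearsonr(cop, kinect_spine_mid)
print(f"Pearson r between devices: {r:.2f}")

# Hypothetical per-trial sway features labelled by postural condition (1-3).
X = rng.normal(0, 1, (90, 6)) + np.repeat([[0], [1], [2]], 30, axis=0)
y = np.repeat([1, 2, 3], 30)
acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(f"mean classification accuracy: {acc.mean():.2f}")
```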

]]>
<![CDATA[Predicting Market Impact Costs Using Nonparametric Machine Learning Models]]> https://www.researchpad.co/article/5989da6dab0ee8fa60b93b37

Market impact cost is the most significant component of implicit transaction costs; reducing it can lower the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models (neural networks, Bayesian neural networks, Gaussian processes, and support vector regression) to predict market impact cost accurately and to provide a predictive model that is versatile in the number of input variables. We collected a large amount of real single-transaction data for the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most of the nonparametric machine learning models outperformed a state-of-the-art parametric benchmark, the I-star model, in four error measures. Although these models have certain difficulties in separating permanent and temporary costs directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.
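A schematic of the model comparison, using synthetic data and off-the-shelf scikit-learn estimators; the square-root-style regression at the end is only a stand-in for a parametric benchmark such as the I-star model, not its actual functional form.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(4)

# Hypothetical data: market impact versus order size, volatility, and spread.
X = rng.uniform(0, 1, (300, 3))
y = 0.5 * np.sqrt(X[:, 0]) * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.02, 300)
X_train, X_test, y_train, y_test = X[:200], X[200:], y[:200], y[200:]

models = {"Gaussian process": GaussianProcessRegressor(), "SVR": SVR(C=10.0)}
for name, model in models.items():
    pred = model.fit(X_train, y_train).predict(X_test)
    print(name, "MAE:", round(mean_absolute_error(y_test, pred), 4))

# A square-root-law style parametric regression as a benchmark stand-in.
coef = np.polyfit(np.sqrt(X_train[:, 0]) * X_train[:, 1], y_train, 1)
bench = np.polyval(coef, np.sqrt(X_test[:, 0]) * X_test[:, 1])
print("parametric benchmark MAE:", round(mean_absolute_error(y_test, bench), 4))
```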

]]>
<![CDATA[Influence of Age on Ocular Biomechanical Properties in a Canine Glaucoma Model with ADAMTS10 Mutation]]> https://www.researchpad.co/article/5989da19ab0ee8fa60b7c2dc

Soft tissue often displays marked age-associated stiffening. This study aims to investigate how age affects scleral biomechanical properties in a canine glaucoma model with ADAMTS10 mutation, whose extracellular matrix is concomitantly influenced by the mutation and an increased mechanical load from an early age. Biomechanical data were acquired from ADAMTS10-mutant dogs (n = 10, 21 to 131 months) and normal dogs (n = 5, 69 to 113 months). Infusion testing was first performed on whole globes to measure ocular rigidity. After the infusion experiments, the corneas were immediately trephined to prepare scleral shells that were mounted on a pressurization chamber to measure strains in the posterior sclera using an inflation testing protocol. Dynamic viscoelastic mechanical testing was then performed on dissected posterior scleral strips, and the data were combined with those reported earlier by our group for the same animal model (Palko et al., IOVS 2013). The association between age and scleral biomechanical properties was evaluated using multivariate linear regression. The relationships between scleral properties and the mean and last measured intraocular pressure (IOP) were also evaluated. Our results showed that age was positively associated with complex modulus (p < 0.001) and negatively associated with loss tangent (p < 0.001) in both the affected and the normal groups, suggesting increased stiffness and decreased mechanical damping with age. The regression slopes were not different between the groups, although the complex modulus was significantly lower in the affected group (p = 0.041). The posterior circumferential tangential strain was negatively correlated with complex modulus (R = -0.744, p = 0.006), showing consistent mechanical evaluation between the testing methods. Normalized ocular rigidity was negatively correlated with the last IOP in the affected group (p = 0.003). Despite a mutation that affects the extracellular matrix and chronic IOP elevation in the affected dogs, age-associated scleral stiffening and loss of mechanical damping were still prominent and had a rate of change similar to that in the normal dogs.
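For context, the complex modulus and loss tangent used in the regression come from standard dynamic viscoelastic analysis: with a sinusoidal test, |E*| is the stress amplitude divided by the strain amplitude, and tan δ = E''/E' follows from the measured phase lag. A minimal sketch with made-up readings:

```python
import numpy as np

# Assumed readings from a dynamic (sinusoidal) test on a scleral strip.
stress_amplitude = 2.4e6     # Pa, peak stress (hypothetical)
strain_amplitude = 0.01      # peak strain (hypothetical)
phase_lag = np.radians(8.0)  # delta, stress-strain phase shift (hypothetical)

complex_modulus = stress_amplitude / strain_amplitude   # |E*|
storage_modulus = complex_modulus * np.cos(phase_lag)   # E', elastic part
loss_modulus = complex_modulus * np.sin(phase_lag)      # E'', viscous part
loss_tangent = loss_modulus / storage_modulus           # tan(delta)

print(f"|E*| = {complex_modulus/1e6:.1f} MPa, tan(delta) = {loss_tangent:.3f}")
```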

]]>
<![CDATA[Synthesis and Behavior of Cetyltrimethyl Ammonium Bromide Stabilized Zn1+xSnO3+x (0 ≤ x ≤1) Nano-Crystallites]]> https://www.researchpad.co/article/5989da98ab0ee8fa60ba293b

We report the synthesis of cetyltrimethyl ammonium bromide (CTAB) stabilized Zn1+xSnO3+x (0 ≤ x ≤ 1) nano-crystallites by a facile, cost-effective wet chemistry route. The X-ray diffraction patterns of the as-synthesized powders at a Zn/Sn ratio of 1 exhibited the formation of ZnSn(OH)6. Increasing the Zn/Sn ratio further resulted in the precipitation of an additional phase corresponding to Zn(OH)2. The decomposition of these powders at 650°C for 3 h led to the formation of the orthorhombic phase of ZnSnO3 and the tetragonal SnO2-type phase of Zn2SnO4 at Zn/Sn ratios of 1 and 2, respectively, with their mixed phases forming at the intermediate compositions, i.e., at Zn/Sn ratios of 1.25, 1.50 and 1.75. The lattice parameters of the orthorhombic and tetragonal phases were a ~ 3.6203 Å, b ~ 4.2646 Å and c ~ 12.8291 Å (for ZnSnO3) and a = b ~ 5.0136 Å and c ~ 3.3055 Å (for Zn2SnO4). Transmission electron micrographs revealed the formation of nano-crystallites with aspect ratio ~ 2, the length and thickness being 24 and 13 nm (for ZnSnO3) and 47 and 22 nm (for Zn2SnO4), respectively. The estimated direct bandgap values for ZnSnO3 and Zn2SnO4 were 4.21 eV and 4.12 eV, respectively. The ac conductivity values at room temperature (at 10 kHz) for the ZnSnO3 and Zn2SnO4 samples were 8.02 × 10−8 Ω−1 cm−1 and 6.77 × 10−8 Ω−1 cm−1, respectively. The relative permittivity was found to increase with temperature, the room-temperature values being 14.24 and 25.22 for the ZnSnO3 and Zn2SnO4 samples, respectively. Both samples, i.e., ZnSnO3 and Zn2SnO4, exhibited low values of loss tangent up to 300 K, the room-temperature values being 0.89 and 0.72, respectively. A dye-sensitized solar cell was fabricated using the optimized zinc stannate photo-anode sample, i.e., Zn2SnO4. Cyclic voltammetry revealed oxidation and reduction around 0.40 V (current density ~ 11.1 mA/cm2) and 0.57 V (current density ~ 11.7 mA/cm2) for the Zn2SnO4 photo-anode in the presence of light.

]]>
<![CDATA[Phase diagrams and dynamics of a computationally efficient map-based neuron model]]> https://www.researchpad.co/article/5989db52ab0ee8fa60bdc6db

We introduce a new map-based neuron model, derived from the dynamical perceptron family, that offers the best compromise among computational efficiency, analytical tractability, a reduced parameter space, and a wide range of dynamical behaviors. We calculate, analytically and computationally, bifurcation and phase diagrams that underpin a rich repertoire of autonomous and excitable dynamical behaviors. We report the existence of a new regime of cardiac spikes corresponding to nonchaotic aperiodic behavior. We compare the features of our model to standard neuron models currently available in the literature.

]]>
<![CDATA[A novel tri-band T-junction impedance-transforming power divider with independent power division ratios]]> https://www.researchpad.co/article/5989db5cab0ee8fa60be0359

In this paper, a novel L network (LN) is presented, composed of a frequency-selected section (FSS) and a middle stub (MS). Based on the proposed LN, a tri-band T-junction power divider (TTPD) with impedance transformation and independent power division ratios is designed. Moreover, the closed-form design theory of the TTPD is derived from transmission line theory and circuit theory. Finally, a microstrip prototype of the TTPD is simulated, fabricated, and measured. The design targets three arbitrarily chosen frequencies, 1 GHz, 1.6 GHz, and 2.35 GHz, with independent power division ratios of 0.5, 0.7, and 0.9. The measured results show that the fabricated prototype is consistent with the simulation, which demonstrates the effectiveness of the proposed design.
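The closed-form design rests on standard transmission line theory. As a reminder of the basic building block (not the paper's actual derivation), the sketch below evaluates the classic input-impedance formula Zin = Z0 (ZL + j Z0 tan βl) / (Z0 + j ZL tan βl) at the three design frequencies for a hypothetical lossless line section.

```python
import numpy as np

def input_impedance(z0, zl, freq_hz, length_m, phase_velocity=2e8):
    """Input impedance of a lossless transmission line terminated in zl."""
    beta = 2 * np.pi * freq_hz / phase_velocity   # propagation constant
    t = np.tan(beta * length_m)
    return z0 * (zl + 1j * z0 * t) / (z0 + 1j * zl * t)

# Hypothetical line: 50-ohm section, 30 mm long, 100-ohm load; the actual TTPD
# sections, stubs, and impedances follow from the paper's closed-form theory.
for f in (1.0e9, 1.6e9, 2.35e9):
    zin = input_impedance(50.0, 100.0, f, 0.030)
    print(f"{f/1e9:.2f} GHz: Zin = {zin.real:.1f} {zin.imag:+.1f}j ohm")
```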

]]>
<![CDATA[Error Correction and the Structure of Inter-Trial Fluctuations in a Redundant Movement Task]]> https://www.researchpad.co/article/5989dac9ab0ee8fa60bb3a89

We study inter-trial movement fluctuations exhibited by human participants during the repeated execution of a virtual shuffleboard task. Focusing on skilled performance, theoretical analysis of a previously-developed general model of inter-trial error correction is used to predict the temporal and geometric structure of variability near a goal equivalent manifold (GEM). The theory also predicts that the goal-level error scales linearly with intrinsic body-level noise via the total body-goal sensitivity, a new derived quantity that illustrates how task performance arises from the interaction of active error correction and passive sensitivity properties along the GEM. Linear models estimated from observed fluctuations, together with a novel application of bootstrapping to the estimation of dynamical and correlation properties of the inter-trial dynamics, are used to experimentally confirm all predictions, thus validating our model. In addition, we show that, unlike “static” variability analyses, our dynamical approach yields results that are independent of the coordinates used to measure task execution and, in so doing, provides a new set of task coordinates that are intrinsic to the error-regulation process itself.
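To illustrate the flavor of the fluctuation analysis, the sketch below fits a first-order linear (AR(1)) error-correction model to a synthetic inter-trial series and uses a residual bootstrap to put a confidence interval on the correction coefficient; the study's own linear models, GEM coordinates, and bootstrap procedure are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic inter-trial fluctuation series: e[t+1] = a * e[t] + noise.
a_true, n = 0.6, 400
e = np.zeros(n)
for t in range(n - 1):
    e[t + 1] = a_true * e[t] + rng.normal(0, 1)

x, y = e[:-1], e[1:]
a_hat = np.dot(x, y) / np.dot(x, x)        # least-squares AR(1) coefficient
residuals = y - a_hat * x

# Residual bootstrap: resample residuals, rebuild the series, refit.
boot = []
for _ in range(2000):
    r = rng.choice(residuals, size=n - 1, replace=True)
    eb = np.zeros(n)
    for t in range(n - 1):
        eb[t + 1] = a_hat * eb[t] + r[t]
    xb, yb = eb[:-1], eb[1:]
    boot.append(np.dot(xb, yb) / np.dot(xb, xb))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"a_hat = {a_hat:.2f}, 95% bootstrap CI = [{lo:.2f}, {hi:.2f}]")
```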

]]>
<![CDATA[Tangent screen perimetry in the evaluation of visual field defects associated with ptosis and dermatochalasis]]> https://www.researchpad.co/article/5989db53ab0ee8fa60bdc9eb

Purpose

To determine whether tangent visual fields gathered during assessment of superior visual field deficits caused by blepharoptosis and dermatochalasis correlate well with the clinical exam in a time- and cost-efficient manner.

Methods

Prospective, observational case series. Subjects included all patients referred to a single surgeon (CCN) who underwent surgical correction of blepharoptosis and/or dermatochalasis. Preoperatively and postoperatively, upper margin-to-reflex distances were assessed. Tangent visual fields were performed in a timed fashion and analyzed for degrees of intact vision in the vertical meridian and degrees squared of area under the curve. Data were compared by Student t-tests and Pearson correlation coefficients.
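A minimal sketch of the statistical comparison described above, using scipy's paired t-test and Pearson correlation on hypothetical pre/postoperative measurements (the numbers are illustrative only):

```python
import numpy as np
from scipy.stats import pearsonr, ttest_rel

rng = np.random.default_rng(6)

# Hypothetical superior-field extent (degrees in the vertical meridian)
# before and after eyelid surgery, plus the corresponding field area (deg^2).
pre_meridian = rng.normal(8, 3, 30).clip(0)
post_meridian = pre_meridian + rng.normal(15, 4, 30)
area_deg2 = 60 * pre_meridian + rng.normal(0, 40, 30)

t_stat, p_val = ttest_rel(pre_meridian, post_meridian)
r, _ = pearsonr(pre_meridian, area_deg2)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_val:.2g}")
print(f"meridian vs area correlation: r = {r:.2f}")
```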

Results

Mean preoperative superior visual fields with the eyelid in the natural position measured 8° in the vertical meridian. Measurements in the vertical meridian and area under the curve showed excellent correlation (r = 0.87). Patients with ptosis showed strong correlation between margin-to-reflex distance and superior visual fields. Patients completed field testing faster than reported times for automated or Goldmann testing. Finally, tangent screens were the least expensive type of equipment to purchase.

Conclusions

Tangent visual fields are a rapid and inexpensive way to test for functional loss of superior visual field in patients with upper eyelid malposition. Our data revealed potential differences between tangent screen results and published results for automated or Goldmann visual field testing, which warrant further study.

]]>
<![CDATA[Surface Reconstruction from Parallel Curves with Application to Parietal Bone Fracture Reconstruction]]> https://www.researchpad.co/article/5989daceab0ee8fa60bb53e4

Maxillofacial trauma is common, secondary to road traffic accidents, sports injuries, and falls, and requires sophisticated radiological imaging for precise diagnosis. Direct surgical reconstruction is complex and requires clinical expertise. Bio-modelling helps in reconstructing a surface model from 2D contours. In this manuscript, we construct the 3D surface from 2D Computerized Tomography (CT) scan contours. The fractured part of the cranial vault is reconstructed using a GC1 rational cubic Ball curve with three free parameters; the 2D contours are then lifted into 3D with an equidistant z component. The constructed surface is represented by a contour-blending interpolant. At the end of this manuscript, a case report of a parietal bone fracture is illustrated by employing this method with a Graphical User Interface (GUI).
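For orientation, the cubic Ball basis is B0 = (1-t)^2, B1 = 2t(1-t)^2, B2 = 2t^2(1-t), B3 = t^2, and a rational segment weights these by positive shape parameters. The sketch below evaluates a generic rational cubic Ball segment; the paper's specific GC1 construction and its three free parameters are not reproduced here, and the control points are hypothetical.

```python
import numpy as np

def rational_cubic_ball(P, w, t):
    """Evaluate a rational cubic Ball curve segment at parameter values t.

    P: (4, dim) control points; w: 4 positive weights (shape parameters).
    """
    t = np.asarray(t)[:, None]
    B = np.hstack([(1 - t) ** 2,           # B0
                   2 * t * (1 - t) ** 2,   # B1
                   2 * t ** 2 * (1 - t),   # B2
                   t ** 2])                # B3
    num = (B * w) @ P
    den = (B * w).sum(axis=1, keepdims=True)
    return num / den

# Hypothetical control points digitized from one CT contour of the cranial vault.
P = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.5], [4.0, 0.5]])
w = np.array([1.0, 1.5, 1.5, 1.0])        # assumed shape parameters
curve = rational_cubic_ball(P, w, np.linspace(0, 1, 5))
print(curve.round(3))
```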

]]>
<![CDATA[An algorithm for constructing the skeleton graph of degenerate systems of linear inequalities]]> https://www.researchpad.co/article/5989db52ab0ee8fa60bdc543

Deriving the quantitative predictions of constraint-based models requires conversion algorithms that enumerate and construct the skeleton graph formed by the extreme points of the feasible region, where all constraints in the model are fulfilled. The conversion is problematic when the system of linear constraints is degenerate. This paper describes a conversion algorithm that combines the best of two methods: the incremental slicing of cones, which defeats degeneracy, and pivoting, for a swift traversal of the set of extreme points. Extensive computational practice uncovers two complementary classes of conversion problems. The two classes are distinguished by a practical measure of complexity that involves the input and output sizes. Detailed characterizations of the complexity classes and the corresponding performance of the algorithm are presented. For the benefit of implementors, a simple example illustrates the stages of the exposition.
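For orientation only, an off-the-shelf way to enumerate the extreme points of a bounded feasible region is scipy's halfspace intersection; this baseline is not the incremental-slicing and pivoting algorithm proposed in the paper, and building the full skeleton graph would additionally require a vertex-adjacency step not shown here.

```python
import numpy as np
from scipy.spatial import HalfspaceIntersection

# Feasible region {x : A x <= b} for the unit cube in R^3,
# written as stacked halfspaces [A | -b] with A x - b <= 0.
A = np.vstack([np.eye(3), -np.eye(3)])
b = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
halfspaces = np.hstack([A, -b[:, None]])

interior_point = np.array([0.5, 0.5, 0.5])   # any strictly feasible point
hs = HalfspaceIntersection(halfspaces, interior_point)

vertices = np.unique(np.round(hs.intersections, 9), axis=0)
print(f"{len(vertices)} extreme points found:")
print(vertices)
```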

]]>
<![CDATA[Cardiac Mean Electrical Axis in Thoroughbreds—Standardization by the Dubois Lead Positioning System]]> https://www.researchpad.co/article/5989dac3ab0ee8fa60bb152b

Background

Different methodologies for electrocardiographic acquisition in horses have been used since the first ECG recordings in equines were reported early in the last century. This study aimed to determine the best ECG electrode positioning method and the most reliable calculation of the cardiac mean electrical axis (MEA) in equines.

Materials and Methods

We evaluated the electrocardiographic profile of 53 clinically healthy Thoroughbreds, 38 males and 15 females, with ages ranging from 2 to 7 years, all reared at the São Paulo Jockey Club in Brazil. Two ECG tracings were recorded from each animal, one using the Dubois lead positioning system and the other using the base-apex method. QRS complex amplitudes were analyzed to obtain MEA values in the frontal plane for each of the two electrode positioning methods, using two calculation approaches: the first by Tilley tables and the second by trigonometric calculation. Results were compared between the two methods.
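For reference, the trigonometric approach to a frontal-plane axis is commonly written in terms of the net QRS amplitudes of two orthogonal limb leads, e.g. MEA = atan2(aVF, I); the abstract does not state which lead pair the study used, so the sketch below is a generic illustration with hypothetical amplitudes.

```python
import math

def mean_electrical_axis(net_qrs_lead_i, net_qrs_avf):
    """Frontal-plane mean electrical axis (degrees) from two orthogonal leads."""
    return math.degrees(math.atan2(net_qrs_avf, net_qrs_lead_i))

# Hypothetical net QRS amplitudes (mV): positive in lead I, negative in aVF,
# giving a negative (leftward/superior) axis of roughly -80 degrees.
print(f"MEA = {mean_electrical_axis(0.4, -1.9):.1f} degrees")
```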

Results

There was a significant difference in cardiac axis values: the MEA obtained from the Tilley tables was +135.1° ± 90.9° vs. -81.1° ± 3.6° (p < 0.0001), and from the trigonometric calculation it was -15.0° ± 11.3° vs. -79.9° ± 7.4° (p < 0.0001), for base-apex and Dubois, respectively. Furthermore, the Dubois method presented a small range of variation, without statistical or clinical difference between the two calculation modes, whereas the base-apex method showed wide variation.

Conclusion

The Dubois system improved centralization of the Thoroughbreds' hearts, yielding what appears to be the true frontal plane. With either calculation mode, it was the most reliable methodology for obtaining the cardiac mean electrical axis in equines.

]]>
<![CDATA[Ten simple rules for short and swift presentations]]> https://www.researchpad.co/article/5989db54ab0ee8fa60bdd0af ]]>
<![CDATA[Numerical Simulation of Dry Granular Flow Impacting a Rigid Wall Using the Discrete Element Method]]> https://www.researchpad.co/article/5989da13ab0ee8fa60b7a33f

This paper presents a clump model based on the Discrete Element Method (DEM). The clump model is closer to a real particle shape than a spherical particle. Numerical simulations of several tests of dry granular flow in an inclined chute impacting a rigid wall were performed. Five clump models with different sphericity were used in the simulations. By comparing the simulated normal force on the rigid wall with the experimental results, a clump model with suitable sphericity was selected for the subsequent numerical simulation analysis and discussion. The calculated normal forces showed good agreement with the experimental results, verifying the effectiveness of the clump model. Then, the total normal force and bending moment on the rigid wall and the motion of the granular flow were further analyzed. Finally, numerical simulations using the clump model with different grain compositions were compared. By observing the normal force on the rigid wall and the particle size distribution at the front of the rigid wall in the final state, the effect of grain composition on the force on the rigid wall was revealed: as particle size increases, the peak force at the retaining wall also increases. The results can provide a basis for research on related hazards and the design of protective structures.
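For readers unfamiliar with DEM contact laws, a common (though not necessarily the paper's) choice is the linear spring-dashpot model, in which the normal contact force is F_n = k_n δ + c_n δ̇ for overlap δ > 0, and the wall load is the sum over active contacts. A toy sketch with arbitrary parameters:

```python
import numpy as np

def normal_contact_force(overlap, overlap_rate, k_n=1e5, c_n=50.0):
    """Linear spring-dashpot normal force for a particle-wall contact (N)."""
    if overlap <= 0.0:
        return 0.0                  # no contact, no force
    return k_n * overlap + c_n * overlap_rate

# Hypothetical snapshot: overlaps (m) and overlap rates (m/s) of particles
# currently touching the rigid wall; the total is the instantaneous wall load.
overlaps = np.array([1e-4, 5e-5, 0.0, 2e-4])
rates = np.array([0.02, -0.01, 0.0, 0.05])
total = sum(normal_contact_force(d, v) for d, v in zip(overlaps, rates))
print(f"total normal force on wall: {total:.2f} N")
```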

]]>
<![CDATA[A Variational Bayes Approach to the Analysis of Occupancy Models]]> https://www.researchpad.co/article/5989dabdab0ee8fa60baf650

Detection-nondetection data are often used to investigate species range dynamics using Bayesian occupancy models, which rely on Markov chain Monte Carlo (MCMC) methods to sample from the posterior distribution of the model parameters. In this article, we develop two Variational Bayes (VB) approximations to the posterior distribution of the parameters of a single-season site occupancy model that uses logistic link functions to model the probability of species occurrence at sites and the species detection probabilities. This task is accomplished through the development of iterative algorithms that do not use MCMC methods. Simulations and small practical examples demonstrate the effectiveness of the proposed technique. We specifically show that (under certain circumstances) the variational distributions can provide accurate approximations to the true posterior distributions of the parameters of the model when the number of visits per site (K) is as low as three, and that the accuracy of the approximations improves as K increases. We also show that the methodology can be used to obtain the posterior distribution of the predictive distribution of the proportion of sites occupied (PAO).
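For reference, the single-season occupancy likelihood that the VB scheme approximates is, per site i with detection history y_i over K visits, L_i = ψ_i ∏_j p_ij^{y_ij} (1 - p_ij)^{1 - y_ij} + (1 - ψ_i) 1[y_i = 0], with ψ and p on logistic links. A minimal sketch with intercept-only links and made-up data:

```python
import numpy as np

def log_likelihood(beta0, alpha0, detections):
    """Single-season occupancy log-likelihood with intercept-only logit links.

    detections: (n_sites, K) binary detection/nondetection matrix.
    """
    psi = 1.0 / (1.0 + np.exp(-beta0))   # occupancy probability
    p = 1.0 / (1.0 + np.exp(-alpha0))    # detection probability
    y_sum = detections.sum(axis=1)
    K = detections.shape[1]
    occupied_term = psi * p ** y_sum * (1 - p) ** (K - y_sum)
    never_detected = (y_sum == 0).astype(float)
    return np.sum(np.log(occupied_term + (1 - psi) * never_detected))

rng = np.random.default_rng(7)
y = rng.binomial(1, 0.3, size=(50, 3))   # hypothetical K = 3 visits per site
print(f"log-likelihood at beta0 = 0, alpha0 = -1: {log_likelihood(0.0, -1.0, y):.2f}")
```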

]]>
<![CDATA[Accuracy Maximization Analysis for Sensory-Perceptual Tasks: Computational Improvements, Filter Robustness, and Coding Advantages for Scaled Additive Noise]]> https://www.researchpad.co/article/5989db53ab0ee8fa60bdce6f

Accuracy Maximization Analysis (AMA) is a recently developed Bayesian ideal observer method for task-specific dimensionality reduction. Given a training set of proximal stimuli (e.g. retinal images), a response noise model, and a cost function, AMA returns the filters (i.e. receptive fields) that extract the most useful stimulus features for estimating a user-specified latent variable from those stimuli. Here, we first contribute two technical advances that significantly reduce AMA’s compute time: we derive gradients of cost functions for which two popular estimators are appropriate, and we implement a stochastic gradient descent (AMA-SGD) routine for filter learning. Next, we show how the method can be used to simultaneously probe the impact on neural encoding of natural stimulus variability, the prior over the latent variable, noise power, and the choice of cost function. Then, we examine the geometry of AMA’s unique combination of properties that distinguish it from better-known statistical methods. Using binocular disparity estimation as a concrete test case, we develop insights that have general implications for understanding neural encoding and decoding in a broad class of fundamental sensory-perceptual tasks connected to the energy model. Specifically, we find that non-orthogonal (partially redundant) filters with scaled additive noise tend to outperform orthogonal filters with constant additive noise; non-orthogonal filters and scaled additive noise can interact to sculpt noise-induced stimulus encoding uncertainty to match task-irrelevant stimulus variability. Thus, we show that some properties of neural response thought to be biophysical nuisances can confer coding advantages to neural systems. Finally, we speculate that, if repurposed for the problem of neural systems identification, AMA may be able to overcome a fundamental limitation of standard subunit model estimation. As natural stimuli become more widely used in the study of psychophysical and neurophysiological performance, we expect that task-specific methods for feature learning like AMA will become increasingly important.

]]>
<![CDATA[Fast exploration of an optimal path on the multidimensional free energy surface]]> https://www.researchpad.co/article/5989db5cab0ee8fa60bdfef4

In a reaction, determining an optimal path with a high reaction rate (or a low free energy barrier) is important for the study of the reaction mechanism. This is a complicated problem that involves many degrees of freedom. For simple models, one can build an initial path in the collective variable space by interpolation and then update the whole path repeatedly during the optimization. However, such an interpolation method can be risky in a high-dimensional space for large molecules. On the path, steric clashes between neighboring atoms can cause extremely high energy barriers and thus make the optimization fail. Moreover, performing simulations for all the snapshots on the path is also time-consuming. In this paper, we build and optimize the path by a growing method on the free energy surface. The method grows a path from the reactant and extends its length in the collective variable space step by step. The growing direction is determined by both the free energy gradient at the end of the path and the direction vector pointing at the product. With fewer snapshots on the path, this strategy lets the path avoid high-energy states during the growing process and saves simulation time at each iteration step. Applications show that the presented method is efficient enough to produce optimal paths on either two-dimensional or twelve-dimensional free energy surfaces of different small molecules.
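The growing rule is easy to prototype on a toy two-dimensional surface. In the sketch below, the free energy F(x, y) = (x^2 - 1)^2 + 2(y - (1 - x^2))^2, the step length, and the specific way the two terms are blended (full advance toward the product plus a correction from the transverse component of the gradient) are all assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def grad_F(p):
    """Gradient of a toy 2-D free energy with a curved valley between two minima."""
    x, y = p
    dFdx = 4 * x * (x**2 - 1) + 8 * x * (y - (1 - x**2))
    dFdy = 4 * (y - (1 - x**2))
    return np.array([dFdx, dFdy])

reactant, product = np.array([-1.0, 0.0]), np.array([1.0, 0.0])
step, weight = 0.05, 0.5        # step length and gradient weight (assumed values)

path = [reactant]
for _ in range(400):
    end = path[-1]
    to_product = product - end
    dist = np.linalg.norm(to_product)
    if dist < step:
        path.append(product)
        break
    u = to_product / dist                   # direction vector toward the product
    g = grad_F(end)
    g_perp = g - np.dot(g, u) * u           # gradient component across the path
    direction = u - weight * g_perp         # blend of the two terms
    path.append(end + step * direction / np.linalg.norm(direction))

path = np.array(path)
print(f"grew a path with {len(path)} points; final point = {path[-1].round(2)}")
```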

]]>