ResearchPad - linear-algebra https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks

<![CDATA[Mesh smoothing algorithm based on exterior angles split]]> https://www.researchpad.co/article/elastic_article_13823

Since meshes of poor quality give rise to low accuracy in finite element analysis and to various inconveniences in many other applications, mesh smoothing is widely used as an essential technique for improving mesh quality. The main contribution of this paper is a novel mesh smoothing method based on an exterior-angle-split process. The proposed method has three main stages: the first stage performs independent element geometric transformations via exterior-angle-split operations, treating elements as unconnected; the second stage offsets the scaling and displacement induced by the element transformations; the third stage determines the final positions of nodes with a weighted strategy. A theoretical proof establishes the regularity of this method, and numerous numerical experiments illustrate its convergence. Not only is this method applicable to triangular meshes, but it can also be naturally extended to arbitrary polygonal surface meshes. Quality improvements demonstrated on triangular and quadrilateral meshes show the effectiveness of the method.
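The exterior-angle-split operation is specific to this paper, but the smoothing loop it plugs into can be compared against the classical Laplacian baseline it aims to improve on. A minimal sketch of that baseline (not the authors' method; the mesh arrays are illustrative):

```python
import numpy as np

def laplacian_smooth(nodes, elements, fixed, iterations=10):
    """Move each free node toward the centroid of its mesh neighbours.

    nodes:    (n, 2) array of node coordinates
    elements: list of triangles, each a tuple of 3 node indices
    fixed:    set of (boundary) node indices that must not move
    """
    nodes = nodes.copy()
    # Build node adjacency from element connectivity.
    neighbours = {i: set() for i in range(len(nodes))}
    for tri in elements:
        for a in tri:
            neighbours[a].update(b for b in tri if b != a)
    for _ in range(iterations):
        # Gauss-Seidel style update: later nodes see already-moved neighbours.
        for i, nbrs in neighbours.items():
            if i in fixed or not nbrs:
                continue
            nodes[i] = np.mean([nodes[j] for j in nbrs], axis=0)
    return nodes
```

On a unit square with one skewed interior node, the interior node relaxes to the centroid of its neighbours, which is the kind of element-quality improvement the paper's method refines further.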

]]>
<![CDATA[The Language of Innovation]]> https://www.researchpad.co/article/elastic_article_10245

Predicting innovation is a peculiar problem in data science. By definition, an innovation is always a never-seen-before event, leaving no room for traditional supervised learning approaches. Here we propose a strategy to address the problem in the context of innovative patents, by defining innovations as never-seen-before associations of technologies and exploiting self-supervised learning techniques. We think of the technological codes present in patents as a vocabulary, and of the whole technological corpus as written in a specific, evolving language. We leverage this structure with techniques borrowed from Natural Language Processing, embedding technologies in a high-dimensional Euclidean space where relative positions represent learned semantics. Proximity in this space is an effective predictor of specific innovation events that outperforms a wide range of standard link-prediction metrics. The success of patented innovations follows complex dynamics characterized by different patterns, which we analyze in detail with specific examples. The methods proposed in this paper provide a completely new way of understanding and forecasting innovation, tackling it from a revealing perspective and opening interesting scenarios for a number of applications and further analytic approaches.
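The abstract does not fix a particular embedding pipeline; one minimal recipe in this spirit is positive pointwise mutual information (PPMI) over code co-occurrences followed by a truncated SVD, with cosine proximity ranking candidate (possibly never-seen) technology pairs. A hedged sketch on toy data (the patent sets and code labels are invented):

```python
import numpy as np

# Toy "patents": each is a set of technology codes (invented labels).
# A and B share contexts (C, D) but never co-occur directly, so proximity
# in the embedding can suggest the never-seen association A-B.
patents = [{"A", "C"}, {"A", "C"}, {"A", "D"},
           {"B", "C"}, {"B", "C"}, {"B", "D"}, {"E", "F"}]
codes = sorted({c for p in patents for c in p})
idx = {c: i for i, c in enumerate(codes)}

# Symmetric co-occurrence counts over patents.
C = np.zeros((len(codes), len(codes)))
for p in patents:
    for a in p:
        for b in p:
            if a != b:
                C[idx[a], idx[b]] += 1

# Positive PMI, then truncated SVD -> dense embeddings.
total = C.sum()
row = C.sum(axis=1, keepdims=True)
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log((C * total) / (row @ row.T))
ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)
U, S, _ = np.linalg.svd(ppmi)
emb = U[:, :4] * S[:4]          # rank-4 embedding (toy-sized)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
```

Codes with similar contexts end up close, while unrelated codes stay orthogonal, which is the basic mechanism behind proximity-based link prediction.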

]]>
<![CDATA[Controlling seizure propagation in large-scale brain networks]]> https://www.researchpad.co/article/5c7d95e6d5eed0c484734f24

Information transmission in the human brain is a fundamentally dynamic network process. In partial epilepsy, this process is perturbed and highly synchronous seizures originate in a local network, the so-called epileptogenic zone (EZ), before recruiting other close or distant brain regions. We studied patient-specific brain network models of 15 drug-resistant epilepsy patients with implanted stereotactic electroencephalography (SEEG) electrodes. Each personalized brain model was derived from structural data of magnetic resonance imaging (MRI) and diffusion tensor weighted imaging (DTI), comprising 88 nodes equipped with region specific neural mass models capable of demonstrating a range of epileptiform discharges. Each patient’s virtual brain was further personalized through the integration of the clinically hypothesized EZ. Subsequent simulations and connectivity modulations were performed and uncovered a finite repertoire of seizure propagation patterns. Across patients, we found that (i) patient-specific network connectivity is predictive for the subsequent seizure propagation pattern; (ii) seizure propagation is characterized by a systematic sequence of brain states; (iii) propagation can be controlled by an optimal intervention on the connectivity matrix; (iv) the degree of invasiveness can be significantly reduced via the proposed seizure control as compared to traditional resective surgery. To stop seizures, neurosurgeons typically resect the EZ completely. We showed that stability analysis of the network dynamics, employing structural and dynamical information, reliably estimates the spatiotemporal properties of seizure propagation. This suggests novel, less invasive paradigms of surgical intervention to treat and manage partial epilepsy.

]]>
<![CDATA[Assessing mental health service user and carer involvement in physical health care planning: The development and validation of a new patient-reported experience measure]]> https://www.researchpad.co/article/5c6dc9a5d5eed0c484529f71

Background

People living with serious mental health conditions experience increased morbidity due to physical health issues driven by medication side-effects and lifestyle factors. Coordinated mental and physical healthcare delivered in accordance with a care plan could help to reduce morbidity and mortality in this population. Efforts to develop new models of care are hampered by a lack of validated instruments to accurately assess the extent to which mental health service users and carers are involved in care planning for physical health.

Objective

To develop a brief and accurate patient-reported experience measure (PREM) capable of assessing involvement in physical health care planning for mental health service users and their carers.

Methods

We employed psychometric and statistical techniques to refine a bank of candidate questionnaire items, derived from qualitative interviews, into a valid and reliable measure of involvement in physical health care planning. We assessed the psychometric performance of the item bank using modern psychometric analyses. We assessed unidimensionality, scalability, fit to the partial credit Rasch model, category threshold ordering, local dependency, differential item functioning, and test-retest reliability. Once purified of poorly performing and erroneous items, we simulated computerized adaptive testing (CAT) with 15, 10 and 5 items using the calibrated item bank.

Results

Issues with category threshold ordering, local dependency and differential item functioning were evident for a number of items in the nascent item bank and were resolved by removing problematic items. The final 19-item PREM had excellent fit to the Rasch model (χ2 = 192.94, df = 1515, P = .02; RMSEA = .03, 95% CI = .01-.04). The 19-item bank had excellent reliability (marginal r = 0.87). The correlation between questionnaire scores at baseline and 2-week follow-up was high (r = .70, P < .01) and 94.9% of assessment pairs were within the Bland-Altman limits of agreement. Simulated CAT demonstrated that assessments could be made using as few as 10 items (mean SE = .43).

Discussion

We developed a flexible patient reported outcome measure to quantify service user and carer involvement in physical health care planning. We demonstrate the potential to substantially reduce assessment length whilst maintaining reliability by utilizing CAT.

]]>
<![CDATA[Overcoming the problem of multicollinearity in sports performance data: A novel application of partial least squares correlation analysis]]> https://www.researchpad.co/article/5c6f1492d5eed0c48467a325

Objectives

Professional sporting organisations invest considerable resources collecting and analysing data in order to better understand the factors that influence performance. Recent advances in non-invasive technologies, such as global positioning systems (GPS), mean that large volumes of data are now readily available to coaches and sport scientists. However, analysing such data can be challenging, particularly when sample sizes are small and data sets contain multiple highly correlated variables, as is often the case in a sporting context. Multicollinearity in particular, if not treated appropriately, can be problematic and might lead to erroneous conclusions. In this paper we present a novel ‘leave one variable out’ (LOVO) partial least squares correlation analysis (PLSCA) methodology, designed to overcome the problem of multicollinearity, and show how it can be used to identify the training load (TL) variables that most influence ‘end fitness’ in young rugby league players.

Methods

The accumulated TL of sixteen male professional youth rugby league players (17.7 ± 0.9 years) was quantified via GPS, a micro-electrical-mechanical-system (MEMS), and players’ session-rating-of-perceived-exertion (sRPE) over a 6-week pre-season training period. Immediately prior to and following this training period, participants undertook a 30–15 intermittent fitness test (30-15IFT), which was used to determine a player’s ‘starting fitness’ and ‘end fitness’. In total, twelve TL variables were collected, and these along with ‘starting fitness’ as a covariate were regressed against ‘end fitness’. However, considerable multicollinearity in the data (VIF >1000 for nine variables) meant that the multiple linear regression (MLR) process was unstable and so we developed a novel LOVO PLSCA adaptation to quantify the relative importance of the predictor variables and thus minimise multicollinearity issues. As such, the LOVO PLSCA was used as a tool to inform and refine the MLR process.
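The VIF figures quoted above can be reproduced mechanically: each predictor is regressed on all the others, and VIF_j = 1 / (1 − R_j²). A minimal sketch (synthetic data, not the study's dataset):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of the predictor matrix X.

    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j
    on all remaining columns (with an intercept).
    """
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        ss_res = resid @ resid
        ss_tot = ((y - y.mean()) ** 2).sum()
        # Guard against R^2 numerically equal to 1.
        out[j] = 1.0 / max(ss_res / ss_tot, 1e-12)
    return out
```

Independent predictors give VIF near 1, while a near-duplicate column sends VIF into the hundreds or beyond, which is the instability the abstract reports.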

Results

The LOVO PLSCA identified the distance accumulated at very-high speed (>7 m·s-1) as being the most important TL variable to influence improvement in player fitness, with this variable causing the largest decrease in singular value inertia (5.93). When included in a refined linear regression model, this variable, along with ‘starting fitness’ as a covariate, explained 73% of the variance in v30-15IFT ‘end fitness’ (p<0.001) and completely eliminated any multicollinearity issues.

Conclusions

The LOVO PLSCA technique appears to be a useful tool for evaluating the relative importance of predictor variables in data sets that exhibit considerable multicollinearity. When used as a filtering tool, LOVO PLSCA produced an MLR model that demonstrated a significant relationship between ‘end fitness’ and the predictor variable ‘accumulated distance at very-high speed’ when ‘starting fitness’ was included as a covariate. As such, LOVO PLSCA may be a useful tool for sport scientists and coaches seeking to analyse data sets obtained using GPS and MEMS technologies.
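Under this description, a LOVO pass amounts to recomputing the PLSCA singular-value inertia with each predictor removed in turn and ranking predictors by the resulting drop. A simplified single-response sketch (illustrative only; the paper's exact PLSCA variant may differ):

```python
import numpy as np

def zscore(a):
    return (a - a.mean(axis=0)) / a.std(axis=0)

def lovo_inertia_drops(X, Y):
    """Rank predictors by the drop in PLSCA singular-value inertia
    when each variable is left out in turn (the LOVO idea)."""
    Xs, Ys = zscore(X), zscore(Y)
    n = len(Xs)

    def inertia(Xblock):
        R = Xblock.T @ Ys / n          # cross-correlation block
        s = np.linalg.svd(R, compute_uv=False)
        return (s ** 2).sum()          # singular value inertia

    full = inertia(Xs)
    return np.array([full - inertia(np.delete(Xs, j, axis=1))
                     for j in range(Xs.shape[1])])
```

The predictor whose removal costs the most inertia is flagged as most important, mirroring how the 5.93 inertia decrease singled out very-high-speed distance above.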

]]>
<![CDATA[Building geochemically based quantitative analogies from soil classification systems using different compositional datasets]]> https://www.researchpad.co/article/5c75abe2d5eed0c484d07e1f

Soil heterogeneity is a major contributor to the uncertainty in near-surface biogeochemical modeling. We sought to overcome this limitation by exploring the development of a new classification analogy concept for transcribing the largely qualitative criteria in the pedomorphologically based, soil taxonomic classification systems to quantitative physicochemical descriptions. We collected soil horizons classified under the Alfisols taxonomic Order in the U.S. National Resource Conservation Service (NRCS) soil classification system and quantified their properties via physical and chemical characterizations. Using multivariate statistical modeling modified for compositional data analysis (CoDA), we developed quantitative analogies by partitioning the characterization data into three different compositions: water-extracted (WE), Mehlich-III-extracted (ME), and particle-size distribution (PSD) compositions. Afterwards, statistical tests were performed to determine the level of discrimination at different taxonomic and location-specific designations. The analogies showed different abilities to discriminate among the samples. Overall, analogies derived from the WE composition more accurately classified the samples than the other compositions, particularly at the Great Group and thermal regime designations. This work points to the potential to quantitatively discriminate taxonomically different soil types characterized by varying compositional datasets.
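A standard CoDA preprocessing step, consistent with the multivariate modeling described above, is the centered log-ratio (clr) transform, which maps compositions from the simplex into real space where ordinary multivariate statistics apply. A minimal sketch (assuming strictly positive parts):

```python
import numpy as np

def clr(composition):
    """Centered log-ratio transform of a composition (parts on the simplex).

    clr(x)_i = log(x_i) - mean(log(x)).  The result is scale-invariant,
    so only the ratios between parts matter, as CoDA requires.
    """
    x = np.asarray(composition, dtype=float)
    logs = np.log(x)
    return logs - logs.mean(axis=-1, keepdims=True)
```

clr coordinates sum to zero and are unchanged by rescaling the whole composition, which is what makes closed (constant-sum) data safe to feed into multivariate models.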

]]>
<![CDATA[On the synchronization techniques of chaotic oscillators and their FPGA-based implementation for secure image transmission]]> https://www.researchpad.co/article/5c648cedd5eed0c484c81aca

Synchronizing chaotic oscillators has been a challenge to guarantee successful applications in secure communications. To that end, three synchronization techniques are applied herein to twenty-two chaotic oscillators, three of them based on piecewise-linear functions and nineteen proposed by Julien C. Sprott. These chaotic oscillators are simulated to generate chaotic time series that are used to evaluate their Lyapunov exponents and Kaplan-Yorke dimension to rank their unpredictability. The oscillators with the highest positive Lyapunov exponents are implemented into a field-programmable gate array (FPGA), and afterwards they are synchronized in a master-slave topology applying three techniques: the seminal work introduced by Pecora-Carroll, Hamiltonian forms and observer approach, and open-plus-closed-loop (OPCL). These techniques are compared with respect to their synchronization error and the latency associated with the FPGA implementation. Finally, the chaotic oscillators providing the highest positive Lyapunov exponents are synchronized and applied to a communication system with chaotic masking to perform secure image transmission. Correlation analysis is performed among the original image, the chaotic channel and the recovered image for the three synchronization schemes. The experimental results show that both Hamiltonian forms and OPCL can recover the original image, and that its correlation with the chaotic channel is as low as 0.00002, demonstrating the advantage of synchronizing chaotic oscillators with highly positive Lyapunov exponents to guarantee high security in data transmission.
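The Pecora-Carroll scheme mentioned above is easy to illustrate on the textbook Lorenz system (not one of the paper's twenty-two oscillators): the master's x signal drives a slave copy of the (y, z) subsystem, and the synchronization error decays because the conditional Lyapunov exponents of the driven subsystem are negative. A minimal sketch with explicit Euler integration:

```python
import numpy as np

# Lorenz parameters in the classic chaotic regime.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
dt, steps = 0.001, 50_000

# Master state and slave (y, z) subsystem, started far apart.
xm, ym, zm = 1.0, 1.0, 1.0
ys, zs = 5.0, -5.0

for _ in range(steps):
    # Master Lorenz system (explicit Euler for brevity).
    dx = SIGMA * (ym - xm)
    dy = xm * (RHO - zm) - ym
    dz = xm * ym - BETA * zm
    # Slave subsystem driven by the master's x signal (Pecora-Carroll).
    dys = xm * (RHO - zs) - ys
    dzs = xm * ys - BETA * zs
    xm, ym, zm = xm + dt * dx, ym + dt * dy, zm + dt * dz
    ys, zs = ys + dt * dys, zs + dt * dzs

sync_error = abs(ym - ys) + abs(zm - zs)
```

Despite starting with an error of 10, the slave locks onto the master's trajectory; in a chaotic-masking link the message is added to the transmitted chaotic signal and recovered by subtracting the synchronized slave's reconstruction.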

]]>
<![CDATA[Plant-soil feedbacks promote coexistence and resilience in multi-species communities]]> https://www.researchpad.co/article/5c6b26b6d5eed0c484289eef

Both ecological theory and empirical evidence suggest that negative frequency dependent feedbacks structure plant communities, but integration of these findings has been limited. Here we develop a generic model of frequency dependent feedback to analyze coexistence and invasibility in random theoretical and real communities for which frequency dependence through plant-soil feedbacks (PSFs) was determined empirically. We investigated community stability and invasibility by means of mechanistic analysis of invasion conditions and numerical simulations. We found that communities fall along a spectrum of coexistence types ranging from strict pair-wise negative feedback to strict intransitive networks. Intermediate community structures characterized by partial intransitivity may feature “keystone competitors” which disproportionately influence community stability. Real communities were characterized by stronger negative feedback and higher robustness to species loss than randomly assembled communities. Partial intransitivity became increasingly likely in more diverse communities. The results presented here theoretically explain why more diverse communities are characterized by stronger negative frequency dependent feedbacks, a pattern previously encountered in observational studies. Natural communities are more likely to be maintained by strict negative plant-soil feedback than expected by chance, but our results also show that community stability often depends on partial intransitivity. These results suggest that plant-soil feedbacks can facilitate coexistence in multi-species communities, but that these feedbacks may also initiate cascading effects on community diversity following from single-species loss.
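The core ingredient, growth penalized by a species' own frequency, can be illustrated with a two-species replicator-style toy model (illustrative only, not the paper's model): negative plant-soil feedback gives whichever species is rare an advantage, driving the community to stable coexistence.

```python
def simulate_psf(p0=0.9, feedback=1.0, dt=0.01, steps=20_000):
    """Two-species toy model with negative frequency-dependent feedback.

    p is the frequency of species 1; each species' fitness is reduced in
    proportion to its own frequency (e.g. by a build-up of species-specific
    soil pathogens), so rare species gain an advantage.
    """
    p = p0
    for _ in range(steps):
        f1 = 1.0 - feedback * p            # fitness of species 1
        f2 = 1.0 - feedback * (1.0 - p)    # fitness of species 2
        p += dt * p * (1.0 - p) * (f1 - f2)  # replicator dynamics
    return p
```

Starting from either extreme, the frequency converges to 0.5, the coexistence equilibrium that negative feedback stabilizes; multi-species versions of the same mechanism generate the intransitive networks analyzed in the paper.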

]]>
<![CDATA[A large scale screening study with a SMR-based BCI: Categorization of BCI users and differences in their SMR activity]]> https://www.researchpad.co/article/5c57e677d5eed0c484ef330f

Brain-Computer Interfaces (BCIs) are inefficient for a non-negligible part of the population, estimated at around 25%. To understand this phenomenon in Sensorimotor Rhythm (SMR) based BCIs, data from a large-scale screening study conducted on 80 novice participants with the Berlin BCI system and its standard machine-learning approach were investigated. Each participant performed one BCI session with resting-state electroencephalography, Motor Observation, Motor Execution and Motor Imagery recordings and 128 electrodes. A significant portion of the participants (40%) could not achieve BCI control (defined as feedback performance > 70%). Based on the performance of the calibration and feedback runs, BCI users were stratified into three groups. Analyses were performed to detect and elucidate the differences in the SMR activity of these groups. Statistics on reactive frequencies, task prevalence and classification results are reported. Based on their SMR activity, a systematic list of potential reasons for performance drops, and thus hints for possible improvements of BCI experimental design, is also given. The categorization of BCI users has several advantages, allowing researchers 1) to select subjects for further analyses as well as for testing new BCI paradigms or algorithms, 2) to adopt a better subject-dependent training strategy, and 3) to make easier comparisons between different studies.

]]>
<![CDATA[Integrating predicted transcriptome from multiple tissues improves association detection]]> https://www.researchpad.co/article/5c50c43bd5eed0c4845e8359

Integration of genome-wide association studies (GWAS) and expression quantitative trait loci (eQTL) studies is needed to improve our understanding of the biological mechanisms underlying GWAS hits, and our ability to identify therapeutic targets. Gene-level association methods such as PrediXcan can prioritize candidate targets. However, limited eQTL sample sizes and absence of relevant developmental and disease context restrict our ability to detect associations. Here we propose an efficient statistical method (MultiXcan) that leverages the substantial sharing of eQTLs across tissues and contexts to improve our ability to identify potential target genes. MultiXcan integrates evidence across multiple panels using multivariate regression, which naturally takes into account the correlation structure. We apply our method to simulated and real traits from the UK Biobank and show that, in realistic settings, we can detect a larger set of significantly associated genes than using each panel separately. To improve applicability, we developed a summary result-based extension called S-MultiXcan, which we show yields highly concordant results with the individual level version when LD is well matched. Our multivariate model-based approach allowed us to use the individual level results as a gold standard to calibrate and develop a robust implementation of the summary-based extension. Results from our analysis as well as software and necessary resources to apply our method are publicly available.
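A joint test in this spirit can be sketched as a multivariate regression of the trait on predicted expression across tissues, with small principal components of the highly correlated predictor block discarded for numerical stability (the regularization details here are an assumption, not necessarily MultiXcan's exact procedure):

```python
import numpy as np

def multixcan_like_test(trait, tissue_preds, cond_threshold=30.0):
    """Joint association test of a trait on predicted expression from
    several tissues, regularized by discarding small principal components
    of the (often highly correlated) predictor block.

    Returns the F-statistic of the joint regression.
    """
    y = (trait - trait.mean()) / trait.std()
    X = (tissue_preds - tissue_preds.mean(axis=0)) / tissue_preds.std(axis=0)
    n = len(y)
    # PCA of predictors; keep components within the condition-number budget.
    cov = X.T @ X / n
    w, V = np.linalg.eigh(cov)
    keep = w > w.max() / cond_threshold**2
    Z = X @ V[:, keep]              # decorrelated, well-conditioned scores
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    k = int(keep.sum())
    rss, tss = resid @ resid, y @ y
    return ((tss - rss) / k) / (rss / (n - k - 1))
```

A trait genuinely driven by the shared expression signal yields a large F-statistic even though the per-tissue predictions are nearly collinear, while an unrelated trait stays near the null.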

]]>
<![CDATA[The finite state projection based Fisher information matrix approach to estimate information and optimize single-cell experiments]]> https://www.researchpad.co/article/5c478c61d5eed0c484bd1f74

Modern optical imaging experiments not only measure single-cell and single-molecule dynamics with high precision, but they can also perturb the cellular environment in myriad controlled and novel settings. Techniques, such as single-molecule fluorescence in-situ hybridization, microfluidics, and optogenetics, have opened the door to a large number of potential experiments, which raises the question of how to choose the best possible experiment. The Fisher information matrix (FIM) estimates how well potential experiments will constrain model parameters and can be used to design optimal experiments. Here, we introduce the finite state projection (FSP) based FIM, which uses the formalism of the chemical master equation to derive and compute the FIM. The FSP-FIM makes no assumptions about the distribution shapes of single-cell data, and it does not require precise measurements of higher order moments of such distributions. We validate the FSP-FIM against well-known Fisher information results for the simple case of constitutive gene expression. We then use numerical simulations to demonstrate the use of the FSP-FIM to optimize the timing of single-cell experiments with more complex, non-Gaussian fluctuations. We validate optimal simulated experiments determined using the FSP-FIM with Monte-Carlo approaches and contrast these to experiment designs chosen by traditional analyses that assume Gaussian fluctuations or use the central limit theorem. By systematically designing experiments to use all of the measurable fluctuations, our method enables a key step to improve co-design of experiments and quantitative models.
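The constitutive-expression validation case can be reproduced in miniature: build the truncated (FSP) generator of the birth-death process, take its stationary distribution, differentiate numerically with respect to the birth rate k, and compare the resulting Fisher information with the analytic value 1/(kγ) for a Poisson-distributed copy number. A sketch (truncation size and step are illustrative choices, not the paper's):

```python
import numpy as np

def stationary_dist(k, gamma, N):
    """Stationary distribution of constitutive expression (birth rate k,
    degradation rate gamma) on the truncated state space {0, ..., N}."""
    A = np.zeros((N + 1, N + 1))      # generator: A[i, j] = rate j -> i
    for n in range(N + 1):
        if n < N:
            A[n + 1, n] += k          # birth: n -> n+1
            A[n, n] -= k
        if n > 0:
            A[n - 1, n] += gamma * n  # degradation: n -> n-1
            A[n, n] -= gamma * n
    # Stationary distribution: null vector of the generator, normalized.
    w, V = np.linalg.eig(A)
    p = np.abs(np.real(V[:, np.argmin(np.abs(w))]))
    return p / p.sum()

def fsp_fim_kk(k, gamma, N=80, dk=1e-4):
    """FSP-based Fisher information about k from one stationary snapshot:
    I(k) = sum_n (dp_n/dk)^2 / p_n, with dp/dk by central differences."""
    p = stationary_dist(k, gamma, N)
    dp = (stationary_dist(k + dk, gamma, N)
          - stationary_dist(k - dk, gamma, N)) / (2 * dk)
    mask = p > 1e-12
    return ((dp[mask] ** 2) / p[mask]).sum()
```

For k = 10, γ = 1 the stationary law is essentially Poisson(10), so the computed information should land near the analytic 1/(kγ) = 0.1.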

]]>
<![CDATA[Automatic classification of human facial features based on their appearance]]> https://www.researchpad.co/article/5c59ff05d5eed0c484135990

Classification or typology systems used to categorize different human body parts have existed for many years. Nevertheless, there are very few taxonomies of facial features. Ergonomics, forensic anthropology, crime prevention or new human-machine interaction systems and online activities, like e-commerce, e-learning, games, dating or social networks, are fields in which classifications of facial features are useful, for example, to create digital interlocutors that optimize the interactions between human and machines. However, classifying isolated facial features is difficult for human observers. Previous works reported low inter-observer and intra-observer agreement in the evaluation of facial features. This work presents a computer-based procedure to automatically classify facial features based on their global appearance. This procedure deals with the difficulties associated with classifying features using judgements from human observers, and facilitates the development of taxonomies of facial features. Taxonomies obtained through this procedure are presented for eyes, mouths and noses.

]]>
<![CDATA[Development and psychometric testing of the Chinese version of the Resilience Scale for Southeast Asian immigrant women who divorced in Taiwan]]> https://www.researchpad.co/article/5c61e8d2d5eed0c48496f1e5

Background

Only a few studies exist on the resilience of divorced women. Furthermore, relevant instruments for assessing the resilience of divorced immigrant Southeast Asian women are rare. Accordingly, the aim of this study was to develop and examine a new Resilience Scale-Chinese version (RS-C) that is specific to divorced immigrant Southeast Asian women in Taiwan.

Methods

The study was conducted in two phases. In phase 1, 20 items were used to evaluate face and content validities. In phase 2, a cross-sectional study was conducted. In total, 118 immigrant women participated in this study and were recruited from three nongovernmental organizations providing services for immigrants in Taipei City and Miaoli and Chiayi Counties. Psychometric properties of the instrument (i.e., internal consistency, test–retest reliability, item-to-total correlation, construct validity, and convergent validity) were examined. Significance was set at p < 0.05 for all statistical tests.

Results

The final 16-item RS-C resulted in a three-factor model. The three factors, namely personal competence, family identity, and social connections, were an acceptable fit for the data and explained 54.60% of the variance. Cronbach’s α of the RS-C was 0.85, and those of its subscales ranged from 0.77 to 0.82. The correlation value of the test–retest reliability was 0.87. The RS-C was significantly associated with the General Self-Efficacy scale and the Chinese Health Questionnaire-12.
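The Cronbach's α values reported above follow the standard formula α = k/(k−1) · (1 − Σ item variances / variance of the total score). A minimal sketch on synthetic item scores:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```

Perfectly consistent items give α = 1, independent items give α near 0, and values such as the 0.85 reported for the RS-C fall in between.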

Conclusion

The RS-C is a brief and specific self-report tool for evaluating the resilience of divorced immigrant Southeast Asian women and demonstrated adequate reliability and validity in this study. This RS-C instrument has potential applications in both clinical practice and research with strength-based resiliency interventions. However, additional research on the RS-C is required to further establish its reliability and validity.

]]>
<![CDATA[Deterministic column subset selection for single-cell RNA-Seq]]> https://www.researchpad.co/article/5c64493fd5eed0c484c2f93e

Analysis of single-cell RNA sequencing (scRNA-Seq) data often involves filtering out uninteresting or poorly measured genes and dimensionality reduction to reduce noise and simplify data visualization. However, techniques such as principal components analysis (PCA) fail to preserve non-negativity and sparsity structures present in the original matrices, and the coordinates of projected cells are not easily interpretable. Commonly used thresholding methods to filter genes avoid those pitfalls, but ignore collinearity and covariance in the original matrix. We show that a deterministic column subset selection (DCSS) method possesses many of the favorable properties of common thresholding methods and PCA, while avoiding pitfalls from both. We derive new spectral bounds for DCSS. We apply DCSS to two measures of gene expression from two scRNA-Seq experiments with different clustering workflows, and compare to three thresholding methods. In each case study, the clusters based on the small subset of the complete gene expression profile selected by DCSS are similar to clusters produced from the full set. The resulting clusters are informative for cell type.
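A common deterministic column subset selection rule, consistent with the description above, keeps every gene (column) whose rank-k leverage score exceeds a threshold; whether this matches the paper's exact DCSS variant is an assumption. A sketch on synthetic expression data:

```python
import numpy as np

def dcss(X, rank, theta):
    """Deterministic column subset selection: keep every column whose
    rank-k leverage score exceeds the threshold theta.

    The leverage score of column j is the squared norm of its entries in
    the top-`rank` right singular vectors of X.
    """
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    leverage = (Vt[:rank] ** 2).sum(axis=0)
    return np.flatnonzero(leverage > theta), leverage
```

Columns carrying the dominant low-rank structure receive leverage near 1 and are kept, while near-noise columns score close to 0 and are filtered out, unlike plain variance thresholding this accounts for collinearity through the SVD.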

]]>
<![CDATA[On identifying collective displacements in apo-proteins that reveal eventual binding pathways]]> https://www.researchpad.co/article/5c478c43d5eed0c484bd1278

Binding of small molecules to proteins often involves large conformational changes in the latter, which open up pathways to the binding site. Observing and pinpointing these rare events in large-scale, all-atom computations of specific protein-ligand complexes, is expensive and to a great extent serendipitous. Further, relevant collective variables which characterise specific binding or un-binding scenarios are still difficult to identify despite the large body of work on the subject. Here, we show that possible primary and secondary binding pathways can be discovered from short simulations of the apo-protein without waiting for an actual binding event to occur. We use a projection formalism, introduced earlier to study deformation in solids, to analyse local atomic displacements into two mutually orthogonal subspaces—those which are “affine” i.e. expressible as a homogeneous deformation of the native structure, and those which are not. The susceptibility to non-affine displacements among the various residues in the apo-protein is then shown to correlate with typical binding pathways and sites crucial for allosteric modifications. We validate our observation with all-atom computations of three proteins, T4-Lysozyme, Src kinase and Cytochrome P450.
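The affine/non-affine split can be sketched directly: fit the single homogeneous deformation that best maps centered reference positions onto the displaced ones, and measure the residual it cannot explain. This is a simplified global version of the per-residue projection described above, for illustration only:

```python
import numpy as np

def nonaffine_residual(ref, disp):
    """Split displacements into an affine part and a non-affine residual.

    Finds the deformation gradient F that best maps reference positions
    (relative to their centroid) onto the displaced ones, then returns
    the squared norm of what the homogeneous deformation cannot explain.
    """
    r = ref - ref.mean(axis=0)
    d = (ref + disp) - (ref + disp).mean(axis=0)
    F, *_ = np.linalg.lstsq(r, d, rcond=None)   # best affine map: r @ F ~ d
    resid = d - r @ F
    return (resid ** 2).sum()
```

A pure shear or stretch leaves essentially zero residual, while disordered displacements of the kind that flag binding-prone regions leave a large one.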

]]>
<![CDATA[Two-dimensional local Fourier image reconstruction via domain decomposition Fourier continuation method]]> https://www.researchpad.co/article/5c3fa5aed5eed0c484ca744f

The MRI image is obtained in the spatial domain from the given Fourier coefficients in the frequency domain. Obtaining a high-resolution image is costly because it requires high-frequency Fourier data, whereas low-frequency Fourier data are cheaper to acquire and are effective if the image is smooth. However, Gibbs ringing, if present, prevails with the low-frequency Fourier data. We propose an efficient and accurate local reconstruction method using the low-frequency Fourier data that yields a sharp image profile near local edges. The proposed method utilizes only a small number of image data in the local area, so the method is efficient. Furthermore, the method is accurate because it minimizes the global effects on the reconstruction near weak edges that appear in many other global methods, for which all the image data are used for the reconstruction. To utilize the Fourier method locally based on the local non-periodic data, the proposed method builds on the Fourier continuation method. This work is an extension of our previous 1D Fourier domain decomposition method to 2D Fourier data. The proposed method first divides the MRI image in the spatial domain into many subdomains and applies the Fourier continuation method for the smooth periodic extension of the subdomain of interest. Then the proposed method reconstructs the local image based on L2 minimization regularized by the L1 norm of edge sparsity to sharpen the image near edges. Our numerical results suggest that the proposed method should be applied in a dimension-by-dimension manner rather than in a global manner, both for the quality of the reconstruction and for computational efficiency. The numerical results show that the proposed method is effective when a local reconstruction is sought and that the solution is free of Gibbs oscillations.
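The Gibbs ringing that motivates the method is easy to reproduce in 1-D: reconstructing a step profile from only low-frequency Fourier coefficients overshoots near the edge by roughly 9%, regardless of how many low frequencies are kept. A minimal illustration (not the paper's reconstruction method):

```python
import numpy as np

# A 1-D step "image": smooth regions separated by sharp edges.
n = 512
x = np.zeros(n)
x[n // 4: 3 * n // 4] = 1.0

# Keep only the lowest-frequency Fourier coefficients (cheap k-space data).
coeffs = np.fft.fft(x)
kept = 32                         # low frequencies kept on each side
mask = np.zeros(n, dtype=bool)
mask[:kept] = mask[-kept:] = True
recon = np.real(np.fft.ifft(np.where(mask, coeffs, 0)))

overshoot = recon.max() - x.max()  # Gibbs overshoot near the edges
```

The overshoot sits near the classical 8.9% of the jump height and does not vanish as more coefficients are added; it only narrows, which is why edge-aware local reconstruction is needed.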

]]>
<![CDATA[An algorithm of image mosaic based on binary tree and eliminating distortion error]]> https://www.researchpad.co/article/5c3d010fd5eed0c484037edc

The traditional image mosaic result based on SIFT feature-point extraction has, to some extent, distortion errors: the larger the input image set, the greater the distortion of the spliced panorama. To achieve the goal of creating a high-quality panorama, a new and improved algorithm based on the A-KAZE feature is proposed in this paper. This includes changing the way reference images are selected and putting forward a method for selecting a reference image based on the binary tree model, which takes the input image set as the leaf node set of a binary tree and uses a bottom-up approach to construct a complete binary tree. The root node image of the binary tree is the ultimate panorama obtained by stitching. Compared with the traditional way, the novel method improves the accuracy of feature-point detection and enhances the stitching quality of the panorama. Additionally, the improved method proposes an automatic image straightening model to rectify the panorama, which further reduces the panoramic distortion. The experimental results show that the proposed method can not only enhance the efficiency of image stitching processing, but also reduce the panoramic distortion errors and obtain a better-quality panoramic result.
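The bottom-up binary-tree construction can be sketched independently of the A-KAZE matching: leaves are the input images, each internal node stands for the panorama stitched from its two children, and the root is the final mosaic (the stitching operation itself is left abstract here):

```python
def build_stitch_tree(images):
    """Build a binary stitching tree bottom-up from the input image set.

    Leaves are input images; each internal node ("stitch", left, right)
    represents the panorama obtained by stitching its two children, and
    the root is the final panorama.
    """
    level = [("leaf", img) for img in images]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(("stitch", level[i], level[i + 1]))
        if len(level) % 2:          # odd node carries over to the next level
            nxt.append(level[-1])
        level = nxt
    return level[0]

def leaves(node):
    """Recover the input images under a tree node, in order."""
    if node[0] == "leaf":
        return [node[1]]
    return leaves(node[1]) + leaves(node[2])
```

Pairing neighbours level by level keeps the tree balanced, so each image passes through only O(log n) stitching steps rather than accumulating error along a long sequential chain.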

]]>
<![CDATA[Validation of modified radio-frequency identification tag firmware, using an equine population case study]]> https://www.researchpad.co/article/5c3fa5fcd5eed0c484caad7f

Background

Contact networks can be used to assess disease spread potential within a population. However, the data required to generate the networks can be challenging to collect. One method of collecting this type of data is by using radio-frequency identification (RFID) tags. The OpenBeacon RFID system generally consists of tags and readers. Communicating tags should be within 10m of the readers, which are powered by an external power source. The readers are challenging to implement in agricultural settings due to the lack of a power source and the large area that needs to be covered.

Methods

OpenBeacon firmware was modified to use the tag’s onboard flash memory for data storage. The tags were deployed within an equine facility for a 7-day period. Tags were attached to the horses’ halters, worn by facility staff, and placed in strategic locations around the facility to monitor which participants had contact with the specified locations during the study period. When the tags came within 2m of each other, they recorded the contact event participant IDs, and start and end times. At the end of the study period, the data were downloaded to a computer and analyzed using network analysis methods.
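Turning the recorded contact events into a weighted contact network is a small aggregation step: sum contact durations per pair, then per participant. A sketch with invented IDs and times:

```python
from collections import defaultdict

# Contact events as recorded by the tags: (id_a, id_b, start_s, end_s).
# IDs and times below are invented for illustration.
events = [
    ("horse1", "horse2", 0, 120),
    ("horse1", "staff1", 60, 90),
    ("horse2", "staff1", 200, 260),
    ("horse1", "horse2", 300, 330),
]

# Weighted undirected edge list: total contact duration per pair.
edge_seconds = defaultdict(int)
for a, b, start, end in events:
    edge_seconds[frozenset((a, b))] += end - start

# Weighted degree (strength): total contact time per participant.
strength = defaultdict(int)
for pair, seconds in edge_seconds.items():
    for node in pair:
        strength[node] += seconds
```

From this weighted edge list, standard network metrics such as the centrality measures mentioned in the Results can be computed with any network analysis library.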

Results

The resulting networks were plausible given the facility schedule as described in a survey completed by the facility manager. Furthermore, changes in the daily facility operations as described in the survey were reflected in the tag-collected data. In terms of the battery life, 88% of batteries maintained a charge for at least 6 days. Lastly, no consistent trends were evident in the horses’ centrality metrics.

Discussion

This study demonstrates the utility of RFID tags for the collection of equine contact data. Future work should include the collection of contact data from multiple equine facilities to better characterize equine disease spread potential in Ontario.

]]>
<![CDATA[Modeling musculoskeletal kinematic and dynamic redundancy using null space projection]]> https://www.researchpad.co/article/5c366805d5eed0c4841a6e05

The coordination of the human musculoskeletal system is deeply influenced by its redundant structure, in both kinematic and dynamic terms. Noticing a lack of a relevant, thorough treatment in the literature, we formally address the issue in order to understand and quantify factors affecting motor coordination. We employed well-established techniques from linear algebra and projection operators to extend the underlying kinematic and dynamic relations by modeling the redundancy effects in null space. We distinguish three types of operational spaces, namely task, joint and muscle space, which are directly associated with the physiological factors of the system. A method for consistently quantifying the redundancy on multiple levels in the entire space of feasible solutions is also presented. We evaluate the proposed muscle space projection on segmental-level reflexes and the computation of the feasible muscle forces for arbitrary movements. The former proves to be a convenient representation for interfacing with segmental-level models or implementing controllers for tendon-driven robots, while the latter enables the identification of force variability and correlations between muscle groups, attributed to the system’s redundancy. Furthermore, the usefulness of the proposed framework is demonstrated in the context of estimating the bounds of the joint reaction loads, where we show that misinterpretation of the results is possible if the null space forces are ignored. This work presents a theoretical analysis of the redundancy problem, facilitating application in a broad range of fields related to motor coordination, as it provides the groundwork for null space characterization. The proposed framework rigorously accounts for the effects of kinematic and dynamic redundancy, incorporating it directly into the underlying equations using the notion of null space projection, leading to a complete description of the system.
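The core null space idea can be illustrated numerically. Assuming a toy moment-arm matrix A mapping four muscle forces to two joint torques (the values are illustrative, not from a real musculoskeletal model), the projector N = I − A⁺A maps any force vector into the null space of A, where it produces no net joint torque; adding such a component to a particular solution leaves the task unchanged.

```python
import numpy as np

# Toy moment-arm matrix: 4 muscle forces -> 2 joint torques
# (illustrative values only, not a real musculoskeletal model).
A = np.array([[1.0, -1.0,  0.5, 0.0],
              [0.0,  0.5, -1.0, 1.0]])

A_pinv = np.linalg.pinv(A)
N = np.eye(A.shape[1]) - A_pinv @ A   # projector onto the null space of A

# Any force vector projected into the null space produces no net torque:
f0 = np.array([2.0, 1.0, 3.0, 0.5])
f_null = N @ f0
assert np.allclose(A @ f_null, 0.0)

# A particular solution plus any null space component realises the
# same task torques -- this is the redundancy of the system:
tau = np.array([1.0, 0.2])
f_particular = A_pinv @ tau
f_total = f_particular + f_null
assert np.allclose(A @ f_total, tau)
```

This is exactly why joint reaction load estimates can be misinterpreted when null space forces are ignored: f_particular and f_total produce identical task torques but very different internal force distributions.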

]]>
<![CDATA[Coherency of circadian rhythms in the SCN is governed by the interplay of two coupling factors]]> https://www.researchpad.co/article/5c18139dd5eed0c4847755e7

Circadian clocks are autonomous oscillators driving daily rhythms in physiology and behavior. In mammals, a network of coupled neurons in the suprachiasmatic nucleus (SCN) is entrained to environmental light-dark cycles and orchestrates the timing of peripheral organs. In each neuron, transcriptional feedback loops generate noisy oscillations. Coupling mediated by neuropeptides such as VIP and AVP lends precision and robustness to circadian rhythms, but the detailed coupling mechanisms between SCN neurons are debated. We analyze organotypic SCN slices from neonatal and adult mice in wild-type and multiple knockout conditions. Different degrees of rhythmicity are quantified by pixel-level analysis of bioluminescence data, and we use empirical orthogonal functions (EOFs) to characterize spatio-temporal patterns. Simulations of coupled stochastic single-cell oscillators can reproduce the diversity of observed patterns. Our combination of data analysis and modeling provides deeper insight into the enormous complexity of the data: (1) Neonatal slices are typically stronger oscillators than adult slices, pointing to developmental changes in coupling. (2) Wild-type slices are completely synchronized and exhibit specific spatio-temporal patterns of phases. (3) Some slices of Cry double knockouts exhibit impaired synchrony that can lead to co-existing rhythms (“splitting”). (4) The loss of VIP coupling leads to desynchronized rhythms with few residual local clusters. Additional information was extracted by co-culturing slices with rhythmic neonatal wild-type SCNs; these co-culturing experiments were simulated using external forcing terms representing VIP and AVP signaling. The rescue of rhythmicity via co-culturing led to a surprising result, since a cocktail of AVP antagonists improved synchrony. Our modeling suggests that these counter-intuitive observations point to an antagonistic action of VIP and AVP coupling.
Our systematic theoretical and experimental study shows that dual coupling mechanisms can explain the astonishing complexity of spatio-temporal patterns in SCN slices.
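The general idea behind simulations of coupled stochastic single-cell oscillators can be conveyed with a minimal Kuramoto-type sketch. All parameters and the specific coupling form below are illustrative assumptions, not the authors' model: each "cell" is a noisy phase oscillator with its own intrinsic frequency, and coupling pulls every phase toward the population mean phase.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=50, coupling=5.0, noise=0.05, steps=2000, dt=0.01):
    """Noisy coupled phase oscillators (Kuramoto-type sketch): each cell
    has an intrinsic angular frequency; coupling pulls its phase toward
    the mean phase of the population. Returns the order parameter
    R in [0, 1], where R close to 1 means strong synchrony."""
    omega = rng.normal(2 * np.pi, 0.3, n)    # intrinsic frequencies
    theta = rng.uniform(0, 2 * np.pi, n)     # random initial phases
    for _ in range(steps):
        psi = np.angle(np.mean(np.exp(1j * theta)))   # mean phase
        theta += dt * (omega + coupling * np.sin(psi - theta))
        theta += noise * np.sqrt(dt) * rng.normal(size=n)  # intrinsic noise
    return abs(np.mean(np.exp(1j * theta)))

R_coupled = simulate(coupling=5.0)   # strong coupling -> synchrony
R_uncoupled = simulate(coupling=0.0) # no coupling -> phases drift apart
```

In this spirit, knockouts can be mimicked by removing or weakening a coupling term, and co-culturing by adding an external periodic forcing term to each oscillator.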

]]>