ResearchPad - data-visualization https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks

<![CDATA[A content analysis-based approach to explore simulation verification and identify its current challenges]]> https://www.researchpad.co/article/elastic_article_14497

Verification is a crucial process to facilitate the identification and removal of errors within simulations. This study explores semantic changes to the concept of simulation verification over the past six decades using a data-supported, automated content analysis approach. We collect and utilize a corpus of 4,047 peer-reviewed Modeling and Simulation (M&S) publications dealing with a wide range of studies of simulation verification from 1963 to 2015. We group the selected papers by decade of publication to provide insights and explore the corpus from four perspectives: (i) the positioning of prominent concepts across the corpus as a whole; (ii) a comparison of the prominence of verification, validation, and Verification and Validation (V&V) as separate concepts; (iii) the positioning of the concepts specifically associated with verification; and (iv) an evaluation of verification’s defining characteristics within each decade. Our analysis reveals unique characterizations of verification in each decade. The insights gathered helped to identify and discuss three categories of verification challenges as avenues of future research, awareness, and understanding for researchers, students, and practitioners. These categories include conveying confidence and maintaining ease of use; techniques’ coverage abilities for handling increasing simulation complexities; and new ways to provide error feedback to model users.
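As a rough illustration of the decade-grouped, automated content analysis described above, the Python sketch below counts the relative frequency of selected concept terms per decade. The mini-corpus, the two-term vocabulary, and the grouping are invented stand-ins for demonstration, not the authors' actual pipeline.

from collections import defaultdict
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical (year, abstract) pairs; the real corpus held 4,047
# M&S publications spanning 1963 to 2015.
papers = [
    (1968, "verification of simulation model logic and program debugging"),
    (1974, "validation and verification of discrete event simulations"),
    (1987, "formal verification techniques for simulation software"),
    (2009, "verification validation and accreditation of complex models"),
]

by_decade = defaultdict(list)
for year, text in papers:
    by_decade[10 * (year // 10)].append(text)

# Relative frequency of each concept term within each decade's texts
vec = CountVectorizer(vocabulary=["verification", "validation"])
for decade, texts in sorted(by_decade.items()):
    counts = vec.fit_transform(texts).sum(axis=0)
    total = sum(len(t.split()) for t in texts)
    print(decade, {w: counts[0, i] / total for w, i in vec.vocabulary_.items()})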

]]>
<![CDATA[Scedar: A scalable Python package for single-cell RNA-seq exploratory data analysis]]> https://www.researchpad.co/article/elastic_article_13837

In single-cell RNA-seq (scRNA-seq) experiments, the number of individual cells has increased exponentially, while the sequencing depth of each cell has decreased significantly. As a result, analyzing scRNA-seq data requires careful attention to program efficiency and method selection. To reduce the complexity of scRNA-seq data analysis, we present scedar, a scalable Python package for scRNA-seq exploratory data analysis. The package provides a convenient and reliable interface for performing visualization, imputation of gene dropouts, detection of rare transcriptomic profiles, and clustering on large-scale scRNA-seq datasets. The analytical methods are computationally efficient and do not assume that the data follow particular statistical distributions. The package is extensible and modular, facilitating the development of new functionality by the open-source community as future needs arise. The scedar package is distributed under the terms of the MIT license at https://pypi.org/project/scedar.
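Scedar's documented capabilities include dropout imputation, rare-profile detection, and clustering; the sketch below only illustrates the general idea of k-nearest-neighbour dropout imputation on a toy count matrix with numpy. It is not scedar's API, and all names and parameters are invented.

import numpy as np

rng = np.random.default_rng(0)
counts = rng.poisson(2.0, size=(100, 50)).astype(float)  # cells x genes
counts[rng.random(counts.shape) < 0.3] = 0.0             # simulated dropouts

def impute_knn(x, k=5):
    # Replace a cell's zeros with the mean of its k nearest
    # neighbours' values for the same genes (illustrative only).
    out = x.copy()
    lx = np.log1p(x)                                     # stabilize variance
    d = np.linalg.norm(lx[:, None, :] - lx[None, :, :], axis=2)
    for i in range(x.shape[0]):
        nn = np.argsort(d[i])[1:k + 1]                   # skip the cell itself
        zeros = x[i] == 0
        out[i, zeros] = x[nn][:, zeros].mean(axis=0)
    return out

imputed = impute_knn(counts)
print((counts == 0).mean(), (imputed == 0).mean())       # zero fraction drops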

]]>
<![CDATA[SimSurvey: An R package for comparing the design and analysis of surveys by simulating spatially-correlated populations]]> https://www.researchpad.co/article/elastic_article_8465

Populations often show complex spatial and temporal dynamics, creating challenges in designing and implementing effective surveys. Inappropriate sampling designs can lead both to under-sampling (reducing precision) and to over-sampling (through the extensive, and potentially expensive, sampling of correlated metrics). These issues can be difficult to identify and avoid in sample surveys of fish populations, as such surveys tend to be costly and comprise multiple levels of sampling. Population estimates are therefore affected by each level of sampling as well as by the pathway taken to analyze the data. Though simulations are a useful tool for exploring the efficacy of specific sampling strategies and statistical methods, few tools facilitate simulation testing of a range of sampling and analytical pathways for multi-stage survey data. Here we introduce the R package SimSurvey, which simplifies the process of simulating surveys of age-structured and spatially-distributed populations. The package allows the user to simulate age-structured populations that vary in space and time and to explore the efficacy of a range of built-in or user-defined sampling protocols in recovering the parameters of the known population. SimSurvey also includes a function for estimating the stratified mean and variance of the population from the simulated survey data. We demonstrate the use of the package with a case study and show that it can reveal unexpected sources of bias and be used to explore design-based solutions to such problems. In summary, SimSurvey can serve as a convenient, accessible and flexible platform for simulating a wide range of sampling strategies for fish stocks and other populations that show complex structuring. Various statistical approaches can then be applied to the results to test the efficacy of different analytical approaches.
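For readers who want the flavour of this workflow without R, here is a minimal Python analogue (not the SimSurvey package itself): simulate a spatially autocorrelated abundance surface, survey it with a stratified random design, and compute the stratified mean and its variance. All sizes and parameters are arbitrary.

import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)
side = 40
noise = rng.normal(size=(side, side))
kernel = np.ones((5, 5)) / 25.0                    # crude spatial smoothing
density = np.exp(convolve2d(noise, kernel, mode="same"))

strata = (np.arange(side)[:, None] // 10).repeat(side, axis=1)  # 4 bands
n_per_stratum = 20
means, variances, weights = [], [], []
for s in np.unique(strata):
    cells = np.flatnonzero(strata == s)
    sample = density.ravel()[rng.choice(cells, n_per_stratum, replace=False)]
    means.append(sample.mean())
    variances.append(sample.var(ddof=1) / n_per_stratum)
    weights.append(cells.size / strata.size)

w = np.array(weights)
strat_mean = (w * np.array(means)).sum()           # stratified mean
strat_var = (w**2 * np.array(variances)).sum()     # its estimated variance
print(strat_mean, strat_var, density.mean())       # compare to the truth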

]]>
<![CDATA[Long-term outcomes after extracorporeal membrane oxygenation in patients with dialysis-requiring acute kidney injury: A cohort study]]> https://www.researchpad.co/article/5c92b361d5eed0c4843a3f31

Background

Acute kidney injury (AKI) is a common complication of extracorporeal membrane oxygenation (ECMO) treatment. The aim of this study was to elucidate the long-term outcomes of adult patients with AKI who receive ECMO.

Materials and methods

The study analyzed encrypted datasets from Taiwan’s National Health Insurance Research Database. The data of 3251 patients who received first-time ECMO treatment between January 1, 2003, and December 31, 2013, were analyzed. Characteristics and outcomes were compared between patients who required dialysis for AKI (D-AKI) and those who did not, to evaluate the impact of D-AKI on long-term mortality and major adverse kidney events.
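As a sketch of the kind of adjusted hazard-ratio analysis reported below, the following uses the lifelines package on invented data; lifelines and all column names are stand-ins, not the study's actual software or variables.

import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical cohort rows: follow-up time, death indicator,
# dialysis-requiring AKI indicator, and one adjustment covariate.
df = pd.DataFrame({
    "time_days": [120, 400, 90, 365, 30, 800, 200, 60],
    "died":      [1, 0, 1, 0, 1, 0, 1, 1],
    "d_aki":     [1, 0, 1, 1, 0, 0, 1, 0],
    "age":       [61, 55, 70, 48, 66, 59, 72, 50],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_days", event_col="died")
print(cph.hazard_ratios_)        # adjusted HR for d_aki, given age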

Results

Of the 3251 patients, 54.1% had D-AKI. Compared with the patients without D-AKI, those with D-AKI had higher rates of all-cause mortality (52.3% vs. 33.3%; adjusted hazard ratio [aHR] 1.82, 95% confidence interval [CI] 1.53–2.17), chronic kidney disease (13.7% vs. 8.1%; adjusted subdistribution HR [aSHR] 1.66, 95% CI 1.16–2.38), and end-stage renal disease (5.2% vs. 0.5%; aSHR 14.28, 95% CI 4.67–43.62). The long-term mortality of patients who survived more than 90 days after discharge was 22.0% (153/695), 32.3% (91/282), and 50.0% (10/20) in the patients without D-AKI, with recovery D-AKI, and with nonrecovery D-AKI who required long-term dialysis, respectively, demonstrating a significant trend (P for trend < 0.001).

Conclusion

AKI is associated with an increased risk of long-term mortality and major adverse kidney events in adult patients who receive ECMO.

]]>
<![CDATA[Is it time to stop sweeping data cleaning under the carpet? A novel algorithm for outlier management in growth data]]> https://www.researchpad.co/article/N6ac4201b-e1d9-4dac-b706-1c6b88e127a6

All data are prone to error and require cleaning prior to analysis. An important example is longitudinal growth data, for which there are no universally agreed standard methods for identifying and removing implausible values, and for which many existing methods have limitations that restrict their usage across different domains. We designed a decision-making algorithm that modifies or deletes growth measurements based on a combination of pre-defined cut-offs and logic rules. Five data cleaning methods for growth were tested with and without the addition of the algorithm and applied to five different longitudinal growth datasets: four uncleaned canine weight or height datasets and one pre-cleaned human weight dataset with randomly simulated errors. Without the algorithm, data cleaning based on non-linear mixed effects models was the most effective in all datasets, with, on average, at least 26.00% higher sensitivity and 0.12% higher specificity than the other methods. Data cleaning methods using the algorithm preserved more data and were capable of correcting simulated errors according to the gold standard: returning a value to its original state prior to error simulation. The algorithm improved the performance of all data cleaning methods and increased the average sensitivity and specificity of the non-linear mixed effects model method by 7.68% and 0.42%, respectively. Using non-linear mixed effects models combined with the algorithm to clean data allows individual growth trajectories to vary from the population by using repeated longitudinal measurements, identifies consecutive errors and errors within the first data entry, avoids requiring a minimum number of data entries, preserves data where possible by correcting errors rather than deleting them, and removes duplications intelligently. The algorithm is broadly applicable to cleaning anthropometric data in different mammalian species and could be adapted for use in a range of other domains.
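To make the modify-or-delete idea concrete, here is a minimal sketch of such a rule set in Python. The cut-offs and logic rules below are invented for demonstration; the published algorithm's actual rules are more extensive.

import numpy as np

# Toy weight trajectory containing a decimal-point error (77.4 for 7.74)
# and an implausibly small entry (0.95).
weights = [5.1, 5.9, 6.8, 77.4, 8.2, 9.0, 0.95, 10.4]

def clean(series, low=0.5, high=100.0, jump=3.0):
    out, last = [], None
    for w in series:
        # modify: a sudden jump that a shifted decimal point explains
        if last is not None and w > jump * last and w / 10 <= jump * last:
            w = w / 10
        # delete: out of range, or an implausible drop from the last value
        if not (low <= w <= high) or (last is not None and w < last / jump):
            out.append(np.nan)
        else:
            out.append(w)
            last = w
    return out

print(clean(weights))   # 77.4 corrected to 7.74; 0.95 deleted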

]]>
<![CDATA[All of gene expression (AOE): An integrated index for public gene expression databases]]> https://www.researchpad.co/article/N65b3f432-723a-4d59-a70d-2c0d696b62b7

Gene expression data have been archived as microarray and RNA-seq datasets in two public databases, Gene Expression Omnibus (GEO) and ArrayExpress (AE). In 2018, the DNA Data Bank of Japan started a similar repository called the Genomic Expression Archive (GEA). These databases are useful resources for the functional interpretation of genes, but they have been maintained separately and may lack RNA-seq data whose original sequence data are available in the Sequence Read Archive (SRA). We constructed an index for these gene expression data repositories, called All Of gene Expression (AOE), to integrate publicly available gene expression data. AOE can be queried graphically through its web interface, in addition to an application programming interface. By collecting gene expression data derived from RNA-seq experiments in the SRA, AOE also includes data not found in GEO and AE. AOE is accessible as a search tool from the GEA website and is freely available at https://aoe.dbcls.jp/.

]]>
<![CDATA[Exponential random graph model parameter estimation for very large directed networks]]> https://www.researchpad.co/article/N437fb42a-ebf8-44aa-9399-d12b1354408e

Exponential random graph models (ERGMs) are widely used for modeling social networks observed at one point in time. However, the computational difficulty of ERGM parameter estimation has limited the practical application of this class of models to relatively small networks: at most a few thousand nodes, and usually only a few hundred. In the case of undirected networks, snowball sampling can be used to obtain ERGM parameter estimates of larger networks via network samples, and recently published improvements in ERGM network distribution sampling and ERGM estimation algorithms have allowed ERGM parameter estimates to be made for undirected networks with over one hundred thousand nodes. However, the implementations of these algorithms to date have been limited in their scalability and restricted to undirected networks. Here we describe an implementation of the recently published Equilibrium Expectation (EE) algorithm for ERGM parameter estimation of large directed networks. We test it on simulated networks, and demonstrate its application to an online social network with over 1.6 million nodes.
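The spirit of the EE algorithm can be shown on the simplest possible directed ERGM, with the edge count as the only statistic: simulate the model with Metropolis dyad flips and nudge the parameter so the simulated statistic tracks the observed one. This toy sketch omits the change statistics, multiple parameters, and scalability machinery of the real estimator.

import numpy as np

rng = np.random.default_rng(2)
n = 60
A_obs = (rng.random((n, n)) < 0.05).astype(int)   # "observed" network
np.fill_diagonal(A_obs, 0)
s_obs = A_obs.sum()                                # observed edge count

def metropolis_flips(A, theta, n_flips):
    for _ in range(n_flips):
        i, j = rng.integers(n, size=2)
        if i == j:
            continue
        delta = 1 - 2 * A[i, j]                    # change in edge count
        if np.log(rng.random()) < theta * delta:   # Metropolis acceptance
            A[i, j] = 1 - A[i, j]
    return A

theta, step = 0.0, 0.02
A = A_obs.copy()
for _ in range(3000):
    A = metropolis_flips(A, theta, 200)
    theta -= step * np.sign(A.sum() - s_obs)       # EE-style sign update

# For this toy model the MLE is the logit of the observed density.
density = s_obs / (n * (n - 1))
print(theta, np.log(density / (1 - density)))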

]]>
<![CDATA[Sensitivity analysis of agent-based simulation utilizing massively parallel computation and interactive data visualization]]> https://www.researchpad.co/article/5c8823e3d5eed0c484639255

An essential step in the analysis of agent-based simulation is sensitivity analysis, which examines how simulation results depend on parameter values. Although a number of approaches have been proposed for sensitivity analysis, they still have limitations in exhaustivity and interpretability. In this study, we propose a novel methodology for sensitivity analysis of agent-based simulation, MASSIVE (Massively parallel Agent-based Simulations and Subsequent Interactive Visualization-based Exploration). MASSIVE takes a unique paradigm that is completely different from those of previously developed sensitivity analysis methods. By combining massively parallel computation and interactive data visualization, MASSIVE enables us to inspect a broad parameter space intuitively. We demonstrated the utility of MASSIVE by applying it to a cancer evolution simulation, which successfully identified conditions that generate heterogeneous tumors. We believe that our approach could become a de facto standard for sensitivity analysis of agent-based simulation in an era of ever-growing computational technology. All the results from our MASSIVE analysis are available at https://www.hgc.jp/~niiyan/massive.
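The massively parallel half of such a paradigm can be sketched in a few lines: evaluate a toy agent-based (here, a stochastic birth-death) simulation at every point of a parameter grid in parallel, then hand the resulting table to an interactive visualization tool. The simulation and grid below are stand-ins, not the MASSIVE implementation.

import itertools
from multiprocessing import Pool
import numpy as np

def simulate(params):
    # Toy stochastic birth-death model returning one summary row.
    growth, death = params
    rng = np.random.default_rng(abs(hash(params)) % 2**32)
    pop = 10
    for _ in range(100):
        pop = max(pop + rng.binomial(pop, growth) - rng.binomial(pop, death), 0)
    return {"growth": growth, "death": death, "final_pop": pop}

if __name__ == "__main__":
    grid = list(itertools.product(np.linspace(0.05, 0.5, 10),
                                  np.linspace(0.05, 0.5, 10)))
    with Pool() as pool:                 # one simulation per grid point
        results = pool.map(simulate, grid)
    print(results[:3])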

]]>
<![CDATA[Normalization of large-scale behavioural data collected from zebrafish]]> https://www.researchpad.co/article/5c706738d5eed0c4847c6c63

Many contemporary neuroscience experiments utilize high-throughput approaches to simultaneously collect behavioural data from many animals. The resulting data are often complex in structure and are subject to systematic biases, which require new approaches for analysis and normalization. This study addressed the normalization need by establishing an approach based on linear-regression modeling. The model was established using a dataset of visual motor response (VMR) obtained from several strains of wild-type (WT) zebrafish collected at multiple stages of development. The VMR is a locomotor response triggered by drastic light change, and is commonly measured repeatedly from multiple larvae arrayed in 96-well plates. This assay is subject to several systematic variations. For example, the light emitted by the machine varies slightly from well to well. In addition to the light-intensity variation, biological replication also creates batch-to-batch variation. These systematic variations may result in differences in the VMR and must be normalized. Our normalization approach explicitly modeled the effect of these systematic variations on the VMR. It also normalized the activity profiles of different conditions to a common baseline. Our approach is versatile, as it can incorporate different normalization needs as separate factors. The versatility was demonstrated by an integrated normalization of three factors: light-intensity variation, batch-to-batch variation, and baseline. After normalization, new biological insights were revealed from the data. For example, we found that larvae of the TL strain at 6 days post-fertilization (dpf) responded to light onset much more strongly than the 9-dpf larvae, whereas previous analysis without normalization showed that their responses were relatively comparable. By removing systematic variations, our model-based normalization can facilitate downstream statistical comparisons and help detect true biological differences in high-throughput studies of neurobehaviour.
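A minimal sketch of this regression-based normalization, with invented variable names and effect sizes: model the measured activity as a common baseline plus systematic terms, then subtract the fitted systematic components.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 960
df = pd.DataFrame({
    "well_light": rng.normal(1.0, 0.05, n),       # per-well light intensity
    "batch": rng.integers(0, 3, n).astype(str),   # biological replicate
})
# Simulated activity: baseline + light effect + batch effect + noise
df["activity"] = (2.0 + 1.5 * (df["well_light"] - 1.0)
                  + 0.3 * df["batch"].astype(int)
                  + rng.normal(0, 0.2, n))

fit = smf.ols("activity ~ well_light + C(batch)", data=df).fit()
systematic = fit.fittedvalues - fit.params["Intercept"]
df["normalized"] = df["activity"] - systematic    # common-baseline activity
print(df[["activity", "normalized"]].std())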

]]>
<![CDATA[An open source algorithm to detect natural gas leaks from mobile methane survey data]]> https://www.researchpad.co/article/5c6dc9e7d5eed0c48452a459

The data collected by mobile methane (CH4) sensors can be used to find natural gas (NG) leaks in urban distribution systems. Extracting actionable insights from the large volumes of data collected by these sensors requires several data processing steps. While these survey platforms are commercially available, the associated data processing software largely constitutes a black box due to its proprietary nature. In this paper we describe a step-by-step algorithm for developing leak indications from mobile CH4 survey data, providing an under-the-hood look at the choices and challenges associated with the analysis. We also describe how our algorithm has evolved over time, and the data-driven insights that prompted these changes. Applying our algorithm to data collected in 15 cities produced more than 6100 leak indications and estimates of the leaks’ sizes. We use these results to characterize the distribution of leak sizes in local NG distribution systems. Mobile surveys are already an effective and necessary tool for managing NG distribution systems, and improvements in the technology and software will continue to increase their value.
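One plausible core of such a processing chain, sketched on synthetic data (the published algorithm has additional steps and tuned thresholds): estimate the ambient CH4 background with a rolling median, flag excursions above it, and merge consecutive flagged samples into leak indications.

import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
ch4 = pd.Series(2.0 + rng.normal(0, 0.02, 5000))   # ppm along the drive
ch4.iloc[1200:1230] += 1.5                         # injected synthetic leak

background = ch4.rolling(window=301, center=True, min_periods=50).median()
excess = ch4 - background
flags = excess > 0.1                               # ppm above background

# Merge runs of consecutive flagged samples into single indications
runs = (flags != flags.shift()).cumsum()
indications = [(g.index[0], g.index[-1], g.max())
               for _, g in excess[flags].groupby(runs[flags])]
print(indications)                                 # (start, end, peak excess)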

]]>
<![CDATA[Grouping effects in numerosity perception under prolonged viewing conditions]]> https://www.researchpad.co/article/5c6dc9b1d5eed0c48452a00f

Humans can estimate numerosities, such as the number of sheep in a flock, without deliberate counting. A number of biases have been identified in these estimates, which seem primarily rooted in the spatial organization of objects (grouping, symmetry, etc.). Most previous studies on the number sense used static stimuli with extremely brief exposure times. However, outside the laboratory, visual scenes are often dynamic and freely viewed for prolonged durations (e.g., a flock of moving sheep). The purpose of the present study is to examine grouping-induced numerosity biases in stimuli that more closely mimic these conditions. To this end, we designed two experiments with limited-dot-lifetime displays (LDDs), in which each dot is visible for a brief period of time and replaced by a new dot elsewhere after its disappearance. The dynamic nature of LDDs prevents subjects from counting even when they free-view a stimulus under prolonged presentation. Subjects estimated the number of dots in arrays that were presented either as a single group or segregated into two groups by spatial clustering, dot size, dot color, or dot motion. Grouping by color and motion reduced perceived numerosity compared to viewing the dots as a single group. Moreover, the grouping effect sizes for these two features were correlated, which suggests that the effects may share a common, feature-invariant mechanism. Finally, we find that dot size and total stimulus area directly affect perceived numerosity, which makes it difficult to draw reliable conclusions about grouping effects induced by spatial clustering and dot size. Our results provide new insights into biases in numerosity estimation and demonstrate that LDDs are an effective method for studying the human number sense under prolonged viewing.
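The LDD stimulus logic is easy to sketch (parameters here are invented, not the experiment's): on each frame, every dot's remaining lifetime decreases, and expired dots respawn at new random positions, so the display's numerosity is stable while individual dots cannot be tracked or counted.

import numpy as np

rng = np.random.default_rng(5)
n_dots, lifetime_frames = 30, 12
pos = rng.random((n_dots, 2))                        # dot positions in [0, 1)^2
life = rng.integers(1, lifetime_frames + 1, n_dots)  # staggered lifetimes

def step(pos, life):
    life = life - 1
    expired = life <= 0
    pos[expired] = rng.random((expired.sum(), 2))    # respawn elsewhere
    life[expired] = lifetime_frames
    return pos, life

for frame in range(100):
    pos, life = step(pos, life)
print(pos.shape, life.min(), life.max())             # numerosity unchanged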

]]>
<![CDATA[Advantages offered by the double magnetic loops versus the conventional single ones]]> https://www.researchpad.co/article/5c6c7571d5eed0c4843cfdb2

Due to their simplicity and operating mode, magnetic loops are among the most widely used traffic sensors in Intelligent Transportation Systems (ITS). However, their potential is not being fully exploited at present, as neither the speed nor the length of vehicles can be reliably ascertained with a single magnetic loop. Consequently, the vast majority of them are used only to measure traffic flow and count vehicles on urban and interurban roads. For this reason, in a previous paper we presented the double magnetic loop, which improves the features and functionalities of the conventional single loop without increasing the cost or introducing additional complexity. That paper introduced their design and peculiarities, how to calculate their magnetic field, and three different methods for calculating their inductance. With the purpose of improving the existing infrastructure and providing it with greater potential and reliability, this paper focuses on justifying and demonstrating the advantages offered by these double loops over conventional single ones. This involves analyzing the magnetic profiles generated by the passage of vehicles over double loops and comparing them with those already known. Moreover, we show how vehicle speed, traffic direction, and many other data can be obtained more easily and with a smaller margin of error by using these new inductance signatures.
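One way to see why two loops recover speed so directly: the second loop records (nearly) the same inductance signature as the first, delayed by the travel time over the known loop spacing, so cross-correlating the two signals yields the delay and hence the speed. The signal shapes and numbers below are invented for illustration.

import numpy as np

fs = 1000.0                     # samples per second
spacing = 1.0                   # metres between the two loops (assumed)
t = np.arange(0, 2, 1 / fs)
sig1 = np.exp(-((t - 0.5) / 0.05) ** 2)            # loop 1 signature
delay_s = 0.072                                     # true travel time
sig2 = np.exp(-((t - 0.5 - delay_s) / 0.05) ** 2)  # loop 2 signature

lags = np.arange(-len(t) + 1, len(t))
xc = np.correlate(sig2, sig1, mode="full")
lag = lags[np.argmax(xc)] / fs                      # estimated delay (s)
print("speed approx", spacing / lag, "m/s")         # approx 1 / 0.072

The sign of the recovered lag also gives the traffic direction, which is one of the additional quantities the double loop makes available.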

]]>
<![CDATA[A unified framework for unconstrained and constrained ordination of microbiome read count data]]> https://www.researchpad.co/article/5c6dc9a3d5eed0c484529f4e

Explorative visualization techniques provide a first summary of microbiome read count datasets through dimension reduction. A plethora of dimension reduction methods exists, but many of them focus primarily on sample ordination, failing to elucidate the role of the bacterial species. Moreover, implicit but often unrealistic assumptions underlying these methods fail to account for overdispersion and differences in sequencing depth, which are two typical characteristics of sequencing data. We combine log-linear models with a dispersion estimation algorithm and flexible response function modelling into a framework for unconstrained and constrained ordination. The method copes with differences in dispersion between taxa and with varying sequencing depths, yielding meaningful biological patterns. Moreover, it can correct for observed technical confounders, whereas other methods are adversely affected by these artefacts. Unlike distance-based ordination methods, the assumptions underlying our method are stated explicitly and can be verified using simple diagnostics. The combination of unconstrained and constrained ordination in the same framework is unique in the field and facilitates microbiome data exploration. We illustrate the advantages of our method on simulated and real datasets, while pointing out flaws in existing methods. The algorithms for fitting and plotting are available in the R-package RCM.
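The core modelling idea (a log-linear count model per taxon with a library-size offset and an overdispersion parameter) can be sketched as follows in Python; this is a conceptual stand-in, not the RCM package, and the data are simulated.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 80
depth = rng.integers(5_000, 50_000, n).astype(float)  # sequencing depth
group = rng.integers(0, 2, n)                          # e.g. sample condition
mu = depth * np.exp(-7.0 + 0.8 * group)                # expected counts
y = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))       # overdispersed counts

X = sm.add_constant(group.astype(float))
fit = sm.GLM(y, X,
             family=sm.families.NegativeBinomial(alpha=0.5),
             offset=np.log(depth)).fit()               # depth as an offset
print(fit.params)      # approx (-7.0, 0.8): depth differences absorbed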

]]>
<![CDATA[On the synchronization techniques of chaotic oscillators and their FPGA-based implementation for secure image transmission]]> https://www.researchpad.co/article/5c648cedd5eed0c484c81aca

Synchronizing chaotic oscillators has been a challenge in guaranteeing successful applications in secure communications. To that end, three synchronization techniques are applied herein to twenty-two chaotic oscillators: three based on piecewise-linear functions and nineteen proposed by Julien C. Sprott. These chaotic oscillators are simulated to generate chaotic time series, which are used to evaluate their Lyapunov exponents and Kaplan-Yorke dimension in order to rank their unpredictability. The oscillators with the highest positive Lyapunov exponents are implemented in a field-programmable gate array (FPGA), and afterwards they are synchronized in a master-slave topology applying three techniques: the seminal work introduced by Pecora-Carroll, Hamiltonian forms and observer approach, and open-plus-closed-loop (OPCL). These techniques are compared with respect to their synchronization error and the latency associated with the FPGA implementation. Finally, the chaotic oscillators providing the highest positive Lyapunov exponents are synchronized and applied to a communication system with chaotic masking to perform secure image transmission. Correlation analysis is performed among the original image, the chaotic channel, and the recovered image for the three synchronization schemes. The experimental results show that both Hamiltonian forms and OPCL can recover the original image, and its correlation with the chaotic channel is as low as 0.00002, demonstrating the advantage of synchronizing chaotic oscillators with highly positive Lyapunov exponents to guarantee high security in data transmission.
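The seminal Pecora-Carroll scheme is simple to demonstrate in software. In this sketch the Lorenz system stands in for the oscillators of the paper: the slave's x variable is replaced by the master's x signal, and the driven (y, z) subsystem converges to the master's state because its conditional Lyapunov exponents are negative.

import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt, steps = 1e-3, 60_000

m = np.array([1.0, 1.0, 1.0])       # master (x, y, z)
s = np.array([-5.0, 12.0])          # slave (y, z), driven by master x

for _ in range(steps):              # forward-Euler integration
    x, y, z = m
    m = m + dt * np.array([sigma * (y - x),
                           x * (rho - z) - y,
                           x * y - beta * z])
    ys, zs = s
    s = s + dt * np.array([x * (rho - zs) - ys,
                           x * ys - beta * zs])

print(abs(m[1] - s[0]), abs(m[2] - s[1]))   # sync error, near zero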

]]>
<![CDATA[Factors influencing sedentary behaviour: A system based analysis using Bayesian networks within DEDIPAC]]> https://www.researchpad.co/article/5c5b525bd5eed0c4842bc6d7

Background

Decreasing sedentary behaviour (SB) has emerged as a public health priority, since prolonged sitting increases the risk of non-communicable diseases. Mostly, the independent associations of individual factors with SB have been investigated, although lifestyle behaviours are conditioned by interdependent factors. Within the DEDIPAC Knowledge Hub, a system of sedentary behaviours (SOS) framework was created to take the interdependency among multiple factors into account. The SOS framework is based on a system approach and was developed by combining evidence synthesis and expert consensus. The present study conducted a Bayesian network analysis to investigate and map the interdependencies between factors associated with SB through the life-course, using large-scale empirical data.

Methods

Data from the Eurobarometer survey (80.2, 2013), which included the short form of the International Physical Activity Questionnaire (IPAQ) as well as socio-demographic information and questions on the perceived environment, health, and psychosocial factors, were enriched with macro-level data from the Eurostat database. Overall, 33 factors were identified and aligned to the SOS-framework to represent six clusters on the individual or regional level: 1) physical health and wellbeing, 2) social and cultural context, 3) built and natural environment, 4) psychology and behaviour, 5) institutional and home settings, and 6) policy and economics. A Bayesian network analysis was conducted to investigate conditional associations among all factors and to determine their importance within these networks. Bayesian networks were estimated for the complete sample (23,865 EU citizens with complete data) and for sex-specific and four age-specific subgroups. Distance and centrality were calculated to determine the importance of factors within each network around SB.
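The distance and centrality computations can be illustrated on a small hand-made network; the real networks were learned from the Eurobarometer data, so the nodes and edges below are purely hypothetical.

import networkx as nx

edges = [("occupation", "SB"), ("education", "social_class"),
         ("social_class", "occupation"), ("home_setting", "SB"),
         ("built_env", "home_setting"), ("age", "occupation")]
G = nx.DiGraph(edges)

# Network distance of each factor to the sedentary-behaviour node
dist = {v: nx.shortest_path_length(G, v, "SB")
        for v in G if nx.has_path(G, v, "SB")}
print(dist)

# Centrality as a simple importance measure within the network
print(nx.closeness_centrality(G.to_undirected()))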

Results

In the young (15–25), adult (26–44), and middle-aged (45–64) groups, occupational level was directly associated with SB for both men and women. Consistently, social class and educational level were indirectly associated with SB within the male adult groups, while in women, factors of the family context were indirectly associated with SB. Only in older adults were factors of the built environment relevant with regard to SB, while factors of the home and institutional settings were less important compared with younger age groups.

Conclusion

Factors of the home and institutional settings as well as the social and cultural context were found to be important in the network of associations around SB, supporting the priority for future research in these clusters. In particular, occupational status was found to be the main driver of SB through the life-course. Investigating conditional associations by Bayesian networks gave a better understanding of the complex interplay of factors associated with SB. This may provide detailed insights into the mechanisms behind the burden of SB and effectively inform policy makers for detailed intervention planning. However, considering the complexity of the issue, there is a need for a more comprehensive system of data collection, including objective measures of sedentary time.

]]>
<![CDATA[The Light Sword Lens - A novel method of presbyopia compensation: Pilot clinical study]]> https://www.researchpad.co/article/5c61e8fdd5eed0c48496f5bb

Purpose

Clinical assessment of a new optical element for presbyopia correction: the Light Sword Lens.

Methods

Healthy dominant eyes of 34 presbyopes were examined for visual performance in 3 trials: reference (with a lens for distance correction), stenopeic (distance correction with a pinhole, ϕ = 1.25 mm), and Light Sword Lens (distance correction with a Light Sword Lens). In each trial, visual acuity was assessed in 7 tasks for defocus from 0.2 D to 3.0 D, while contrast sensitivity was assessed in 2 tasks for defocus of 0.3 D and 2.5 D. The Early Treatment Diabetic Retinopathy Study protocol and the Pelli-Robson method were applied. The degree of homogeneity through defocus was determined within the visual acuity and contrast sensitivity results. The reference and stenopeic trials were compared with the Light Sword Lens results. Friedman analysis of variance, Nemenyi post-hoc, and Wilcoxon tests were used; a p-value < 0.05 was considered significant.
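For the record, the named tests are available in scipy; this sketch runs them on invented logMAR data for 34 paired eyes (a Wilcoxon test stands in for the Nemenyi post-hoc, which scipy does not provide).

import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(7)
ref = rng.normal(0.10, 0.05, 34)            # reference trial
steno = ref + rng.normal(0.08, 0.05, 34)    # stenopeic trial
lsl = ref + rng.normal(0.02, 0.05, 34)      # Light Sword Lens trial

stat, p = friedmanchisquare(ref, steno, lsl)
print("Friedman:", stat, p)
if p < 0.05:                                # paired post-hoc comparisons
    print("LSL vs reference:", wilcoxon(lsl, ref).pvalue)
    print("LSL vs stenopeic:", wilcoxon(lsl, steno).pvalue)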

Results

In the Light Sword Lens trial, visual acuity was stable across the tested defocus range [20/25–20/32], whereas the stenopeic trial exhibited a limited range of degradation [20/25–20/40]. Contrast sensitivity in the Light Sword Lens and reference trials was high [1.9–2.0 logCS] for both defocus cases, but low in the stenopeic condition [1.5–1.7 logCS]. Between-trial comparisons showed significant differences in visual acuity only for Light Sword Lens versus reference trials, and in contrast sensitivity only for Light Sword Lens versus stenopeic trials.

Conclusions

Visual acuity achieved with Light Sword Lens correction in the presbyopic eye is comparable to that achieved with a stenopeic aperture, but exhibits no significant loss in contrast sensitivity. This correction method seems very promising for the design of novel contact and intraocular lenses.

]]>
<![CDATA[Spatial visual function in anomalous trichromats: Is less more?]]> https://www.researchpad.co/article/5c5217ddd5eed0c48479472c

Color deficiency is a common inherited disorder affecting 8% of Caucasian males; anomalous trichromacy (AT) is its most common type. Anomalous trichromacy is caused by an alteration in the spectral sensitivity of one of the three cone opsins, and is usually considered to impose marked limitations on daily life as well as on the choice of occupation. Nevertheless, we show here that anomalous trichromats have superior basic visual functions, such as visual acuity (VA), contrast sensitivity (CS), and stereo acuity, compared with participants with normal color vision. Both contrast sensitivity and stereo acuity performance were correlated with the severity of the color deficiency. We further show that subjects with anomalous trichromacy exhibit a better ability to detect objects camouflaged in natural gray-scale figures. The advantages of color-deficient subjects in spatial vision performance could explain the relatively high prevalence of color-vision polymorphism in humans.

]]>
<![CDATA[Movement and conformity interact to establish local behavioural traditions in animal populations]]> https://www.researchpad.co/article/5c254510d5eed0c48442bde0

The social transmission of information is critical to the emergence of animal culture. Two processes are predicted to play key roles in how socially-transmitted information spreads in animal populations: the movement of individuals across the landscape and conformist social learning. We develop a model that, for the first time, explicitly integrates these processes to investigate their impacts on the spread of behavioural preferences. Our results reveal a strong interplay between movement and conformity in determining whether locally-variable traditions establish across a landscape or whether a single preference dominates the whole population. The model is able to replicate a real-world cultural diffusion experiment in great tits (Parus major), but also generates a range of predictions for the emergence of animal culture under various initial conditions, habitat structures, and strengths of conformist bias. Integrating social behaviour with ecological variation will be important for understanding the stability and diversity of culture in animals.
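The conformity component of such a model is compact enough to sketch; the lattice, the two-variant setup, and the conformity exponent below are invented stand-ins (the published model additionally couples this with explicit movement across the landscape).

import numpy as np

rng = np.random.default_rng(8)
side, alpha, steps = 50, 3.0, 20_000
pref = rng.integers(0, 2, (side, side))     # two behavioural variants

for _ in range(steps):
    i, j = rng.integers(side, size=2)       # pick a random focal agent
    neigh = pref[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
    f1 = neigh.mean()                       # local frequency of variant 1
    # Conformist bias: alpha > 1 makes majority variants disproportionately
    # likely to be adopted, so local traditions can lock in.
    p1 = f1**alpha / (f1**alpha + (1 - f1)**alpha)
    pref[i, j] = rng.random() < p1

print("share of variant 1:", pref.mean())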

]]>
<![CDATA[Cross-sectional survey of off-label and unlicensed prescribing for inpatients at a paediatric teaching hospital in Western Australia]]> https://www.researchpad.co/article/5c3e4f70d5eed0c484d754d2

Objectives

To evaluate the prevalence of off-label and unlicensed prescribing in inpatients at a major paediatric teaching hospital in Western Australia and to identify which drugs are commonly prescribed off-label or unlicensed, including factors influencing such prescribing.

Methods

A retrospective cross-sectional study was conducted in June 2013. Patient and prescribing data were collected from 190 inpatient medication chart records that had been randomly selected from all admissions during the second week of February 2013. Drugs were categorised as licensed, off-label or unlicensed, according to their approved Australian registration product information (PI). All drugs were classified according to the Anatomical Therapeutic Chemical (ATC) code.

Results

There were 120 male and 70 female inpatients. The average age was 6.0 years (± 4.7). The study included 1160 prescribed drugs suitable for analysis. The number of drugs prescribed per patient ranged from 1 to 25, with an average of 6.1 (± 4.3). More than half (54%) were prescribed off-label. Oxycodone, clonidine, parecoxib and midazolam were always prescribed off-label. The most common off-label drugs were ondansetron (18.5%), fentanyl (12.9%), oxycodone (8.8%) and paracetamol (6.1%). Many ATC classifications included high off-label proportions, especially genitourinary system and sex hormone drugs, respiratory system drugs, systemic hormonal preparations, and alimentary tract and metabolism drugs.

Conclusions

This study highlights that the prescribing of paediatric drugs needs to be better supported by existing and new evidence. Incentives should be established to foster the conduct of evidence-based studies in the paediatric population. The current level of off-label prescribing raises issues of unexpected toxicity and adverse drug effects in children who are, in some cases, severely ill.

]]>
<![CDATA[An analysis of network brokerage and geographic location in fifteenth-century AD Northern Iroquoia]]> https://www.researchpad.co/article/5c3fa5dcd5eed0c484ca96cc

Iroquoian villagers living in present-day Jefferson County, New York, at the headwaters of the St. Lawrence River and the east shore of Lake Ontario, played important roles in regional interactions during the fifteenth century AD, as brokers linking populations on the north shore of Lake Ontario with populations in eastern New York. This study employs social network analysis and least cost path analysis to assess the degree to which geographical location may have facilitated the brokerage positions of site clusters within pan-Iroquoian social networks. The results indicate that location was a significant factor in determining brokerage. In the sixteenth century AD, when Jefferson County was abandoned, measurable increases in social distance between other Iroquoian populations followed. These results add to our understanding of the dynamic social landscape of fifteenth- and sixteenth-century AD northern Iroquoia, complementing recent analyses elsewhere of the roles played in regional interaction networks by populations located along geopolitical frontiers.
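Brokerage of the kind described here is commonly operationalized as betweenness centrality. The toy network below (cluster names and ties invented) shows the computation; a broker positioned between otherwise weakly connected regions scores highest.

import networkx as nx

ties = [("north_shore_A", "jefferson"), ("north_shore_B", "jefferson"),
        ("jefferson", "mohawk_valley"), ("jefferson", "st_lawrence"),
        ("mohawk_valley", "st_lawrence")]
G = nx.Graph(ties)

bc = nx.betweenness_centrality(G)
print(sorted(bc.items(), key=lambda kv: -kv[1]))   # broker ranks first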

]]>