ResearchPad - computerized-simulations Default RSS Feed en-us © 2020 Newgen KnowledgeWorks

<![CDATA[A content analysis-based approach to explore simulation verification and identify its current challenges]]>

Verification is a crucial process to facilitate the identification and removal of errors within simulations. This study explores semantic changes to the concept of simulation verification over the past six decades using a data-supported, automated content analysis approach. We collect and utilize a corpus of 4,047 peer-reviewed Modeling and Simulation (M&S) publications dealing with a wide range of studies of simulation verification from 1963 to 2015. We group the selected papers by decade of publication to provide insights and explore the corpus from four perspectives: (i) the positioning of prominent concepts across the corpus as a whole; (ii) a comparison of the prominence of verification, validation, and Verification and Validation (V&V) as separate concepts; (iii) the positioning of the concepts specifically associated with verification; and (iv) an evaluation of verification’s defining characteristics within each decade. Our analysis reveals unique characterizations of verification in each decade. The insights gathered helped to identify and discuss three categories of verification challenges as avenues of future research, awareness, and understanding for researchers, students, and practitioners. These categories include conveying confidence and maintaining ease of use; techniques’ coverage abilities for handling increasing simulation complexities; and new ways to provide error feedback to model users.
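
The decade-grouping and concept-prominence steps described above can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline; the corpus, concept list, and counting rule (raw substring frequency) are hypothetical simplifications.

```python
from collections import Counter, defaultdict

def concept_prominence(corpus, concepts):
    """Count occurrences of each concept term per decade of publication.
    corpus: list of (year, text) pairs; concepts: lowercase terms.
    Returns {decade: {concept: count}}."""
    counts = defaultdict(Counter)
    for year, text in corpus:
        decade = (year // 10) * 10
        lowered = text.lower()
        for concept in concepts:
            counts[decade][concept] += lowered.count(concept)
    return {decade: dict(c) for decade, c in counts.items()}

# Toy three-document corpus, standing in for the 4,047-paper corpus
corpus = [
    (1968, "Verification of the simulation model"),
    (1969, "Validation and verification techniques"),
    (2012, "Verification and validation (V&V) of agent-based models"),
]
print(concept_prominence(corpus, ["verification", "validation"]))
```

A real analysis would add tokenization, stemming, and normalization by corpus size per decade; the grouping logic stays the same.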

<![CDATA[Video loss prediction model in wireless networks]]>

This work discusses video communications over wireless networks (IEEE 802.11ac standard). The videos are in three different resolutions: 720p, 1080p, and 2160p. It is essential to study the performance of these media over access technologies to enhance current coding and communications techniques. This study sets out a video quality prediction model that covers the different resolutions and is based on wireless network terms and conditions, an approach that has not previously been adopted in the literature. The model involves obtaining quality of service and quality of experience metrics, such as PSNR (Peak Signal-to-Noise Ratio) and packet loss. This article outlines a methodology and mathematical model for video quality loss in the wireless network from simulated data, and its accuracy is assessed through performance metrics (RMSE and standard deviation). The methodology is based on two mathematical functions (logarithmic and exponential) whose parameters are defined by linear regression. The model achieved an RMSE of 2.32 dB and a standard deviation of 2.2 dB for the predicted values. The results should lead to CODEC (coder-decoder) improvements and contribute to better wireless network design.
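
The core of such a prediction model, fitting a logarithmic function by linear regression on the transformed predictor and scoring it with RMSE, can be sketched as follows. The data points are invented for illustration; they are not the paper's simulated measurements.

```python
import math

def fit_log_model(x, y):
    """Least-squares fit of y = a*ln(x) + b, i.e. ordinary linear regression
    on the transformed predictor ln(x)."""
    lx = [math.log(v) for v in x]
    n = len(x)
    mx, my = sum(lx) / n, sum(y) / n
    a = sum((u - mx) * (w - my) for u, w in zip(lx, y)) / sum((u - mx) ** 2 for u in lx)
    return a, my - a * mx

def rmse(y_true, y_pred):
    """Root-mean-square error between observed and predicted values."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Invented (packet loss %, PSNR dB) pairs, NOT the paper's simulated data
loss = [0.5, 1.0, 2.0, 4.0, 8.0]
psnr = [42.0, 38.5, 35.1, 31.4, 28.0]
a, b = fit_log_model(loss, psnr)
pred = [a * math.log(v) + b for v in loss]
print(round(rmse(psnr, pred), 3))
```

An exponential model can be fitted the same way by regressing ln(y) on x instead.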

<![CDATA[qTorch: The quantum tensor contraction handler]]>

Classical simulation of quantum computation is necessary for studying the numerical behavior of quantum algorithms, as there does not yet exist a large viable quantum computer on which to perform numerical tests. Tensor network (TN) contraction is an algorithmic method that can efficiently simulate some quantum circuits, often greatly reducing the computational cost over methods that simulate the full Hilbert space. In this study we implement a tensor network contraction program for simulating quantum circuits using multi-core compute nodes. We show simulation results for the Max-Cut problem on 3- through 7-regular graphs using the quantum approximate optimization algorithm (QAOA), successfully simulating up to 100 qubits. We test two different methods for generating the ordering of tensor index contractions: one is based on the tree decomposition of the line graph, while the other generates ordering using a straightforward stochastic scheme. Through studying instances of QAOA circuits, we show the expected result that as the treewidth of the quantum circuit’s line graph decreases, TN contraction becomes significantly more efficient than simulating the whole Hilbert space. The results in this work suggest that tensor contraction methods are superior only when simulating Max-Cut/QAOA with graphs of regularities approximately five and below. Insight into this point of equal computational cost helps one determine which simulation method will be more efficient for a given quantum circuit. The stochastic contraction method outperforms the line-graph-based method only when the time to calculate a reasonable tree decomposition is prohibitively expensive. Finally, we release our software package, qTorch (Quantum TensOR Contraction Handler), intended for general quantum circuit simulation. For a nontrivial subset of these quantum circuits, 50 to 100 qubits can easily be simulated on a single compute node.
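
A toy version of the stochastic ordering idea can be sketched by sampling random pairwise contraction orders and keeping the cheapest, with cost measured as the product of the dimensions touched by each pairwise contraction. This illustrates the concept only; qTorch's actual heuristics and data structures differ.

```python
import random

def contraction_cost(order, tensors, dims):
    """Total multiply count for contracting tensor pairs in the given order.
    tensors: list of frozensets of index labels; dims: {label: size}.
    One pairwise contraction costs the product of dims over the union of the
    two tensors' index sets; shared indices are summed out."""
    tensors = list(tensors)
    total = 0
    for i, j in order:
        union = tensors[i] | tensors[j]
        step = 1
        for label in union:
            step *= dims[label]
        total += step
        tensors[i] = union - (tensors[i] & tensors[j])  # open indices remain
        tensors[j] = frozenset()
    return total

def stochastic_order(n, trials, tensors, dims, seed=0):
    """Sample random pairwise contraction orders and keep the cheapest: a toy
    stand-in for qTorch's stochastic ordering scheme."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(trials):
        alive = list(range(n))
        order = []
        while len(alive) > 1:
            i, j = rng.sample(alive, 2)
            order.append((i, j))
            alive.remove(j)
        cost = contraction_cost(order, tensors, dims)
        if cost < best_cost:
            best, best_cost = order, cost
    return best, best_cost

# Chain A(a,b) - B(b,c) - C(c,d): contracting the cheaper edge first wins
tensors = [frozenset("ab"), frozenset("bc"), frozenset("cd")]
dims = {"a": 2, "b": 3, "c": 4, "d": 2}
order, cost = stochastic_order(3, 30, tensors, dims)
print(order, cost)
```

The tree-decomposition approach instead derives a provably good order from the line graph's structure; the stochastic sampler trades that guarantee for negligible setup time.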

<![CDATA[Walking along the road with anonymous users in similar attributes]]>

Recently, the ubiquity of smartphones and tablet computers has changed people’s daily life. With this tendency, location-based services (LBS) have become one of the most prosperous types of service, alongside developments in wireless and positioning technology. However, as the LBS server needs precise location information about the user to provide service results, the procedure of LBS may compromise location privacy, especially when a user issues continuous queries along a road. In continuous query, attributes of the user are released inadvertently with each query, and this information can be collected by an adversary as background knowledge to correlate the location trajectory and infer private information. Although a user can employ a central server (CS) to preserve location privacy, the trustworthiness of the CS has not been established, and it is usually considered an untrusted entity. Thus, in this paper, the trustworthiness of the CS is verified by a game tree, and based on the result we propose a hash-based attribute anonymization scheme (HBAA) to obfuscate the attributes released in each query along the road. With the help of HBAA, the CS has no opportunity to obtain any information about the user who sends a query for the generalization service. Furthermore, as the set of attributes is transformed into a fixed-length hash value, the processing time spent on attribute generalization is reduced and execution efficiency is improved. Finally, a security analysis and simulation experiments are presented, and their results further demonstrate the superiority of the proposed scheme.
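
The fixed-length hashing step at the heart of HBAA can be illustrated with a standard cryptographic digest. The attribute encoding and salt scheme below are hypothetical; the paper's exact construction may differ.

```python
import hashlib

def anonymize_attributes(attributes, salt):
    """Digest an attribute set into a fixed-length value (the hashing step of
    the HBAA idea): the server handles only the hash, never raw attributes.
    The canonical encoding and salt scheme here are illustrative."""
    canonical = "|".join(sorted(attributes))  # order-independent encoding
    return hashlib.sha256((salt + canonical).encode("utf-8")).hexdigest()

# Identical attribute sets (in any order) map to the same fixed-length value,
# so per-query attribute releases no longer distinguish users to the server:
h1 = anonymize_attributes({"age:30-39", "gender:m", "interest:food"}, "epoch-42")
h2 = anonymize_attributes({"interest:food", "age:30-39", "gender:m"}, "epoch-42")
print(h1 == h2, len(h1))  # -> True 64
```

Because the digest length is constant regardless of how many attributes a user carries, generalization work on the server side no longer scales with attribute-set size.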

<![CDATA[A novel 3D ray launching technique for radio propagation prediction in indoor environments]]>

Radio propagation prediction methods based on deterministic techniques such as ray launching are extensively used to accomplish radio channel characterization. However, the accuracy of the simulation depends on the number of rays launched and received. This paper presents an indoor three-dimensional (3D) Minimum Ray Launching Maximum Accuracy (MRLMA) technique for efficient indoor radio wave propagation prediction. Utilizing the novel MRLMA technique for ray launching and tracing in the simulation environment can drastically reduce the number of rays that need to be traced and improve the efficiency of ray tracing. The implementation and justification of MRLMA are presented in the paper. Indoor office 3D layouts were selected, and simulations were performed using MRLMA and other reference techniques. Results showed that the indoor 3D MRLMA model is appropriate for the design and optimization of wireless communication network systems with respect to efficiency, coverage, number of rays launched, number of rays received by the mobile station, and simulation time.
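
The trade-off that motivates minimizing launched rays can be illustrated with a toy 2D free-space launcher that counts how many uniformly spaced rays reach a receiver of a given capture radius. This is the brute-force baseline such techniques improve on, not the MRLMA algorithm itself.

```python
import math

def rays_received(tx, rx, n_rays, capture_radius):
    """Launch n_rays at uniform angles from transmitter tx and count how many
    line-of-sight rays pass within capture_radius of receiver rx (free space,
    no walls).  A toy baseline showing why launching fewer, better-chosen
    rays cuts tracing cost."""
    dx, dy = rx[0] - tx[0], rx[1] - tx[1]
    hits = 0
    for k in range(n_rays):
        theta = 2.0 * math.pi * k / n_rays
        along = dx * math.cos(theta) + dy * math.sin(theta)  # forward rays only
        if along <= 0.0:
            continue
        # perpendicular distance from the receiver to the ray's line
        perp = abs(dy * math.cos(theta) - dx * math.sin(theta))
        if perp <= capture_radius:
            hits += 1
    return hits

# With 360 uniformly spaced rays, only a handful reach a receiver 10 m away:
print(rays_received((0.0, 0.0), (10.0, 0.0), 360, 0.2))  # -> 3
```

Most launched rays are wasted, and adding reflections multiplies the tracing work per ray, which is why reducing the launched-ray count while preserving the received-ray count pays off.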

<![CDATA[Electric Imaging through Evolution, a Modeling Study of Commonalities and Differences]]>

Modeling the electric field and images in electric fish contributes to a better understanding of the pre-receptor conditioning of electric images. Although the boundary element method has been very successful for calculating images and fields, complex electric organ discharges pose a challenge for active electroreception modeling. We have previously developed a direct method for calculating electric images which takes into account the structure and physiology of the electric organ as well as the geometry and resistivity of fish tissues. The present article reports a general application of our simulator for studying electric images in electric fish with heterogeneous, extended electric organs. We studied three species of Gymnotiformes, including both wave-type (Apteronotus albifrons) and pulse-type (Gymnotus obscurus and Gymnotus coropinae) fish, with electric organs of different complexity. The results are compared with the African (Gnathonemus petersii) and American (Gymnotus omarorum) electric fish studied previously. We address the following issues: 1) how to calculate equivalent source distributions based on experimental measurements, 2) how the complexity of the electric organ discharge determines the features of the electric field and 3) how the basal field determines the characteristics of electric images. Our findings allow us to generalize the hypothesis (previously posed for G. omarorum) in which the perioral region and the rest of the body play different sensory roles. While the “electrosensory fovea” appears suitable for exploring objects in detail, the rest of the body is likened to a “peripheral retina” for detecting the presence and movement of surrounding objects. We discuss the commonalities and differences between species. Compared to African species, American electric fish show a weaker field. 
This feature, derived from the complexity of distributed electric organs, may endow Gymnotiformes with the ability to emit site-specific signals to be detected in the short range by a conspecific and the possibility to evolve predator avoidance strategies.
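
Representing a heterogeneous, extended electric organ as an equivalent distribution of point current sources can be sketched with the standard potential of a point source in a homogeneous conductor, V = I / (4πσr). The conductivity and source values below are illustrative, not fitted fish parameters, and the real simulator additionally accounts for tissue geometry and resistivity.

```python
import math

def potential(point, sources, sigma=0.005):
    """Electric potential of point current sources in an unbounded homogeneous
    conductor: V = sum_i I_i / (4*pi*sigma*r_i).  A minimal stand-in for an
    equivalent source distribution of a distributed electric organ."""
    total = 0.0
    for sx, sy, sz, current in sources:
        r = math.dist(point, (sx, sy, sz))
        total += current / (4 * math.pi * sigma * r)
    return total

# Two opposite point currents approximate a dipole-like "organ" (amps, metres)
sources = [(0.0, 0.0, 0.0, 1e-6), (0.1, 0.0, 0.0, -1e-6)]
print(potential((1.0, 0.0, 0.0), sources))
```

Summing many such weighted sources along the body axis is one way to build the "equivalent source distributions" fitted to experimental field measurements.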

<![CDATA[BiP Clustering Facilitates Protein Folding in the Endoplasmic Reticulum]]>

The chaperone BiP participates in several regulatory processes within the endoplasmic reticulum (ER): translocation, protein folding, and ER-associated degradation. To facilitate protein folding, a cooperative mechanism known as entropic pulling has been proposed to provide a molecular-level understanding of how multiple BiP molecules bind to nascent and unfolded proteins. Recently, experimental evidence revealed the spatial heterogeneity of BiP within the nuclear and peripheral ER of S. cerevisiae (commonly referred to as ‘clusters’). Here, we developed a model to evaluate the potential advantages of accounting for multiple BiP molecules binding to peptides, while proposing that BiP's spatial heterogeneity may enhance protein folding and maturation. Scenarios were simulated to gauge the effectiveness of binding multiple chaperone molecules to peptides. Using two metrics, folding efficiency and chaperone cost, we determined that the single binding site model achieves a higher efficiency than models characterized by multiple binding sites, in the absence of cooperativity. Due to entropic pulling, however, multiple chaperones perform in concert to facilitate the resolubilization and ultimate yield of folded proteins. As a result of cooperativity, multiple binding site models used fewer BiP molecules and maintained a higher folding efficiency than the single binding site model. These in silico investigations reveal that clusters of BiP molecules bound to unfolded proteins may enhance folding efficiency through cooperative action via entropic pulling.

<![CDATA[A Likelihood-Based Approach to Identifying Contaminated Food Products Using Sales Data: Performance and Challenges]]>

Foodborne disease outbreaks of recent years demonstrate that, due to increasingly interconnected supply chains, these types of crisis situations have the potential to affect thousands of people, leading to significant healthcare costs, loss of revenue for food companies, and—in the worst cases—death. When a disease outbreak is detected, identifying the contaminated food quickly is vital to minimize suffering and limit economic losses. Here we present a likelihood-based approach, built on food product sales data and the distribution of foodborne illness case reports, that has the potential to accelerate the identification of possibly contaminated food products. Using a real-world food sales data set and artificially generated outbreak scenarios, we show that this method performs very well for contamination scenarios originating from a single “guilty” food product. As it is neither always possible nor necessary to identify the single offending product, the method has been extended so that it can be used as a binary classifier. With this extension it is possible to generate a set of potentially “guilty” products that contains the real outbreak source with very high accuracy. Furthermore, we explore the patterns of food distribution that lead to “hard-to-identify” foods, the possibility of identifying these food groups a priori, and the extent to which the likelihood-based method can be used to quantify uncertainty. We find that high spatial correlation of sales data between products may be a useful indicator of “hard-to-identify” products.
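
The likelihood idea, scoring each candidate product by how well its regional sales shares explain where illness cases were reported, can be sketched as follows. The sales table and case list are invented for illustration, and the scoring assumes case probability proportional to regional sales share, a deliberate simplification of the published method.

```python
import math

def rank_products(sales, cases):
    """Rank candidate products by the log-likelihood of the observed case
    regions, assuming the probability that a case appears in a region is
    proportional to the product's sales share there."""
    scores = {}
    for product, by_region in sales.items():
        total = sum(by_region.values())
        ll = 0.0
        for region in cases:
            share = by_region.get(region, 0) / total
            # a region with zero sales makes the product an impossible source
            ll += math.log(share) if share > 0 else -math.inf
        scores[product] = ll
    return sorted(scores, key=scores.get, reverse=True)

# Invented sales footprints and illness-report regions
sales = {
    "spinach": {"north": 90, "south": 10},
    "lettuce": {"north": 50, "south": 50},
}
cases = ["north", "north", "north", "south"]
print(rank_products(sales, cases))  # spinach's footprint fits the cases best
```

Thresholding the score gap between the top candidates is one natural way to obtain the binary-classifier extension described above.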

<![CDATA[A Small Fraction of Strongly Cooperative Sodium Channels Boosts Neuronal Encoding of High Frequencies]]>

Generation of action potentials (APs) is a crucial step in neuronal information processing. Existing biophysical models of AP generation almost universally assume that individual voltage-gated sodium channels operate statistically independently, and that their avalanche-like opening underlying AP generation is coordinated only through the transmembrane potential. However, biological ion channels of various types can exhibit strongly cooperative gating when clustered. Cooperative gating of sodium channels has been suggested to explain the rapid onset dynamics and large threshold variability of APs in cortical neurons. It remains unknown, however, whether these characteristic properties of cortical APs can be reproduced if only a fraction of channels express cooperativity, and whether the presence of cooperative channels has an impact on the encoding properties of neuronal populations. To address these questions we constructed a conductance-based neuron model in which we continuously varied the size of the fraction of sodium channels expressing cooperativity and the strength of coupling between cooperative channels. We show that, starting at a critical value of the coupling strength, the activation curve of sodium channels develops a discontinuity at which opening of all coupled channels becomes an all-or-none event, leading to very rapid AP onsets. Models with a small fraction of strongly cooperative channels generate APs with the most rapid onset dynamics. In this regime APs are triggered by simultaneous opening of the cooperative channel fraction and exhibit a pronounced biphasic waveform often observed in cortical neurons. We further show that the presence of a small fraction of cooperative Na+ channels significantly improves the ability of neuronal populations to phase-lock their firing to high-frequency input fluctuations. We conclude that the presence of a small fraction of strongly coupled sodium channels can explain characteristic features of cortical APs and has the functional impact of enhancing the encoding of rapidly varying signals.
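
The effect of a cooperative channel fraction on the activation curve can be caricatured with a Boltzmann curve plus an all-or-none component. The parameter values are illustrative, and the step function is a simplification of the supercritical coupling regime, not the paper's conductance-based model.

```python
import math

def open_fraction(v, p_coop, v_half=-40.0, k=6.0, v_coop=-45.0):
    """Steady-state sodium-channel open fraction: independent channels follow
    a Boltzmann activation curve, while a fraction p_coop of strongly coupled
    channels opens all-or-none at v_coop (a caricature of the supercritical
    coupling regime; all parameter values are illustrative)."""
    independent = 1.0 / (1.0 + math.exp(-(v - v_half) / k))
    cooperative = 1.0 if v >= v_coop else 0.0
    return (1.0 - p_coop) * independent + p_coop * cooperative

# The discontinuity at v_coop sharpens the effective activation curve,
# which is what produces the very rapid AP onsets described above:
for v in (-50.0, -46.0, -44.0, -40.0):
    print(v, round(open_fraction(v, p_coop=0.15), 3))
```

Even a modest cooperative fraction produces a visible jump at threshold while leaving the sub- and supra-threshold behaviour dominated by the independent population.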

<![CDATA[A Mathematical Model for Eph/Ephrin-Directed Segregation of Intermingled Cells]]>

Eph receptors, the largest family of receptor tyrosine kinases, control cell-cell adhesion/de-adhesion, cell morphology and cell positioning through interaction with cell surface ephrin ligands. Bi-directional signalling from the Eph and ephrin complexes on interacting cells have a significant role in controlling normal tissue development and oncogenic tissue patterning. Eph-mediated tissue patterning is based on the fine-tuned balance of adhesion and de-adhesion reactions between distinct Eph- and ephrin-expressing cell populations, and adhesion within like populations (expressing either Eph or ephrin). Here we develop a stochastic, Lagrangian model that is based on Eph/ephrin biology: incorporating independent Brownian motion to describe cell movement and a deterministic term (the drift term) to represent repulsive and adhesive interactions between neighbouring cells. Comparison between the experimental and computer simulated Eph/ephrin cell patterning events shows that the model recapitulates the dynamics of cell-cell segregation and cell cluster formation. Moreover, by modulating the term for Eph/ephrin-mediated repulsion, the model can be tuned to match the actual behaviour of cells with different levels of Eph expression or activity. Together the results of our experiments and modelling suggest that the complexity of Eph/ephrin signalling mechanisms that control cell-cell interactions can be described well by a mathematical model with a single term balancing adhesion and de-adhesion between interacting cells. This model allows reliable prediction of Eph/ephrin-dependent control of cell patterning behaviour.
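
One Euler-Maruyama step of such a stochastic Lagrangian model, Brownian motion plus a deterministic drift from pairwise adhesion and repulsion, might look like the sketch below. All coefficients are illustrative, not the fitted values from the paper.

```python
import math
import random

def step(cells, dt=0.01, sigma=0.1, k_adh=0.5, k_rep=2.0, r_int=1.0, rng=None):
    """One Euler-Maruyama update of the sketched model, dX = drift dt + sigma dW:
    like cells (same type) attract, unlike cells repel, within radius r_int.
    cells: list of (x, y, type) tuples; coefficients are illustrative."""
    rng = rng or random.Random(0)
    new = []
    for i, (xi, yi, ti) in enumerate(cells):
        fx = fy = 0.0
        for j, (xj, yj, tj) in enumerate(cells):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            d = math.hypot(dx, dy)
            if 0.0 < d < r_int:
                k = k_adh if ti == tj else -k_rep  # adhesion vs repulsion
                fx += k * dx / d
                fy += k * dy / d
        noise = sigma * math.sqrt(dt)
        new.append((xi + fx * dt + noise * rng.gauss(0.0, 1.0),
                    yi + fy * dt + noise * rng.gauss(0.0, 1.0),
                    ti))
    return new

# Two unlike cells placed close together are driven apart by the drift term
rng = random.Random(1)
cells = [(0.0, 0.0, "eph"), (0.3, 0.0, "ephrin")]
for _ in range(50):
    cells = step(cells, rng=rng)
print(math.hypot(cells[1][0] - cells[0][0], cells[1][1] - cells[0][1]))
```

Iterating this update over many cells of both types reproduces the qualitative segregation-and-clustering dynamics the abstract describes; tuning the repulsion coefficient plays the role of modulating Eph expression or activity.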

<![CDATA[Increased Vulnerability of Human Ventricle to Re-entrant Excitation in hERG-linked Variant 1 Short QT Syndrome]]>

The short QT syndrome (SQTS) is a genetically heterogeneous condition characterized by abbreviated QT intervals and an increased susceptibility to arrhythmia and sudden death. This simulation study identifies arrhythmogenic mechanisms in the rapid-delayed rectifier K+ current (IKr)-linked SQT1 variant of the SQTS. Markov chain (MC) models were found to be superior to Hodgkin-Huxley (HH) models in reproducing experimental data regarding effects of the N588K mutation on KCNH2-encoded hERG. These ionic channel models were then incorporated into human ventricular action potential (AP) models and into 1D and 2D idealised and realistic transmural ventricular tissue simulations and into a 3D anatomical model. In single cell models, the N588K mutation abbreviated ventricular cell AP duration at 90% repolarization (APD90) and decreased the maximal transmural voltage heterogeneity (δV) during APs. This resulted in decreased transmural heterogeneity of APD90 and of the effective refractory period (ERP): effects that are anticipated to be anti-arrhythmic rather than pro-arrhythmic. However, with consideration of transmural heterogeneity of IKr density in the intact tissue model based on the ten Tusscher-Noble-Noble-Panfilov ventricular model, not only did the N588K mutation lead to QT-shortening and increases in T-wave amplitude, but δV was found to be augmented in some local regions of ventricle tissue, resulting in increased tissue vulnerability for uni-directional conduction block and predisposing to formation of re-entrant excitation waves. In 2D and 3D tissue models, the N588K mutation facilitated and maintained re-entrant excitation waves due to the reduced substrate size necessary for sustaining re-entry. Thus, in SQT1 the N588K-hERG mutation facilitates initiation and maintenance of ventricular re-entry, increasing the lifespan of re-entrant spiral waves and the stability of scroll waves in 3D tissue.
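
A basic ingredient of such analyses, measuring APD90 from a simulated voltage trace, can be sketched as follows; the triangular test trace is hypothetical, not output from the ventricular models used in the study.

```python
def apd90(t, v):
    """Action-potential duration at 90% repolarization: time from the AP peak
    until the membrane potential first falls below
    V_rest + 0.1 * (V_peak - V_rest)."""
    v_rest = v[0]
    i_peak = max(range(len(v)), key=v.__getitem__)
    threshold = v_rest + 0.1 * (v[i_peak] - v_rest)
    for i in range(i_peak, len(v)):
        if v[i] <= threshold:
            return t[i] - t[i_peak]
    return None  # trace never repolarized to 90%

# Hypothetical triangular AP: rest -85 mV, peak +35 mV at t = 1 ms
t_ms = [0, 1, 2, 3, 4, 5, 6, 7]
v_mv = [-85, 35, 15, -5, -25, -45, -65, -85]
print(apd90(t_ms, v_mv))  # -> 6
```

Comparing APD90 (and the derived ERP) across transmural cell types, with and without the mutant channel kinetics, yields the heterogeneity measures discussed above.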

<![CDATA[Bystander Responses to a Violent Incident in an Immersive Virtual Environment]]>

Under what conditions will a bystander intervene to try to stop a violent attack by one person on another? It is generally believed that the greater the size of the crowd of bystanders, the less the chance that any of them will intervene. A complementary model is that social identity is critical as an explanatory variable. For example, when the bystander shares a common social identity with the victim, the probability of intervention is enhanced, other things being equal. However, it is generally not possible to study such hypotheses experimentally for practical and ethical reasons. Here we show that an experiment that depicts a violent incident at life-size in immersive virtual reality lends support to the social identity explanation. Forty male supporters of Arsenal Football Club in England were recruited for a two-factor between-groups experiment: the victim was either an Arsenal supporter or not (in-group/out-group), and either looked towards the participant for help or not during the confrontation. The response variables were the numbers of verbal and physical interventions by the participant during the violent argument. The number of physical interventions had a significantly greater mean in the in-group condition compared to the out-group. The more that participants perceived that the victim was looking to them for help, the greater the number of interventions in the in-group, but not in the out-group. These results are supported by standard analysis of variance, with more detailed findings obtained by a symbolic regression procedure based on genetic programming. Verbal interventions made during the experience, together with analysis of post-experiment interview data, suggest that in-group members were more prone to confrontational intervention, whereas out-group members were more prone to make statements aimed at defusing the situation.

<![CDATA[Hands-On Parameter Search for Neural Simulations by a MIDI-Controller]]>

Computational neuroscientists frequently encounter the challenge of parameter fitting – exploring a usually high-dimensional variable space to find a parameter set that reproduces an experimental data set. One common approach is to use automated search algorithms such as gradient descent or genetic algorithms. However, these approaches suffer several shortcomings related to their lack of understanding of the underlying question, such as the difficulty of defining a suitable error function or getting stuck in local minima. Another widespread approach is manual parameter fitting using a keyboard or a mouse, evaluating different parameter sets following the user's intuition. However, this process is often cumbersome and time-intensive. Here, we present a new method for manual parameter fitting. A MIDI controller provides input to the simulation software, where model parameters are tuned according to the knob and slider positions on the device. The model is immediately updated on every parameter change, continuously plotting the latest results. Given reasonably short simulation times of less than one second, we find this method to be highly efficient in quickly determining good parameter sets. Our approach bears a close resemblance to tuning the sound of an analog synthesizer, giving the user a very good intuition of the problem at hand, such as immediate feedback on whether and how results are affected by specific parameter changes. In addition to its use in research, our approach should be an ideal teaching tool, allowing students to interactively explore complex models such as Hodgkin-Huxley or dynamical systems.
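
The essential mapping, from a 7-bit MIDI control-change value to a model parameter range, can be sketched as a pure function; reading the actual MIDI messages (e.g. with a library such as mido) is omitted here, and the parameter names and ranges are hypothetical.

```python
def cc_to_param(cc_value, lo, hi, log_scale=False):
    """Map a 7-bit MIDI controller value (0-127) onto a parameter range.
    With log_scale=True, a full knob sweep covers [lo, hi] geometrically,
    which suits conductances or time constants spanning orders of magnitude."""
    frac = max(0, min(127, cc_value)) / 127.0
    if log_scale:
        return lo * (hi / lo) ** frac
    return lo + frac * (hi - lo)

# Hypothetical assignments: knob 1 -> a sodium conductance in [1, 1000] (log),
# knob 2 -> a leak reversal potential in [-90, -50] mV (linear)
print(cc_to_param(127, 1.0, 1000.0, log_scale=True))  # full turn -> 1000.0
print(cc_to_param(0, -90.0, -50.0))                   # knob at zero -> -90.0
```

In the full workflow, each incoming control-change message triggers this mapping, a short simulation run, and a plot refresh, giving the synthesizer-like feedback loop described above.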

<![CDATA[A Computational Approach to Evaluate the Androgenic Affinity of Iprodione, Procymidone, Vinclozolin and Their Metabolites]]>

Our research is aimed at devising and assessing a computational approach to evaluate the affinity of endocrine active substances (EASs) and their metabolites towards the ligand binding domain (LBD) of the androgen receptor (AR) in three distantly related species: human, rat, and zebrafish. We computed the affinity for all the selected molecules following a computational approach based on molecular modelling and docking. Three different classes of molecules with well-known endocrine activity (iprodione, procymidone, vinclozolin, and a selection of their metabolites) were evaluated. Our approach was demonstrated to be useful as the first step of chemical safety evaluation, since ligand-target interaction is a necessary condition for exerting any biological effect. Moreover, different sensitivities of the AR LBD were computed for the tested species (the rat being the least sensitive of the three). This evidence suggests that, in order not to over- or under-estimate the risks connected with the use of a chemical entity, further in vitro and/or in vivo tests should be carried out only after an accurate evaluation of the most suitable cellular system or animal species. The introduction of in silico approaches to hazard evaluation can accelerate discovery and innovation with lower economic effort than a fully wet-lab strategy.

<![CDATA[Neuronal Chains for Actions in the Parietal Lobe: A Computational Model]]>

The inferior part of the parietal lobe (IPL) is known to play a very important role in sensorimotor integration. Neurons in this region code goal-related motor acts performed with the mouth, with the hand and with the arm. It has been demonstrated that most IPL motor neurons coding a specific motor act (e.g., grasping) show markedly different activation patterns according to the final goal of the action sequence in which the act is embedded (grasping for eating or grasping for placing). Some of these neurons (parietal mirror neurons) show a similar selectivity also during the observation of the same action sequences when executed by others. Thus, it appears that the neuronal response occurring during the execution and the observation of a specific grasping act codes not only the executed motor act, but also the agent's final goal (intention).

In this work we present a biologically inspired neural network architecture that models mechanisms of motor sequences execution and recognition. In this network, pools composed of motor and mirror neurons that encode motor acts of a sequence are arranged in form of action goal-specific neuronal chains. The execution and the recognition of actions is achieved through the propagation of activity bursts along specific chains modulated by visual and somatosensory inputs.

The implemented spiking neuron network is able to reproduce the results found in neurophysiological recordings of parietal neurons during task performance and provides a biologically plausible implementation of the action selection and recognition process.

Finally, the present paper proposes a mechanism for the formation of new neural chains by linking together in a sequential manner neurons that represent subsequent motor acts, thus producing goal-directed sequences.
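
The chain idea, a burst that propagates from one motor-act pool to the next only while sensory gating permits, can be caricatured in a few lines; the boolean gates here are stand-ins for the visual and somatosensory modulation, not the spiking dynamics of the implemented network.

```python
def propagate_chain(n_acts, gates):
    """Burst propagation along a goal-specific chain of neuron pools: pool k
    fires only if pool k-1 fired and its sensory gate is open (gates[k] is a
    boolean stand-in for visual/somatosensory modulation)."""
    fired, active = [], True
    for k in range(n_acts):
        active = active and gates[k]
        fired.append(active)
    return fired

# e.g. a "reach -> grasp -> bring to mouth" chain: the burst halts as soon
# as a gate closes mid-sequence
print(propagate_chain(3, [True, True, True]))   # -> [True, True, True]
print(propagate_chain(3, [True, False, True]))  # -> [True, False, False]
```

Separate chains for "grasp for eating" and "grasp for placing" would share early pools but diverge downstream, which is how goal selectivity of the same motor act can arise.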

<![CDATA[Population-Based Stroke Atlas for Outcome Prediction: Method and Preliminary Results for Ischemic Stroke from CT]]>

Background and Purpose

Knowledge of outcome prediction is important in stroke management. We propose a lesion size- and location-driven method for stroke outcome prediction using a Population-based Stroke Atlas (PSA) linking neurological parameters with neuroimaging across a population. The PSA aggregates data from previously treated patients and applies them to currently treated patients. The distribution of PSA parameters within the infarct region of a treated patient enables prediction. We introduce a method for PSA calculation, quantify its performance, and use it to illustrate ischemic stroke outcome prediction of the modified Rankin Scale (mRS) and Barthel Index (BI).

Methods

The preliminary PSA was constructed from 128 ischemic stroke cases calculated for 8 variants (various data aggregation schemes) and 3 case selection variables (infarct volume, NIHSS at admission, and NIHSS at day 7), each in 4 ranges. Outcome prediction for 9 parameters (mRS at 7th, and mRS and BI at 30th, 90th, 180th, 360th day) was studied using a leave-one-out approach, requiring 589,824 PSA maps to be analyzed.

Results

Outcomes predicted for different PSA variants are statistically equivalent, so the simplest and most efficient variant aiming at parameter averaging is employed. This variant allows the PSA to be pre-calculated before prediction. The PSA constrained by infarct volume and NIHSS reduces the average prediction error (absolute difference between the predicted and actual values) by a fraction of 0.796; the use of 3 patient-specific variables further lowers it by 0.538. The PSA-based prediction error for mild and severe outcomes (mRS = [2][5]) is (0.5–0.7). Prediction takes about 8 seconds.

Conclusions

PSA-based prediction of individual and group mRS and BI scores over time is feasible, fast and simple, but its clinical usefulness requires further studies. The case selection operation improves PSA predictability. A multiplicity of PSAs can be computed independently for different datasets at various centers and easily merged, which enables building powerful PSAs over the community.
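
The core prediction step, averaging an atlas parameter over the voxels of a new patient's infarct region, can be sketched on a flattened toy atlas. The numbers are invented; the real PSA additionally involves image registration, the aggregation variants, and case selection, none of which are shown here.

```python
def predict_outcome(atlas, infarct_mask):
    """PSA-style prediction sketch: average the atlas parameter (e.g. mean
    90-day mRS of previously treated patients, stored per voxel) over the
    voxels inside the new patient's infarct region."""
    vals = [a for a, inside in zip(atlas, infarct_mask) if inside]
    return sum(vals) / len(vals) if vals else None

# Flattened toy atlas (one value per voxel) and a binary infarct mask
atlas = [1.0, 2.0, 4.0, 5.0, 3.0, 2.0]
mask = [False, True, True, True, False, False]
print(predict_outcome(atlas, mask))
```

Because the atlas can be pre-computed, prediction reduces to a masked average, which is consistent with the few-second prediction times reported above.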

<![CDATA[Consistent Selection towards Low Activity Phenotypes When Catchability Depends on Encounters among Human Predators and Fish]]>

Together with life history and underlying physiology, behavioural variability among fish is one of the three main trait axes that determine vulnerability to fishing. However, only a few studies have systematically investigated the strength and direction of selection acting on behavioural traits. Using in situ fish behaviour revealed by telemetry techniques as input, we developed an individual-based model (IBM) that simulated the Lagrangian trajectory of prey (fish) moving within a confined home range (HR). Fishers exhibiting various prototypical fishing styles targeted these fish in the model. We initially hypothesised that more active and more explorative individuals would be systematically removed under all fished conditions, in turn creating selection differentials favouring low-activity phenotypes and perhaps small HRs. Our results partly supported these general predictions. Standardised selection differentials were, on average, more negative on HR than on activity. However, in many simulation runs, positive selection pressures on HR were also identified, resulting from the stochastic properties of the fishes’ movement and its interaction with the human predator. In contrast, there was consistent negative selection on activity under all types of fishing styles. Therefore, in situations where catchability depends on spatial encounters between human predators and fish, we predict consistent selection towards low-activity phenotypes and have less faith in the direction of selection on HR size. Our study is the first theoretical investigation of the direction of fishery-induced selection on behaviour under passive fishing gears.
The few empirical studies where catchability of fish was measured in relation to passive fishing techniques, such as gill-nets, traps or recreational fishing, support our predictions that fish in highly exploited situations are, on average, characterised by low swimming activity, stemming, in part, from negative selection on swimming activity.
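
The standardised selection differential used to quantify such selection can be computed directly from pre-harvest trait values and survival outcomes; the activity scores below are hypothetical.

```python
import statistics

def selection_differential(trait, survived):
    """Standardised selection differential: (mean trait of survivors minus
    pre-harvest mean) divided by the pre-harvest standard deviation.
    Negative values mean the fishery removes high-trait individuals."""
    mu = statistics.mean(trait)
    sd = statistics.stdev(trait)
    survivors = [t for t, alive in zip(trait, survived) if alive]
    return (statistics.mean(survivors) - mu) / sd

# Hypothetical activity scores; the two most active fish are harvested
activity = [1.0, 2.0, 3.0, 4.0, 5.0]
survived = [True, True, True, False, False]
print(round(selection_differential(activity, survived), 3))  # -> -0.632
```

Computing this per simulation run for both activity and HR size is what yields the distributions of differentials summarised above.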

<![CDATA[An Integrated In Silico Approach to Design Specific Inhibitors Targeting Human Poly(A)-Specific Ribonuclease]]>

Poly(A)-specific ribonuclease (PARN) is an exoribonuclease/deadenylase that degrades 3′-end poly(A) tails in almost all eukaryotic organisms. Much of the biochemical and structural information on PARN comes from the human enzyme. However, the existence of PARN all along the eukaryotic evolutionary ladder requires further and thorough investigation. Although the complete structure of the full-length human PARN, as well as several aspects of the catalytic mechanism, still remain elusive, many previous studies indicate that PARN can serve as a potent and promising anti-cancer target. In the present study, we attempt to complement the existing structural information on PARN with in-depth bioinformatics analyses, in order to obtain a hologram of the molecular evolution of PARN's active site. In an effort to draw an outline that allows specific drug design targeting PARN, an unequivocally specific platform was designed for the development of selective modulators focusing on the unique structural and catalytic features of the enzyme. Extensive phylogenetic analysis based on all the publicly available genomes indicated a broad distribution for PARN across eukaryotic species and revealed structurally important amino acids that could be assigned as potentially strong contributors to the regulation of the catalytic mechanism of PARN. Based on the above, we propose a comprehensive in silico model for PARN's catalytic mechanism and, moreover, we developed a 3D pharmacophore model, which was subsequently used to introduce the amphipathic substrate analog DNP-poly(A) as a potential inhibitor of PARN. Indeed, biochemical analysis revealed that DNP-poly(A) inhibits PARN competitively. Our approach provides an efficient integrated platform for the rational design of pharmacophore models as well as novel modulators of PARN with therapeutic potential.

<![CDATA[Smart Swarms of Bacteria-Inspired Agents with Performance Adaptable Interactions]]>

Collective navigation and swarming have been studied in animal groups, such as fish schools, bird flocks, bacteria, and slime molds. Computer modeling has shown that collective behavior of simple agents can result from simple interactions between the agents, which include short range repulsion, intermediate range alignment, and long range attraction. Here we study collective navigation of bacteria-inspired smart agents in complex terrains, with adaptive interactions that depend on performance. More specifically, each agent adjusts its interactions with the other agents according to its local environment – by decreasing the peers' influence while navigating in a beneficial direction, and increasing it otherwise. We show that inclusion of such performance dependent adaptable interactions significantly improves the collective swarming performance, leading to highly efficient navigation, especially in complex terrains. Notably, to afford such adaptable interactions, each modeled agent requires only simple computational capabilities with short-term memory, which can easily be implemented in simple swarming robots.
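
The performance-dependent interaction rule, lowering the peers' influence while progress is good and raising it otherwise, can be sketched as a clamped weight update; the gain and bounds are illustrative, not the paper's calibrated values.

```python
def update_peer_weight(w, progress, w_min=0.1, w_max=1.0, gain=0.2):
    """Performance-adaptable interaction: an agent making progress (moving in
    a beneficial direction) lowers the peers' influence w and relies on its
    own sensing; otherwise it raises w and leans on the collective."""
    w = w - gain if progress > 0 else w + gain
    return max(w_min, min(w_max, w))

# The agent's heading could then be blended as:
#   heading = (1 - w) * own_gradient_direction + w * mean_peer_direction
w = 0.5
for progress in (1.0, 1.0, -0.5, 1.0):  # hypothetical local improvements
    w = update_peer_weight(w, progress)
print(round(w, 2))  # -> 0.1
```

The update needs only the current weight and the latest local progress, which matches the claim that such agents require just simple computation with short-term memory.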

<![CDATA[An Integrated Computational Approach to Rationalize the Activity of Non-Zinc-Binding MMP-2 Inhibitors]]>

Matrix metalloproteinases (MMPs) are a family of Zn-proteases involved in tissue remodeling and in many pathological conditions. Among them, MMP-2 is one of the most relevant targets in anticancer therapy. Commonly, MMP inhibitors contain a functional group that binds the zinc ion and is responsible for undesired side effects. The discovery of potent and selective MMP inhibitors not bearing a zinc-binding group is emerging for some MMP family members and represents a new opportunity to find selective and non-toxic inhibitors.

In this work we attempted to gain more insight into the inhibition of MMP-2 by two non-zinc-binding inhibitors, applying a general protocol that combines several computational tools (docking, Molecular Dynamics, and Quantum Chemical calculations), which together contribute to rationalizing the experimental inhibition data. Molecular Dynamics studies showed both structural and mechanical-dynamical effects produced by the ligands that were not disclosed by the docking analysis. Thermodynamic Integration provided relative binding free energies consistent with experimentally observed activity data. Quantum Chemical calculations of the tautomeric equilibrium involving the most active ligand completed the picture of the binding process. Our study highlights the crucial role of the specificity loop and suggests that the enthalpic effect predominates over the entropic one.
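
The Thermodynamic Integration step mentioned above estimates a free-energy difference by integrating the ensemble average of dU/dλ over the coupling parameter λ. A trapezoidal-rule sketch with made-up window averages:

```python
def thermodynamic_integration(lambdas, dudl):
    """Free-energy difference by thermodynamic integration:
    dG = integral over lambda in [0, 1] of <dU/dlambda>, evaluated here with
    the trapezoidal rule over the simulated lambda windows."""
    dg = 0.0
    for k in range(len(lambdas) - 1):
        dg += 0.5 * (dudl[k] + dudl[k + 1]) * (lambdas[k + 1] - lambdas[k])
    return dg

# Made-up window averages of <dU/dlambda> (kcal/mol) at five lambda values
lambdas = [0.0, 0.25, 0.5, 0.75, 1.0]
dudl = [-12.0, -8.0, -3.0, 2.0, 6.0]
print(thermodynamic_integration(lambdas, dudl))  # -> -3.0
```

In practice each dU/dλ value is an ensemble average from a separate Molecular Dynamics window, and relative binding free energies come from differences between such integrals for the two ligands.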