ResearchPad - optimization https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks
<![CDATA[Robust pollution source parameter identification based on the artificial bee colony algorithm using a wireless sensor network]]> https://www.researchpad.co/article/elastic_article_14751

Pollution source parameter identification (PSPI) is significant for pollution control, since it provides important information and saves considerable time in subsequent pollution elimination work. To solve the PSPI problem, a large number of pollution sensor nodes can be rapidly deployed to cover a large area and form a wireless sensor network (WSN). Based on the WSN measurements, least-squares estimation methods solve the PSPI problem by searching for the solution that minimizes the sum of squared measurement noises. They are independent of the measurement noise distribution, i.e., robust to it. When searching for the least-squares solution, population-based parallel search techniques can usually overcome the premature convergence that stalls single-point search algorithms. In this paper, we adapt the relatively recently proposed artificial bee colony (ABC) algorithm to the WSN-based PSPI problem and verify its feasibility and robustness. Extensive simulation results show that the ABC and the particle swarm optimization (PSO) algorithms obtain similar identification results in the same simulation scenario. Moreover, both achieve much better performance than a traditionally used single-point search algorithm, the trust-region reflective algorithm.
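As a hedged illustration of the least-squares formulation above, the sketch below fits a toy isotropic source model to noisy WSN readings by minimizing the sum of squared residuals with a population-based search. The measurement model and the use of SciPy's differential evolution as a stand-in for the ABC/PSO searches are assumptions for illustration, not the paper's exact setup.

```python
# Minimal sketch of WSN-based least-squares source identification.
# The isotropic measurement model and SciPy's differential evolution
# (a stand-in for ABC/PSO population search) are illustrative assumptions.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
sensors = rng.uniform(0, 100, size=(30, 2))        # sensor (x, y) positions

def predicted_concentration(params, xy):
    """Toy isotropic model: strength q released at (sx, sy)."""
    q, sx, sy = params
    d2 = np.sum((xy - np.array([sx, sy]))**2, axis=1)
    return q * np.exp(-d2 / 500.0)

true_params = (5.0, 40.0, 60.0)
measurements = predicted_concentration(true_params, sensors) \
               + 0.05 * rng.standard_normal(len(sensors))     # noisy readings

def sum_squared_residuals(params):
    # Least-squares objective: independent of the noise distribution.
    return np.sum((measurements - predicted_concentration(params, sensors))**2)

result = differential_evolution(sum_squared_residuals,
                                bounds=[(0, 20), (0, 100), (0, 100)],
                                seed=1)
print(result.x)   # estimated (strength, x, y), close to true_params
```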

]]>
<![CDATA[A deadline constrained scheduling algorithm for cloud computing system based on the driver of dynamic essential path]]> https://www.researchpad.co/article/5c8c195bd5eed0c484b4d4af

To solve the problem of deadline-constrained task scheduling in the cloud computing system, this paper proposes a deadline-constrained scheduling algorithm for cloud computing based on the driver of the dynamic essential path (Deadline-DDEP). According to the changes in the dynamic essential path of each task node during scheduling, a dynamic sub-deadline strategy is proposed. The strategy assigns a different sub-deadline to each task node so as to meet the constraint relations among task nodes and the user-defined deadline, and it fully accounts for how the dynamic essential path of a task node affects its sub-deadline during scheduling. The paper also proposes a quality-assessment-of-optimization-cost strategy to solve the problem of selecting a server for each task node. Based on the sub-deadline urgency and the relative execution cost during scheduling, this strategy selects the server that not only meets the sub-deadline but also incurs a much lower execution cost. In this way, the proposed algorithm completes the task graph within its deadline while minimizing its total execution cost. Finally, we evaluate the proposed algorithm via simulation experiments in Matlab. The experimental results show that the proposed algorithm improves the total execution cost by 10.3% to 30.8% while meeting the deadline constraint. In view of these results, the proposed algorithm provides better-quality scheduling solutions for scientific application tasks in cloud computing environments than IC-PCP, DCCP, and CD-PCP.
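A minimal sketch of the two ideas follows, assuming a proportional split of the user deadline along the essential path and a simple per-server (execution time, cost) table; neither the sub-deadline formula nor the server model is taken from the paper.

```python
# Hedged sketch of the two ideas in Deadline-DDEP: (1) assign each task a
# sub-deadline proportional to its share of the (dynamic) essential path,
# and (2) among servers that meet the sub-deadline, pick the cheapest one.
# The proportional split and the server table are illustrative assumptions.

def sub_deadline(task_path_len, total_essential_path, deadline):
    """Share of the user deadline proportional to the task's essential-path length."""
    return deadline * task_path_len / total_essential_path

def select_server(servers, est_start, sub_dl):
    """servers: list of dicts with per-task 'exec_time' and 'cost'."""
    feasible = [s for s in servers if est_start + s["exec_time"] <= sub_dl]
    if not feasible:                       # fall back to the fastest server
        return min(servers, key=lambda s: s["exec_time"])
    return min(feasible, key=lambda s: s["cost"])

servers = [{"name": "small", "exec_time": 9.0, "cost": 1.0},
           {"name": "large", "exec_time": 4.0, "cost": 3.5}]
dl = sub_deadline(task_path_len=10.0, total_essential_path=40.0, deadline=60.0)
print(dl, select_server(servers, est_start=5.0, sub_dl=dl)["name"])
```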

]]>
<![CDATA[Training set optimization of genomic prediction by means of EthAcc]]> https://www.researchpad.co/article/5c75ac8dd5eed0c484d08a24

Genomic prediction is a useful tool for plant and animal breeding programs and is starting to be used to predict human diseases as well. A shortcoming that slows down the deployment of genomic selection is that the accuracy of the prediction is not known a priori. We propose EthAcc (Estimated THeoretical ACCuracy) as a method for estimating the accuracy given a training set that is genotyped and phenotyped. EthAcc is based on a causal quantitative trait loci model estimated by a genome-wide association study. This estimated causal model is crucial; we therefore compared different methods to find the one yielding the best EthAcc, and the multilocus mixed model performed best. We compared EthAcc to accuracy estimators that can be derived from a mixed marker model and showed that EthAcc is the only approach that correctly estimates the accuracy. Moreover, in the case of a structured population, EthAcc showed, in agreement with the achieved accuracy, that the biggest training set is not always better than a smaller but closer one. We then performed training set optimization with EthAcc and compared it to CDmean. EthAcc outperformed CDmean on real datasets from sugar beet, maize, and wheat. Nonetheless, its performance was mainly due to the use of an optimal but inaccessible set as the starting point of the optimization algorithm; EthAcc's precision and algorithmic issues prevent it from reaching a good training set from a random start. Despite this drawback, we demonstrated that a substantial gain in accuracy can be obtained by performing training set optimization.
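The kind of search that EthAcc (or CDmean) is plugged into can be sketched as a greedy swap over candidate individuals; the criterion function below is a placeholder for EthAcc, whose actual computation from the GWAS-estimated causal model is not reproduced here.

```python
# Generic greedy training-set optimization of the kind EthAcc or CDmean is
# plugged into: repeatedly swap one individual in/out of the training set if
# the swap increases a user-supplied criterion. `criterion` is a placeholder
# for EthAcc (or CDmean); its exact computation is not reproduced here.
import random

def optimize_training_set(candidates, size, criterion, n_iter=1000, seed=0):
    rng = random.Random(seed)
    train = set(rng.sample(sorted(candidates), size))
    best = criterion(train)
    for _ in range(n_iter):
        out_id = rng.choice(sorted(train))
        in_id = rng.choice(sorted(candidates - train))
        trial = (train - {out_id}) | {in_id}
        score = criterion(trial)
        if score > best:                 # accept only improving swaps
            train, best = trial, score
    return train, best

# Toy criterion for demonstration only: prefer sets covering the first 100 IDs.
toy = lambda s: len(s & set(range(100)))
best_set, best_score = optimize_training_set(set(range(500)), size=50, criterion=toy)
print(best_score)
```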

]]>
<![CDATA[Resolution invariant wavelet features of melanoma studied by SVM classifiers]]> https://www.researchpad.co/article/5c648cd2d5eed0c484c81893

This article addresses computer-aided diagnosis of melanoma skin cancer. We derive wavelet-based features of melanoma from dermoscopic images of pigmented skin lesions and apply binary C-SVM classifiers to discriminate malignant melanoma from dysplastic nevus. The aim of this research is to select the most efficient SVM classifier model for various image resolutions and to search for the best resolution-invariant wavelet bases. We report AUC as a function of the wavelet number for SVM kernels optimized by Bayesian search on two independent data sets. Our results are consistent with previous experiments that discriminate melanoma in dermoscopy images with ensembles and feed-forward neural networks.
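A hedged sketch of such a pipeline is given below: wavelet-energy features from a lesion image, a C-SVM classifier, and AUC as the figure of merit. The Daubechies-4 wavelet, the three-level decomposition, the random stand-in images, and the grid search (used here instead of Bayesian search) are illustrative assumptions.

```python
# Hedged sketch of the feature pipeline: wavelet-energy features, a C-SVM
# classifier, and AUC. The wavelet choice, decomposition level, random
# stand-in images, and grid search (instead of Bayesian search) are
# illustrative assumptions, not the paper's exact configuration.
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import roc_auc_score

def wavelet_energy_features(img, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)
    feats = [np.log1p(np.sum(np.asarray(coeffs[0])**2))]
    for detail in coeffs[1:]:                      # (cH, cV, cD) per level
        feats.extend(np.log1p(np.sum(np.asarray(d)**2)) for d in detail)
    return np.array(feats)

rng = np.random.default_rng(0)
X = np.array([wavelet_energy_features(rng.standard_normal((64, 64)))
              for _ in range(60)])                 # stand-in for lesion images
y = rng.integers(0, 2, size=60)                    # melanoma vs. dysplastic nevus

search = GridSearchCV(SVC(probability=True),
                      {"C": [0.1, 1, 10], "kernel": ["rbf", "linear"]}, cv=3)
search.fit(X, y)
auc = roc_auc_score(y, search.predict_proba(X)[:, 1])
print(search.best_params_, round(auc, 3))
```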

]]>
<![CDATA[Non-sequential protein structure alignment by conformational space annealing and local refinement]]> https://www.researchpad.co/article/5c5b52e5d5eed0c4842bd224

Protein structure alignment is an important tool for studying evolutionary biology and protein modeling. Tools that search intensively for globally optimal non-sequential alignments are rare. We propose ALIGN-CSA, which improves scores such as the DALI-score, SP-score, SO-score, and TM-score over a benchmark set of 286 cases. We benchmarked existing popular alignment scoring functions, with the dependence on the search algorithm effectively eliminated by using ALIGN-CSA. For the benchmarking, we set the minimum block size to 4 to prevent overly fragmented alignments, whose small blocks are hard to interpret biologically. Under this condition, globally optimal alignments were searched for by ALIGN-CSA using the four scoring functions listed above, and the TM-score was found to be the most effective in generating alignments with longer match lengths and smaller RMSD values. However, the DALI-score was the most effective in generating alignments similar to the manually curated reference alignments, which implies that it is the more biologically relevant score. Because ALIGN-CSA is computationally demanding, we also propose a relatively fast local refinement method that can control the minimum block size and whether reverse alignments are allowed. ALIGN-CSA can be used to obtain much-improved alignments at the cost of more extensive computation; for faster alignment, the refinement protocol improves the score of a given alignment obtained by various external tools. All programs are available from http://lee.kias.re.kr.
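For reference, the TM-score used above can be computed for a fixed alignment as in the sketch below (the standard Zhang and Skolnick definition with Kabsch superposition of the aligned residues); the random coordinates are purely illustrative.

```python
# TM-score for a fixed residue-residue alignment, computed after optimal
# superposition of the aligned coordinates (Kabsch). This is the standard
# definition; the random coordinates below are purely illustrative.
import numpy as np

def kabsch_superpose(P, Q):
    """Rotate/translate P onto Q (both n x 3), returning transformed P."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return Pc @ R.T + Q.mean(0)

def tm_score(P_aligned, Q_aligned, L_target):
    """P_aligned/Q_aligned: coordinates of aligned residue pairs (n x 3)."""
    d0 = max(1.24 * (L_target - 15) ** (1.0 / 3.0) - 1.8, 0.5)
    P_sup = kabsch_superpose(P_aligned, Q_aligned)
    d = np.linalg.norm(P_sup - Q_aligned, axis=1)
    return np.sum(1.0 / (1.0 + (d / d0) ** 2)) / L_target

rng = np.random.default_rng(0)
Q = rng.standard_normal((80, 3)) * 10
P = Q + 0.5 * rng.standard_normal((80, 3))         # slightly perturbed copy
print(round(tm_score(P, Q, L_target=100), 3))
```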

]]>
<![CDATA[Tensor framelet based iterative image reconstruction algorithm for low-dose multislice helical CT]]> https://www.researchpad.co/article/5c424392d5eed0c4845e0633

In this study, we investigate the feasibility of improving the imaging quality of low-dose multislice helical computed tomography (CT) via iterative reconstruction with tensor framelet (TF) regularization. The TF-based algorithm is a high-order generalization of isotropic total variation regularization. It is implemented on a GPU platform for fast parallel X-ray forward and backward projections, taking the flying focal spot into account. The solution algorithm for image reconstruction is based on the alternating direction method of multipliers, also known as the split Bregman method. The proposed method is validated using experimental data from a Siemens SOMATOM Definition 64-slice helical CT scanner, in comparison with the FDK, Katsevich, and total variation (TV) algorithms. To test performance on low-dose data, ACR and Rando phantoms were scanned with different dosages and the data were equally undersampled by various factors. The proposed method is robust for low-dose data at a 25% undersampling factor, and quantitative metrics demonstrate that it achieves superior results over the other existing methods.
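The split Bregman structure mentioned above can be sketched on a toy problem, minimizing 0.5*||A x - y||^2 + lam*||D x||_1, with a small dense matrix A and a first-difference operator D standing in for the CT projector and the tensor framelet transform; the real solver uses GPU projectors and framelets instead.

```python
# Skeleton of the split Bregman iteration used for TV-type regularization:
#     min_x 0.5*||A x - y||^2 + lam*||D x||_1
# with a toy dense A and first-difference D. These stand in for the CT
# projector and the tensor framelet transform of the actual GPU solver.
import numpy as np

def shrink(v, t):
    """Soft-thresholding, the core proximal step of split Bregman."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def split_bregman(A, y, lam=0.1, mu=1.0, n_iter=100):
    n = A.shape[1]
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]          # first differences
    x = np.zeros(n)
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    lhs = A.T @ A + mu * D.T @ D
    for _ in range(n_iter):
        x = np.linalg.solve(lhs, A.T @ y + mu * D.T @ (d - b))
        d = shrink(D @ x + b, lam / mu)
        b = b + D @ x - d
    return x

rng = np.random.default_rng(0)
x_true = np.repeat([0.0, 1.0, 0.5], 20)               # piecewise-constant signal
A = rng.standard_normal((40, 60))
y = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.round(split_bregman(A, y, lam=0.05), 2)[:10])
```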

]]>
<![CDATA[Intervention on default contagion under partial information in a financial network]]> https://www.researchpad.co/article/5c478c36d5eed0c484bd0d24

We study the optimal interventions of a regulator (a central bank or government) on the illiquidity default contagion process in a large, heterogeneous, unsecured interbank lending market. The regulator has only partial information on the interbank connections and aims to minimize the fraction of final defaults with minimal interventions. We derive analytical results for the asymptotically optimal intervention policy and the asymptotic magnitude of default contagion in terms of the network characteristics. We extend the results of Amini, Cont and Minca’s work to incorporate interventions and adopt the dynamics of Amini, Minca and Sulem’s model to build heterogeneous networks with degree sequences and initial equity levels drawn from arbitrary distributions. Our results show that the optimal intervention policy is “monotonic” with respect to the intervention cost, the closeness to invulnerability, and connectivity. The regulator should prioritize interventions on banks that are systemically important or close to invulnerability, and once it has intervened on a bank it should keep doing so. Our simulation results show good agreement with the theoretical results.

]]>
<![CDATA[Clustering algorithms: A comparative approach]]> https://www.researchpad.co/article/5c478c94d5eed0c484bd335e

Many real-world systems can be studied in terms of pattern recognition tasks, so proper use (and understanding) of machine learning methods in practical applications becomes essential. While many classification methods have been proposed, there is no consensus on which methods are most suitable for a given dataset. As a consequence, it is important to compare methods comprehensively across many possible scenarios. In this context, we performed a systematic comparison of 9 well-known clustering methods available in the R language, assuming normally distributed data. To account for the many possible variations of data, we considered artificial datasets with several tunable properties (number of classes, separation between classes, etc.). In addition, we evaluated the sensitivity of the clustering methods to their parameter configurations. The results revealed that, with the default configurations of the adopted methods, the spectral approach tended to perform particularly well. We also found that the default configuration of the adopted implementations was not always accurate; in these cases, a simple approach based on random selection of parameter values proved to be a good alternative for improving performance. All in all, the reported approach provides guidance for choosing among clustering algorithms.
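A scikit-learn analogue of the comparison protocol is sketched below (the paper itself works with R implementations): Gaussian clusters with tunable separation, several clustering methods at default-like settings, and the adjusted Rand index against the known labels.

```python
# Hedged sketch of the comparison protocol: generate Gaussian clusters with
# tunable separation, run several clustering methods, and score them against
# the known labels (ARI). The paper works in R; this scikit-learn analogue
# is for illustration only.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, SpectralClustering, AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

X, y_true = make_blobs(n_samples=300, centers=4, cluster_std=1.5, random_state=0)

methods = {
    "kmeans": KMeans(n_clusters=4, n_init=10, random_state=0),
    "spectral": SpectralClustering(n_clusters=4, random_state=0),
    "agglomerative": AgglomerativeClustering(n_clusters=4),
}
for name, model in methods.items():
    labels = model.fit_predict(X)
    print(name, round(adjusted_rand_score(y_true, labels), 3))
```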

]]>
<![CDATA[Two-dimensional local Fourier image reconstruction via domain decomposition Fourier continuation method]]> https://www.researchpad.co/article/5c3fa5aed5eed0c484ca744f

The MRI image is obtained in the spatial domain from the given Fourier coefficients in the frequency domain. Obtaining a high-resolution image is costly because it requires high-frequency Fourier data, whereas low-frequency Fourier data are less costly and are effective if the image is smooth. However, Gibbs ringing, if present, prevails when only low-frequency Fourier data are used. We propose an efficient and accurate local reconstruction method using low-frequency Fourier data that yields a sharp image profile near local edges. The proposed method utilizes only a small amount of image data in the local area, so it is efficient. Furthermore, it is accurate because it minimizes the global effects on the reconstruction near weak edges that appear in many other global methods, for which all of the image data are used. To apply the Fourier method locally to non-periodic local data, the proposed method builds on the Fourier continuation method; this work extends our previous 1D Fourier domain decomposition method to 2D Fourier data. The proposed method first divides the MRI image in the spatial domain into many subdomains and applies the Fourier continuation method for a smooth periodic extension of the subdomain of interest. It then reconstructs the local image by L2 minimization regularized with the L1 norm of edge sparsity to sharpen the image near edges. Our numerical results suggest that the proposed method should be applied dimension-by-dimension rather than globally, for both reconstruction quality and computational efficiency. The numerical results show that the proposed method is effective when a local reconstruction is sought and that the solution is free of Gibbs oscillations.
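The starting point, Gibbs ringing from low-frequency-only Fourier data, can be reproduced with a few lines; the synthetic square phantom below is an illustrative assumption, not MRI data.

```python
# Illustration of the problem the method addresses: reconstructing an image
# from only the low-frequency block of its 2D Fourier data produces Gibbs
# ringing near edges. The phantom is a simple synthetic square, not MRI data.
import numpy as np

img = np.zeros((128, 128))
img[40:90, 40:90] = 1.0                         # sharp-edged phantom

F = np.fft.fftshift(np.fft.fft2(img))           # full k-space
mask = np.zeros_like(F)
c, r = 64, 16                                   # keep a 33x33 low-frequency block
mask[c - r:c + r + 1, c - r:c + r + 1] = 1.0
low_res = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# Overshoot/undershoot around the edge is the Gibbs ringing that the local
# Fourier-continuation reconstruction is designed to suppress.
print(round(low_res.max(), 3), round(low_res.min(), 3))
```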

]]>
<![CDATA[Kidney-inspired algorithm with reduced functionality treatment for classification and time series prediction]]> https://www.researchpad.co/article/5c390bfed5eed0c48491f4af

Optimizing an artificial neural network model through optimization algorithms is a common way to search for an optimum solution to a broad variety of real-world problems. One such optimization algorithm is the kidney-inspired algorithm (KA), which has recently been proposed in the literature. The algorithm mimics the four processes performed by the kidneys: filtration, reabsorption, secretion, and excretion. However, a human with reduced kidney function needs additional treatment to improve kidney performance. In the medical field, the glomerular filtration rate (GFR) test is used to check the health of kidneys; it estimates the amount of blood that passes through the glomeruli each minute. In this paper, we mimic this kidney function test and use the GFR result to select a suitable step to add to the basic KA process. This novel imitation is designed for both minimization and maximization problems. In the proposed method, a particular action is performed depending on whether the GFR test result is less than 15, falls between 15 and 60, or exceeds 60. These additional processes are applied as required, with the aim of improving exploration of the search space and increasing the likelihood of the KA finding the optimum solution. The proposed method is tested on test functions and its results are compared with those of the basic KA. Its performance on benchmark classification and time series prediction problems is also examined and compared with that of other methods available in the literature. In addition, the proposed method is applied to a real-world water quality prediction problem. The statistical analysis of all these applications showed that the proposed method improves the optimization outcome.

]]>
<![CDATA[On variational solutions for whole brain serial-section histology using a Sobolev prior in the computational anatomy random orbit model]]> https://www.researchpad.co/article/5c2d2ebcd5eed0c484d9b572

This paper presents a variational framework for dense diffeomorphic atlas-mapping onto high-throughput histology stacks at the 20 μm meso-scale. The observed sections are modelled as Gaussian random fields conditioned on a sequence of unknown section-by-section rigid motions and an unknown diffeomorphic transformation of a three-dimensional atlas. To regularize over the high dimensionality of our parameter space (a product space of the rigid-motion dimensions and the diffeomorphism dimensions), the histology stacks are modelled as arising from a first-order Sobolev-space smoothness prior. We show that the joint maximum a posteriori, penalized-likelihood estimator of our high-dimensional parameter space emerges as a joint optimization interleaving rigid-motion estimation for histology restacking and large deformation diffeomorphic metric mapping to atlas coordinates. We show that joint optimization in this parameter space solves the classical curvature non-identifiability of the histology stacking problem. The algorithms are demonstrated on a collection of whole-brain histological image stacks from the Mouse Brain Architecture Project.

]]>
<![CDATA[Accurate, robust and harmonized implementation of morpho-functional imaging in treatment planning for personalized radiotherapy]]> https://www.researchpad.co/article/5c3fa589d5eed0c484ca5858

In this work we present a methodology for using harmonized PET/CT imaging in a dose painting by number (DPBN) approach by means of a robust and accurate treatment planning system. Image processing and treatment planning were performed with a Matlab-based platform, called CARMEN, which includes a full Monte Carlo simulation. A linear programming formulation was developed for voxel-by-voxel robust optimization, and a specific direct aperture optimization was designed for an efficient adaptive radiotherapy implementation. The DPBN approach with our methodology was tested for its ability to reduce the uncertainties associated with both the absolute and the relative value of the information in the functional image. For the same H&N case, a single robust treatment was planned for dose prescription maps corresponding to standardized uptake value distributions from two different image reconstruction protocols: one to fulfill EARL accreditation for harmonization of [18F]FDG PET/CT images, and the other to use the highest available spatial resolution. A robust treatment was also planned to fulfill dose prescription maps corresponding to both approaches, dose painting by contours based on volumes and our voxel-by-voxel DPBN. Adaptive planning was also carried out to check the suitability of our proposal.

The different plans were robust enough to cover a range of scenarios for implementing harmonization strategies while using the highest available resolution. Robustness to the discretization level of the dose prescription, whether based on contours or on numbers, was also achieved. All plans showed excellent quality-index histograms and quality factors below 2%. An efficient solution for adaptive radiotherapy based directly on changes in the functional image was obtained. We showed that the voxel-by-voxel DPBN approach makes it possible to overcome typical drawbacks linked to PET/CT images, giving clinical specialists enough confidence for the routine implementation of functional imaging in personalized radiotherapy.
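For orientation, a commonly used linear DPBN prescription maps each voxel's standardized uptake value to a dose between prescribed minimum and maximum levels; this is a generic illustration, not necessarily the exact mapping used in the planning system above.

```latex
% Generic linear dose-painting-by-number prescription (hedged illustration;
% not necessarily the mapping used in CARMEN): each voxel's prescribed dose
% D_i is interpolated between D_min and D_max according to its uptake SUV_i.
\[
  D_i \;=\; D_{\min} \;+\;
  \frac{\mathrm{SUV}_i - \mathrm{SUV}_{\min}}
       {\mathrm{SUV}_{\max} - \mathrm{SUV}_{\min}}
  \,\bigl(D_{\max} - D_{\min}\bigr),
  \qquad \mathrm{SUV}_{\min} \le \mathrm{SUV}_i \le \mathrm{SUV}_{\max}.
\]
```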

]]>
<![CDATA[Deriving the priority weights from probabilistic linguistic preference relation with unknown probabilities]]> https://www.researchpad.co/article/5c181398d5eed0c4847754e9

Generally, the probabilistic linguistic term set (PLTS) provides more accurate descriptive properties than the hesitant fuzzy linguistic term set does. The probabilistic linguistic preference relation (PLPR), which is applied to complex decision-making problems, can be constructed from PLTSs. However, it is difficult for decision makers to provide the probabilities of occurrence in a PLPR. To deal with this problem, we propose a definition of expected consistency for PLPRs and establish a probability computing model to derive the probabilities of occurrence in a PLPR together with priority weights for the alternatives. A consistency-improving iterative algorithm is presented to examine whether the PLPR has acceptable consistency; for a PLPR with unacceptable consistency, the algorithm raises it to a satisfactory consistency level. Finally, a real-world employment-city selection problem is used to demonstrate the effectiveness of the proposed method of deriving priority weights from a PLPR.

]]>
<![CDATA[Bayesian adaptive dual control of deep brain stimulation in a computational model of Parkinson’s disease]]> https://www.researchpad.co/article/5c12cf9ed5eed0c484914a99

In this paper, we present a novel Bayesian adaptive dual controller (ADC) for autonomously programming deep brain stimulation devices. We evaluated the Bayesian ADC’s performance in the context of reducing beta power in a computational model of Parkinson’s disease, in which it was tasked with finding the set of stimulation parameters which optimally reduced beta power as fast as possible. Here, the Bayesian ADC has dual goals: (a) to minimize beta power by exploiting the best parameters found so far, and (b) to explore the space to find better parameters, thus allowing for better control in the future. The Bayesian ADC is composed of two parts: an inner parameterized feedback stimulator and an outer parameter adjustment loop. The inner loop operates on a short time scale, delivering stimulus based upon the phase and power of the beta oscillation. The outer loop operates on a long time scale, observing the effects of the stimulation parameters and using Bayesian optimization to intelligently select new parameters to minimize the beta power. We show that the Bayesian ADC can efficiently optimize stimulation parameters, and is superior to other optimization algorithms. The Bayesian ADC provides a robust and general framework for tuning stimulation parameters, can be adapted to use any feedback signal, and is applicable across diseases and stimulator designs.
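The dual-control structure can be sketched as a fast inner loop that evaluates the current stimulation parameters and a slow outer loop that proposes new ones; the simple explore-then-perturb rule and the placeholder beta-power function below stand in for the paper's Gaussian-process Bayesian optimizer and its Parkinson's network model.

```python
# Skeleton of the dual-control structure: a fast inner loop applying the
# current stimulation parameters and a slow outer loop proposing new ones.
# The explore-then-perturb rule and `run_inner_loop` are simplified stand-ins
# for the paper's Bayesian optimizer and its computational network model.
import numpy as np

rng = np.random.default_rng(0)

def run_inner_loop(params):
    """Placeholder: returns mean beta power observed under `params`
    (amplitude, phase offset). A real implementation would run the
    phase-locked stimulator against the Parkinson's network model."""
    amp, phase = params
    return (amp - 1.5) ** 2 + 0.3 * np.cos(phase) + 0.05 * rng.standard_normal()

history_x, history_y = [], []
for trial in range(30):                                  # slow outer loop
    if trial < 5:                                        # initial exploration
        candidate = np.array([rng.uniform(0, 3), rng.uniform(0, 2 * np.pi)])
    else:                                                # exploit best region + jitter
        best = history_x[int(np.argmin(history_y))]
        candidate = best + rng.normal(0, 0.2, size=2)
    beta_power = run_inner_loop(candidate)               # fast inner loop
    history_x.append(candidate)
    history_y.append(beta_power)

print(history_x[int(np.argmin(history_y))], min(history_y))
```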

]]>
<![CDATA[qTorch: The quantum tensor contraction handler]]> https://www.researchpad.co/article/5c181399d5eed0c48477553c

Classical simulation of quantum computation is necessary for studying the numerical behavior of quantum algorithms, as there does not yet exist a large viable quantum computer on which to perform numerical tests. Tensor network (TN) contraction is an algorithmic method that can efficiently simulate some quantum circuits, often greatly reducing the computational cost over methods that simulate the full Hilbert space. In this study we implement a tensor network contraction program for simulating quantum circuits using multi-core compute nodes. We show simulation results for the Max-Cut problem on 3- through 7-regular graphs using the quantum approximate optimization algorithm (QAOA), successfully simulating up to 100 qubits. We test two different methods for generating the ordering of tensor index contractions: one is based on the tree decomposition of the line graph, while the other generates the ordering using a straightforward stochastic scheme. Through studying instances of QAOA circuits, we show the expected result that as the treewidth of the quantum circuit’s line graph decreases, TN contraction becomes significantly more efficient than simulating the whole Hilbert space. The results in this work suggest that tensor contraction methods are superior only when simulating Max-Cut/QAOA on graphs with regularity of approximately five and below. Insight into this point of equal computational cost helps one determine which simulation method will be more efficient for a given quantum circuit. The stochastic contraction method outperforms the line-graph-based method only when the time to calculate a reasonable tree decomposition is prohibitively expensive. Finally, we release our software package, qTorch (Quantum TensOR Contraction Handler), intended for general quantum circuit simulation; for a nontrivial subset of these quantum circuits, 50 to 100 qubits can easily be simulated on a single compute node.
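The role of the contraction ordering can be illustrated at toy scale with numpy.einsum_path, which reports the cost of a chosen ordering for a small three-tensor network; qTorch's tree-decomposition and stochastic orderings play the same role at circuit scale.

```python
# Small illustration of why contraction ordering matters for tensor-network
# simulation: numpy.einsum_path reports the ordering and FLOP estimate for a
# tiny three-tensor network. This is a toy analogue of the orderings that
# qTorch generates via tree decomposition or its stochastic scheme.
import numpy as np

d = 2                                             # qubit-sized indices
A = np.random.rand(d, d, d)
B = np.random.rand(d, d, d)
C = np.random.rand(d, d, d)

# Contract the shared indices j, k, l of a tiny three-tensor network.
expr = "ijk,jkl,lmn->imn"
greedy_path = np.einsum_path(expr, A, B, C, optimize="greedy")
print(greedy_path[1])                             # report: ordering and FLOP estimate
result = np.einsum(expr, A, B, C, optimize=greedy_path[0])
print(result.shape)                               # (2, 2, 2)
```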

]]>
<![CDATA[Low rank and sparsity constrained method for identifying overlapping functional brain networks]]> https://www.researchpad.co/article/5c0841fad5eed0c484fcb573

Analysis of functional magnetic resonance imaging (fMRI) data has revealed that brain regions can be grouped into functional brain networks (fBNs), or communities. A community in fMRI analysis signifies a group of brain regions coupled functionally with one another. In neuroimaging, functional connectivity (FC) measures can be used to quantify such functionally connected regions for disease diagnosis, which motivates the development of novel FC estimation methods. In this paper, we propose a novel method of learning FC by constraining its rank and the sum of non-zero coefficients. The underlying idea is that fBNs are sparse and can be embedded in a relatively lower-dimensional space. In addition, we propose to extract overlapping networks: communities are often characterized as combinations of disjoint brain regions, although recent studies indicate that brain regions may participate in more than one community. Here, large-scale overlapping fBNs are identified from resting-state fMRI data by employing non-negative matrix factorization. Our findings support the existence of overlapping brain networks.
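A hedged sketch of the overlapping-network extraction idea: factor a non-negative functional-connectivity matrix with NMF and read graded, possibly overlapping memberships off the factor rows. The random connectivity matrix and the membership threshold below are illustrative assumptions, not the paper's constrained low-rank formulation.

```python
# Hedged sketch of extracting overlapping networks with non-negative matrix
# factorization: a non-negative FC matrix is factored as W @ H, and each
# region's row of W gives graded, possibly overlapping memberships in k
# networks. The random FC matrix and the 0.1 threshold are illustrative.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_regions, k = 90, 5
ts = rng.standard_normal((200, n_regions))             # stand-in for region time series
fc = np.corrcoef(ts, rowvar=False)
fc = np.clip(fc, 0, None)                              # keep non-negative connectivity

model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(fc)                            # region-by-network memberships
H = model.components_

overlapping = np.sum((W > 0.1).sum(axis=1) > 1)        # regions in >1 network
print(W.shape, overlapping)
```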

]]>
<![CDATA[Bi-objective inventory allocation planning problem with supplier selection and carbon trading under uncertainty]]> https://www.researchpad.co/article/5c23ff93d5eed0c4840929b1

Concern is growing that business enterprises focus primarily on their economic activities while disregarding the adverse environmental and social effects of these activities. To contribute to the literature on this matter, this study investigates a novel bi-objective inventory allocation planning problem with supplier selection and carbon trading across multiple periods under uncertainty. The concepts of a carbon credit price and a carbon cap are used to demonstrate the effect of carbon emission costs on inventory allocation network costs. The demands of manufacturers, the transport price, and the defect rate of materials to be rejected are treated as random variables. We combine the normalized normal constraint method, a differential evolution algorithm, and uncertainty simulation to handle the resulting complex model. A representative case shows the effectiveness and practicability of this model and the proposed method. The Pareto frontier is generated by solving the bi-objective model. We extend the numerical examples to large-scale problems and compare the solution method's results with exact solutions. The environmental objective across the inventory allocation network varies with changes in the carbon cap and the carbon credit price.

]]>
<![CDATA[An Enhanced Region Proposal Network for object detection using deep learning method]]> https://www.researchpad.co/article/5c0e9891d5eed0c484eaadaf

Faster Region-based Convolutional Network (Faster R-CNN) is a state-of-the-art object detection method. However, its detection performance is limited by the Region Proposal Network (RPN). Inspired by the RPN of Faster R-CNN, we propose a novel proposal generation method called the Enhanced Region Proposal Network (ERPN). ERPN introduces four improvements. First, our proposed deconvolutional feature pyramid network (DFPN) improves the quality of region proposals. Second, novel anchor boxes are designed with interspersed scales and adaptive aspect ratios, which increases object localization capability. Third, a particle swarm optimization (PSO) based support vector machine (SVM), termed PSO-SVM, is developed to distinguish positive and negative anchor boxes. Fourth, the classification part of the multi-task loss function in the RPN is improved, strengthening the effect of the classification loss. In this study, our proposed ERPN is compared with five object detection methods on both the PASCAL VOC and COCO data sets. With the VGG-16 model, ERPN obtains 78.6% mAP on the VOC 2007 data set, 74.4% mAP on the VOC 2012 data set, and 31.7% mAP on the COCO data set, the best performance among the compared object detection methods. Furthermore, the detection speed of ERPN is 5.8 fps, and ERPN performs well on small-object detection.

]]>
<![CDATA[Modeling functional specialization of a cell colony under different fecundity and viability rates and resource constraint]]> https://www.researchpad.co/article/5b87837840307c3c45097673

The emergence of functional specialization is a core problem in biology. In this work we focus on the emergence of reproductive (germ) and vegetative viability-enhancing (soma) cell functions (or germ-soma specialization). We consider a group of cells and assume that they contribute to two different evolutionary tasks, fecundity and viability. The potential of cells to contribute to fitness components is traded off. As embodied in current models, the curvature of the trade-off between fecundity and viability is concave in small-sized organisms and convex in large-sized multicellular organisms. We present a general mathematical model that explores how the division of labor in a cell colony depends on the trade-off curvatures, a resource constraint and different fecundity and viability rates. Moreover, we consider the case of different trade-off functions for different cells. We describe the set of all possible solutions of the formulated mathematical programming problem and show some interesting examples of optimal specialization strategies found for our objective fitness function. Our results suggest that the transition to specialized organisms can be achieved in several ways. The evolution of Volvocalean green algae is considered to illustrate the application of our model. The proposed model can be generalized to address a number of important biological issues, including the evolution of specialized enzymes and the emergence of complex organs.
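A standard germ-soma fitness formulation of the kind such models build on is shown below as a hedged illustration (not necessarily the paper's exact objective): colony fitness multiplies total fecundity and viability, each cell's contributions are linked by a trade-off function whose curvature drives specialization, and contributions are limited by a resource constraint.

```latex
% Standard germ-soma fitness formulation of the kind the model builds on
% (hedged illustration; notation assumed, not taken from the paper): colony
% fitness W multiplies total fecundity B and viability V, each cell's
% contributions b_i and v_i are linked by a trade-off function phi_i whose
% curvature (concave vs. convex) drives specialization, subject to a shared
% resource constraint R.
\[
  W \;=\; B \cdot V \;=\; \Bigl(\sum_{i=1}^{N} b_i\Bigr)\Bigl(\sum_{i=1}^{N} v_i\Bigr),
  \qquad v_i = \phi_i(b_i), \qquad \sum_{i=1}^{N} c(b_i, v_i) \le R .
\]
```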

]]>
<![CDATA[On the design of power gear trains: Insight regarding number of stages and their respective ratios]]> https://www.researchpad.co/article/5c032df9d5eed0c4844f8a6a

This paper presents a formulation for selecting the stage ratios and number of stages in a multistage transmission with a given desired total transmission ratio in a manner that maximizes efficiency, maximizes acceleration, or minimizes the mass of the transmission. The formulation is used to highlight several implications for gear train design, including the fact that minimizing rotational inertia and mass are competing objectives with respect to optimal selection of stage ratios, and that both rotational inertia and mass can often be minimized by increasing the total number of stages beyond a minimum realizable number. Additionally, a multistage transmission will generally provide maximum acceleration when the stage ratios increase monotonically from the motor to the load. The transmission will have minimum mass when the stage ratios decrease monotonically. The transmission will also provide maximum efficiency when the corresponding stages employ constant stage ratios. This paper aims to use this optimization formulation to elucidate tradeoffs between various common objectives in gear train design (efficiency, acceleration, and mass).
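As a hedged sketch of the quantities behind these tradeoffs (notation assumed here, not taken from the paper), the total ratio is the product of the stage ratios, downstream inertia reflects back to the motor divided by the square of the upstream ratios, and mass simply adds, which is why rotational inertia and mass favor opposite orderings of the stage ratios:

```latex
% Hedged sketch with assumed notation (not the paper's): K stages with ratios
% r_1..r_K between motor and load; J_m, J_k, J_load are motor, stage, and load
% inertias; m_k(r_k) is the mass of stage k. Downstream inertia is attenuated
% by the square of the upstream ratios, whereas mass adds directly.
\[
  R \;=\; \prod_{k=1}^{K} r_k, \qquad
  J_{\mathrm{ref}} \;=\; J_m \;+\; \sum_{k=1}^{K} \frac{J_k}{\prod_{j=1}^{k-1} r_j^{\,2}}
  \;+\; \frac{J_{\mathrm{load}}}{R^{2}}, \qquad
  m_{\mathrm{total}} \;=\; m_m + \sum_{k=1}^{K} m_k(r_k).
\]
```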

]]>