ResearchPad - image-processing https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks
<![CDATA[Automatic three-dimensional reconstruction of fascicles in peripheral nerves from histological images]]> https://www.researchpad.co/article/elastic_article_14591

Computational studies can be used to support the development of peripheral nerve interfaces, but currently use simplified models of nerve anatomy, which may impact the applicability of simulation results. To better quantify and model neural anatomy across the population, we have developed an algorithm to automatically reconstruct accurate peripheral nerve models from histological cross-sections. We acquired serial median nerve cross-sections from human cadaveric samples, staining one set with hematoxylin and eosin (H&E) and the other using immunohistochemistry (IHC) with anti-neurofilament antibody. We developed a four-step processing pipeline involving registration, fascicle detection, segmentation, and reconstruction. We compared the output of each step to manual ground truths, and additionally compared the final models to commonly used extrusions, via intersection-over-union (IOU). Fascicle detection and segmentation required the use of a neural network and active contours in H&E-stained images, but only simple image processing methods for IHC-stained images. Reconstruction achieved an IOU of 0.42±0.07 for H&E and 0.37±0.16 for IHC images, with errors partially attributable to global misalignment at the registration step, rather than poor reconstruction. This work provides a quantitative baseline for fully automatic construction of peripheral nerve models. Our models provided fascicular shape and branching information that would be lost via extrusion.
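As a rough illustration of the overlap metric used above, the following is a minimal sketch (not the authors' code) of computing intersection-over-union between two binary volumes, assuming they are already co-registered NumPy arrays; the mask names are hypothetical.

```python
import numpy as np

def intersection_over_union(model_a, model_b):
    """IOU between two boolean volumes of the same shape."""
    a = np.asarray(model_a, dtype=bool)
    b = np.asarray(model_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# Hypothetical usage with two voxelized fascicle models:
# iou = intersection_over_union(reconstructed_mask, extruded_mask)
```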

]]>
<![CDATA[Prediction of transient tumor enlargement using MRI tumor texture after radiosurgery on vestibular schwannoma]]> https://www.researchpad.co/article/elastic_article_6968

Vestibular schwannomas (VSs) are uncommon benign brain tumors, generally treated using Gamma Knife radiosurgery (GKRS). However, due to the possible adverse effect of transient tumor enlargement (TTE), large VS tumors are often surgically removed instead of treated radiosurgically. Since microsurgery is highly invasive and carries a significantly increased risk of complications, GKRS is generally preferred. Prediction of TTE for large VS tumors could therefore improve overall VS treatment and enable physicians to select the optimal treatment strategy on an individual basis. Currently, there are no clinical factors known to be predictive of TTE. In this research, we aim to predict TTE following GKRS using texture features extracted from MRI scans.

Methods

We analyzed clinical data of patients with VSs treated at our Gamma Knife center. The data were collected prospectively and included patient‐ and treatment‐related characteristics and MRI scans obtained on the day of treatment and at follow‐up visits 6, 12, 24 and 36 months after treatment. The correlations of the patient‐ and treatment‐related characteristics with TTE were investigated using statistical tests. From the treatment scans, we extracted the following MRI image features: first‐order statistics, Minkowski functionals (MFs), and three‐dimensional gray‐level co‐occurrence matrices (GLCMs). These features were applied in a machine learning environment for classification of TTE, using support vector machines.

Results

In a clinical data set containing 61 patients presenting obvious non‐TTE and 38 patients presenting obvious TTE, we determined that patient‐ and treatment‐related characteristics do not show any correlation with TTE. Furthermore, first‐order statistical MRI features and MFs did not show significant prognostic value in support vector machine classification. However, using a set of 4 GLCM features, we achieved a sensitivity of 0.82 and a specificity of 0.69, demonstrating their prognostic value for TTE. Moreover, for tumors larger than 6 cm3 we obtained a sensitivity of 0.77 and a specificity of 0.89.

Conclusions

The results of this research clearly show that MRI tumor texture provides information that can be employed for predicting TTE. This can form a basis for individual VS treatment selection, further improving overall treatment results. Particularly in patients with large VSs, where the phenomenon of TTE is most relevant and our predictive model performs best, these findings can be implemented in a clinical workflow so that the optimal treatment strategy can be determined for each patient.

]]>
<![CDATA[Land use change affects water erosion in the Nepal Himalayas]]> https://www.researchpad.co/article/N98261953-1324-4322-aaeb-9737bf3bbcea

Soil erosion is a global environmental threat, and Land Use Land Cover Changes (LUCC) have significant impacts on it. Nepal, being a mountainous country, has significant soil erosion issues. To examine the effects of LUCC on water erosion, we studied the LUCC in the Sarada, Rapti and Thuli Bheri river basins of Nepal during the 1995–2015 period using remote sensing. We calculated the average annual soil loss using the Revised Universal Soil Loss Equation (RUSLE) and a Geographic Information System. Our results suggest that an increase in agricultural lands at the expense of bare lands and forests escalated soil erosion through the years, with rates of 5.35, 5.47 and 6.03 t/ha/year in 1995, 2007 and 2015, respectively. Of the different land uses, agricultural land experienced the most erosion, whereas forests experienced the least. Agricultural lands, particularly those on steeper slopes, were severely degraded and need urgent soil and water conservation measures. Our study confirms that long-term LUCC has considerable impacts on soil loss, and these findings can be applied to similar river basins in other parts of the country.
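For reference, the standard RUSLE formulation referred to above expresses average annual soil loss as a product of empirical factors (the basin-specific factor values are not given in this abstract):

```latex
A = R \cdot K \cdot LS \cdot C \cdot P
```

where A is the average annual soil loss (t/ha/year), R the rainfall erosivity factor, K the soil erodibility factor, LS the slope length and steepness factor, C the cover management factor, and P the support practice factor.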

]]>
<![CDATA[Feasibility of imaging 90Y microspheres at diagnostic activity levels for hepatic radioembolization treatment planning]]> https://www.researchpad.co/article/Nb653b986-dda1-46ff-9ae3-30a2248e2e85

Purpose

Prior to 90Y hepatic radioembolization, a dosage of 99mTc‐macroaggregated albumin (99mTc‐MAA) is administered to simulate the distribution of the 90Y‐loaded microspheres. This pretreatment procedure enables lung shunt estimation, detection of potential extrahepatic depositions, and estimation of the intrahepatic dose distribution. However, the predictive accuracy of the MAA particle distribution is often limited. Ideally, 90Y microspheres would also be used for the pretreatment procedure. Based on previous research, the pretreatment activity should be limited to the estimated safety threshold of 100 MBq, making imaging challenging. The purpose of this study was to evaluate the quality of intra‐ and extrahepatic imaging of 90Y‐based pretreatment positron emission tomography/computed tomography (PET/CT) and quantitative single photon emission computed tomography (SPECT)/CT scans, by means of phantom experiments and a patient study.

Methods

An anthropomorphic phantom with three extrahepatic depositions was filled with 90Y chloride to simulate a lung shunt fraction (LSF) of 5.3% and a tumor to nontumor ratio (T/N) of 7.9. PET/CT (Siemens Biograph mCT) and Bremsstrahlung SPECT/CT (Siemens Symbia T16) images were acquired at activities ranging from 1999 MBq down to 24 MBq, representing post‐ and pretreatment activities. PET/CT images were reconstructed with the clinical protocol and SPECT/CT images were reconstructed with a quantitative Monte Carlo‐based reconstruction protocol. Estimated LSF, T/N, contrast to noise ratio of all extrahepatic depositions, and liver parenchymal and tumor dose were compared with the phantom ground truth. A clinically reconstructed SPECT/CT of 150 MBq 99mTc represented the current clinical standard. In addition, a 90Y pretreatment scan was simulated for a patient by acquiring posttreatment PET/CT and SPECT/CT data with shortened acquisition times.
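As an illustration of the quantities estimated from these reconstructions, here is a minimal sketch of the lung shunt fraction and tumor-to-nontumor ratio, assuming the lung, liver, tumor, and nontumor volumes of interest have already been segmented on the quantitative images; the array and volume names are hypothetical.

```python
import numpy as np

def lung_shunt_fraction(lung_activity, liver_activity):
    """LSF (%) = lung counts / (lung counts + liver counts) * 100."""
    lung = float(np.sum(lung_activity))
    liver = float(np.sum(liver_activity))
    return 100.0 * lung / (lung + liver)

def tumor_to_nontumor_ratio(tumor_activity, tumor_volume_ml,
                            nontumor_activity, nontumor_volume_ml):
    """T/N ratio of mean activity concentrations (counts per mL)."""
    tumor_conc = np.sum(tumor_activity) / tumor_volume_ml
    nontumor_conc = np.sum(nontumor_activity) / nontumor_volume_ml
    return tumor_conc / nontumor_conc
```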

Results

At an activity of 100 MBq 90Y, PET/CT overestimated LSF [+10 percentage points (pp)], underestimated liver parenchymal dose (−3 Gy/GBq), and could not detect the extrahepatic depositions. SPECT/CT more accurately estimated LSF (−0.7 pp) and parenchymal dose (−0.3 Gy/GBq), and could detect all three extrahepatic depositions. 99mTc SPECT/CT showed similar accuracy to 90Y SPECT/CT (LSF: +0.2 pp, parenchymal dose: +0.4 Gy/GBq, all extrahepatic depositions visible), although the noise level in the liver compartment was considerably lower for 99mTc SPECT/CT than for 90Y SPECT/CT. The patient’s SPECT/CT simulating a pretreatment 90Y procedure accurately represented the posttreatment 90Y microsphere distribution.

Conclusions

Quantitative SPECT/CT of 100 MBq 90Y could accurately estimate LSF, T/N, parenchymal and tumor dose, and visualize extrahepatic depositions.

]]>
<![CDATA[Automated 3D geometry segmentation of the healthy and diseased carotid artery in free‐hand, probe tracked ultrasound images]]> https://www.researchpad.co/article/N7d305a02-c096-47a8-8888-44ec19fbc52e

Purpose

Rupture of an atherosclerotic plaque in the carotid artery is a major cause of stroke. Biomechanical analysis of plaques is under development, aiming to aid the clinician in the assessment of plaque vulnerability. Patient‐specific three‐dimensional (3D) geometry assessment of the carotid artery, including the bifurcation, is required as input for these biomechanical models. This requires a high‐resolution, 3D, noninvasive imaging modality such as ultrasound (US). In this study, a high‐resolution two‐dimensional (2D) linear array in combination with a magnetic probe tracking device and an automatic segmentation method was used to assess the geometry of the carotid artery. The advantages of using this system over a 3D ultrasound probe are its higher spatial and temporal resolution and its larger field of view.

Methods

A slow sweep (v = ±5 mm/s) was made over the subject’s neck so that the full bifurcated geometry of the carotid artery was captured. An automated segmentation pipeline was developed. First, the Star‐Kalman method was used to approximate the center and size of the vessels for every frame. Images were filtered with a Gaussian high‐pass filter before conversion into 2D monogenic signals, and multiscale asymmetry features were extracted from these data, enhancing low lateral wall‐lumen contrast. These images, in combination with the initial ellipse contours, were used in an active deformable contour model to segment the vessel lumen. To segment the lumen–plaque boundary, Otsu’s automatic thresholding method was used. Distension of the wall due to changes in blood pressure was removed using a filtering approach. Finally, the contours were converted into a 3D hexahedral mesh for a patient‐specific solid mechanics model of the complete arterial wall.
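The final thresholding step can be illustrated with a short sketch using scikit-image's implementation of Otsu's method; this is a generic example rather than the authors' pipeline, and the input array name is hypothetical.

```python
from skimage.filters import threshold_otsu

def segment_lumen_plaque(roi_intensities):
    """Binarize a region of interest with Otsu's automatic threshold.

    roi_intensities: 2D array of B-mode intensities around the vessel wall.
    Returns a boolean mask separating the darker lumen from the brighter
    plaque and wall tissue.
    """
    t = threshold_otsu(roi_intensities)
    return roi_intensities > t
```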

Results

The method was tested on 19 healthy volunteers and on 3 patients. The results were compared to manual segmentations performed by three experienced observers. Results showed an average Hausdorff distance of 0.86 mm and an average similarity index of 0.91 for the common carotid artery (CCA) and 0.88 for the internal and external carotid arteries. The overall success rate of the algorithm was 89%; in 4 out of 38 datasets the ICA and ECA were not sufficiently visible in the US images. Accurate 3D hexahedral meshes were successfully generated from the segmented images.

Conclusions

With this method, a subject‐specific biomechanical model can be constructed directly from a hand‐held 2D US measurement, within 10 min and with minimal user input. The performance of the proposed segmentation algorithm is comparable to or better than that of algorithms previously described in the literature. Moreover, the algorithm is able to segment the CCA, ICA, and ECA, including the carotid bifurcation, in transverse B‐mode images of both healthy and diseased arteries.

]]>
<![CDATA[Automatic coronary artery plaque thickness comparison between baseline and follow‐up CCTA images]]> https://www.researchpad.co/article/Nb2aab993-bec8-446c-97b1-7978c3c89a4a

Purpose

Currently, coronary plaque changes are compared manually between baseline and follow‐up coronary computed tomography angiography (CCTA) images to investigate long‐term coronary plaque development. We propose an automatic method to measure plaque thickness change over time.

Methods

We model the lumen and vessel wall of both the baseline coronary artery tree (CAT‐BL) and the follow‐up coronary artery tree (CAT‐FU) as smooth three‐dimensional (3D) surfaces using a subdivision fitting scheme with the same coarse meshes, which establishes the correspondence between the surface points. Specifically, a rigid point set registration is used to transform the coarse mesh from the CAT‐FU to the CAT‐BL. The plaque thickness and the thickness difference are calculated as the distances between corresponding surface points. To evaluate the registration accuracy, the average distance between manually defined markers on clinical scans was calculated. Artificial CAT‐BL and CAT‐FU pairs were created to simulate plaque decrease and increase over time.
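The thickness comparison itself reduces to distances between corresponding surface points. A minimal NumPy sketch (not the authors' code) is shown below, assuming the lumen and outer-wall surfaces are given as matched point arrays, which is what the shared coarse mesh provides.

```python
import numpy as np

def plaque_thickness(lumen_points, wall_points):
    """Per-point thickness: distance between corresponding lumen and wall points.

    Both inputs are (N, 3) arrays with row-wise correspondence.
    """
    return np.linalg.norm(wall_points - lumen_points, axis=1)

def thickness_difference(thickness_baseline, thickness_followup):
    """Signed change in plaque thickness from baseline to follow-up."""
    return thickness_followup - thickness_baseline
```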

Results

For 116 pairs of markers from nine clinical scans, the average marker distance after registration was 0.95 ± 0.98 mm (two times the voxel size). On the 10 artificial pairs of datasets, the proposed method successfully located the plaque changes. The average calculated plaque thickness difference matched the corresponding created value (standard deviation ±0.1 mm).

Conclusions

The proposed method automatically calculates local coronary plaque thickness differences over time and can be used for 3D visualization of plaque differences. The analysis and reporting of coronary plaque progression and regression will benefit from an automatic plaque thickness comparison.

]]>
<![CDATA[Multiparametric deep learning tissue signatures for a radiological biomarker of breast cancer: Preliminary results]]> https://www.researchpad.co/article/Nc463c54b-61fa-4a4f-ad36-d8aaca5e28c7

Purpose

Deep learning is emerging in radiology due to the increased computational capabilities available to reading rooms. These computational developments can mimic the radiologist and may allow for more accurate characterization of normal tissue and pathological lesions, assisting radiologists in defining different diseases. We introduce a novel tissue signature model based on tissue characteristics of breast tissue from multiparametric magnetic resonance imaging (mpMRI). The breast tissue signatures are used as inputs to a stacked sparse autoencoder (SSAE) multiparametric deep learning (MPDL) network for segmentation of breast mpMRI.

Methods

We constructed the MPDL network from SSAEs with 5 layers and 10 nodes at each layer. A total cohort of 195 breast cancer subjects was used for training and testing of the MPDL network. The cohort consisted of a training dataset of 145 subjects and an independent validation set of 50 subjects. After segmentation, we used a combined SAE‐support vector machine (SAE‐SVM) learning method for classification. Dice similarity (DS) metrics were calculated between the MPDL‐segmented lesions and dynamic contrast enhanced (DCE) MRI‐defined lesions. Sensitivity, specificity, and area under the curve (AUC) metrics were used to classify benign from malignant lesions.
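The evaluation metrics mentioned here are standard. A brief sketch (not the MPDL code) of the Dice similarity coefficient and AUC computation is shown below, assuming boolean segmentation masks and binary lesion labels; the variable names are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def dice_similarity(mask_a, mask_b):
    """Dice coefficient between two boolean masks: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Hypothetical classification evaluation on the validation set:
# auc = roc_auc_score(true_labels, predicted_malignancy_scores)
```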

Results

The MPDL segmentation resulted in a high DS of 0.87 ± 0.05 for malignant lesions and 0.84 ± 0.07 for benign lesions. The MPDL had excellent sensitivity and specificity of 86% and 86% with positive predictive and negative predictive values of 92% and 73%, respectively, and an AUC of 0.90.

Conclusions

Using a new tissue signature model as input to the MPDL algorithm, we successfully validated MPDL in a large cohort of subjects and achieved results similar to those of radiologists.

]]>
<![CDATA[Technical Note: Ontology‐guided radiomics analysis workflow (O‐RAW)]]> https://www.researchpad.co/article/Nf8027750-2a5f-41fd-9376-7ae566b10522

Purpose

Radiomics is the process of automated tumor feature extraction from medical images. It has shown potential for quantifying the tumor phenotype and predicting treatment response. The three major challenges of radiomics research and clinical adoption are: (a) lack of standardized methodology for radiomics analyses, (b) lack of a universal lexicon to denote features that are semantically equivalent, and (c) lists of feature values alone do not sufficiently capture the details of feature extraction that might nonetheless strongly affect feature values (e.g., image normalization or interpolation parameters). These barriers hamper multicenter validation studies applying subtly different imaging protocols, preprocessing steps and radiomics software. We propose an open‐source ontology‐guided radiomics analysis workflow (O‐RAW) to address the above challenges in the following manner: (a) distributing a free and open‐source software package for radiomics analysis, (b) deploying a standard lexicon to uniquely describe features in common usage, and (c) providing methods to publish radiomic features as a semantically interoperable data graph object complying with FAIR (findable, accessible, interoperable, reusable) data principles.

Methods

O‐RAW was developed in Python and has three major modules built on open‐source component libraries (PyRadiomics Extension and PyRadiomics). First, PyRadiomics Extension takes standard DICOM‐RT (radiotherapy) input objects (i.e., a DICOM series and an RTSTRUCT file) and parses them as arrays of voxel intensities and a binary mask corresponding to a volume of interest (VOI). Next, these arrays are passed into PyRadiomics, which performs the feature extraction procedure and returns a Python dictionary object. Lastly, PyRadiomics Extension parses this dictionary as a W3C‐compliant Semantic Web “triple store” (i.e., a list of subject‐predicate‐object statements) with relevant semantic meta‐labels drawn from the radiation oncology ontology and the radiomics ontology. The output can be published on a SPARQL endpoint, where it can be remotely examined via SPARQL queries, or exported to a comma‐separated file for further analysis.
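To give a flavor of the two ends of this pipeline, the sketch below extracts features with PyRadiomics and writes a few subject-predicate-object triples with rdflib. It is a simplified stand-in for O-RAW rather than its actual code; the file paths, namespace URI, and triple layout are hypothetical, and the real workflow draws its predicates from the radiation oncology and radiomics ontologies.

```python
from radiomics import featureextractor
from rdflib import Graph, Literal, Namespace, URIRef

# Extract features from an image/mask pair (paths are hypothetical).
extractor = featureextractor.RadiomicsFeatureExtractor()
features = extractor.execute("patient001_ct.nrrd", "patient001_gtv_mask.nrrd")

# Publish the numeric features as RDF triples (namespace is hypothetical).
RO = Namespace("http://www.example.org/radiomics-ontology#")
graph = Graph()
subject = URIRef("http://www.example.org/patient001/gtv")
for name, value in features.items():
    try:
        graph.add((subject, RO[name], Literal(float(value))))
    except (TypeError, ValueError):
        continue  # skip non-numeric diagnostic entries

graph.serialize("patient001_features.ttl", format="turtle")
```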

Results

We showed that O‐RAW executed efficiently on four datasets with different modalities: RIDER (CT), MMD (CT), CROSS (PET) and THUNDER (MR). The test was performed on an HP laptop running the Windows 7 operating system with 8 GB of RAM. We measured the execution time, which included matching of DICOM images with the associated RTSTRUCT, binary mask conversion of a single VOI, batch processing of feature extraction (105 basic features in PyRadiomics), and conversion to a Resource Description Framework (RDF) object. The results were 407.3 s (RIDER), 123.5 s (MMD), 513.2 s (CROSS) and 128.9 s (THUNDER) for a single VOI. In addition, we demonstrated a use case, taking images from a public repository and publishing the radiomics results as FAIR data on http://www.radiomics.org. Finally, we provided a practical instance showing how a user can query radiomic features and track the calculation details based on the RDF graph object created by O‐RAW via a simple SPARQL query.

Conclusions

We implemented O‐RAW for FAIR radiomics analysis, and successfully published radiomic features from DICOM‐RT objects as Semantic Web triples. Its practicability and flexibility can greatly accelerate radiomics research and ease its transfer to clinical practice.

]]>
<![CDATA[Advances in geometric techniques for analyzing blebbing in chemotaxing Dictyostelium cells]]> https://www.researchpad.co/article/5c6f1522d5eed0c48467ae3b

We present a technical platform that allows us to monitor and measure cortex and membrane dynamics during bleb-based chemotaxis. Using D. discoideum cells expressing LifeAct-GFP and crawling under agarose containing RITC-dextran, we were able to simultaneously visualize the actin cortex and the cell membrane throughout bleb formation. Using these images, we then applied edge detection to generate points on the cell boundary with (x, y) coordinates. We fitted these points to a curve with known x and y coordinate functions, yielding a parameterization of the cell outline. With this parameterization, we demonstrate how to compute geometric features such as cell area, bleb area and edge curvature. This allows us to collect vital data for the analysis of blebbing.
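Given such a parameterization, the geometric quantities mentioned above follow from standard formulas. A short sketch (not the authors' code) using the shoelace formula for enclosed area and the planar curvature formula is shown below, assuming the outline is sampled as ordered arrays of x and y coordinates along the parameterized curve.

```python
import numpy as np

def enclosed_area(x, y):
    """Area of a closed outline via the shoelace formula (points in order)."""
    return 0.5 * np.abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def edge_curvature(x, y):
    """Signed curvature kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2),
    with derivatives approximated by finite differences along the samples."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / np.power(dx * dx + dy * dy, 1.5)
```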

]]>
<![CDATA[Sensitivity and specificity of computer vision classification of eyelid photographs for programmatic trachoma assessment]]> https://www.researchpad.co/article/5c6b2655d5eed0c48428986e

Background/aims

Trachoma programs base treatment decisions on the community prevalence of the clinical signs of trachoma, assessed by direct examination of the conjunctiva. Automated assessment could be more standardized and more cost-effective. We tested the hypothesis that an automated algorithm could classify eyelid photographs better than chance.

Methods

A total of 1,656 field-collected conjunctival images were obtained from clinical trial participants in Niger and Ethiopia. Images were scored for trachomatous inflammation—follicular (TF) and trachomatous inflammation—intense (TI) according to the simplified World Health Organization grading system by expert raters. We developed an automated procedure for image enhancement followed by application of a convolutional neural net classifier for TF and separately for TI. One hundred images were selected for testing TF and TI, and these images were not used for training.

Results

The agreement score for TF and TI tasks for the automated algorithm relative to expert graders was κ = 0.44 (95% CI: 0.26 to 0.62, P < 0.001) and κ = 0.69 (95% CI: 0.55 to 0.84, P < 0.001), respectively.
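The agreement statistic reported here is Cohen's kappa; a one-line computation with scikit-learn is sketched below as a generic example, assuming per-image expert and algorithm grades are available as arrays (the call at the end uses placeholder grades, not study data).

```python
from sklearn.metrics import cohen_kappa_score

def grading_agreement(expert_grades, algorithm_grades):
    """Cohen's kappa between expert and automated 0/1 grades."""
    return cohen_kappa_score(expert_grades, algorithm_grades)

# Illustrative call with placeholder grade lists (not study data):
print(grading_agreement([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]))
```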

Discussion

For assessing the clinical signs of trachoma, a convolutional neural net performed well above chance when tested against expert consensus. Further improvements in specificity may render this method suitable for field use.

]]>
<![CDATA[Quantity and quality of image artifacts in optical coherence tomography angiography]]> https://www.researchpad.co/article/5c6448bed5eed0c484c2ed28

Objective

To analyze quality and frequency of OCTA artifacts and to evaluate their impact on the interpretability of OCTA images.

Design

75 patients with diabetic retinopathy (DR), retinal artery occlusion (RAO), retinal vein occlusion (RVO), or neovascular age-related macular degeneration (nAMD) and healthy controls were enrolled in this cross-sectional study in the outpatient department of a tertiary eye care center.

Methods

All participants underwent an OCTA examination (spectral domain OCT Cirrus 5000 equipped with the AngioPlex module). OCTA scans were analyzed independently by two experienced ophthalmologists. Frequency of various artifacts for the entire OCTA scan and for different segmentation layers and the grading of OCTA interpretability were investigated.

Results

Seventy-five eyes of 38 women and 37 men aged between 24 and 94 years were included in the analysis. Six eyes had no retinal disease, 19 eyes had nAMD, 16 had DR, 19 eyes had RVO, and 15 eyes showed RAO. Macular edema (ME) was present in 40 of the diseased eyes. Projection artifacts occurred in all eyes in every structure below the superficial retinal vessel layer; segmentation and motion artifacts were found in 55% (41/75) and 49% (37/75) of eyes, respectively. Other artifacts occurred less frequently. Segmentation artifacts were significantly more frequent in diseased than in healthy eyes (p<0.01). Qualitative assessment of OCTA images was graded as excellent in 65% and sufficient in 25% of cases, adding up to 91% of images deemed acceptable for examination. Presence of ME was associated with significantly poorer interpretability (p<0.01).

Conclusion and Relevance

Various artifacts appear at different frequencies in OCTA images. Nevertheless, a qualitative assessment of the OCTA images is almost always possible. Good knowledge of possible artifacts and critical analysis of the complete OCTA dataset are essential for correct clinical interpretation and determining a precise clinical diagnosis.

]]>
<![CDATA[Automatic microarray image segmentation with clustering-based algorithms]]> https://www.researchpad.co/article/5c50c44bd5eed0c4845e8467

Image segmentation, as a key step of microarray image processing, is crucial for obtaining the spot expressions simultaneously. However, state-of-the-art clustering-based segmentation algorithms are sensitive to noise. To solve this problem and improve segmentation accuracy, in this article several improvements are introduced into the fast and simple clustering methods (K-means and Fuzzy C-means). First, a contrast enhancement algorithm is applied during image preprocessing to improve the gridding precision. Second, data-driven means are proposed for cluster center initialization, instead of the usual random setting. Third, multiple features, including intensity, spatial, and shape features, are used in feature selection to replace the sole pixel intensity feature of traditional clustering-based methods and to avoid mistaking noise for spot pixels. Moreover, principal component analysis is adopted for feature extraction. Finally, an adaptive adjustment algorithm based on data mining and learning is proposed to further handle missing spots and low-contrast spots. Experiments on real and simulated data sets indicate that the proposed improvements enable our method to obtain higher segmentation precision than the traditional K-means and Fuzzy C-means clustering methods.
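A generic sketch of the clustering stage is shown below; it is not the proposed algorithm (which adds contrast enhancement, data-driven initialization, and adaptive adjustment), but it illustrates clustering multi-feature pixel vectors after PCA, with hypothetical input names.

```python
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def segment_spot(pixel_features, n_components=3):
    """Cluster pixels of one microarray spot window into spot vs background.

    pixel_features: (n_pixels, n_features) array of intensity, spatial,
    and shape features per pixel (assumes n_features >= n_components).
    Returns a cluster label (0 or 1) per pixel.
    """
    reduced = PCA(n_components=n_components).fit_transform(pixel_features)
    return KMeans(n_clusters=2, n_init=10).fit_predict(reduced)
```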

]]>
<![CDATA[High-pitch, 120 kVp/30 mAs, low-dose dual-source chest CT with iterative reconstruction: Prospective evaluation of radiation dose reduction and image quality compared with those of standard-pitch low-dose chest CT in healthy adult volunteers]]> https://www.researchpad.co/article/5c5369d9d5eed0c484a46906

Purpose

The objective of this study was to evaluate the effectiveness of iterative reconstruction of high-pitch dual-source chest CT (IR-HP-CT) scanned with low radiation exposure, compared with low-dose chest CT (LDCT).

Materials and methods

This study was approved by the institutional review board. Thirty healthy adult volunteers (mean age 44 years) were enrolled in this study. All volunteers underwent both IR-HP-CT and LDCT. IR-HP-CT was scanned with 120 kVp tube voltage, 30 mAs tube current and pitch 3.2, and reconstructed with sinogram-affirmed iterative reconstruction. LDCT was scanned with 120 kVp tube voltage, 40 mAs tube current and pitch 0.8, and reconstructed with B50 filtered back projection. Image noise and the signal to noise ratio (SNR) of the infraspinatus muscle, subcutaneous fat and lung parenchyma were calculated. Cardiac motion artifacts, overall image quality and artifacts were rated by two blinded readers using a 4-point scale. The dose-length product (DLP, mGy∙cm) was obtained from each CT dosimetry table, and the scan length was calculated from the DLP results. The DLP parameter is a metric of radiation output, not of patient dose. The size-specific dose estimate (SSDE, mGy) was calculated using the sum of the anteroposterior and lateral dimensions, and the effective radiation dose (ED, mSv) was calculated using the CT dosimetry index.
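The dose quantities reported in the Results are derived from scanner output metrics. A rough sketch of those conversions is shown below; the size-dependent SSDE conversion factor is looked up from published tables (e.g., AAPM Report 204) based on the patient size metric, and k ≈ 0.014 mSv/(mGy·cm) is a commonly used adult chest coefficient. Both are illustrative assumptions rather than values taken from this study.

```python
def effective_dose(dlp_mgy_cm, k_msv_per_mgy_cm=0.014):
    """Effective dose (mSv) = DLP x region-specific conversion coefficient."""
    return dlp_mgy_cm * k_msv_per_mgy_cm

def ssde(ctdi_vol_mgy, size_conversion_factor):
    """SSDE (mGy) = CTDIvol x size-dependent conversion factor.

    The factor is looked up from published tables using the patient size
    metric, here the sum of the anteroposterior and lateral dimensions.
    """
    return ctdi_vol_mgy * size_conversion_factor
```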

Results

On average, SSDE was reduced by approximately 40% (2.1 ± 0.2 mGy vs. 3.5 ± 0.3 mGy) and ED by 34% (1.0 ± 0.1 mSv vs. 1.5 ± 0.1 mSv) with IR-HP-CT compared to LDCT (P < 0.0001). Image noise was reduced with IR-HP-CT (16.8 ± 2.8 vs. 19.8 ± 3.4, P = 0.0001). The SNR of the lung and aorta was better for IR-HP-CT than for LDCT (22.2 ± 5.9 vs. 33.0 ± 7.8 and 1.9 ± 0.4 vs. 1.1 ± 0.3, P < 0.0001). Cardiac pulsation artifacts were significantly reduced on IR-HP-CT (score 3.8 ± 0.4, 95% confidence interval 3.7‒4.0) compared with LDCT (1.6 ± 0.6, 95% confidence interval 1.3‒1.8) (P < 0.0001). The SNR of muscle and fat, beam hardening artifacts, and the overall subjective image quality of the mediastinum, lung and chest wall were comparable on both scans (P ≥ 0.05).

Conclusion

IR-HP-CT with a 120 kVp and 30 mAs tube setting, combined with iterative reconstruction, reduced cardiac motion artifacts and radiation exposure while providing image quality similar to that of LDCT.

]]>
<![CDATA[A fast threshold segmentation method for froth image base on the pixel distribution characteristic]]> https://www.researchpad.co/article/5c40f77bd5eed0c484386242

With the increase of camera resolution, the number of pixels contained in a froth image increases, which brings many challenges to image segmentation. Froth size and distribution are important indices in froth flotation, and the segmentation of froth images remains a problem in building flotation models. Otsu’s method is usually used to obtain a binary image for classification of froth images and can produce a satisfactory segmentation result. However, because the between-class variance must be calculated for every gray level, it takes a long time on froth images with a large number of pixels. To solve this problem, an improved method is proposed in this paper. Most froth images have the pixel distribution characteristic that the gray-level histogram curve has a sawtooth shape. The proposed method uses a polynomial to fit the gray-level histogram curve and takes the valleys of the histogram into consideration in Otsu’s method. Two performance comparison methods are introduced and used. Experimental comparison between Otsu’s method and the proposed method shows that the proposed method achieves satisfactory image segmentation with a low computing time.
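A minimal sketch of the idea, as a simplified stand-in for the paper's method rather than its exact algorithm, is to fit a polynomial to the gray-level histogram, keep only the gray levels near its valleys, and search Otsu's between-class variance over that reduced candidate set instead of all 256 levels.

```python
import numpy as np

def otsu_on_candidates(hist, candidates):
    """Return the candidate gray level maximizing Otsu's between-class variance."""
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(len(p)))     # cumulative mean
    mu_total = mu[-1]
    best_t, best_var = candidates[0], -1.0
    for t in candidates:
        w0, w1 = omega[t], 1.0 - omega[t]
        if w0 <= 0 or w1 <= 0:
            continue
        var_between = (mu_total * w0 - mu[t]) ** 2 / (w0 * w1)
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

def froth_threshold(image, poly_degree=8):
    """Fit a polynomial to the histogram and restrict Otsu's search to its valleys."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    levels = np.arange(256)
    fitted = np.polyval(np.polyfit(levels, hist, poly_degree), levels)
    valleys = [t for t in range(1, 255)
               if fitted[t] <= fitted[t - 1] and fitted[t] <= fitted[t + 1]]
    return otsu_on_candidates(hist, valleys or list(levels))
```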

]]>
<![CDATA[An algorithm of image mosaic based on binary tree and eliminating distortion error]]> https://www.researchpad.co/article/5c3d010fd5eed0c484037edc

Traditional image mosaics based on SIFT feature point extraction suffer, to some extent, from distortion errors: the larger the input image set, the greater the distortion of the stitched panorama. To achieve the goal of creating a high-quality panorama, a new and improved algorithm based on the A-KAZE feature is proposed in this paper. This includes changing the way the reference image is selected: a method is put forward for selecting the reference image based on a binary tree model, which takes the input image set as the leaf nodes of a binary tree and constructs a complete binary tree bottom-up. The root node image of the binary tree is the final panorama obtained by stitching. Compared with the traditional approach, the novel method improves the accuracy of feature point detection and enhances the stitching quality of the panorama. Additionally, the improved method includes an automatic image straightening model to rectify the panorama, which further reduces panoramic distortion. The experimental results show that the proposed method can not only enhance the efficiency of image stitching, but also reduce panoramic distortion errors and obtain a better-quality panoramic result.
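For illustration, a pairwise A-KAZE matching and homography estimation step with OpenCV might look like the sketch below; this is a generic example of the feature stage only, not the paper's binary-tree reference selection or straightening model, and it assumes two grayscale input images.

```python
import cv2
import numpy as np

def estimate_homography(img_ref, img_new):
    """Match A-KAZE features between two grayscale images and fit a homography."""
    akaze = cv2.AKAZE_create()
    kp1, des1 = akaze.detectAndCompute(img_ref, None)
    kp2, des2 = akaze.detectAndCompute(img_new, None)

    # A-KAZE descriptors are binary, so they are matched with Hamming distance.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # warps img_new into the reference image frame
```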

]]>
<![CDATA[cellSTORM—Cost-effective super-resolution on a cellphone using dSTORM]]> https://www.researchpad.co/article/5c3fa5b7d5eed0c484ca7b36

High optical resolution in microscopy usually goes along with costly hardware components, such as lenses, mechanical setups and cameras. Several studies have shown that Single-Molecule Localization Microscopy can be made affordable, relying on off-the-shelf optical components and industry-grade CMOS cameras. Recent technological advances have yielded consumer-grade camera devices with surprisingly good performance, and the camera sensors of smartphones have benefited from this development. Combined with their computing power, smartphones provide a fantastic opportunity for “imaging on a budget”. Here we show that a consumer cellphone is capable of optical super-resolution imaging by (direct) Stochastic Optical Reconstruction Microscopy (dSTORM), achieving an optical resolution better than 80 nm. In addition to the use of standard reconstruction algorithms, we used a trained image-to-image generative adversarial network (GAN) to reconstruct video sequences directly on the smartphone under conditions where traditional algorithms provide sub-optimal localization performance. We believe that “cellSTORM” paves the way to make super-resolution microscopy not only affordable but also widely available, owing to the ubiquity of cellphone cameras.

]]>
<![CDATA[Third harmonic generation imaging and analysis of the effect of low gravity on the lacuno-canalicular network of mouse bone]]> https://www.researchpad.co/article/5c3667cbd5eed0c4841a6455

The lacuno-canalicular network (LCN) hosting the osteocytes in bone tissue represents a biological signature of mechanotransduction activity in response to external biomechanical loading. Using third-harmonic generation (THG) microscopy with sub-micrometer resolution, we investigate the impact of microgravity on the 3D LCN structure in mice following space flight. A specific analytical procedure to extract the LCN characteristics from THG images is described for ex vivo studies of bone sections. The analysis, conducted in different anatomical quadrants of femoral cortical bone, did not reveal any statistically significant differences between the control, habitat control and flight groups, suggesting that LCN connectivity is not affected by one month of space flight. However, significant variations are systematically observed within each sample. We show that our current lack of understanding of the extent of LCN heterogeneity at the organ level hinders the interpretation of such investigations based on a limited number of samples, and we discuss the implications for future biomedical studies.

]]>
<![CDATA[A software tool for the quantification of metastatic colony growth dynamics and size distributions in vitro and in vivo]]> https://www.researchpad.co/article/5c2e7fd5d5eed0c48451b9a6

The majority of cancer-related deaths are due to metastasis; hence, improved methods to model metastasis biologically and computationally are required. Computational models rely on robust, machine-readable data. The current methods used to model metastasis in mice involve generating primary tumors by injecting human cells into immune-compromised mice, or examining genetically engineered mice that are predisposed to tumor development and eventually metastasize. The degree of metastasis can be measured using flow cytometry, bioluminescence imaging, quantitative PCR, and/or by manually counting individual lesions in metastatic tissue sections. The aforementioned methods are time-consuming and do not provide information on the size distribution or spatial localization of individual metastatic lesions. In this work, we describe and provide a MATLAB script for an image-processing based method designed to obtain quantitative data from tissue sections comprised of multiple subpopulations of disseminated cells localized at metastatic sites in vivo. We further show that this method can be easily adapted for high-throughput imaging of live or fixed cells in vitro under a multitude of conditions in order to assess clonal fitness and evolution. The inherent variation in mouse studies and the increasing complexity of experimental designs that incorporate fate-mapping of individual cells result in the need for large cohorts of mice to generate robust datasets. High-throughput imaging techniques such as the one we describe will enhance the data that can be used as input for the development of computational models aimed at modeling the metastatic process.
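The core of such an image-processing quantification can be sketched in a few lines. The example below is written in Python with scikit-image rather than the MATLAB script described here, uses a hypothetical Otsu threshold step, and simply labels connected lesions in a binary mask and reports their areas.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def lesion_size_distribution(fluorescence_image):
    """Segment a tissue-section image and return per-lesion areas in pixels."""
    mask = fluorescence_image > threshold_otsu(fluorescence_image)
    labeled = label(mask)
    return np.array([r.area for r in regionprops(labeled)])

# The returned array gives the number of lesions (its length) and their
# size distribution, e.g. via np.histogram(areas), for downstream modeling.
```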

]]>
<![CDATA[TAMMiCol: Tool for analysis of the morphology of microbial colonies]]> https://www.researchpad.co/article/5c0ed741d5eed0c484f13db7

Many microbes are studied by examining colony morphology via two-dimensional top-down images. The quantification of such images typically requires each pixel to be labelled as belonging to either the colony or background, producing a binary image. While this may be achieved manually for a single colony, this process is infeasible for large datasets containing thousands of images. The software Tool for Analysis of the Morphology of Microbial Colonies (TAMMiCol) has been developed to efficiently and automatically convert colony images to binary. TAMMiCol exploits the structure of the images to choose a thresholding tolerance and produce a binary image of the colony. The images produced are shown to compare favourably with images processed manually, while TAMMiCol is shown to outperform standard segmentation methods. Multiple images may be imported together for batch processing, while the binary data may be exported as a CSV or MATLAB MAT file for quantification, or analysed using statistics built into the software. Using the in-built statistics, it is found that images produced by TAMMiCol yield values close to those computed from binary images processed manually. Analysis of a new large dataset using TAMMiCol shows that colonies of Saccharomyces cerevisiae reach a maximum level of filamentous growth once the concentration of ammonium sulfate is reduced to 200 μM. TAMMiCol is accessed through a graphical user interface, making it easy to use for those without specialist knowledge of image processing, statistical methods or coding.

]]>
<![CDATA[Detection of size of manufactured sand particles based on digital image processing]]> https://www.researchpad.co/article/5c1d5bd0d5eed0c4846eca9b

The size distribution of manufactured sand particles has a significant influence on the quality of concrete. To overcome the shortcomings of the traditional vibration-sieving method, a manufactured sand casting/dispersing system was developed, based on the characteristics of the sand particle contours (as determined by backlit image acquisition) and an extraction mechanism. Algorithms for eliminating particles that had been repeatedly captured in the image, identifying incomplete particles at the image boundaries, segmenting granular contours, and determining an equivalent particle size are studied. The hardware and software for the image-based detection device were developed. A particle size repeatability experiment was carried out on single-grade sands, grading the size fractions of the manufactured sand over a range of 0.6–4.75 mm. A method of particle-size correction is proposed to compensate for the difference between the results obtained by the image-based method and those obtained by the sieving method. The experimental results show that the maximum repeatability error of the single-grade fractions is 3.46% and that of the grading size fractions is 0.51%. After correction of the image-based method, the error between the grading size fractions obtained by the two methods was reduced from 7.22%, 6.10% and 5% to 1.47%, 1.65%, and 3.23%, respectively. The accuracy of the particle-size detection can thus satisfy real-world measuring requirements.
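One common definition of an equivalent particle size, which is the kind of quantity being determined here, is the diameter of a circle with the same area as the particle's projected contour. A minimal sketch follows as a generic illustration, not the paper's algorithm; the pixel scale is a hypothetical parameter.

```python
import math

def equivalent_diameter(contour_area_px, pixels_per_mm):
    """Equivalent circular diameter (mm) of a particle from its projected area."""
    area_mm2 = contour_area_px / (pixels_per_mm ** 2)
    return 2.0 * math.sqrt(area_mm2 / math.pi)

# Example: a particle covering 5000 px at 40 px/mm is about 2.0 mm across.
```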

]]>