ResearchPad - quantitative-imaging-and-image-processing https://www.researchpad.co Default RSS Feed en-us © 2020 Newgen KnowledgeWorks
<![CDATA[Prediction of transient tumor enlargement using MRI tumor texture after radiosurgery on vestibular schwannoma]]> https://www.researchpad.co/article/elastic_article_6968

Purpose

Vestibular schwannomas (VSs) are uncommon benign brain tumors, generally treated using Gamma Knife radiosurgery (GKRS). However, due to the possible adverse effect of transient tumor enlargement (TTE), large VS tumors are often surgically removed instead of treated radiosurgically. Since microsurgery is highly invasive and carries a significantly increased risk of complications, GKRS is generally preferred. Prediction of TTE for large VS tumors could therefore improve overall VS treatment and enable physicians to select the optimal treatment strategy on an individual basis. Currently, no clinical factors are known to be predictive of TTE. In this research, we aim to predict TTE following GKRS using texture features extracted from MRI scans.

Methods

We analyzed clinical data of patients with VSs treated at our Gamma Knife center. The data were collected prospectively and included patient- and treatment-related characteristics and MRI scans obtained on the day of treatment and at follow-up visits 6, 12, 24, and 36 months after treatment. The correlations of the patient- and treatment-related characteristics to TTE were investigated using statistical tests. From the treatment scans, we extracted the following MRI image features: first-order statistics, Minkowski functionals (MFs), and three-dimensional gray-level co-occurrence matrices (GLCMs).
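As a simplified illustration of the texture features described above, the sketch below computes a gray-level co-occurrence matrix and three common GLCM features (contrast, homogeneity, energy) for a 2D patch in NumPy; the study itself uses three-dimensional GLCMs, and the quantization into 8 gray levels is an assumption of this example, not a parameter reported by the authors.

```python
import numpy as np

def glcm(img, levels=8, offset=(0, 1)):
    """Symmetric, normalized gray-level co-occurrence matrix for one 2D offset."""
    # Quantize the image into `levels` gray levels.
    q = np.minimum((img * levels / (img.max() + 1e-9)).astype(int), levels - 1)
    dr, dc = offset
    rows, cols = q.shape
    P = np.zeros((levels, levels))
    # Count co-occurrences of gray levels at the given pixel offset.
    for r in range(max(0, -dr), min(rows, rows - dr)):
        for c in range(max(0, -dc), min(cols, cols - dc)):
            P[q[r, c], q[r + dr, c + dc]] += 1
    P += P.T  # make the matrix symmetric
    return P / P.sum()

def glcm_features(P):
    """Three common Haralick-style features from a normalized GLCM."""
    i, j = np.indices(P.shape)
    return {
        "contrast":    float(np.sum(P * (i - j) ** 2)),
        "homogeneity": float(np.sum(P / (1.0 + (i - j) ** 2))),
        "energy":      float(np.sum(P ** 2)),
    }
```

A flat patch yields zero contrast and homogeneity 1, while a checkerboard patch maximizes contrast; in the study, such features are fed into a support vector machine classifier.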
These features were applied in a machine learning environment for classification of TTE, using support vector machines.

Results

In a clinical data set containing 61 patients with obvious non-TTE and 38 patients with obvious TTE, patient- and treatment-related characteristics showed no correlation to TTE. Furthermore, first-order statistical MRI features and MFs showed no significant prognostic value in support vector machine classification. However, using a set of four GLCM features, we achieved a sensitivity of 0.82 and a specificity of 0.69, demonstrating their prognostic value for TTE. Moreover, these results improved for larger tumor volumes, with a sensitivity of 0.77 and a specificity of 0.89 for tumors larger than 6 cm3.

Conclusions

The results of this research clearly show that MRI tumor texture provides information that can be employed for predicting TTE. This can form a basis for individual VS treatment selection, further improving overall treatment results. Particularly in patients with large VSs, where the phenomenon of TTE is most relevant and our predictive model performs best, these findings can be implemented in a clinical workflow so that the optimal treatment strategy can be determined for each patient.

]]> <![CDATA[Feasibility of imaging 90Y microspheres at diagnostic activity levels for hepatic radioembolization treatment planning]]> https://www.researchpad.co/article/Nb653b986-dda1-46ff-9ae3-30a2248e2e85

Purpose

Prior to 90Y hepatic radioembolization, a dosage of 99mTc‐macroaggregated albumin (99mTc‐MAA) is administered to simulate the distribution of the 90Y‐loaded microspheres. This pretreatment procedure enables lung shunt estimation, detection of potential extrahepatic depositions, and estimation of the intrahepatic dose distribution. However, the predictive accuracy of the MAA particle distribution is often limited. Ideally, 90Y microspheres would also be used for the pretreatment procedure. Based on previous research, the pretreatment activity should be limited to the estimated safety threshold of 100 MBq, making imaging challenging. The purpose of this study was to evaluate the quality of intra‐ and extrahepatic imaging of 90Y‐based pretreatment positron emission tomography/computed tomography (PET/CT) and quantitative single photon emission computed tomography (SPECT)/CT scans, by means of phantom experiments and a patient study.

Methods

An anthropomorphic phantom with three extrahepatic depositions was filled with 90Y chloride to simulate a lung shunt fraction (LSF) of 5.3% and a tumor-to-nontumor ratio (T/N) of 7.9. PET/CT (Siemens Biograph mCT) and bremsstrahlung SPECT/CT (Siemens Symbia T16) images were acquired at activities ranging from 1999 MBq down to 24 MBq, representing post- and pretreatment activities. PET/CT images were reconstructed with the clinical protocol, and SPECT/CT images were reconstructed with a quantitative Monte Carlo-based reconstruction protocol. Estimated LSF, T/N, contrast-to-noise ratio of all extrahepatic depositions, and liver parenchymal and tumor dose were compared with the phantom ground truth. A clinically reconstructed SPECT/CT of 150 MBq 99mTc represented the current clinical standard. In addition, a 90Y pretreatment scan was simulated for a patient by acquiring posttreatment PET/CT and SPECT/CT data with shortened acquisition times.
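For illustration, the lung shunt fraction and tumor-to-nontumor ratio reported above can be computed from segmented activity images roughly as follows; the function names and the use of simple summed counts are assumptions of this sketch, not the study's reconstruction pipeline.

```python
import numpy as np

def lung_shunt_fraction(lung_counts, liver_counts):
    """LSF = total lung counts / (lung + liver counts), from segmented images."""
    lung = float(np.sum(lung_counts))
    liver = float(np.sum(liver_counts))
    return lung / (lung + liver)

def tumor_to_nontumor_ratio(tumor_voxels, normal_liver_voxels):
    """T/N = mean activity concentration in tumor / mean in normal liver."""
    return float(np.mean(tumor_voxels)) / float(np.mean(normal_liver_voxels))
```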

Results

At an activity of 100 MBq 90Y, PET/CT overestimated the LSF [+10 percentage points (pp)], underestimated the liver parenchymal dose (−3 Gy/GBq), and could not detect the extrahepatic depositions. SPECT/CT estimated the LSF (−0.7 pp) and parenchymal dose (−0.3 Gy/GBq) more accurately and could detect all three extrahepatic depositions. 99mTc SPECT/CT showed accuracy similar to 90Y SPECT/CT (LSF: +0.2 pp, parenchymal dose: +0.4 Gy/GBq, all extrahepatic depositions visible), although the noise level in the liver compartment was considerably lower for 99mTc SPECT/CT than for 90Y SPECT/CT. The patient's SPECT/CT simulating a pretreatment 90Y procedure accurately represented the posttreatment 90Y microsphere distribution.
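As context for the Gy/GBq and noise figures above, here is a minimal sketch of two generic quantities: a 90Y absorbed-dose estimate under the commonly used local deposition model (with the MIRD-derived constant of about 49.67 Gy·kg/GBq) and a simple contrast-to-noise ratio. This is a generic illustration under those assumptions, not the paper's dosimetry method.

```python
import numpy as np

def y90_absorbed_dose_gy(activity_gbq, mass_kg):
    """Mean 90Y absorbed dose under the local deposition model:
    D [Gy] ~= 49.67 * A [GBq] / m [kg] (all beta energy absorbed locally)."""
    return 49.67 * activity_gbq / mass_kg

def contrast_to_noise_ratio(roi, background):
    """CNR = (mean ROI - mean background) / std of background."""
    roi = np.asarray(roi, float)
    bg = np.asarray(background, float)
    return (roi.mean() - bg.mean()) / bg.std()
```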

Conclusions

Quantitative SPECT/CT of 100 MBq 90Y could accurately estimate LSF, T/N, parenchymal and tumor dose, and visualize extrahepatic depositions.

]]>
<![CDATA[Automated 3D geometry segmentation of the healthy and diseased carotid artery in free‐hand, probe tracked ultrasound images]]> https://www.researchpad.co/article/N7d305a02-c096-47a8-8888-44ec19fbc52e

Purpose

Rupture of an atherosclerotic plaque in the carotid artery is a major cause of stroke. Biomechanical analysis of plaques is under development, aiming to aid the clinician in the assessment of plaque vulnerability. Patient-specific three-dimensional (3D) geometry assessment of the carotid artery, including the bifurcation, is required as input for these biomechanical models. This requires a high-resolution, 3D, noninvasive imaging modality such as ultrasound (US). In this study, a high-resolution two-dimensional (2D) linear array in combination with a magnetic probe tracking device and an automatic segmentation method was used to assess the geometry of the carotid artery. The advantages of using this system over a 3D ultrasound probe are its higher spatial and temporal resolution and its larger field of view.

Methods

A slow sweep (v = ± 5 mm/s) was made over the subject's neck so that the full bifurcated geometry of the carotid artery was captured. An automated segmentation pipeline was developed. First, the Star-Kalman method was used to approximate the center and size of the vessels in every frame. Images were filtered with a Gaussian high-pass filter before conversion into 2D monogenic signals, and multiscale asymmetry features were extracted from these data, enhancing low lateral wall-lumen contrast. These images, in combination with the initial ellipse contours, were used in an active deformable contour model to segment the vessel lumen. To segment the lumen-plaque boundary, Otsu's automatic thresholding method was used. Distension of the wall due to changes in blood pressure was removed using a filter approach. Finally, the contours were converted into a 3D hexahedral mesh for a patient-specific solid mechanics model of the complete arterial wall.
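Otsu's automatic thresholding, used here for the lumen-plaque boundary, picks the threshold that maximizes the between-class variance of the intensity histogram. A generic NumPy implementation (not the authors' code) can be sketched as:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)            # class-0 probability up to each bin
    mu = np.cumsum(p * centers)  # cumulative mean up to each bin
    mu_t = mu[-1]                # global mean
    # Between-class variance; guard against empty classes at the ends.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return centers[np.argmax(sigma_b)]
```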

Results

The method was tested on 19 healthy volunteers and on 3 patients. The results were compared to manual segmentation performed by three experienced observers. Results showed an average Hausdorff distance of 0.86 mm and an average similarity index of 0.91 for the common carotid artery (CCA) and 0.88 for the internal and external carotid arteries (ICA and ECA). For the total algorithm, the success rate was 89%; in 4 out of 38 datasets the ICA and ECA were not sufficiently visible in the US images. Accurate 3D hexahedral meshes were successfully generated from the segmented images.

Conclusions

With this method, a subject-specific biomechanical model can be constructed directly from a hand-held 2D US measurement, within 10 min, with minimal user input. The performance of the proposed segmentation algorithm is comparable to or better than that of algorithms previously described in the literature. Moreover, the algorithm is able to segment the CCA, ICA, and ECA, including the carotid bifurcation, in transverse B-mode images of both healthy and diseased arteries.

]]>
<![CDATA[Automatic coronary artery plaque thickness comparison between baseline and follow‐up CCTA images]]> https://www.researchpad.co/article/Nb2aab993-bec8-446c-97b1-7978c3c89a4a

Purpose

Currently, coronary plaque changes are manually compared between baseline and follow-up coronary computed tomography angiography (CCTA) images to investigate long-term coronary plaque development. We propose an automatic method to measure plaque thickness change over time.

Methods

We model the lumen and vessel wall of both the baseline coronary artery tree (CAT-BL) and the follow-up coronary artery tree (CAT-FU) as smooth three-dimensional (3D) surfaces, using a subdivision fitting scheme with the same coarse meshes, which generates the correspondence among the surface points. Specifically, a rigid point set registration is used to transform the coarse mesh from the CAT-FU to the CAT-BL. The plaque thickness and the thickness difference are calculated as distances between corresponding surface points. To evaluate the registration accuracy, the average distance between manually defined markers on clinical scans was calculated. Artificial CAT-BL and CAT-FU pairs were created to simulate plaque decrease and increase over time.
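The rigid point set registration step can be illustrated with the classic Kabsch algorithm, which finds the least-squares rotation and translation between corresponding point sets; this is a generic sketch, as the abstract does not state which rigid registration method the authors use, and the distance function below is a simplified stand-in for the corresponding-surface-point distances described above.

```python
import numpy as np

def rigid_register(P, Q):
    """Kabsch algorithm: least-squares rotation R and translation t such that
    R @ p + t ~= q for corresponding point sets P, Q (rows are 3D points)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)  # cross-covariance of centered point sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # guard against reflection
    t = cq - R @ cp
    return R, t

def corresponding_point_distances(bl_points, fu_points):
    """Distances between corresponding surface points after rigidly aligning
    the follow-up points onto the baseline points."""
    R, t = rigid_register(fu_points, bl_points)
    return np.linalg.norm(fu_points @ R.T + t - bl_points, axis=1)
```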

Results

For 116 pairs of markers from nine clinical scans, the average marker distance after registration was 0.95 ± 0.98 mm (two times the voxel size). On the 10 artificial pairs of datasets, the proposed method successfully located the plaque changes. The average calculated plaque thickness difference matched the corresponding created value (standard deviation ±0.1 mm).

Conclusions

The proposed method automatically calculates local coronary plaque thickness differences over time and can be used for 3D visualization of plaque differences. The analysis and reporting of coronary plaque progression and regression will benefit from an automatic plaque thickness comparison.

]]>
<![CDATA[Multiparametric deep learning tissue signatures for a radiological biomarker of breast cancer: Preliminary results]]> https://www.researchpad.co/article/Nc463c54b-61fa-4a4f-ad36-d8aaca5e28c7

Purpose

Deep learning is emerging in radiology due to the increased computational capabilities available to reading rooms. These computational developments have the potential to mimic the radiologist and may allow for more accurate characterization of normal tissue and pathological lesions, assisting radiologists in defining different diseases. We introduce a novel tissue signature model based on tissue characteristics of breast tissue from multiparametric magnetic resonance imaging (mpMRI). The breast tissue signatures are used as inputs to a stacked sparse autoencoder (SSAE) multiparametric deep learning (MPDL) network for segmentation of breast mpMRI.

Methods

We constructed the MPDL network from SSAEs with 5 layers and 10 nodes at each layer. A total cohort of 195 breast cancer subjects was used for training and testing of the MPDL network. The cohort consisted of a training dataset of 145 subjects and an independent validation set of 50 subjects. After segmentation, we used a combined SAE-support vector machine (SAE-SVM) learning method for classification. Dice similarity (DS) metrics were calculated between the MPDL-segmented lesions and the dynamic contrast enhancement (DCE) MRI-defined lesions. Sensitivity, specificity, and area under the curve (AUC) metrics were used to evaluate classification of benign versus malignant lesions.
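The Dice similarity (DS) metric used to compare MPDL and DCE-defined lesions is, for two binary masks A and B, DS = 2|A ∩ B| / (|A| + |B|). A minimal implementation:

```python
import numpy as np

def dice_similarity(mask_a, mask_b):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    # Two empty masks are conventionally treated as a perfect match.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```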

Results

The MPDL segmentation resulted in a high DS of 0.87 ± 0.05 for malignant lesions and 0.84 ± 0.07 for benign lesions. The MPDL had excellent sensitivity and specificity of 86% and 86% with positive predictive and negative predictive values of 92% and 73%, respectively, and an AUC of 0.90.

Conclusions

Using the new tissue signature model as input to the MPDL algorithm, we successfully validated MPDL in a large cohort of subjects and achieved results similar to those of radiologists.

]]>
<![CDATA[Technical Note: Ontology‐guided radiomics analysis workflow (O‐RAW)]]> https://www.researchpad.co/article/Nf8027750-2a5f-41fd-9376-7ae566b10522

Purpose

Radiomics is the process of automating tumor feature extraction from medical images. It has shown potential for quantifying the tumor phenotype and predicting treatment response. The three major challenges for radiomics research and clinical adoption are: (a) lack of a standardized methodology for radiomics analyses, (b) lack of a universal lexicon to denote features that are semantically equivalent, and (c) lists of feature values alone do not sufficiently capture the details of feature extraction that might nonetheless strongly affect feature values (e.g., image normalization or interpolation parameters). These barriers hamper multicenter validation studies that apply subtly different imaging protocols, preprocessing steps, and radiomics software. We propose an open-source ontology-guided radiomics analysis workflow (O-RAW) to address the above challenges in the following manner: (a) distributing a free and open-source software package for radiomics analysis, (b) deploying a standard lexicon to uniquely describe features in common usage, and (c) providing methods to publish radiomic features as a semantically interoperable data graph object complying with FAIR (findable, accessible, interoperable, reusable) data principles.

Methods

O-RAW was developed in Python and has three major modules using open-source component libraries (PyRadiomics Extension and PyRadiomics). First, PyRadiomics Extension takes standard DICOM-RT (radiotherapy) input objects (i.e., a DICOM series and an RTSTRUCT file) and parses them into arrays of voxel intensities and a binary mask corresponding to a volume of interest (VOI). Next, these arrays are passed into PyRadiomics, which performs the feature extraction procedure and returns a Python dictionary object. Lastly, PyRadiomics Extension parses this dictionary into a W3C-compliant Semantic Web "triple store" (i.e., a list of subject-predicate-object statements) with relevant semantic meta-labels drawn from the radiation oncology ontology and the radiomics ontology. The output can be published on a SPARQL endpoint and examined remotely via SPARQL queries, or exported to a comma-separated file for further analysis.
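The subject-predicate-object output can be illustrated with a minimal in-memory triple store and a wildcard pattern match (a stand-in for a basic SPARQL triple pattern); the predicate names and the feature value below are placeholders for this sketch, not the actual radiomics ontology IRIs or real data.

```python
# Minimal subject-predicate-object "triple store", mimicking the shape of the
# triples O-RAW publishes. All names and the value 42.7 are illustrative.
triples = [
    ("roi:GTV-1", "ro:hasFeature", "feat:firstorder_Mean"),
    ("feat:firstorder_Mean", "ro:hasValue", "42.7"),
    ("feat:firstorder_Mean", "ro:computedWith", "software:PyRadiomics"),
]

def query(store, s=None, p=None, o=None):
    """Return all triples matching a (subject, predicate, object) pattern;
    None acts as a wildcard, like a variable in a SPARQL triple pattern."""
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]
```

This mirrors how a user can both query a feature's value and trace its provenance (e.g., the software that computed it) from the same graph.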

Results

We showed that O-RAW executed efficiently on four datasets with different modalities: RIDER (CT), MMD (CT), CROSS (PET), and THUNDER (MR). The test was performed on an HP laptop running the Windows 7 operating system with 8 GB of RAM, on which we measured the execution time, including matching of the DICOM images and associated RTSTRUCT, binary mask conversion of a single VOI, batch processing of feature extraction (105 basic features in PyRadiomics), and conversion to a Resource Description Framework (RDF) object. The results were 407.3 s (RIDER), 123.5 s (MMD), 513.2 s (CROSS), and 128.9 s (THUNDER) for a single VOI. In addition, we demonstrated a use case, taking images from a public repository and publishing the radiomics results as FAIR data on http://www.radiomics.org. Finally, we provided a practical instance showing how a user can query radiomic features and trace the calculation details via a simple SPARQL query on the RDF graph object created by O-RAW.

Conclusions

We implemented O-RAW for FAIR radiomics analysis and successfully published radiomic features from DICOM-RT objects as Semantic Web triples. Its practicality and flexibility can greatly support the development of radiomics research and ease its transfer to clinical practice.

]]>