
Showing 1 - 50 of 459
- IEEE MemberUS $11.00
- Society MemberUS $0.00
- IEEE Student MemberUS $11.00
- Non-IEEE MemberUS $15.00
Image-based Simulations of Tubular Network Formation
Image-based simulations play an important role in biomedicine because real image data are difficult to annotate fully and precisely. The increasing capability of contemporary computers makes it possible to model and simulate reasonably complicated structures and, in recent years, dynamic processes as well. In this paper, we introduce a complex 3D model that describes the structure and dynamics of a population of endothelial cells. The model is based on the standard cellular Potts model. It describes the formation of a complex tubular network of endothelial cells fully in 3D, together with a simulation of the form of cell death called apoptosis. The generated network imitates the structure and behavior that can be observed in real phase-contrast microscopy. The generated image data may serve as a benchmark dataset for newly designed detection or tracking algorithms.
BAENET: A Brain Age Estimation Network with 3D Skipping and Outlier Constraint Loss
Potential pattern changes in brain microstructure, captured by MRI scans, can be used to assess brain development in children and adolescents. In this paper, we propose a highly accurate and efficient end-to-end brain age estimation network (BAENET) for T1-weighted MRI images. In the network, 3D skip connections and an outlier-constraint loss are designed to accommodate a deeper architecture and increase robustness. Besides, we incorporate neuroimaging domain knowledge through stratified sampling, for better generalization to datasets with different age distributions, and gender learning, for more gender-specific features during modeling. We verify the effectiveness of the proposed method on the public ABIDE2 and ADHD200 benchmarks, consisting of 382 and 378 scans of normal children, respectively. Our BAENET achieves MAEs of 1.11 and 1.16, significantly outperforming the best reported methods by 5.1% and 9.4%.
Polyp Detection in Colonoscopy Videos by Bootstrapping Via Temporal Consistency
Computer-aided polyp detection during colonoscopy is beneficial for reducing the risk of colorectal cancer. Deep learning techniques have made significant progress in natural object detection. However, when those fully supervised methods are applied to polyp detection, performance is greatly degraded by the scarcity of labeled data. In this paper, we propose a novel bootstrapping method for polyp detection in colonoscopy videos that augments the training data using temporal consistency. Starting from a detection network trained on a small set of annotated polyp images, we fine-tune it with new samples selected from the test video itself, in order to more effectively represent the polyp morphology of the current video. A strategy for selecting new samples is proposed that considers temporal consistency in the test video. Evaluated on 11954 endoscopic frames of the CVC-ClinicVideoDB dataset, our method yields a large improvement in polyp detection for several detection networks and achieves state-of-the-art performance on the benchmark dataset.
Mitigating Adversarial Attacks on Medical Image Understanding Systems
Deep learning systems are now being widely used to analyze lung cancer. However, recent work has shown that a deep learning system can be easily fooled by intentionally adding noise to the image; this is called an adversarial attack. This paper presents an adversarial attack on malignancy prediction of lung nodules. We found that an adversarial attack can cause significant changes in lung nodule malignancy prediction accuracy. An ensemble-based defense strategy was developed to reduce the effect of an adversarial attack, using a multi-initialization-based CNN ensemble. We also explored adding adversarial images to the training set, which reduced the rate of misclassification and made the CNN models more robust to adversarial attacks. A subset of cases from the National Lung Screening Trial (NLST) dataset was used in our study. Initially, 75.1%, 75.5%, and 76% classification accuracy were obtained from the three CNNs on original images (without an adversarial attack). The Fast Gradient Sign Method (FGSM) and one-pixel attacks were analyzed. After the FGSM attack, 46.4%, 39.24%, and 39.71% accuracy was obtained from the three CNNs, whereas after a one-pixel attack 72.15%, 73%, and 73% classification accuracy was achieved; FGSM caused much more damage to CNN predictions. With a multi-initialization-based ensemble and adversarial images included in the training set, 82.27% and 81.43% classification accuracy were attained after FGSM and one-pixel attacks, respectively.
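The FGSM attack analyzed above has a simple closed form: perturb each pixel by a small step ε in the direction of the sign of the loss gradient. A minimal NumPy sketch, with a toy logistic model standing in for the CNNs (all names here are illustrative, not the study's code):

```python
import numpy as np

def fgsm_perturb(x, grad_wrt_x, epsilon=0.01):
    """Fast Gradient Sign Method: step in the sign of the loss gradient."""
    return np.clip(x + epsilon * np.sign(grad_wrt_x), 0.0, 1.0)

# Toy example: logistic "classifier" sigmoid(w.x) on a flattened image patch.
rng = np.random.default_rng(0)
x = rng.random(16)            # stand-in for a normalized image patch
w = rng.standard_normal(16)
y = 1.0                       # true label

p = 1.0 / (1.0 + np.exp(-w @ x))   # predicted probability
grad = (p - y) * w                 # d(cross-entropy)/dx for this toy model
x_adv = fgsm_perturb(x, grad, epsilon=0.05)
# The perturbation is bounded by epsilon (up to float rounding) and the
# adversarial image stays in the valid [0, 1] range.
```

A one-pixel attack instead changes a single coordinate of `x`, which is why it damages the classifiers far less in the reported results.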
Knowledge Transfer between Datasets for Learning-Based Tissue Microstructure Estimation
Learning-based approaches, especially those based on deep networks, have enabled high-quality estimation of tissue microstructure from low-quality diffusion magnetic resonance imaging (dMRI) scans, which are acquired with a limited number of diffusion gradients and a relatively poor spatial resolution. These learning-based approaches to tissue microstructure estimation require training dMRI scans with high-quality diffusion signals, which are densely sampled in the q-space and have a high spatial resolution. However, such training scans may not be available for all datasets. Therefore, we explore knowledge transfer between different dMRI datasets so that learning-based tissue microstructure estimation can be applied to datasets for which training scans were not acquired. Specifically, for a target dataset of interest, where only low-quality diffusion signals are acquired without training scans, we exploit the information in a source dMRI dataset acquired with high-quality diffusion signals. We interpolate the diffusion signals in the source dataset in the q-space using a dictionary-based signal representation, so that the interpolated signals match the acquisition scheme of the target dataset. Then, the interpolated signals are used together with the high-quality tissue microstructure computed from the source dataset to train deep networks that perform tissue microstructure estimation for the target dataset. Experiments were performed on brain dMRI scans with low-quality diffusion signals, and the results demonstrate the benefit of the proposed strategy.
Joint Optimization of Sampling Pattern and Priors in Model-Based Deep Learning
Deep learning methods are emerging as powerful alternatives to compressed sensing MRI for recovering images from highly undersampled data. Unlike compressed sensing, the image redundancies captured by these models are not well understood. The lack of theoretical understanding also makes it challenging to choose the sampling pattern that would yield the best possible recovery. To overcome these challenges, we propose to jointly optimize the sampling pattern and the parameters of the reconstruction block in a model-based deep learning framework. We show that the joint optimization with the model-based strategy results in improved performance over direct-inversion CNN schemes due to better decoupling of the effects of sampling and image properties. The quantitative and qualitative results confirm the benefits of joint optimization with the model-based scheme over the direct-inversion strategy.
Deep Learning Based Detection of Acute Aortic Syndrome in Contrast CT Images
Acute aortic syndrome (AAS) is a group of life-threatening conditions of the aorta. We have developed an end-to-end automatic approach to detect AAS in computed tomography (CT) images. Our approach consists of two steps. First, we extract N cross sections along the segmented aorta centerline for each CT scan. These cross sections are stacked together to form a new volume, which is then classified using two different classifiers: a 3D convolutional neural network (3D CNN) and a multiple-instance learning (MIL) model. We trained, validated, and compared the two models on 2291 contrast CT volumes. We tested on a set-aside cohort of 230 normal and 50 positive CT volumes. Our models detected AAS with an area under the receiver operating characteristic curve (AUC) of 0.965 and 0.985 using the 3D CNN and MIL, respectively.
An Improved Deep Learning Approach for Thyroid Nodule Diagnosis
Although thyroid ultrasonography (US) has been widely applied, it is still difficult to distinguish benign from malignant nodules. Convolutional neural network (CNN) based methods have been proposed and have shown promising performance for benign/malignant nodule classification. US images are usually captured at multiple angles, and the same thyroid has inconsistent content in different US images. However, most existing CNN-based methods extract features using fixed convolution kernels, which can be a significant issue when processing US images. Moreover, fully connected (FC) layers are usually adopted in CNNs, which can cause the loss of inter-pixel relations. In this paper, we propose a new CNN integrated with a squeeze-and-excitation (SE) module and a module for maximum retention of inter-pixel relations (CNN-SE-MPR). It can adaptively select features from different US images and preserve inter-pixel relations. Moreover, we introduce transfer learning to avoid problems such as local optima and data insufficiency. The proposed network is tested on 407 thyroid US images collected from cooperating hospitals. Ablation experiments and comparisons with state-of-the-art methods confirm that our method improves the accuracy of the diagnosis.
Towards Fully Automatic 2D US to 3D CT/MR Registration: A Novel Segmentation-Based Strategy
2D-US to 3D-CT/MR registration is a crucial module in minimally invasive ultrasound-guided liver tumor ablations. Many modern registration methods still require manual or semi-automatic slice pose initialization due to the insufficient robustness of automatic methods. State-of-the-art regression networks do not work well for liver 2D US to 3D CT/MR registration because of the tremendous inter-patient variability of the liver anatomy. To address this unsolved problem, we propose a deep learning pipeline which, instead of a regression, starts with a classification network to recognize the coarse ultrasound transducer pose, followed by a segmentation network to detect the target plane of the US image in the CT/MR volume. The rigid registration result is derived using plane regression. In contrast to state-of-the-art regression networks, we do not estimate registration parameters from multi-modal images directly, but rather focus on segmenting the target slice plane in the volume. The experiments reveal that this novel registration strategy can identify the initial slice pose in a 3D volume more reliably than standard regression-based techniques. The proposed method was evaluated on 1035 US images from 52 patients. We achieved angle and distance errors of 12.7±6.2° and 4.9±3.1 mm, clearly outperforming the state-of-the-art regression strategy, which results in a 37.0±15.6° angle error and a 19.0±11.6 mm distance error.
A 3D CNN with a Learnable Adaptive Shape Prior for Accurate Segmentation of Bladder Wall Using MR Images
A 3D deep learning-based convolutional neural network (CNN) is developed for accurate segmentation of the pathological bladder (both wall border and pathology) using T2-weighted magnetic resonance imaging (T2W-MRI). Our system starts with a preprocessing step for data normalization to a unique space and extraction of a region of interest (ROI). The main stage utilizes a 3D CNN for pathological bladder segmentation, which contains a network (CNN1) that aims to segment the bladder wall (BW) together with pathology. However, due to the similar visual appearance of BW and pathology, CNN1 cannot separate them. Thus, we developed another network (CNN2) with an additional pathway to extract the BW only. The second pathway in CNN2 is fed with a 3D learnable adaptive shape prior model. To remove noisy and scattered predictions, the networks' soft outputs are refined using a fully connected conditional random field. Our framework achieved accurate segmentation results for the BW and tumor, as documented by the Dice similarity coefficient and Hausdorff distance. Moreover, comparative results against another segmentation approach document the superiority of our framework in providing accurate results for pathological BW segmentation.
Reflection Ultrasound Tomography Using Localized Freehand Scans
Speed of sound (SOS) is a biomarker that aids clinicians in tracking the onset and progression of diseases such as breast cancer and fatty liver disease. In this paper, we propose a framework to generate accurate 2D SOS maps with a commercial ultrasound probe. We simulate freehand ultrasound probe motion and use a multi-look framework for reflection travel time tomography. In these simulations, the "measured" travel times are computed using a bent-ray Eikonal solver, and direct inversion for compressional speed of sound is performed. We show that the assumption of straight rays breaks down for large velocity perturbations (greater than 1 percent); the error increases 70-fold when the velocity perturbation increases by 1.5 percent. Moreover, the use of multiple looks greatly aids the inversion process: simulated RMSE drops by roughly 15 dB when the maximum scanning angle is increased from 0 to 45 degrees.
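As context for why a bent-ray solver matters: straight-ray travel time tomography simply integrates slowness (1/SOS) along the source-receiver line, and it is this approximation that breaks down under larger velocity perturbations. A rough NumPy sketch of the straight-ray forward model (unit grid spacing and nearest-neighbour sampling are simplifying assumptions):

```python
import numpy as np

def straight_ray_travel_time(sos_map, p0, p1, n_samples=200):
    """Travel time along a straight ray: integral of slowness (1/c) over the path.

    Samples the 2D SOS map at n_samples points between p0 and p1
    (nearest-neighbour lookup, unit grid spacing assumed).
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    pts = p0 + np.linspace(0.0, 1.0, n_samples)[:, None] * (p1 - p0)
    ix = np.clip(np.rint(pts[:, 0]).astype(int), 0, sos_map.shape[0] - 1)
    iy = np.clip(np.rint(pts[:, 1]).astype(int), 0, sos_map.shape[1] - 1)
    slowness = 1.0 / sos_map[ix, iy]
    return slowness.mean() * np.linalg.norm(p1 - p0)

# Homogeneous medium: travel time reduces to distance / c.
c = 1500.0 * np.ones((64, 64))
t = straight_ray_travel_time(c, (0.0, 0.0), (63.0, 0.0))  # ≈ 63 / 1500
```

A bent-ray Eikonal solver instead traces the path of least travel time, which diverges from the straight line as velocity contrasts grow.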
Association between Dynamic Functional Connectivity and Intelligence
Several studies have explored the relationship between intelligence and neuroimaging features. However, little is known about whether the temporal variations of functional connectivity between brain regions at rest are related to differences in intelligence. In this study, we used the fMRI data and intelligence scores of 50 healthy adult subjects from the Human Connectome Project (HCP) database. We investigated the correlation between individual intelligence scores and the total power of the high-frequency components of the fast Fourier transform (FFT) of the dynamic functional connectivity time series of the brain regions. We found that the temporal variations of specific functional connections are highly correlated with individual intelligence scores. In other words, the functional connections of individuals with high intelligence levels show smoother temporal variation, or higher temporal stability, than those of individuals with low intelligence levels.
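The quantity analyzed here, the power of the high-frequency FFT components of a dynamic functional connectivity series, can be illustrated with a sliding-window correlation. A sketch under assumed parameters (window length, cutoff fraction, and all function names are illustrative, not the study's pipeline):

```python
import numpy as np

def dynamic_fc(ts_a, ts_b, win=30, step=1):
    """Sliding-window Pearson correlation between two regional time series."""
    n = len(ts_a)
    return np.array([np.corrcoef(ts_a[i:i + win], ts_b[i:i + win])[0, 1]
                     for i in range(0, n - win + 1, step)])

def high_freq_power(dfc, cutoff_frac=0.25):
    """Total power in the top `cutoff_frac` of the FFT frequency band."""
    spec = np.abs(np.fft.rfft(dfc - dfc.mean())) ** 2
    k = int(len(spec) * (1.0 - cutoff_frac))
    return spec[k:].sum()

rng = np.random.default_rng(1)
a, b = rng.standard_normal(300), rng.standard_normal(300)
dfc = dynamic_fc(a, b)          # 300 - 30 + 1 = 271 windowed correlations
power = high_freq_power(dfc)    # lower power = smoother temporal variation
```

Under this framing, a smoother dynamic FC series concentrates its spectrum at low frequencies and thus carries less high-frequency power.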
Robust Detection of Adversarial Attacks on Medical Images
Although deep learning systems trained on medical images have shown state-of-the-art performance in many clinical prediction tasks, recent studies demonstrate that these systems can be fooled by carefully crafted adversarial images. This has raised concerns about the practical deployment of deep learning-based medical image classification systems. To tackle this problem, we propose an unsupervised learning approach to detect adversarial attacks on medical images. Our approach is capable of detecting a wide range of adversarial attacks without knowledge of the attacker and without sacrificing classification performance. More importantly, our approach can be easily embedded into any deep learning-based medical imaging system as a module that improves the system's robustness. Experiments on a public chest X-ray dataset demonstrate the strong performance of our approach in defending against adversarial attacks under both white-box and black-box settings.
Deep Variational Autoencoder for Modeling Functional Brain Networks and ADHD Identification
In the neuroimaging and brain mapping communities, researchers have proposed a variety of computational methods and tools to learn functional brain networks (FBNs). Recently, deep learning has been shown to offer superb representational power over traditional machine learning methods when applied to fMRI data. Limited by the high dimensionality of fMRI volumes, however, deep learning suffers from a lack of data and from overfitting. Generative models are known to have an intrinsic ability to model small datasets, and a deep variational autoencoder (DVAE) is proposed in this work to tackle the challenges of insufficient data and incomplete supervision. The FBNs learned from fMRI were found to be interpretable and meaningful, and DVAE showed better performance on neuroimaging datasets than traditional models. In an evaluation on the ADHD200 dataset, DVAE achieved excellent classification accuracy across four sites.
Segmentation of Bone Vessels in 3D Micro-CT Images Using the Monogenic Signal Phase and Watershed
We propose an algorithm based on marker-controlled watershed and the monogenic signal phase asymmetry for the segmentation of bone and micro-vessels in mouse bone. The images are acquired using synchrotron radiation micro-computed tomography (SR-µCT). The marker image is generated with hysteresis thresholding and morphological filters. The control surface is generated using the phase asymmetry of the monogenic signal in order to detect only edge-like structures, as well as to improve detection in low-contrast areas such as bone-vessel interfaces. The quality of the segmentation is evaluated by comparison with manually segmented images using the Dice coefficient. The proposed method shows substantial improvement compared to a previously proposed method based on hysteresis thresholding, as well as compared to watershed using the gradient image as the control surface. The algorithm was applied to images of healthy and metastatic bone, permitting quantification of both bone and vessel structures.
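The Dice coefficient used for evaluation here is straightforward to compute from two binary masks; a minimal sketch:

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient between two binary masks (1 = identical)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

a = np.zeros((8, 8), int); a[2:6, 2:6] = 1   # 16-pixel square
b = np.zeros((8, 8), int); b[3:7, 3:7] = 1   # shifted copy, 3x3 = 9 overlap
print(dice(a, b))  # 2 * 9 / (16 + 16) = 0.5625
```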
Substituting Gadolinium in Brain MRI Using DeepContrast
Cerebral blood volume (CBV) is a hemodynamic correlate of oxygen metabolism and reflects brain activity and function. High-resolution CBV maps can be generated using the steady-state gadolinium-enhanced MRI technique. This technique requires an intravenous injection of an exogenous gadolinium-based contrast agent (GBCA), and recent studies suggest that GBCAs can accumulate in the brain after frequent use. We hypothesize that endogenous sources of contrast might exist within the most conventional and commonly acquired structural MRI, potentially obviating the need for exogenous contrast. Here, we test this hypothesis in mice by developing and optimizing a deep learning algorithm, which we call DeepContrast. We find that DeepContrast performs as well as exogenous GBCA in mapping the CBV of normal brain tissue and in enhancing glioblastoma. Together, these studies validate our hypothesis that a deep learning approach can potentially replace the need for GBCAs in brain MRI.
Bone Structures Extraction and Enhancement in Chest Radiographs Via CNN Trained on Synthetic Data
In this paper, we present a deep learning-based image processing technique for the extraction of bone structures in chest radiographs using a U-Net FCNN. The U-Net was trained to accomplish the task in a fully supervised setting. To create the training image pairs, we employed simulated X-rays, or digitally reconstructed radiographs (DRRs), derived from 664 CT scans belonging to the LIDC-IDRI dataset. Using HU-based segmentation of bone structures in the CT domain, a synthetic 2D "bone X-ray" DRR is produced and used for training the network. For the reconstruction loss, we utilize two loss functions: L1 loss and perceptual loss. Once the bone structures are extracted, the original image can be enhanced by fusing the original input X-ray with the synthesized "bone X-ray". We show that our enhancement technique is applicable to real X-ray data, and we display our results on the NIH Chest X-ray-14 dataset.
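The training-target synthesis described above, HU thresholding of bone in CT followed by projection to a 2D DRR, can be sketched as follows (the threshold value, projection axis, and normalization are illustrative assumptions, not the paper's exact recipe):

```python
import numpy as np

def bone_drr(ct_hu, bone_threshold=300.0):
    """Synthesize a 2D 'bone X-ray' by projecting HU-thresholded bone voxels.

    Zeroes out voxels below a bone-like HU threshold, sums the rest along
    one axis (a crude parallel-beam DRR), and normalizes to [0, 1].
    """
    bone = np.where(ct_hu >= bone_threshold, ct_hu, 0.0)
    drr = bone.sum(axis=1)                       # project along one axis
    return drr / drr.max() if drr.max() > 0 else drr

ct = np.full((32, 32, 32), -1000.0)              # air everywhere
ct[10:20, 10:20, 10:20] = 700.0                  # synthetic bone block
img = bone_drr(ct)                               # 2D (32, 32) projection
```

Pairing such projections with DRRs of the full CT volume yields the supervised input/target pairs the U-Net is trained on.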
Memory-Augmented Anomaly Generative Adversarial Network for Retinal OCT Image Screening
Optical coherence tomography (OCT) plays an important role in retinal disease screening. Traditional classification-based screening methods require complicated annotation work. Because abnormal samples are difficult to collect, anomaly detection methods have been applied to screen retinal lesions based only on normal samples. However, most existing anomaly detection methods are time consuming and easily misjudge abnormal OCT images with subtle lesions such as small drusen. To solve these problems, we propose a memory-augmented anomaly generative adversarial network (MA-GAN) for retinal OCT screening. Within the generator, we establish a memory module to enhance the ability to express the details of typical normal OCT patterns. Meanwhile, the discriminator of MA-GAN is decomposed orthogonally so that it simultaneously has encoding ability. As a result, an abnormal image can be screened by the larger difference in the distribution of pixels and features between the original image and its reconstruction. The model, trained with 13000 normal OCT images, reaches 0.875 AUC on a test set of 2000 normal and 1000 anomalous images, and inference takes only 35 milliseconds per image. Compared to other anomaly detection methods, our MA-GAN has advantages in accuracy and computation time for retinal OCT screening.
Assessment of Lung Biomechanics in COPD Using Image Registration
Lung biomechanical properties can be used to detect disease, assess abnormal lung function, and track disease progression. In this work, we used computed tomography (CT) imaging to measure three biomechanical properties in the lungs of subjects with varying degrees of chronic obstructive pulmonary disease (COPD): the Jacobian determinant (J), a measure of volumetric expansion or contraction; the anisotropic deformation index (ADI), a measure of the magnitude of anisotropic deformation; and the slab-rod index (SRI), a measure of the nature of the anisotropy (i.e., whether the volume is deformed into a rod-like or slab-like shape). We analyzed CT data from 247 subjects collected as part of the Subpopulations and Intermediate Outcome Measures in COPD Study (SPIROMICS). The results show that the mean J and mean ADI decrease as disease severity increases, indicating less volumetric expansion and more isotropic deformation with increased disease. No differences in the average SRI were observed across the different levels of disease. The methods and analysis described in this study may provide new insights into our understanding of the biomechanical behavior of the lung and the changes that occur with COPD.
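The Jacobian determinant J is computed pointwise from the registration-derived displacement field: the transform is x + u(x), so the Jacobian is I + ∇u, with J > 1 indicating local expansion and J < 1 contraction. A NumPy finite-difference sketch (the field layout and unit grid spacing are assumptions):

```python
import numpy as np

def jacobian_determinant(disp):
    """Pointwise Jacobian determinant of a 3D displacement field.

    disp has shape (3, nx, ny, nz); the transform is x + disp(x), so the
    Jacobian matrix at each voxel is I + grad(disp).
    """
    # grads[i, j] = d disp_i / d x_j, each of shape (nx, ny, nz)
    grads = np.stack([np.stack(np.gradient(disp[i]), axis=0) for i in range(3)])
    jac = grads + np.eye(3)[:, :, None, None, None]      # add identity
    return np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))

# Uniform 10% expansion along every axis: J = 1.1^3 everywhere.
grid = np.stack(np.meshgrid(*[np.arange(8.0)] * 3, indexing="ij"))
J = jacobian_determinant(0.1 * grid)
print(np.allclose(J, 1.1 ** 3))  # True
```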
Airway Segmentation in Speech MRI Using the U-Net Architecture
We develop a fully automated airway segmentation method to segment the vocal tract airway from surrounding soft tissue in speech MRI. We train a U-Net architecture to learn the end-to-end mapping between a mid-sagittal image (at the input) and the manually segmented airway (at the output). We base our training on the open-source University of Southern California (USC) speech morphology MRI database, consisting of speakers producing a variety of sustained vowel and consonant sounds. Once trained, our model performs fast airway segmentation on unseen images, on the order of 210 ms/slice on a modern 12-core CPU. Using manual segmentation as a reference, we evaluate the performance of the proposed U-Net airway segmentation against an existing seed-growing segmentation and against manual segmentation by a different user. We demonstrate improved Dice similarity with the U-Net compared to seed-growing, and minor differences in Dice similarity between the U-Net and the manual segmentation from the second user.
Weakly Supervised Prostate TMA Classification Via Graph Convolutional Networks
Histology-based grade classification is clinically important for many cancer types in stratifying patients into distinct treatment groups. In prostate cancer, the Gleason score is a grading system used to measure the aggressiveness of prostate cancer from the spatial organization of cells and the distribution of glands. However, the subjective interpretation of the Gleason score often suffers from large interobserver and intraobserver variability. Previous work in deep learning-based objective Gleason grading requires manual pixel-level annotation. In this work, we propose a weakly supervised approach for grade classification in tissue micro-arrays (TMA) using graph convolutional networks (GCNs), in which we model the spatial organization of cells as a graph to better capture the proliferation and community structure of tumor cells. We learn the morphometry of each cell using a contrastive predictive coding (CPC)-based self-supervised approach. Using five-fold cross-validation, we demonstrate that our method can achieve a 0.9637±0.0131 AUC using only TMA-level labels. Our method also demonstrates a 36.36% improvement in AUC over standard GCNs with texture features and a 15.48% improvement over GCNs with VGG19 features. Our proposed pipeline can be used to objectively stratify low- and high-risk cases, reducing inter- and intra-observer variability and pathologist workload.
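To make the graph construction concrete: cells become nodes, spatial proximity defines edges, and a GCN layer aggregates neighbour features. A small sketch with a k-nearest-neighbour graph (the value of k, the feature sizes, and the single normalized GCN layer are illustrative assumptions, not the authors' architecture):

```python
import numpy as np

def knn_adjacency(coords, k=3):
    """Symmetric k-nearest-neighbour adjacency matrix from cell centroids."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-distances
    n = len(coords)
    adj = np.zeros((n, n))
    for i in range(n):
        adj[i, np.argsort(d[i])[:k]] = 1.0
    return np.maximum(adj, adj.T)               # symmetrize

def gcn_layer(adj, feats, weights):
    """One graph convolution: D^-1/2 (A + I) D^-1/2 aggregation + ReLU."""
    a_hat = adj + np.eye(len(adj))              # add self-loops
    deg = a_hat.sum(1)
    norm = a_hat / np.sqrt(np.outer(deg, deg))
    return np.maximum(norm @ feats @ weights, 0.0)

rng = np.random.default_rng(3)
cells = rng.random((20, 2))                     # 20 cell centroids in a patch
adj = knn_adjacency(cells, k=3)
h = gcn_layer(adj, rng.random((20, 8)), rng.standard_normal((8, 4)))
print(h.shape)  # (20, 4)
```

Pooling such node embeddings over the whole graph yields a single TMA-level prediction, which is what allows training from TMA-level labels alone.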
MR Imaging and Spectroscopy for Biomarker Characterization in Golden Retriever Muscular Dystrophy Tissue Samples
Custom double-tuned birdcage coils were constructed to enable concurrent evaluation of a number of NMR indices in the golden retriever muscular dystrophy (GRMD) model of Duchenne muscular dystrophy (DMD). Seven rectus femoris muscle samples from dogs ranging in age from 3 to 30 months were studied. 1H T1-weighted (T1w) and T2-weighted (T2w) images, 23Na images, and 31P spectra were acquired for each sample. The 1H T1w and T2w images showed a decrease in the T2w/T1w signal ratio for the four older (≥12 months) samples when compared to the younger samples. The other NMR indices unexpectedly showed no significant correlation with age. The collection time of the samples and varying levels of disease severity may have contributed to these results. Regardless, the custom coils and positioner developed to enable multi-nuclear studies will enable future work investigating NMR-based biomarkers in the numerous GRMD samples available to our group.
Dynamic Missing-Data Completion Reduces Leakage of Motion Artifact Caused by Temporal Filtering That Remains after Scrubbing
Functional magnetic resonance imaging (fMRI) is commonly used to better understand brain function. The data become contaminated with motion artifact when a subject moves during an fMRI acquisition, and numerous methods have been suggested to target motion artifacts in fMRI. One of these methods, "scrubbing", removes motion-corrupted volumes but must be performed after temporal filtering, since it creates temporal discontinuities. Thus, it does not prevent the spread of corrupted time samples from high-motion volumes to their neighbors during temporal filtering. To mitigate this spread, which we refer to as "leakage", we propose a novel method, Dynamic Missing-data Completion (DMC), that replaces motion-corrupted volumes with synthetic data before temporal filtering. We analyzed the effect of DMC on an exemplary time series from a resting-state fMRI (rsfMRI) scan and compared functional connectivity results of six rsfMRI scans from a single subject with different levels of subject motion. Our results suggest that DMC further reduces the motion contamination that remains after scrubbing: DMC reduced the standard deviation of the signal near scrubbed volumes by about 10% compared to scrubbing only, bringing this average closer to that of uncorrupted, motion-free volumes.
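The core idea, replace flagged volumes with plausible synthetic samples before filtering so that spikes cannot leak into their neighbours, can be demonstrated on a 1D time series. This sketch uses linear interpolation as the synthetic fill and a moving average as a stand-in for the temporal filter (both are simplifications; the paper's actual filling and filtering differ):

```python
import numpy as np

def fill_censored(ts, bad):
    """Replace motion-flagged samples by linear interpolation over good ones."""
    good = ~bad
    out = ts.copy()
    out[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(good), ts[good])
    return out

def moving_average(ts, width=5):
    """Toy temporal low-pass filter (stand-in for the bandpass used in fMRI)."""
    return np.convolve(ts, np.ones(width) / width, mode="same")

rng = np.random.default_rng(2)
ts = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.1 * rng.standard_normal(200)
bad = np.zeros(200, bool); bad[90:95] = True
ts_corrupt = ts.copy(); ts_corrupt[bad] += 10.0        # simulated motion spikes

naive = moving_average(ts_corrupt)                     # spikes leak to neighbours
dmc = moving_average(fill_censored(ts_corrupt, bad))   # interpolate first
good = ~bad
leak_naive = np.abs(naive[good] - moving_average(ts)[good]).max()
leak_dmc = np.abs(dmc[good] - moving_average(ts)[good]).max()
# leak_dmc is far smaller: filling before filtering keeps the corruption
# from spreading into the good samples that survive scrubbing.
```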
Confocal Imaging of Intercellular Calcium in HeLa Cells for Monitoring Drug-Response: Biophysical Framework for Visualization of the Time-Lapse Images
Recent advancements in biomedical imaging focus on fluorescence imaging using laser scanning confocal microscopy. However, high-resolution imaging of cellular activity remains considerably expensive for both in vitro and in vivo models. In this context, integrating mathematical modeling with imaging data analysis to predict cellular activity may aid the understanding of cell signaling. Here, we performed dynamic imaging using confocal microscopy and propose a model accounting for cell-to-cell connectivity that can predict the effect of a drug on Ca2+ oscillations. The proposed model consists of a large number of ordinary differential equations (ODEs) and uses an adjacency matrix containing coupling factors to capture the activity of cells with random arrangement. The results show that cell-to-cell connections play a crucial role in controlling the calcium oscillations through a diffusion-based mechanism. The present simulation tool can be used as a generalized framework to generate and visualize the time-lapse videos required for in vitro drug testing at various drug doses.
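The structure of such a model, one small ODE system per cell plus diffusive coupling terms weighted by an adjacency matrix, can be sketched with a generic relaxation oscillator (FitzHugh-Nagumo here) standing in for a full Ca2+ model; the oscillator choice, parameters, and coupling strength are all illustrative assumptions:

```python
import numpy as np

def simulate_coupled_cells(adj, n_steps=2000, dt=0.01, k=0.5):
    """Euler integration of diffusively coupled relaxation oscillators.

    Each cell follows a two-variable FitzHugh-Nagumo oscillator (a crude
    stand-in for a Ca2+ model); off-diagonal entries of `adj` couple cells
    through a diffusion-like term k * (adj @ x - deg * x).
    """
    n = len(adj)
    deg = adj.sum(1)
    rng = np.random.default_rng(4)
    x = rng.random(n)                 # cytosolic Ca2+-like variable
    y = rng.random(n)                 # recovery variable
    trace = np.empty((n_steps, n))
    for t in range(n_steps):
        dx = x - x ** 3 / 3 - y + k * (adj @ x - deg * x)
        dy = 0.08 * (x + 0.7 - 0.8 * y)
        x, y = x + dt * dx, y + dt * dy
        trace[t] = x
    return trace

chain = np.eye(5, k=1) + np.eye(5, k=-1)   # 5 cells coupled in a chain
trace = simulate_coupled_cells(chain)      # (2000, 5) time-lapse traces
```

Rendering each row of `trace` as a frame is essentially how a synthetic time-lapse video of the cell population would be generated.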
Dense Correlation Network for Automated Multi-Label Ocular Disease Detection with Paired Color Fundus Photographs
In ophthalmology, color fundus photography is an economical and effective tool for early-stage ocular disease screening. Since the left and right eyes are highly correlated, we utilize paired color fundus photographs (CFPs) for our task of automated multi-label ocular disease detection. We propose a Dense Correlation Network (DCNet) to exploit the dense spatial correlations between the paired CFPs. Specifically, DCNet is composed of a backbone convolutional neural network (CNN), a Spatial Correlation Module (SCM), and a classifier. The SCM captures the dense correlations between the features extracted from the paired CFPs in a pixel-wise manner and fuses the relevant feature representations. Experiments on a public dataset show that the proposed DCNet achieves better performance than the respective baselines regardless of the backbone CNN architecture.
Identification of Lung Tissue Patterns in Subjects with Chronic Obstructive Pulmonary Disease Using a Convolutional Deep Learning Model
We have developed a three-dimensional (3D) convolutional autoencoder (CAE) in conjunction with exploratory factor analysis to predict pulmonary functions and discover possible tissue patterns associated with alterations in pulmonary function.
OCT Segmentation using Convolutional Neural Network
OCT of the retina indicates the presence of macular edema and variations in the thickness of different layers of the retina and choroid. The size of the edema and the thickness of the choroid layers can be ascertained by proper segmentation of the OCT images. In this work, we propose a model using a Convolutional Neural Network (CNN) similar to the SegNet architecture for segmenting choroid layers and edema in OCT images. Our CNN model is an encoder-decoder architecture designed specifically for pixel-wise classification of images where boundary delineation is vital. To this end, we first train the CNN to obtain the pixel-wise labels of the choroid and edema. Once these labels are obtained, they are converted into a binary segmentation using morphological operations followed by edge detection. The results show good consistency with the ophthalmologist's labeling. The idea can be extended to develop a tool to detect retinal disorders automatically.
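The post-processing step (thresholded CNN labels cleaned by morphological closing, followed by boundary extraction) can be sketched as follows. The 4-connected structuring element, the 0.5 threshold, and the function names are illustrative assumptions, not the paper's exact operations.

```python
import numpy as np

def dilate(mask):
    """Binary dilation with a 4-connected (plus-shaped) structuring element."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def erode(mask):
    """Binary erosion as the complement of dilating the complement."""
    return ~dilate(~mask)

def close_and_edge(prob, thresh=0.5):
    """Threshold per-pixel scores, close small gaps, and return an edge map."""
    mask = prob >= thresh
    closed = erode(dilate(mask))          # morphological closing
    edges = closed & ~erode(closed)       # boundary pixels of the closed mask
    return closed, edges
```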
Bringing Machine Learning to the Clinic: Opportunities and Challenges
Machine learning, and especially deep learning, holds great promise to improve patient care. In several domains, algorithms perform as well as or better than fellowship-trained radiologists at identifying abnormalities in clinically acquired images. However, there are much broader applications beyond image analysis, such as patient selection and examination scheduling, image acquisition and reconstruction, using image data for prognostic purposes, and combining image data with information from electronic health records, laboratory data, and genetic data. Furthermore, for algorithms to be broadly accepted, there are many scenarios where it is important for the clinician that results are explainable. In addition, clinical deployment and workflow should be taken into consideration when designing an algorithm and bringing it to clinical practice. In my lecture I will focus on these aspects from a cardiovascular imaging perspective. [1] Hampe N, Wolterink JM, van Velzen SGM, Leiner T, Išgum I. Machine Learning for Assessment of Coronary Artery Disease in Cardiac CT: A Survey. Front Cardiovasc Med, 26 November 2019. https://doi.org/10.3389/fcvm.2019.00172 [2] Leiner T, Rueckert D, Suinesiaputra A, Baeßler B, Nezafat R, Išgum I, Young AA. Machine learning in cardiovascular magnetic resonance: basic concepts and applications. J Cardiovasc Magn Reson 21(1):61, 2019.
Shape Constrained CNNs for Hippocampus Sub-Field Segmentation
The segmentation of hippocampus subfields from MR images is challenging due to the small size of the organ, multiple interconnected regions, and low contrast. Motivated by the success of active shape models in segmentation, we propose a shape constrained CNN algorithm. We use a two-step strategy, where we first use an auto-encoder CNN to learn the anatomical shape priors from labels. After shape training, the decoder compactly represents the shape manifold; it generates the labels from a compact code. We then train an encoder CNN to predict the code from MR images. The decoder (shape generator) is fixed during the segmentation training.
An Effective Deep Learning Architecture Combination for Tissue Microarray Spots Classification of H&E Stained Colorectal Images
Tissue microarray (TMA) assessment of histomorphological biomarkers contributes to more accurate prediction of outcome for patients with colorectal cancer (CRC), a common disease. Unfortunately, a typical problem with the use of TMAs is that the material contained in each TMA spot changes as the TMA block is cut repeatedly. A re-classification of the content within each spot is therefore necessary for accurate biomarker evaluation. The major challenges lie in the high heterogeneity of TMA quality and of tissue characteristics in structure, size, appearance and tissue type. In this work, we propose an end-to-end deep learning framework for TMA spot classification into three classes: tumor, normal epithelium and other tissue types. It includes detection of TMA spots in an image, extraction of overlapping tiles from each TMA spot image, and a classification step built on two effective deep learning architectures, a convolutional neural network (CNN) and a Capsule Network, with prior information on the intended tissue type. A set of digitized H&E stained images from 410 CRC patients with clinicopathological information is used for validation of the proposed method. We show experimentally that our approach achieves state-of-the-art performance on several relevant CRC H&E tissue classification tasks and that these results are promising for use in clinical practice.
Modeling Heterogeneity in Feature Selection for MCI Classification
Conventional methods designed for Mild Cognitive Impairment (MCI) classification usually assume that the MCI subjects are homogeneous. However, recent discoveries indicate that MCI has heterogeneous neuropathological origins which may contribute to the sub-optimal performance of conventional methods. To compensate for the limitations of existing methods, we propose Maximum Margin Heterogeneous Feature Selection (MMHFS) by explicitly considering the heterogeneous distribution of MCI data. More specifically, the proposed method simultaneously performs unsupervised clustering discovery on MCI data and conducts discriminant feature selection to help classify MCI from Normal Control (NC). It is worth noting that these two processes can benefit from each other, thus enabling the proposed method to achieve better performance.
A Computational Diffusion MRI Framework for Biomarker Discovery in a Rodent Model of Post-Traumatic Epileptogenesis
Epilepsy is a debilitating neurological disorder that directly impacts millions of people and exerts a tremendous economic burden on society at large. While traumatic brain injury (TBI) is a common cause, there remain many open questions regarding its pathological mechanism. The goal of the Epilepsy Bioinformatics Study for Antiepileptogenic Therapy (EpiBioS4Rx) is to identify epileptogenic biomarkers through a comprehensive project spanning multiple species, modalities, and research institutions; in particular, diffusion magnetic resonance imaging (MRI) is a critical component, as it probes tissue microstructure and structural connectivity. The project includes in vivo imaging of a rodent fluid-percussion model of TBI, and we developed a computational diffusion MRI framework for EpiBioS4Rx which employs advanced techniques for preprocessing, modeling, spatial normalization, region analysis, and tractography to derive imaging metrics at group and individual levels. We describe the system's design, present characteristic results from a longitudinal cohort, and discuss its role in biomarker discovery and further studies.
Noise Redistribution and 3D Shearlet Filtering for Speckle Reduction in Optical Coherence Tomography
Optical coherence tomography (OCT) is a micrometer-resolution, cross-sectional imaging modality for biological tissue. It has been widely applied for retinal imaging in ophthalmology. However, large speckle noise affects the analysis of OCT retinal images and their diagnostic utility. In this article, we present a new speckle reduction algorithm for 3D OCT images. The OCT speckle noise is approximated as Poisson-distributed, which is difficult to remove owing to its signal-dependent character. Our algorithm therefore consists of two steps: first, a variance-stabilizing transformation, the Anscombe transformation, is applied to redistribute the multiplicative speckle noise into additive Gaussian noise; then the transformed data are decomposed and filtered in the 3D shearlet domain, which represents the edge information of the retinal layers better than wavelets and curvelets. The proposed method is evaluated on three parameters using high-definition B-scans as the ground truth. Quantitative experimental results show that our method yields the best evaluation parameters and the highest edge contrast compared with state-of-the-art OCT denoising algorithms.
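The first step, the Anscombe transformation, has a simple closed form. The sketch below stabilizes the variance of Poisson data to approximately one and includes the basic algebraic inverse (an unbiased exact inverse exists but is more involved); the function names are illustrative.

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing transform for (approximately) Poisson data:
    A(x) = 2 * sqrt(x + 3/8), whose output has variance close to 1."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse of the Anscombe transform."""
    return (np.asarray(y, dtype=float) / 2.0) ** 2 - 3.0 / 8.0
```

After filtering in the transformed domain, the inverse maps the denoised data back to the original intensity scale.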
Semi-Supervised Brain Lesion Segmentation Using Training Images with and without Lesions
Semi-supervised approaches have been developed to improve brain lesion segmentation based on convolutional neural networks (CNNs) when annotated data is scarce. Existing methods have exploited unannotated images with lesions to improve the training of CNNs. In this work, we explore semi-supervised brain lesion segmentation by further incorporating images without lesions. Specifically, using information learned from annotated and unannotated scans with lesions, we propose a framework to generate synthesized lesions and their annotations simultaneously. Then, we attach them to normal-appearing scans using a statistical model to produce synthesized training samples, which are used together with true annotations to train CNNs for segmentation. Experimental results show that our method outperforms competing semi-supervised brain lesion segmentation approaches.
A Deep Learning-Facilitated Radiomics Solution for the Prediction of Lung Lesion Shrinkage in Non-Small Cell Lung Cancer Trials
Herein we propose a deep learning-based approach for the prediction of lung lesion response based on radiomic features extracted from clinical CT scans of patients in non-small cell lung cancer trials. The approach starts with the classification of lung lesions from the set of primary and metastatic lesions at various anatomic locations. Focusing on the lung lesions, we perform automatic segmentation to extract their 3D volumes. Radiomic features are then extracted from the lesion on the pre-treatment scan and the first follow-up scan to predict which lesions will shrink at least 30% in diameter during treatment (either pembrolizumab or combinations of chemotherapy and pembrolizumab), which is defined as a partial response by the Response Evaluation Criteria In Solid Tumors (RECIST) guidelines. A 5-fold cross validation on the training set led to an AUC of 0.84 ± 0.03, and the prediction on the testing dataset reached an AUC of 0.73 ± 0.02 for the outcome of 30% diameter shrinkage.
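The prediction target, at least 30% diameter shrinkage (RECIST's partial-response threshold), reduces to a one-line check; the helper name below is hypothetical.

```python
def recist_partial_response(baseline_mm, followup_mm):
    """True if the lesion diameter shrank by at least 30% from baseline,
    the RECIST threshold for a partial response used as the prediction
    target (function name is illustrative)."""
    if baseline_mm <= 0:
        raise ValueError("baseline diameter must be positive")
    return (baseline_mm - followup_mm) / baseline_mm >= 0.30
```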
Efficient Detection of EMVI in Rectal Cancer via Richer Context Information and Feature Fusion
It is vital to automatically detect Extramural Vascular Invasion (EMVI) in rectal cancer before surgery, as this helps guide the patient's treatment planning. Nevertheless, there are few studies on EMVI detection through magnetic resonance imaging (MRI). Moreover, since EMVI has three challenging characteristics (highly variable appearance, relatively small size, and shapes similar to surrounding tissues), current deep learning based methods cannot be directly applied. In this paper, we propose a novel and efficient EMVI detection framework with three main contributions. Firstly, we introduce a self-attention module to capture dependencies ranging from local to global. Secondly, we design a parallel atrous convolution (PAC) block and a global pyramid pooling (GPP) module to encode richer context information at multiple scales. Thirdly, we fuse the whole-scene and local-region information together to improve the feature representation ability. Experimental results show that our framework significantly improves detection accuracy and outperforms other state-of-the-art methods.
Blind Deconvolution of Fundamental and Harmonic Ultrasound Images
Restoring the tissue reflectivity function (TRF) from ultrasound (US) images is an extensively explored research field. It is well known that human tissues and contrast agents behave non-linearly when interacting with US waves. In this work, we investigate this non-linearity and the interest of including harmonic US images in the TRF restoration process. To this end, we introduce a new US image restoration method that takes advantage of the fundamental and harmonic components of the observed radiofrequency (RF) image. The depth information contained in the fundamental component and the good resolution of the harmonic image are combined to create an image with better properties than the fundamental and harmonic images considered separately. Under the hypothesis of weak scattering, the RF image is modeled as the 2D convolution between the TRF and the system point spread function (PSF). Based on this model, an inverse problem is formulated that jointly estimates the TRF and the PSF. The interest of the proposed blind deconvolution algorithm is shown through an in vivo result and a comparison with a conventional US restoration method.
Signet Ring Cells Detection in Histology Images with Similarity Learning
The detection of signet ring cells in histology images is of great value in clinical practice. However, several factors, such as appearance variations and a lack of well-labelled data, make it a challenging task. Considering the intrinsic characteristics of signet ring cell images, a dedicated similarity learning network is designed in this paper to help discover distinctive feature representations for ring cells. Specifically, we adapt the region proposal network and add an embedding layer to enable similarity learning for training the model. Experimental results show that similarity learning strengthens the performance of the state-of-the-art and makes our approach well-suited to the task of signet ring cell detection.
A Robust Network Architecture to Detect Normal Chest X-Ray Radiographs
We propose a novel deep neural network architecture for normalcy detection in chest X-ray images. This architecture treats the problem as fine-grained binary classification in which the normal cases are well-defined as a class while all other cases are left in the broad class of abnormal. It employs several components that allow generalization and prevent overfitting across demographics. The model is trained and validated on a large public dataset of frontal chest X-ray images. It is then tested independently on images from a clinical institution with differing patient demographics, using a three-radiologist consensus for ground truth labeling. The model provides an area under the ROC curve of 0.96 when tested on 1271 images. Using this model, we can automatically remove nearly a third of disease-free chest X-ray screening images from the workflow without introducing any false negatives, raising the potential of expediting radiology workflows in hospitals in the future.
Hessian Splines for Scanning Transmission X-Ray Microscopy
Scanning transmission X-ray microscopy (STXM) produces images in which each pixel value is related to the measured attenuation of an X-ray beam. In practice, the location of the illuminated region does not exactly match the desired uniform pixel grid. This error can be measured using an interferometer. In this paper, we propose a spline-based reconstruction method for STXM which takes these position errors into account. We achieve this by formulating the reconstruction problem as a continuous-domain inverse problem in a spline basis, and by using Hessian nuclear-norm regularization. We solve this problem using the standard ADMM algorithm, and we demonstrate the pertinence of our approach on both simulated and real STXM data.
Earthmover-Based Manifold Learning for Analyzing Molecular Conformation Spaces
In this paper, we propose a novel approach for manifold learning that combines the Earthmover's distance (EMD) with the diffusion maps method for dimensionality reduction. We demonstrate the potential benefits of this approach for learning shape spaces of proteins and other flexible macromolecules using a simulated dataset of 3-D density maps that mimic the non-uniform rotary motion of ATP synthase. Our results show that EMD-based diffusion maps require far fewer samples to recover the intrinsic geometry than the standard diffusion maps algorithm that is based on the Euclidean distance. To reduce the computational burden of calculating the EMD for all volume pairs, we employ a wavelet-based approximation to the EMD which reduces the computation of the pairwise EMDs to a computation of pairwise weighted-l1 distances between wavelet coefficient vectors.
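The wavelet approximation can be illustrated in 1-D: a multilevel Haar transform followed by a weighted-l1 distance in which coarser scales receive larger weights, mirroring the cost of moving mass farther. This is a minimal sketch with illustrative dyadic weights, not the exact weighting scheme of the wavelet-EMD literature.

```python
import numpy as np

def haar_coeffs(x):
    """Multi-level 1-D Haar transform; assumes len(x) is a power of two.
    Returns the final approximation and the detail coefficients per level
    (level 0 is the finest scale)."""
    x = np.asarray(x, dtype=float)
    details = []
    while len(x) > 1:
        even, odd = x[0::2], x[1::2]
        details.append((even - odd) / np.sqrt(2.0))
        x = (even + odd) / np.sqrt(2.0)
    return x, details

def wavelet_emd(p, q):
    """Weighted-l1 distance between Haar coefficients of two histograms,
    a linear-time proxy for the Earthmover's distance: coarser scales get
    larger (here dyadic) weights."""
    ap, dp = haar_coeffs(p)
    aq, dq = haar_coeffs(q)
    dist = 0.0
    for level, (cp, cq) in enumerate(zip(dp, dq)):
        dist += (2.0 ** level) * np.abs(cp - cq).sum()
    return dist + (2.0 ** len(dp)) * np.abs(ap - aq).sum()
```

Unlike the plain Euclidean distance, this proxy grows with how far mass must travel, which is the property the EMD-based diffusion maps exploit.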
ASCNet: Adaptive-Scale Convolutional Neural Networks for Multi-Scale Feature Learning
Extracting multi-scale information is key to semantic segmentation. However, classic convolutional neural networks (CNNs) encounter difficulties in multi-scale information extraction: expanding the convolutional kernel incurs high computational cost, and max pooling sacrifices image information. The recently developed dilated convolution solves these problems, but with the limitation that the dilation rates are fixed, so the receptive field cannot fit all objects of different sizes in the image. We propose an adaptive-scale convolutional neural network (ASCNet), which introduces a 3-layer convolution structure into the end-to-end training to adaptively learn an appropriate dilation rate for each pixel in the image. Such pixel-level dilation rates produce optimal receptive fields so that the information of objects with different sizes can be extracted at the corresponding scale. We compare segmentation results using the classic CNN, the dilated CNN and the proposed ASCNet on two types of medical images (the Herlev dataset and the SCD RBC dataset). The experimental results show that ASCNet achieves the highest accuracy. Moreover, the automatically generated dilation rates are positively correlated with the sizes of the objects, confirming the effectiveness of the proposed method.
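The limitation of fixed dilation rates is easy to quantify: with stride-1 layers, each convolution adds (k - 1) * d pixels to the receptive field, so the field is set once the dilations are chosen rather than adapted per pixel. A small helper (hypothetical name) makes the arithmetic concrete.

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field (in pixels, along one axis) of a stack of stride-1
    dilated convolutions: each layer of kernel size k and dilation d adds
    (k - 1) * d pixels to the field."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf
```

For example, three 3x3 layers with dilations 1, 2, 4 see 15 pixels across, versus 7 without dilation; ASCNet's learned per-pixel rates let this effective size vary across the image instead.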
Self-Supervision vs. Transfer Learning: Robust Biomedical Image Analysis against Adversarial Attacks
Deep neural networks are being increasingly used for disease diagnosis and lesion localization in biomedical images. However, training deep neural networks not only requires large sets of expensive ground truth (image labels or pixel annotations); the networks are also susceptible to adversarial attacks. Transfer learning alleviates the former problem to some extent by pre-training the lower layers of a neural network on a large labeled dataset from a different domain (e.g., ImageNet). In transfer learning, the final few layers are trained on the target domain (e.g., chest X-rays), while the pre-trained layers are only fine-tuned or even kept frozen. An alternative to transfer learning is self-supervised learning, in which a supervised task is created by transforming the unlabeled images from the target domain itself. The lower layers are pre-trained to invert the transformation in some sense. In this work, we show that self-supervised learning combined with adversarial training offers additional advantages over transfer learning as well as vanilla self-supervised learning. In particular, adversarial training leads both to a reduction in the amount of supervised data required for comparable accuracy and to a natural robustness to adversarial attacks. We support our claims with experiments on two modalities and tasks -- classification of chest X-rays and segmentation of MRI images -- as well as two types of adversarial attacks -- PGD and FGSM.
Ising-GAN: Annotated Data Augmentation with a Spatially Constrained Generative Adversarial Network
Data augmentation is a popular technique in which new dataset samples are artificially synthesized to aid the training of learning-based algorithms and avoid overfitting. Methods based on Generative Adversarial Networks (GANs) have recently rekindled interest in research on new techniques for data augmentation. In this paper we propose a new GAN-based model for data augmentation, comprising a suitable Markov Random Field-based spatial constraint that encourages synthesis of spatially smooth outputs. Oriented towards use with medical imaging sets where a localization/segmentation annotation is available, our model can simultaneously produce artificial annotations. We gauge performance numerically by measuring the performance of a U-Net trained to detect cells in microscopy images using the augmented dataset. Numerical trials, as well as qualitative results, validate the usefulness of our model.
A List-mode OSEM-based Attenuation and Scatter Compensation Method for SPECT
Reliable attenuation and scatter compensation (ASC) is a prerequisite for quantification tasks and beneficial for visual interpretation tasks in single-photon emission computed tomography (SPECT) imaging. For this purpose, we develop a SPECT reconstruction method that uses the entire SPECT emission data, i.e. data in both the photopeak and scattered windows, acquired in list-mode format, to perform ASC. Further, the method uses the energy attribute of the detected photons while performing the reconstruction. We implemented a GPU version of this method using an ordered subsets expectation maximization (OSEM) algorithm for faster convergence and quicker computation. The performance of the method was objectively evaluated using realistic simulation studies on the task of estimating activity uptake in the caudate, putamen, and globus pallidus regions of the brain in a dopamine transporter (DaT)-SPECT study. The method yielded improved performance in terms of bias, variance, and mean square error compared to existing ASC techniques in quantifying activity in all three regions. Overall, the results provide promising evidence of the potential of the proposed technique for ASC in SPECT imaging.
Short Trajectory Segmentation with 1D Unet Framework: Application to Secretory Vesicle Dynamics
The study of protein transport in living cells requires automated techniques to capture and quantify the dynamics of proteins packaged into secretory vesicles. The movement of the vesicles is not consistent along the trajectory; therefore, the quantitative study of their dynamics requires trajectory segmentation. This paper explores quantification of such vesicle dynamics and introduces a novel 1D U-Net based trajectory segmentation. Unlike existing mean squared displacement based methods, our proposed framework does not require long trajectories for effective segmentation. Moreover, as our approach provides segmentation within each sliding window, it can effectively capture even short segments. The approach is evaluated on data acquired by spinning disk microscopy imaging of protein trafficking in Drosophila epithelial cells. The extracted trajectories have lengths ranging from 5 (short tracks) to 135 (long tracks) points. The proposed approach achieves 77.7% accuracy for trajectory segmentation.
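The sliding-window idea (segment each short window, then merge the per-point decisions) can be sketched as follows; the window width, step, and averaging merge are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def sliding_windows(track, width=5, step=1):
    """Cut a trajectory (sequence of T points) into overlapping windows;
    the last window is clamped to the track end, so tracks as short as
    `width` points still yield one window."""
    T = len(track)
    if T < width:
        raise ValueError("track shorter than window width")
    starts = list(range(0, T - width + 1, step))
    if starts[-1] != T - width:
        starts.append(T - width)
    return [(s, track[s:s + width]) for s in starts]

def merge_window_labels(T, window_preds, width=5):
    """Average per-window, per-point scores back onto the full track and
    threshold, giving each point a consensus label."""
    votes = np.zeros(T)
    counts = np.zeros(T)
    for s, pred in window_preds:
        votes[s:s + width] += pred
        counts[s:s + width] += 1
    return (votes / counts) >= 0.5
```

In the paper's setting, `window_preds` would come from the 1D U-Net applied to each window; here it is left abstract.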
Adaptive Weighted Minimax-Concave Penalty Based Dictionary Learning for Brain MR Images
We consider adaptive weighted minimax-concave (WMC) penalty as a generalization of the minimax-concave penalty (MCP) and vector MCP (VMCP). We develop a computationally efficient algorithm for sparse recovery considering the WMC penalty. Our algorithm in turn employs the fast iterative soft-thresholding algorithm (FISTA) but with the key difference that the threshold is adapted from one iteration to the next. The new sparse recovery algorithm when used for dictionary learning has a better representation capability as demonstrated by an application to magnetic resonance image denoising. The denoising performance turns out to be superior to the state-of-the-art techniques considering the standard performance metrics namely peak signal-to-noise ratio (PSNR) and structural similarity index metric (SSIM).
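The MCP proximal operator is the firm-threshold function, which an iterative scheme like FISTA can use in place of soft thresholding. The sketch below is the standard operator for a fixed threshold; the abstract's algorithm additionally adapts the threshold between iterations.

```python
import numpy as np

def mcp_threshold(x, lam, gamma):
    """Proximal operator of the minimax-concave penalty (firm threshold),
    for gamma > 1: zero below lam, linear interpolation up to gamma*lam,
    identity above. Unlike soft thresholding, large coefficients are left
    unbiased."""
    x = np.asarray(x, dtype=float)
    return np.where(
        np.abs(x) <= lam,
        0.0,
        np.where(
            np.abs(x) <= gamma * lam,
            np.sign(x) * gamma * (np.abs(x) - lam) / (gamma - 1.0),
            x,
        ),
    )
```

The operator is continuous: at |x| = gamma*lam the middle branch equals x, which is why it can replace the soft threshold without introducing jumps.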
A High-Powered Brain Age Prediction Model Based on Convolutional Neural Network
Predicting individual chronological age based on neuroimaging data is very promising and important for understanding the trajectory of normal brain development. In this work, we propose a new model to predict brain age ranging from 12 to 30 years, based on structural magnetic resonance imaging and a deep learning approach with reduced model complexity and computational cost. We found that this model can predict brain age accurately not only in the training set (N = 1721, mean absolute error of 1.89 in 10-fold cross validation) but also in an independent validation set (N = 226, mean absolute error of 1.96), substantially outperforming previously published models. Given the considerable accuracy and generalizability, it is promising to further deploy our model in the clinic and help investigate the pathophysiology of neurodevelopmental disorders.
SU-Net and DU-Net Fusion for Tumour Segmentation in Histopathology Images
In this work, a fusion framework is proposed for automatic cancer detection and segmentation in whole-slide histopathology images. The framework includes two parts of fusion: multi-scale fusion and sub-dataset fusion. Because histopathological images of a particular cancer type often demonstrate large morphological variance, the performance of an individually trained network is usually limited. We develop a fusion model that integrates two types of U-net structures, Shallow U-net (SU-net) and Deep U-net (DU-net), trained with a variety of re-scaled images and different subsets of images, and finally ensembles a unified output. Smoothing and noise elimination are conducted using convolutional Conditional Random Fields (CRFs). The proposed model is validated on the Automatic Cancer Detection and Classification in Whole-slide Lung Histopathology (ACDC@LungHP) challenge at ISBI 2019 and the Digestive-System Pathological Detection and Segmentation Challenge 2019 (DigestPath 2019) at MICCAI 2019. Our method achieves a dice coefficient of 0.7968 in ACDC@LungHP and 0.773 in DigestPath 2019, and our ACDC@LungHP result ranked third on the challenge leaderboard.
Improved Motion Correction for Functional MRI Using an Omnibus Regression Model
Head motion during functional Magnetic Resonance Imaging acquisition can significantly contaminate the neural signal and introduce spurious, distance-dependent changes in signal correlations. This can heavily confound studies of development, aging, and disease. Previous approaches to suppress head motion artifacts have involved sequential regression of nuisance covariates, but this has been shown to reintroduce artifacts. We propose a new motion correction pipeline using an omnibus regression model that avoids this problem by simultaneously capturing multiple artifact sources using the best performing algorithms for each artifact. We quantitatively evaluate its motion artifact suppression performance against sequential regression pipelines using a large heterogeneous dataset (n=151) which includes high-motion subjects and multiple disease phenotypes. The proposed concatenated regression pipeline significantly reduces the association between head motion and functional connectivity while significantly outperforming the traditional sequential regression pipelines in eliminating distance-dependent head motion artifacts.
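The difference between sequential and omnibus nuisance regression can be demonstrated with ordinary least squares: when two nuisance sets are correlated, regressing them out one after another leaves the residual correlated with the first set again, while a single joint model removes both. A minimal sketch, with illustrative helper names standing in for the pipelines' regression steps:

```python
import numpy as np

def sequential_clean(y, X1, X2):
    """Regress out X1, then regress X2 out of the residual (the sequential
    pipeline the abstract argues against)."""
    r = y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]
    return r - X2 @ np.linalg.lstsq(X2, r, rcond=None)[0]

def omnibus_clean(y, X1, X2):
    """Regress out all nuisance covariates in one joint model, so the
    residual is orthogonal to every covariate simultaneously."""
    X = np.column_stack([X1, X2])
    return y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
```

The second regression in the sequential pipeline removes components correlated with X2, but since X2 shares variance with X1, this step reintroduces an X1-correlated component; the omnibus residual has no such leakage.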