Classification of Echocardiogram View Using a Convolutional Neural Network

Hannah Ornstein, Dan Adam

Abstract


The standard views in echocardiography capture distinct slices of the heart that can be used to assess cardiac function. Determining the view of a given echocardiogram is the first step in analysis. To automate this step, a deep network with the ResNet-18 architecture was used to classify six standard views. The network parameters were pre-trained on the ImageNet database, and prediction quality was assessed with a visualization tool known as gradient-weighted class activation mapping (Grad-CAM). The network was able to distinguish the three parasternal short axis views from the three apical views with ~99% accuracy. Ten-fold cross-validation showed 97%-98% accuracy for the apical view subcategories (the apical two-, three-, and four-chamber views). Grad-CAM images of these views highlighted features similar to those used by experts in manual classification. The parasternal short axis subcategories (the apex, mitral valve, and papillary muscle levels) had accuracies of 54%-73%. Grad-CAM images illustrate that the network classifies most parasternal short axis views as belonging to the papillary muscle level. Additional training images and the incorporation of time-dependent features would likely improve accuracy for the parasternal short axis views. Overall, a convolutional neural network can be used to reliably classify echocardiogram views.
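
As a minimal sketch of the approach the abstract describes (not the authors' released code), the model setup and Grad-CAM computation might look like the following in PyTorch. The class-label strings, the choice of model.layer4[-1] as the target layer, and the grad_cam helper are illustrative assumptions; a recent torchvision is assumed for the pre-trained weights API.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision import models

    # Six standard views per the abstract; label strings are assumed.
    VIEW_CLASSES = ["apical 2-chamber", "apical 3-chamber", "apical 4-chamber",
                    "PSAX apex", "PSAX mitral valve", "PSAX papillary muscle"]

    # ResNet-18 pre-trained on ImageNet, with the 1000-class head
    # replaced by a six-class echocardiogram view head.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, len(VIEW_CLASSES))

    # Hooks capture the activations and gradients of the last convolutional
    # block, which Grad-CAM combines into a class-discriminative heat map.
    activations, gradients = {}, {}
    target_layer = model.layer4[-1]  # assumed target layer for ResNet-18
    target_layer.register_forward_hook(
        lambda m, i, o: activations.update(value=o.detach()))
    target_layer.register_full_backward_hook(
        lambda m, gi, go: gradients.update(value=go[0].detach()))

    def grad_cam(image):
        """image: (1, 3, H, W) tensor; returns an (H, W) heat map in [0, 1]."""
        model.eval()
        logits = model(image)
        model.zero_grad()
        logits[0, logits.argmax()].backward()  # gradient of the top class score
        # Weight each activation channel by its spatially averaged gradient.
        weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
        cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
        # Upsample to the input resolution and normalize to [0, 1].
        cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear")
        cam = cam - cam.min()
        return (cam / (cam.max() + 1e-8)).squeeze()

The heat map can be overlaid on the input frame to check, as in the paper, whether the highlighted regions match the anatomical landmarks experts use for manual view classification.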




DOI: https://doi.org/10.5430/air.v11n1p1



Artificial Intelligence Research

ISSN 1927-6974 (Print)   ISSN 1927-6982 (Online)
