Incorporating Artificial Intelligence Into Pathology

The medical industry has been harnessing AI for the past decade or so. We will look at several approaches to artificial intelligence in pathology that use whole slide images.

In this article, we will analyze how to deal with problems such as classification and segmentation for whole slide images (WSIs), which can provide pathologists with insights quickly!

Why Use AI in Pathology?

The use of digital pathology has become very prominent since the beginning of the information technology era, and computational AI in pathology could open doors for many applications. Due to recent advances, it has become easier to develop image-based diagnostic, prognostic, and predictive algorithms.

Some of the main tasks performed by pathologists are classification, segmentation, and quantification. The technology enhances these functions dramatically. In addition to yielding more accurate results, investing in AI helps pathologists become more productive by reducing human error, improving workloads, and enabling quicker patient turnaround times. Additionally, pipelines are easier to automate when working with large numbers of images.

A Brief Introduction to WSI

Whole slide imaging (WSI), also known as virtual microscopy, aims to mimic conventional microscopy in a computer-generated manner. This makes storing the images and building image-based solutions hassle-free.

The approaches described below are general and may have different use cases in pathology. We will look at both classification and segmentation of WSIs.

Normalizing H&E

If you are working with H&E images, you may need to follow an additional set of processing steps to convert images from RGB to optical density (OD) space.

https://www.youtube.com/watch?v=yUrwEYgZUsA

https://www.cs.unc.edu/~mn/sites/default/files/macenko2009.pdf
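As a rough illustration, here is a minimal numpy sketch of the RGB-to-OD conversion that stain-normalization methods such as Macenko's build on. The background intensity Io=240 and the small epsilon are assumed defaults chosen for this example, not values prescribed by the paper.

```python
import numpy as np

def rgb_to_od(rgb, Io=240, eps=1e-6):
    """Convert an RGB H&E image (uint8, HxWx3) to optical density (OD) space.

    OD = -ln((I + eps) / Io), where Io is the assumed background intensity.
    Pixels close to the white background end up with OD values near zero.
    """
    rgb = rgb.astype(np.float64)
    return -np.log((rgb + eps) / Io)

def od_to_rgb(od, Io=240):
    """Invert the OD transform back to an RGB uint8 image."""
    rgb = Io * np.exp(-od)
    return np.clip(rgb, 0, 255).astype(np.uint8)
```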

Classification

A WSI's resolution is usually large, often exceeding 15,000 pixels per side. Training even a simple classifier on the entire image would demand enormous computational power and memory.

First Approach

If the classification features are clear, then resizing the WSI and running a vanilla CNN classifier will do the trick! It is the simplest approach and takes the least amount of time. The tradeoff is that the model architecture and image resolution must be chosen based on factors such as the available computational power and memory.
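As a rough sketch, assuming the WSIs have already been downsampled to a manageable size (512x512 here is an arbitrary choice) and that we are classifying into two classes, a vanilla Keras CNN could look something like this:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed input size after aggressively downsampling each WSI (e.g. with PIL/OpenCV).
IMG_SIZE = 512

def build_vanilla_cnn(num_classes=2):
    """A small CNN classifier applied to the resized whole slide image."""
    model = models.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)),
        layers.Rescaling(1.0 / 255),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```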

Second Approach

This approach tiles, or patches, the whole slide images.

Libraries such as CLAM can extract tissue patches from WSIs. Alternatively, pathologist intervention may be required to annotate the regions of interest and obtain these annotations. We then select tiles within the annotated regions for training and validation.

Classification is applied to each tile of a given image, and ultimately the tile-level predictions are aggregated to determine the class of the WSI.

The advantage is that tiling creates many training images from a handful of WSIs. The drawback is that it takes more time, since additional processing is required to generate the tiles.
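As a minimal sketch of the tiling step, the snippet below reads fixed-size tiles from level 0 of a slide with the openslide library and keeps only tiles that appear to contain tissue. The 512-pixel tile size and the simple mean-intensity background filter are assumptions for illustration, not requirements.

```python
import numpy as np
import openslide

def tile_slide(slide_path, tile_size=512, background_threshold=220):
    """Yield (x, y, tile) triples of RGB tiles that contain tissue.

    A tile is discarded if its mean intensity is close to white,
    i.e. it is mostly empty glass/background.
    """
    slide = openslide.OpenSlide(slide_path)
    width, height = slide.dimensions  # level-0 dimensions

    for y in range(0, height - tile_size + 1, tile_size):
        for x in range(0, width - tile_size + 1, tile_size):
            region = slide.read_region((x, y), 0, (tile_size, tile_size))
            tile = np.array(region.convert("RGB"))
            if tile.mean() < background_threshold:  # crude tissue check
                yield x, y, tile
```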

Model

Transfer learning is preferable here because of the complex features of these images. Pre-trained weights from models such as VGG16 or ResNet are suitable, or other open-source pre-trained models can be used depending on the use case. Transfer-learning models also tend to converge faster.
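For instance, a minimal Keras sketch of transfer learning on tiles might freeze an ImageNet-pretrained ResNet50 backbone and train only a small classification head. The tile size and the two-class head are assumptions for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input

TILE_SIZE = 512  # assumed tile dimension

def build_transfer_classifier(num_classes=2):
    """ResNet50 backbone with frozen ImageNet weights plus a small trainable head."""
    backbone = ResNet50(include_top=False, weights="imagenet",
                        input_shape=(TILE_SIZE, TILE_SIZE, 3))
    backbone.trainable = False  # keep the pre-trained features fixed at first

    inputs = layers.Input(shape=(TILE_SIZE, TILE_SIZE, 3))
    x = preprocess_input(inputs)
    x = backbone(x, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Once the head has converged, the backbone can optionally be unfrozen and fine-tuned with a lower learning rate.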

Instead of using pre-trained weights from VGG16/ResNet, we can also build a CNN model from scratch. This is effective if the image features are prominent, visible, and easily distinguishable. The model will also be smaller and will not require heavy resource usage.

Segmentation

Segmentation highlights the desired parts of the WSI, such as tumors, tissue, and lesions, which helps the pathologist make quick decisions.

Segmentation is pixel-level classification, where a mask is generated over a given set of classes. Tiling is necessary to perform segmentation on high-resolution images such as WSIs.

Generally, the process is as follows:

Tiled Approach to Segmentation (Ignoring the Background)

  • Create a mask for the WSI with the help of annotations.
  • Run pre-processing (if any) and tile the WSIs and their respective masks.
  • Train the model on the tiled data.
  • Run post-processing (if any) and then stitch the predicted tile masks back into a full WSI mask (a sketch of the tiling and stitching steps follows this list).
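A minimal numpy sketch of the tiling and stitching steps, assuming the WSI and its mask have already been resized so their dimensions are multiples of the tile size (the 512 tile size is an assumed value):

```python
import numpy as np

def tile_pairs(image, mask, tile_size=512):
    """Split a WSI and its mask (same HxW) into aligned, non-overlapping tiles.

    Assumes the dimensions are already multiples of tile_size.
    Returns a list of (image_tile, mask_tile, (y, x)) triples.
    """
    h, w = mask.shape[:2]
    tiles = []
    for y in range(0, h, tile_size):
        for x in range(0, w, tile_size):
            tiles.append((image[y:y + tile_size, x:x + tile_size],
                          mask[y:y + tile_size, x:x + tile_size],
                          (y, x)))
    return tiles

def stitch_masks(pred_tiles, full_shape, tile_size=512):
    """Reassemble predicted tile masks (with their (y, x) offsets) into one WSI mask."""
    full_mask = np.zeros(full_shape, dtype=np.uint8)
    for pred, (y, x) in pred_tiles:
        full_mask[y:y + tile_size, x:x + tile_size] = pred
    return full_mask
```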

It is necessary to establish a standard for the WSI resolution/dimensions. When training a segmentation model, conforming to a single input shape should not be overlooked. Therefore, when tiling the native WSIs, we must make sure to resize them beforehand so that their dimensions are divisible by the tile size.

To clarify, suppose we have three WSIs with different resolutions: 15640x15640, 15360x16896, and 17920x12000. If the input size or tile dimension is 512x512, these dimensions are not all divisible by 512. One way to handle this would be to resize these WSIs to 15360x15360, which is a multiple of 512.
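As a small illustrative sketch (an assumption about how one might pick such a target), rounding a dimension down to the nearest multiple of the tile size yields the 15360 used above:

```python
def floor_to_multiple(dim, tile_size=512):
    """Round a WSI dimension down to the nearest multiple of the tile size."""
    return (dim // tile_size) * tile_size

# floor_to_multiple(15640) -> 15360, floor_to_multiple(16896) -> 16896
```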

This works well when the resolutions of the different images are close. But when the difference is significant, as with the third slide where one dimension is only 12000, the resized image will be skewed and may lose features that are important for segmentation. Ideally, the resolution should be the same for all WSIs.

Similar to the second approach to classification, we can choose the relevant tile for the use case.

Model

Several segmentation model architectures have been implemented, but the most popular are UNet and Mask R-CNN. These are perfectly capable of performing instance/semantic segmentation on images.

UNet Model Architecture

UNet was created to perform segmentation on medical images. The first advantage of using UNet is that it uses global location and context at the same time. Secondly, it works with very few training samples and provides good performance for segmentation tasks.
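For reference, here is a scaled-down Keras sketch of a UNet-style encoder-decoder; the tile size, filter counts, and class count are assumptions for illustration, not the exact original architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters):
    """Two 3x3 convolutions with ReLU, as in UNet-style blocks."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(512, 512, 3), num_classes=2):
    """A small UNet: contracting path, bottleneck, and expanding path with skip connections."""
    inputs = layers.Input(shape=input_shape)

    # Encoder (contracting path)
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)
    c3 = conv_block(p2, 128)
    p3 = layers.MaxPooling2D()(c3)

    # Bottleneck
    b = conv_block(p3, 256)

    # Decoder (expanding path) with skip connections
    u3 = layers.Conv2DTranspose(128, 2, strides=2, padding="same")(b)
    c4 = conv_block(layers.Concatenate()([u3, c3]), 128)
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(c4)
    c5 = conv_block(layers.Concatenate()([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c5)
    c6 = conv_block(layers.Concatenate()([u1, c1]), 32)

    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(c6)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```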

However, the main advantage of UNet over Mask R-CNN would be its simplicity; Mask R-CNN requires additional processing steps. Overall, there is no significant difference between UNet and Mask R-CNN results, though this may be specific to the use case.

Regardless of the model architecture, segmentation is a resource-demanding task. Therefore, use a powerful machine with a GPU for both training and inference.

Hierarchical Approach

Classification and segmentation can be combined as different stages of the same pipeline. Classification can filter out which images need to be passed to the segmentation model, yielding more accurate results and avoiding unnecessary processing. Similarly, segmented regions may require further classification. There can be multiple such stages in the pipeline.
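A hedged sketch of such a two-stage pipeline, where the classifier acts as a filter before the segmentation model; the model objects, the positive class index, and the threshold are assumptions for illustration.

```python
import numpy as np

def hierarchical_pipeline(tiles, classifier, segmenter, positive_class=1, threshold=0.5):
    """Run segmentation only on tiles that the classifier flags as positive.

    tiles:       iterable of HxWx3 uint8 arrays (already extracted from the WSI)
    classifier:  Keras model whose predict() returns class probabilities per tile
    segmenter:   Keras model whose predict() returns a per-pixel class map per tile
    """
    results = []
    for tile in tiles:
        batch = tile[np.newaxis].astype(np.float32)
        probs = classifier.predict(batch, verbose=0)[0]
        if probs[positive_class] < threshold:
            results.append(None)                  # skip cheap negatives
            continue
        mask = segmenter.predict(batch, verbose=0)[0]
        results.append(mask.argmax(axis=-1))      # per-pixel class labels
    return results
```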

Conclusion

The purpose of this article was to shed light on aspects of deep learning in pathology. The approaches mentioned are general and, depending on the use case, can become complex procedures. But they should give you a good baseline for tackling your pathology projects.

