
Title:
SEGMENTATION OF MEDICAL IMAGES
Document Type and Number:
WIPO Patent Application WO/2019/010470
Kind Code:
A1
Abstract:
Methods for segmenting medical images from different modalities include integrating a plurality of types of quantitative image descriptors with a deep 3D convolutional neural network. The descriptors include: (i) a Gibbs energy for a prelearned 7th-order Markov-Gibbs random field (MGRF) model of visual appearance, (ii) an adaptive shape prior model, and (iii) a first-order appearance model of the original volume to be segmented. The neural network fuses the computed descriptors to obtain the final voxel-wise probabilities of the goal regions.

Inventors:
EL-BAZ AYMAN (US)
SOLIMAN AHMED (US)
EL-MELEGY MOUMEN (EG)
EL-GHAR MOHAMED (EG)
Application Number:
PCT/US2018/041168
Publication Date:
January 10, 2019
Filing Date:
July 07, 2018
Assignee:
UNIV LOUISVILLE RES FOUND INC (US)
International Classes:
G06T7/143; G16H30/00
Foreign References:
US20150286786A12015-10-08
US20170083793A12017-03-23
Attorney, Agent or Firm:
CHELLGREN, Brian, W. et al. (US)
Claims:
What is claimed is:

1. A method for segmenting medical images comprising:

integrating image descriptors with a three-dimensional neural network, the image descriptors including a medical image, a Gibbs energy for a Markov-Gibbs random field model of the medical image, and an adaptive shape prior model of the medical image;

generating, using the three-dimensional neural network, probabilities for a goal region; and

designating, based on the generated probabilities, the goal region in the medical image.

2. The method of claim 1, wherein the goal region is a region of the medical image corresponding to a biological feature of interest.

3. The method of claim 1, wherein the Gibbs energy for the Markov-Gibbs random field model is a Gibbs energy for a 7th-order Markov-Gibbs random field model.

4. The method of claim 1, wherein the Markov-Gibbs random field model is generated by using grayscale patterns of exemplary goal objects as samples of a trainable Markov-Gibbs random field.

5. The method of claim 4, wherein the goal region is a region of the medical image corresponding to a biological feature of interest, and wherein the exemplary goal objects are the same biological feature.

6. The method of claim 1, wherein the adaptive shape prior depends on a set of manually-segmented co-aligned subject images of a biological feature of interest corresponding to the goal region.

7. The method of claim 1, wherein the medical image includes a plurality of voxels, and wherein the generating comprises generating probabilities that each voxel depicts a biological feature of interest.

8. The method of claim 1, further comprising outputting a segmented medical image identifying the goal region.

9. A method for segmenting medical images comprising:

receiving a medical image including a plurality of voxels;

inputting into a neural network a plurality of image descriptors describing the medical image, wherein the plurality of image descriptors include a Gibbs energy for a Markov-Gibbs random field model, an adaptive shape prior model, and a first-order appearance model of the original volume to be segmented;

calculating, by the neural network, probabilities that each voxel represents a goal region in the medical image; and

segmenting, by the neural network, the medical image to identify the goal region.

10. The method of claim 9, wherein the goal region is a region of the medical image corresponding to a biological feature of interest.

11. The method of claim 9, wherein the first-order appearance model of the original volume to be segmented is the medical image.

12. The method of claim 9, further comprising outputting a segmented medical image identifying the goal region.

13. The method of claim 9, wherein the Markov-Gibbs random field model is generated by using grayscale patterns of exemplary goal objects as samples of a trainable Markov-Gibbs random field.

14. The method of claim 9, wherein the adaptive shape prior model is generated by adapting a shape prior in accord with the medical image and a database of customized training images.

15. A method for segmenting a three-dimensional medical image, comprising:

receiving medical image data representing a three-dimensional medical image, the medical image data including a plurality of voxels;

integrating a plurality of image descriptors of the medical image using a neural network; and

outputting, by the neural network, segmentation data relating to the three-dimensional medical image, wherein the segmentation data is based on the plurality of integrated image descriptors.

16. The method of claim 15, wherein the plurality of image descriptors include the medical image and a Gibbs energy for a Markov-Gibbs random field model of the medical image.

17. The method of claim 15, wherein the plurality of image descriptors include the medical image and an adaptive shape prior model.

18. The method of claim 15, wherein the segmentation data identifies a goal region in the medical image.

Description:
SEGMENTATION OF MEDICAL IMAGES

[0001] This application claims the benefit of United States provisional patent application serial no. 62/529,788, filed July 7, 2017, for PRECISE SEGMENTATION OF MEDICAL IMAGES BY GUIDING ADAPTIVE SHAPE WITH A 7TH-ORDER MGRF MODEL OF VISUAL APPEARANCE, incorporated herein by reference.

FIELD OF THE INVENTION

[0002] Methods for segmenting medical images from different modalities include integrating a plurality of types of quantitative image descriptors with a deep 3D convolutional neural network. The descriptors include: (i) a Gibbs energy for a prelearned 7th-order Markov-Gibbs random field (MGRF) model of visual appearance, (ii) an adaptive shape prior model, and (iii) a first-order appearance model of the original volume to be segmented. The neural network fuses the computed descriptors, together with the raw image data, to obtain the final voxel-wise probabilities of the goal regions.

BACKGROUND OF THE INVENTION

[0003] Segmentation is a key first step in medical image processing and analysis. Accurate segmentation is necessary to generate accurate results in later steps of medical image processing and analysis, such as, for example, co-registration of different images, feature extraction, and computer-aided detection or diagnostics (CAD). However, robust and accurate segmentation remains a challenge due to the varied acquisition techniques of different imaging modalities and their technical limitations, signal noise and inhomogeneities, artefacts, blurred boundaries between anatomical structures, the pathologies that accompany most of the investigated subjects, complex and overlapping object-background signal distributions in many scans, and other factors.

[0004] Existing segmentation techniques suffer from several limitations. Shape distortions of pathological organs are mostly not taken into consideration. Objective functions of active shape models are typically non-convex, making the evolution overly sensitive to initialization because it converges quickly to the nearest local energy minima. Some level-set algorithms assume piecewise-constant or smooth goal segments to evolve an active contour, even though this assumption is generally not valid for medical image segmentation.

SUMMARY

[0005] To overcome these limitations and deal with various challenging normal and pathological organs, the disclosed method combines two shape and appearance descriptors of objects under consideration with a multi-channel deep 3D convolutional neural network (deep-3D-CNN). The object appearance is quantified by coupling its high-order pre-learned probabilistic model with a simple first-order appearance model of the object outlined at each current position of the evolving shape.

[0006] Disclosed herein is a novel framework for segmenting medical images from different modalities by integrating higher-order appearance and adaptive shape descriptors, in addition to the current appearance of the input image, with a deep-3D-CNN. The segmentation results for lung CT scans and DW-MRI kidney scans achieved high accuracy, as evidenced by the reported DSC (Dice similarity coefficient), BHD (bidirectional Hausdorff distance), and PVD (percentage volume difference) values.

[0007] This summary is provided to introduce a selection of the concepts that are described in further detail in the detailed description and drawings contained herein. This summary is not intended to identify any primary or essential features of the claimed subject matter. Some or all of the described features may be present in the corresponding independent or dependent claims, but should not be construed to be a limitation unless expressly recited in a particular claim. Each embodiment described herein is not necessarily intended to address every object described herein, and each embodiment does not necessarily include each feature described. Other forms, embodiments, objects, advantages, benefits, features, and aspects of the present invention will become apparent to one of skill in the art from the detailed description and drawings contained herein. Moreover, the various apparatuses and methods described in this summary section, as well as elsewhere in this application, can be expressed as a large number of different combinations and subcombinations. All such useful, novel, and inventive combinations and subcombinations are contemplated herein, it being recognized that the explicit expression of each of these combinations is unnecessary.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] A better understanding of the present invention will be had upon reference to the following description in conjunction with the accompanying drawings.

[0009] FIG. 1 depicts 2D axial projections for (A) the original image, (B) Gibbs energy, (C) adaptive shape probability, and (D) overlay of the final segmentation. The original images in rows (1) and (2) are DW-MRI kidney scans. The original images in rows (3) and (4) are lung CT scans.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

[0010] For the purposes of promoting an understanding of the principles of the invention, reference will now be made to selected embodiments illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended; any alterations and further modifications of the described or illustrated embodiments, and any further applications of the principles of the invention as illustrated herein are contemplated as would normally occur to one skilled in the art to which the invention relates. At least one embodiment of the invention is shown in great detail, although it will be apparent to those skilled in the relevant art that some features or some combinations of features may not be shown for the sake of clarity.

[0011] Any reference to "invention" within this document is a reference to an embodiment of a family of inventions, with no single embodiment including features that are necessarily included in all embodiments, unless otherwise stated. Furthermore, although there may be references to "advantages" provided by some embodiments of the present invention, other embodiments may not include those same advantages, or may include different advantages. Any advantages described herein are not to be construed as limiting to any of the claims.

[0012] Specific quantities (spatial dimensions, dimensionless parameters, etc.) may be used explicitly or implicitly herein; such specific quantities are presented as examples only and are approximate values unless otherwise indicated. Discussions pertaining to specific compositions of matter, if present, are presented as examples only and do not limit the applicability of other compositions of matter, especially other compositions of matter with similar properties, unless otherwise indicated.

[0013] To obtain the final segmentation, the learned adaptive shape prior and high-order appearance descriptor are integrated with the model of current raw image signals by using the deep-3D-CNN. Basic components of the proposed framework are detailed below.

Adaptive Shape Prior

[0014] The probabilistic feature "adaptive shape prior" recited in Algorithm 1 describes the shape of a specific object, i.e., the target organ or tissue intended for segmentation in the medical image. Traditional shape priors account for a co-aligned training database of imaging scans from subject patients, but do not include pathologies or even large anatomical inconsistencies. The adaptive shape prior disclosed herein provides both global and local adaptation to the object to be segmented. Global adaptation consists of selecting the training subjects that are most similar to the test subject to contribute to a customized database. Local adaptation consists of including local patch similarity weights in addition to the customized voxel-wise similarity. Adding the local similarities amplifies the shape prior, especially for images, like DW-MRI, with irregular contrast. The appearance-guided adaptive shape prior depends on a set (database) of pre-selected, manually-segmented, and co-aligned subjects for each organ of interest. Using known non-rigid registration techniques, each database subject and its "gold-standard", or ground-truth, segmentation map are co-aligned to the reference template (selected as an intermediate size and shape of all subjects). Each organ database, D = {d_i = (s_i, m_i) : i = 1, 2, ..., N}, contains 3D scans, chosen to represent typical inter-subject variations, and their true region maps. The subjects were selected from the available data sets using principal component analysis (PCA). A customized, globally similar database, Dcus, is extracted from D for each 3D test subject t to be segmented. To do so, the test subject t is co-aligned to the domain of D, and normalized cross-correlations (NCC) between the body region in the aligned test subject t and in each database image, s_i, are calculated to select the top J similar subjects; J > 1.
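As an illustrative sketch of this global selection step only (the function names and the NCC formulation below are assumptions for illustration, not quoted from the patent):

import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two co-aligned 3D volumes."""
    a0, b0 = a - a.mean(), b - b.mean()
    return float((a0 * b0).sum() / (np.sqrt((a0 ** 2).sum() * (b0 ** 2).sum()) + 1e-12))

def select_customized_database(test_volume, database, J=5):
    """database: list of (scan, region_map) pairs co-aligned to the reference template.
    Returns the J pairs whose scans best match the co-aligned test volume (the atlas Dcus)."""
    scores = [ncc(test_volume, scan) for scan, _ in database]
    top = np.argsort(scores)[::-1][:J]          # indices of the J highest NCC scores
    return [database[i] for i in top]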

[0015] The shape prior is adapted in accord with the visual appearances of the test subject t and the customized training images, Dcus, at both the voxel and patch/cube levels. Each voxel r of the test subject t is mapped to the database lattice by a deformation field aligning t to the database Dcus. For the voxel-level adaptation, an initial search cube C_r of size n_x:i × n_y:i × n_z:i is centered at the mapped location r. The search focuses on all the atlas voxels whose signals deviate by no more than a predefined fixed range, λ, from the mapped input signal, t_r. If such voxels are absent in the atlas, the cube size increases iteratively until voxels within the range λ are found. If the final cube size is reached without a match, the search is repeated with an increased range λ until such voxels are found. At the next, patch level of adaptation, a local similarity weight, w_t,j, for each atlas subject cube is obtained by calculating the NCC between the test subject patch, C_t:r, and each contributing patch, C_j:r. Then the voxel-wise shape probabilities, P_sh:r(k), k ∈ K, are estimated based on the found voxels of similar appearance and their region labels, in addition to the local similarity between the patches for the test and contributing training subjects.

[0016] Let I_j:r = {φ : φ ∈ R; φ ∈ C_j:r; |s_j(φ) − t_r| ≤ λ} be the subset of similar voxels within the cube C_j:r in the training image s_j. The unnormalized local similarity weight for the voxel r and its normalizing factor are obtained from the NCC between the corresponding test and training patches (here, the μ terms are the related mean signals).

[0017] Let δ(z) denote Kronecker's delta function: δ(z) = 1 if z = 0 and δ(z) = 0 otherwise. The voxel-wise shape probabilities are then given by Equation 1 below.
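The displayed formulas for the local similarity weight, its normalizing factor, and Equation 1 did not survive extraction. A plausible reconstruction, inferred from the surrounding definitions and the comments in Algorithm 1 rather than quoted verbatim from the patent, is:

w_{t,j:r} \;=\; \mathrm{NCC}\bigl(C_{t:r}, C_{j:r}\bigr)
\;=\; \frac{\sum_{\varphi \in C_{r}} \bigl(t(\varphi) - \mu_{t:r}\bigr)\bigl(s_{j}(\varphi) - \mu_{j:r}\bigr)}
{\sqrt{\sum_{\varphi \in C_{r}} \bigl(t(\varphi) - \mu_{t:r}\bigr)^{2} \,\sum_{\varphi \in C_{r}} \bigl(s_{j}(\varphi) - \mu_{j:r}\bigr)^{2}}};
\qquad
W_{r} \;=\; \sum_{j=1}^{J} w_{t,j:r}\,\bigl|I_{j:r}\bigr|;

\delta(z) \;=\; \begin{cases} 1, & z = 0 \\ 0, & \text{otherwise;} \end{cases}

P_{\mathrm{sh}:r}(k) \;=\; \frac{1}{W_{r}} \sum_{j=1}^{J} w_{t,j:r} \sum_{\varphi \in I_{j:r}} \delta\bigl(k - m_{j}(\varphi)\bigr), \qquad k \in \{0, 1\}. \tag{1}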

[0018] Exemplary steps for generating the adaptive shape prior are detailed in Algorithm 1 below:

Algorithm 1 - Adaptive shape prior algorithm

Input: Test image t; co-aligned training database D = {d_i = (s_i, m_i) : i = 1, ..., N}.

Output: 4D shape prior P_sh = (P_sh:r : r ∈ R)

1  begin
     // Align t to D; get the deformation field, φ, mapping each test voxel to the database domain.
     // Select the top J images by normalized cross-correlation (NCC) with the co-aligned test image.
2    foreach image s_i ∈ D do
3        compute the NCC between the co-aligned test image t and s_i
4    end
     // Form the atlas Dcus from the J closest, by the NCC, training images.
5    foreach voxel r ∈ R do
         // Map the voxel r to the atlas Dcus using φ.
         // Initialize the matching tolerance: λ ← λ_init.
6        while matches between the signal t_r and the atlas signals are not found do
             // Loop until λ reaches a predefined threshold σ.
7            while λ < σ do
                 // Find within each search cube C_j:r, j = 1, ..., J, in the atlas image s_j ∈ Dcus the
                 // matching subset I_j:r (|s_j(φ) − t_r| ≤ λ) and calculate the similarity between the
                 // test image, t, and the training image, s_j, using the NCC to get the weight w_t,j.
8                if matching voxels are found in at least one C_j:r then
                     // Compute the voxel-wise region label probability P_sh:r(k); k ∈ {0, 1}, using the
                     // training labels, the numbers of voxels R_j:r in the subsets I_j:r, and the patch
                     // similarity weights w_t,j.
9                    compute P_sh:r(k) per Equation 1
10                   break
11               else
                     // Increment the matching tolerance.
12                   λ ← λ + Δλ
13               end
14           end
             // Increment the search cube size.
15           cube size ← cube size + Δsize
16       end
17   end
18   return P_sh
19 end
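The per-voxel local adaptation of Algorithm 1 can be sketched in Python as follows. This is illustrative only: the helper names, the clipping of search cubes at the volume borders, and the simple label weighting stand in for details (such as the exact form of Equation 1) that the extracted text does not fully spell out.

import numpy as np

def cube(volume, center, half):
    """Extract a cube of half-size `half` around `center`, clipped at the volume border."""
    sl = tuple(slice(max(c - half, 0), c + half + 1) for c in center)
    return volume[sl]

def shape_probability(t_r, center, atlas_scans, atlas_maps, weights,
                      lam0=10.0, d_lam=5.0, lam_max=50.0, half0=1, half_max=5):
    """Estimate P_sh:r(k), k in {0, 1}, for one test voxel with signal t_r mapped to `center`."""
    half = half0
    while half <= half_max:
        lam = lam0
        while lam <= lam_max:
            counts = np.zeros(2)
            for s_j, m_j, w_j in zip(atlas_scans, atlas_maps, weights):
                c_s, c_m = cube(s_j, center, half), cube(m_j, center, half)
                match = np.abs(c_s - t_r) <= lam          # atlas voxels within the tolerance λ
                if match.any():
                    for k in (0, 1):
                        counts[k] += w_j * np.mean(c_m[match] == k)
            if counts.sum() > 0:
                return counts / counts.sum()              # voxel-wise P_sh:r(k)
            lam += d_lam                                  # no match: relax the tolerance λ
        half += 1                                         # tolerance exhausted: grow the search cube
    return np.array([0.5, 0.5])                           # fallback: uninformative prior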

High-Order MGRF Model of Visual Appearance

[0019] To build this model, grayscale patterns of the goal objects, i.e., the organs, tissues, or other objects of interest in the medical image, are considered samples of a trainable translation- and contrast-offset-invariant 7th-order MGRF. The model relates the probability of an image texture, g = (g(r) : r ∈ R), with voxel-wise HU g(r), to the Gibbs energy, E_7(g), in a general-case exponential family distribution: P_7(g) = (1/Z)·ψ(g)·exp(−E_7(g)). Here, Z is the normalizing factor.

[0020] To describe the visual appearance of the target object with due account of how the training subjects have been affected, signal dependencies, called interactions, between each voxel and its seven neighbors are quantified in terms of simultaneous partial ordinal relations between the voxel-wise signals to within a fixed distance, ρ, from the voxel. To compute the energy E_7(g), Gibbs potentials, V_7:ρ, of translation-invariant 7-voxel subsets are learned from a known training image, g°, by using their approximate maximum likelihood estimates (MLEs). These MLEs generalize the analogous analytical approximations of the potentials for a generic 2nd-order MGRF as follows:
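The displayed formula itself was lost in extraction; a plausible form of these approximate MLEs and of the resulting Gibbs energy, reconstructed from the definitions in the next sentence rather than quoted from the patent, is

V_{7:\rho}(\beta) \;\approx\; F_{7:\rho:\mathrm{core}}(\beta) \;-\; F_{7:\rho}(\beta \mid g^{\circ}), \quad \beta \in \mathbb{B}_{7};
\qquad
E_{7}(\mathbf{g}) \;=\; \sum_{r \in \mathbf{R}} V_{7:\rho}\bigl(\beta_{r}(\mathbf{g})\bigr),

where \beta_{r}(\mathbf{g}) denotes the ordinal 7-signal code observed at voxel r.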

Here, β is a numerically coded contrast-offset-invariant relation between seven signals; B_7 denotes the set of these codes for all possible ordinal 7-signal relations; F_7:ρ(β|g°) is the empirical marginal probability of the code β, β ∈ B_7, over all the 7-voxel configurations with the center-to-voxel distance ρ in g°; and F_7:ρ:core(β) is the like probability for the core distribution. The computed energy indicates the object presence: the lower the energy, the higher the object's probability.

[0021] The object and background appearances are quantified below by the voxel-wise Gibbs energies for the three similar 7th-order MGRFs, each with a single family of fixed-shape central-symmetric voxel configurations M(r = (x, y, z)) = {(x, y, z); (x ± ρ, y, z); (x, y ± ρ, z); (x, y, z ± ρ)}. Their potentials and the distances, ρ, between the peripheral and central voxels are learned from the training image, g°.

[0022] Steps for learning the 7th-order MGRF appearance model are detailed in Algorithm 2 below:

Algorithm 2 - Learning the 7th-order MGRF appearance model

1. Given a training image g°, find the empirical object (k = 1) and background (k = 0) probability distributions, F_k:7:ρ(g°) = [F_k:7:ρ(β|g°) : β ∈ B_7], of the local binary pattern (LBP)-based descriptors for different clique sizes ρ ∈ {1, ..., ρ_max}, where the top size ρ_max = 10.

2. Compute the empirical distributions F_7:ρ:core = [F_7:ρ:core(β) : β ∈ B_7] of the same descriptors for the core independent random field (IRF) ψ(g), e.g., for an image sampled from the core.

3. Compute the approximate maximum likelihood estimates of the potentials.

4. Compute partial Gibbs energies of the descriptors for equal and all other clique-wise signals over the training image for the clique sizes ρ = 1, 2, ..., 10 to choose the size ρ that makes both energies the closest to one another.
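As an illustrative sketch only (not the patent's exact coding), assume the 7-voxel clique is a voxel plus its six axial neighbors at offset ρ and that the contrast-offset-invariant code β is the 6-bit pattern of ordinal relations between each neighbor and the central voxel; the empirical code distributions, the potentials V ≈ F_core − F_train, and a voxel-wise Gibbs energy map could then be computed as follows:

import numpy as np

AXIAL_OFFSETS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def lbp_codes(volume: np.ndarray, rho: int) -> np.ndarray:
    """6-bit ordinal code per voxel: bit b is set if the b-th neighbor at distance rho >= center.
    Wrap-around borders via np.roll are a simplification."""
    codes = np.zeros(volume.shape, dtype=np.int64)
    for bit, (dx, dy, dz) in enumerate(AXIAL_OFFSETS):
        neighbor = np.roll(volume, shift=(rho * dx, rho * dy, rho * dz), axis=(0, 1, 2))
        codes |= (neighbor >= volume).astype(np.int64) << bit
    return codes

def empirical_distribution(codes: np.ndarray) -> np.ndarray:
    return np.bincount(codes.ravel(), minlength=64) / codes.size

def gibbs_energy(volume: np.ndarray, training_volume: np.ndarray, rho: int) -> np.ndarray:
    """Voxel-wise Gibbs energy under potentials V(beta) ~ F_core(beta) - F_train(beta)."""
    f_train = empirical_distribution(lbp_codes(training_volume, rho))
    # Core IRF stand-in: same marginal signal distribution, but spatially independent (a permutation).
    core = np.random.default_rng(0).permutation(training_volume.ravel()).reshape(training_volume.shape)
    f_core = empirical_distribution(lbp_codes(core, rho))
    potentials = f_core - f_train
    return potentials[lbp_codes(volume, rho)]            # lower energy -> more object-like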

Deep-3D-CNN Based Tissue Classification and Segmentation

[0023] For the final segmentation, the adaptive shape prior, the Gibbs energies with potentials from the learned 7th-order MGRF models, and the first-order appearance model of the original volume to be segmented are used as inputs to a deep-3D-CNN. The CNN uses these data to generate probabilities that a given voxel in a raw data image belongs to a goal region (e.g., the region of the medical image corresponding to a biological feature of interest, such as the lungs in a chest CT scan or blood vessels in a retinal scan), and then outputs a final segmented image. The first-order appearance model of the original volume to be segmented is simply the original, unmodified medical image. To output a final labeled region map of the segmented input image, the deep-3D-CNN generates soft-segmentation maps, which are then refined by a fully connected classifier based on a 3D conditional random field (CRF). The input is sequentially convolved with multiple filters at the cascaded network layers, each consisting of a number of channels. Each channel corresponds to the 3D volume of a single calculated feature.

[0024] Considering the 3D input feature volumes at the first layer as the input channels, the whole process can be viewed as convolving 4D volumes (the concatenated 3D feature volumes) with 4D kernels. The CNN architecture used consists of seven layers with kernels of size 5³. The size of the receptive field (i.e., of the input voxel neighborhood influencing the activation of a neuron) is 17³, while the classification layer has a single kernel. The advantage of this architecture is its ability to fuse and capture comprehensive 3D contextual information from the input feature volumes. The configuration parameters have been chosen heuristically. In other embodiments, fewer or greater numbers of layers may be used, and different kernel sizes and receptive field sizes may be used.
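To make the fusion concrete, the following is a minimal PyTorch sketch of a multi-channel deep-3D-CNN of this general shape. It is illustrative only: the four 5×5×5 convolution layers (chosen so that the receptive field is the stated 17³), the filter counts, the PReLU activations, and the softmax output are assumptions, not the patent's exact seven-layer configuration.

import torch
import torch.nn as nn

class Deep3DCNN(nn.Module):
    def __init__(self, in_channels=3, n_filters=30, n_classes=2):
        super().__init__()
        layers, ch = [], in_channels
        for _ in range(4):                               # 4 x (5^3) convolutions -> 17^3 receptive field
            layers += [nn.Conv3d(ch, n_filters, kernel_size=5), nn.PReLU()]
            ch = n_filters
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Conv3d(ch, n_classes, kernel_size=1)   # single-kernel classification layer

    def forward(self, x):
        # x: (batch, channels, D, H, W); channels are the stacked descriptor volumes:
        # raw image (first-order appearance), 7th-order MGRF Gibbs energy, adaptive shape prior.
        return torch.softmax(self.classifier(self.features(x)), dim=1)   # voxel-wise class probabilities

# Example: a 3-channel 25^3 patch yields 9^3 voxel-wise probabilities per class.
probs = Deep3DCNN()(torch.randn(1, 3, 25, 25, 25))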

Experimentation

[0025] The proposed framework has been tested on two medical imaging modalities for different organs, namely, on chest CT scans to segment the lungs and on abdominal DW-MRI to segment the kidneys. The chest scans were locally collected from 95 subjects (controls and subjects diagnosed with different diseases). Data spacing for the collected data ranges from 1.17 × 1.17 × 2.0 mm to 1.37 × 1.37 × 3.0 mm. The DW-MRI volumes were collected from 53 subjects at different b-values (583 DW-MRI data sets in total, with b-values from 0 to 1000 s/mm² and a voxel size of 1.3281 × 1.3281 × 4.00 mm³), using a SIGNA Horizon scanner (General Electric Medical Systems, Milwaukee, WI). The gold-standard segmentations for training and verification of tests were manually delineated by a medical imaging expert. To obtain different features for training the deep-3D-CNN network, the framework was applied in a leave-one-subject-out mode to segment objects inside the VOI determined by the body mask for each input scan. Minimal morphological post-processing to fill holes and remove scattered voxels was used for refinement. To accelerate convergence by reducing the internal covariate shift, the raw signals and modeled features for each input VOI were normalized to zero mean and unit standard deviation.
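A minimal sketch of the per-VOI normalization described above, assuming a NumPy volume and a hypothetical helper name:

import numpy as np

def normalize_voi(voi: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-standard-deviation normalization of one VOI (raw signals or a modeled feature)."""
    return (voi - voi.mean()) / (voi.std() + 1e-8)       # small epsilon guards against flat regions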

[0026] Testing performance has been evaluated in terms of the numbers of true positive (TP), true negative (TN), false positive (FP), and false negative (FN) voxels to measure the segmentation accuracy Acc = (TP+TN)/(TP+TN+FP+FN), sensitivity Sens = TP/(TP+FN), and specificity Spec = TN/(TN+FP). Table 1 lists these measures for different feature groups (FG), using only the appearance model of the raw signals (FG1), FG1 together with the learned seventh-order Markov-Gibbs random field appearance model (FG2), and FG2 together with the adaptive shape prior (FG3). Clearly, the combined features achieve the highest accuracy because they complement each other in both normal and challenging pathological cases. The segmentation accuracy for each subject has also been evaluated using the DSC, BHD, and PVD metrics, which characterize the voxel-to-voxel similarities, the maximum surface-to-surface deviations, and the volume differences, respectively, between the region maps obtained and their ground truth. Table 1 summarizes the DSC, BHD, and PVD statistics for all the test subjects for the different FGs in the proposed framework. In particular, the mean ± standard deviation values of the DSC, BHD, and PVD for all the test subjects are 96.65±2.15 %, 4.32±3.09 mm, and 5.61±3.37 %, respectively, for the segmented kidneys, and 98.37±0.68 %, 2.79±1.32 mm, and 3.94±2.11 %, respectively, for the lungs. FIG. 1 exemplifies the segmentations and shows some generated features. In particular, column (A) in FIG. 1 depicts original, unmodified medical images, and column (D) depicts final labeled images. As indicated in Table 1 below, combining the seventh-order Markov-Gibbs random field appearance model with the adaptive shape prior, which corresponds to the method disclosed herein, provides the greatest accuracy, sensitivity, and specificity, and the most favorable DSC, BHD, and PVD, as compared to the MGRF alone or the raw image data.

Table 1: Deep-3D-CNN performance for the feature groups FG1, FG2, and FG3, evaluated by Acc, Sens, Spec, DSC (Dice similarity coefficient), BHD (bidirectional Hausdorff distance), and PVD (percentage volume difference).
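The counting-based metrics defined in paragraph [0026] can be computed directly from binary masks; the sketch below is an assumed helper, not taken from the patent, and its PVD expression (|V_pred − V_truth| / V_truth × 100) is one common reading of "percentage volume difference".

import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Voxel-wise evaluation of a binary predicted mask against a non-empty ground-truth mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "Acc":  (tp + tn) / (tp + tn + fp + fn),
        "Sens": tp / (tp + fn),
        "Spec": tn / (tn + fp),
        "DSC":  2 * tp / (2 * tp + fp + fn),                        # Dice similarity coefficient
        "PVD":  100 * abs(pred.sum() - truth.sum()) / truth.sum(),  # percentage volume difference
    }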

[0027] Various aspects of different embodiments of the present invention are expressed in paragraphs X1, X2, and X3 as follows:

[0028] X1. One aspect of the present invention pertains to a method for segmenting medical images comprising integrating image descriptors with a three-dimensional neural network, the image descriptors including a medical image, a Gibbs energy for a Markov-Gibbs random field model of the medical image, and an adaptive shape prior model of the medical image; generating, using the three-dimensional neural network, probabilities for a goal region; and designating, based on the generated probabilities, the goal region in the medical image.

[0029] X2. Another aspect of the present invention pertains to a method for segmenting medical images comprising receiving a medical image including a plurality of voxels; inputting into a neural network a plurality of image descriptors describing the medical image, wherein the plurality of image descriptors include a Gibbs energy for a Markov-Gibbs random field model, an adaptive shape prior model, and a first-order appearance model of the original volume to be segmented; calculating, by the neural network, probabilities that each voxel represents a goal region in the medical image; and segmenting, by the neural network, the medical image to identify the goal region.

[0030] X3. A further aspect of the present invention pertains to a method for segmenting a three-dimensional medical image, comprising receiving medical image data representing a three-dimensional medical image, the medical image data including a plurality of voxels; integrating a plurality of image descriptors of the medical image using a neural network; and outputting, by the neural network, segmentation data relating to the three-dimensional medical image, wherein the segmentation data is based on the plurality of integrated image descriptors.

[0031] Yet other embodiments pertain to any of the previous statements X1, X2, or X3, which are combined with one or more of the following other aspects.

[0032] Wherein the goal region is a region of the medical image corresponding to a biological feature of interest.

[0033] Wherein the goal region is a region of the medical image corresponding to a kidney, a lung, a heart, one or more blood vessels, a liver, a bladder, a stomach, a brain, or an intestine.

[0034] Wherein the Gibbs energy for the Markov-Gibbs random field model is a Gibbs energy for a 7th-order Markov-Gibbs random field model.

[0035] Wherein the goal region is a region of the medical image corresponding to a biological feature of interest, and wherein the exemplary goal objects are the same biological feature.

[0036] Wherein the adaptive shape prior depends on a set of manually-segmented co-aligned subject images of a biological feature of interest corresponding to the goal region.

[0037] Wherein the medical image includes a plurality of voxels, and wherein the generating comprises generating probabilities that each voxel depicts a biological feature of interest.

[0038] Further comprising outputting a segmented medical image identifying the goal region.

[0039] Wherein the first-order appearance model of the original volume to be segmented is the medical image.

[0040] Further comprising outputting a segmented medical image identifying the goal region.

[0041] Wherein the Markov-Gibbs random field model is generated by using grayscale patterns of exemplary goal objects as samples of a trainable Markov- Gibbs random field.

[0042] Wherein the adaptive shape prior model is generated by adapting a shape prior in accord with the medical image and a database of customized training images.

[0043] Wherein the plurality of image descriptors include the medical image and a Gibbs energy for a Markov-Gibbs random field model of the medical image.

[0044] Wherein the plurality of image descriptors include the medical image and an adaptive shape prior model.

[0045] Wherein the segmentation data identifies a goal region in the medical image.

[0046] The foregoing detailed description is given primarily for clearness of understanding and no unnecessary limitations are to be understood therefrom, for modifications can be made by those skilled in the art upon reading this disclosure and may be made without departing from the spirit of the invention. Although specific spatial dimensions are stated herein, such specific quantities are presented as examples only.