Creating 3D anatomical models from medical images

Now more than ever, medical 3D printing is being recognized by physicians and hospitals for the added value it brings to personalized patient care. Already, 3D printing is routinely used to address a wide range of abdominal, craniomaxillofacial, cardiac, musculoskeletal, genitourinary, breast and thoracic conditions. And the list of indications for 3D printing is likely to grow as the medical community digests the data accumulated via the joint American College of Radiology (ACR)-Radiological Society of North America (RSNA) 3D printing registry.

At the same time, we are witnessing a growing interest in the use of “mixed reality” (MR) or “extended reality” (XR) technologies as an alternative means of viewing 3D anatomical models with a true 3D perspective. Several virtual and augmented reality technologies are also being evaluated for their ability to facilitate realistic simulations of complex procedures in support of surgical planning, training, medical education, and communication. It is just a matter of time before physicians and surgeons expect to be able to perform these activities routinely using anatomical models specific to their patients.

Creating anatomical models from image regions-of-interest (ROIs)

Whether you’re using medical 3D printing to facilitate a diagnosis and plan a procedure, or an extended reality technology to simulate a surgery for a specific patient, the anatomical models that drive these activities must first be derived from the patient’s medical images. This process typically starts with image “segmentation”, where regions of interest (ROIs) corresponding to an anatomical structure or pathology are identified on successive cross-sectional image slices based on their radiological signatures (i.e., image grey-scale values). In a sense, image segmentation is a “paint-by-numbers” exercise, with additional intelligent input from clinicians or other delegates with the specialist knowledge needed to discern structural boundaries. In general, the anatomic fidelity of the resulting model is tightly coupled to the quality of the image from which it was derived. Anatomical structures with fine detail are best generated from volumetric images with thinner slices; however, the enhanced spatial resolution comes at the cost of additional workload on 2D platforms, since more image slices must be segmented or edited.
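
To make the idea concrete, here is a minimal sketch (in Python with NumPy, not the tooling described in this article) of threshold-based ROI selection on a single slice; the synthetic slice and the Hounsfield window for bone are assumptions chosen only for illustration.

```python
import numpy as np

# Hypothetical CT slice in Hounsfield units (in practice, loaded from DICOM).
rng = np.random.default_rng(0)
ct_slice = rng.normal(loc=0, scale=300, size=(512, 512))

# "Paint by numbers": mark every pixel whose grey-scale value falls inside
# a radiological signature (here, an assumed window for cortical bone).
BONE_HU_MIN, BONE_HU_MAX = 300, 3000
roi_mask = (ct_slice >= BONE_HU_MIN) & (ct_slice <= BONE_HU_MAX)

# The ROI for the whole structure is built by repeating this on each slice,
# then stacking the per-slice masks into a 3D volume.
print(f"Pixels in ROI on this slice: {roi_mask.sum()}")
```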

The burden described above can be overcome with Elucis’ advanced 3D modeling tools. In Elucis, regions of interest (or volumes of interest) are defined using a suite of novel 3D operations. These operations are applied to the image through intuitive input: a hand-held stylus or controller acts as a virtual stylus in VR, letting you draw on image cross-sections or “sculpt” 3D models freely in the air. In the real world you are drawing on the surface of a desk or gesturing in the air; in VR, a full 3D model takes shape in front of your eyes.

3D modeling in action. The one-of-a-kind 3D operations in Elucis can create 3D anatomical models in seconds.

When combined with semi-automated functions like dynamic thresholding and smart grow/shrink tools, 3D segmentation can define structures through volumetric edits rather than a time-consuming slice-by-slice approach, all while maintaining precise control over which parts of the image are included in the model structure.
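
Conceptually, a grow/shrink edit constrained by an intensity threshold can be sketched with standard morphological operations; the following Python/SciPy snippet illustrates that idea under assumed threshold values and is not a description of how Elucis implements its tools.

```python
import numpy as np
from scipy import ndimage

# Hypothetical image volume and an initial thresholded mask (values assumed).
rng = np.random.default_rng(1)
volume = rng.normal(0, 300, size=(64, 128, 128))
threshold_mask = volume > 200            # voxels with the target signature

# Seed ROI: a small region the user has already marked.
seed = np.zeros_like(threshold_mask)
seed[30:34, 60:68, 60:68] = True

# "Smart grow": dilate the seed, but keep only voxels that also satisfy
# the intensity threshold, so the edit stays inside the structure.
grown = ndimage.binary_dilation(seed, iterations=3) & threshold_mask

# "Shrink": erode to pull the boundary back by one voxel everywhere.
shrunk = ndimage.binary_erosion(grown, iterations=1)

print(grown.sum(), shrunk.sum())
```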

Applying touch-ups to 3D models in Elucis is also a natural and intuitive process. Advanced smoothing and denoising tools can be selectively applied to all or part of a model to correct non-anatomical defects or compensate for image artifacts. The quality of the final print will depend on the image modality, the anatomy being modeled, and the intended use of the model.

Model touch-ups in Elucis are a snap.
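
As a rough illustration of the selective touch-ups described above (again, not Elucis’ implementation), smoothing can be applied to only a user-chosen region of a voxelized model and blended back, leaving the rest of the model untouched; all values below are assumptions.

```python
import numpy as np
from scipy import ndimage

# Hypothetical model mask with a noisy patch standing in for an artifact.
rng = np.random.default_rng(2)
model = np.zeros((64, 64, 64), dtype=float)
model[16:48, 16:48, 16:48] = 1.0
model[20:28, 20:28, 20:28] += rng.normal(0, 0.5, size=(8, 8, 8))

# Region the user selects for touch-up (here, a box around the artifact).
touch_up = np.zeros(model.shape, dtype=bool)
touch_up[18:30, 18:30, 18:30] = True

# Smooth the whole mask, then keep the smoothed result only inside the
# selected region; everywhere else the original model is preserved.
smoothed = ndimage.gaussian_filter(model, sigma=1.5)
model = np.where(touch_up, smoothed, model)

# Re-binarize to recover a clean model surface.
model_mask = model > 0.5
print(model_mask.sum())
```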

Creating anatomical models in Elucis is also artistically freeing. Because accurate 3D models are created alongside true 3D visualizations, users can include more structures in their models than would otherwise be practical on 2D platforms geared toward 3D printing. If 3D printing a final model is still desired, users can more easily determine which 3D structures are pertinent for planning or diagnostic purposes, and should therefore be exported for 3D printing, based on a thorough examination of adjacent anatomical structures in 3D.

Image/model post-processing

In standard medical 3D printing workflows, image segmentation is followed by additional steps for STL file generation, computer-aided design (CAD) operations for model refinement or instrument/device design, model quality checks, and “file fixing”. These additional steps typically require separate software applications, but not in Elucis. Elucis is an all-in-one platform that eliminates the extra steps and does away entirely with separate concepts of “segmentation” and “mesh”. The internal data representation used by Elucis lets users treat the 3D models they create in real time as what-you-see-is-what-you-get structures that can be sculpted with ease in 3D, or edited slice-by-slice in 2D, at any point. With Elucis, you are always working with and viewing the final model: no further post-processing and no extra software required.
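
For comparison, this is roughly what the separate surface-extraction and STL-export step looks like in a conventional workflow; the sketch below uses scikit-image’s marching cubes on a synthetic mask, with the voxel spacing assumed for the example.

```python
import numpy as np
from skimage import measure

# Hypothetical binary ROI volume (a sphere), with assumed voxel spacing in mm.
z, y, x = np.ogrid[-32:32, -32:32, -32:32]
roi = (x**2 + y**2 + z**2) < 20**2

# Extract a triangle surface from the voxel mask at the 0.5 iso-level.
verts, faces, normals, _ = measure.marching_cubes(
    roi.astype(float), level=0.5, spacing=(0.5, 0.5, 0.5))

# Write an ASCII STL file: the step usually handled by separate software.
with open("roi_model.stl", "w") as f:
    f.write("solid roi\n")
    for tri in faces:
        n = np.cross(verts[tri[1]] - verts[tri[0]], verts[tri[2]] - verts[tri[0]])
        n = n / (np.linalg.norm(n) or 1.0)
        f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
        for v in verts[tri]:
            f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
        f.write("    endloop\n  endfacet\n")
    f.write("endsolid roi\n")
```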

Image Navigation

Elucis also makes it easy to find what you’re looking for. Structures that are normally difficult to locate in standard cross-sectional views, such as the brachial plexus surrounding a Pancoast tumor at the pulmonary apex on a CT scan, are readily identified in Elucis because users can access 2D views at any orientation without ever losing their sense of the 3D orientation. This is the power of image navigation in VR. No longer is it necessary to place objects such as splines to identify the path of nerves or vessels.
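
Under the hood, a 2D view at an arbitrary orientation is simply the image volume resampled on a plane. The sketch below shows one way to do such an oblique reslice with SciPy interpolation on a synthetic volume; the function name and parameters are illustrative assumptions, not part of any product API.

```python
import numpy as np
from scipy import ndimage

# Hypothetical image volume (in practice, the loaded CT or MRI).
rng = np.random.default_rng(3)
volume = rng.normal(0, 300, size=(128, 256, 256))

def oblique_slice(vol, center, u, v, size=256):
    """Sample a 2D view of `vol` on the plane through `center` spanned by
    direction vectors `u` and `v` (normalized internally); any orientation,
    not just axial, sagittal or coronal."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    s = np.arange(size) - size / 2
    grid_u, grid_v = np.meshgrid(s, s, indexing="ij")
    # Volume coordinates of every pixel on the plane, shape (3, size, size).
    coords = (np.asarray(center, float)[:, None, None]
              + u[:, None, None] * grid_u + v[:, None, None] * grid_v)
    return ndimage.map_coordinates(vol, coords, order=1, mode="nearest")

# A view tilted between the axial and coronal orientations.
view = oblique_slice(volume, center=(64, 128, 128), u=(0, 0, 1), v=(0.7, 0.7, 0))
print(view.shape)
```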

Tracing the path of blood vessels in the brain with Elucis – The vessels are segmented as the 3D-modeling sphere detects thresholded regions of the image volume.
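
A simplified, illustrative version of such a threshold-aware spherical brush (not Elucis’ actual algorithm) might look like the following, with the volume, threshold, and stylus positions all assumed for the example.

```python
import numpy as np

# Hypothetical angiographic volume and vessel threshold (values assumed).
rng = np.random.default_rng(4)
volume = rng.normal(0, 100, size=(128, 128, 128))
vessel_threshold = 250

def paint_sphere(mask, vol, center, radius, threshold):
    """Add to `mask` every voxel inside a sphere at `center` whose intensity
    exceeds `threshold`: a sketch of a threshold-aware 3D brush."""
    z, y, x = np.ogrid[:vol.shape[0], :vol.shape[1], :vol.shape[2]]
    in_sphere = ((z - center[0])**2 + (y - center[1])**2
                 + (x - center[2])**2) <= radius**2
    mask |= in_sphere & (vol > threshold)
    return mask

# Sweeping the brush along the stylus path segments the vessel as it goes.
vessel_mask = np.zeros(volume.shape, dtype=bool)
for center in [(40, 60, 60), (42, 62, 63), (44, 64, 66)]:   # stylus positions
    vessel_mask = paint_sphere(vessel_mask, volume, center, radius=6,
                               threshold=vessel_threshold)
print(vessel_mask.sum())
```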

Concluding Remarks

Ideally, physicians and surgeons would have ready access to 3D anatomical models of their patients to make the best use of their 3D printing and extended reality programs. Unfortunately, deriving these models from CT, MRI or other sources of medical images can be time consuming when done using conventional methods. As the number of indications for 3D printing and extended reality activities continues to expand, hospitals will come under pressure to generate anatomical models at a rate that meets demand. And with the recent introduction of four new Category III CPT codes to collect data and assess the use of medical 3D printing, the field is poised to attract further attention from forward-thinking institutions seeking to offer 3D printing services for the benefit of their patients, or to justify the use of existing programs.