Objective: To develop an augmented reality–based osteotomy guide that enhances
surgical accuracy, especially for complex or revision facial bone
procedures, and reduces incorrect osteotomies and related complications.

Methods: In this study, we developed and validated a comprehensive system to
improve the accuracy of facial bone osteotomy through dual-sensor
integration, AI-based registration, and augmented reality (AR)
visualization. The system comprises dual-sensor hardware with an RGB-D
camera for depth and color data acquisition, navigation software, and
an AR platform with real-time tracking capabilities. The hardware
components, including sensors and navigation systems, were mounted on
two custom-designed carts. Calibration protocols involving
standardized patterns and controlled lighting were implemented to
enhance the precision of depth measurements.
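
The calibration implementation is not described beyond the use of standardized
patterns and controlled lighting; the following is only an illustrative sketch
of a conventional checkerboard-based intrinsic calibration for the RGB stream
of an RGB-D camera using OpenCV. The board geometry, square size, and image
paths are assumptions rather than details from this study.

```python
# Illustrative sketch only: checkerboard-based intrinsic calibration for the
# RGB stream of an RGB-D camera. Board size, square size, and image paths are
# hypothetical placeholders.
import glob
import cv2
import numpy as np

BOARD_SIZE = (9, 6)      # inner corners per row/column (assumed pattern)
SQUARE_SIZE_MM = 25.0    # printed square edge length (assumed)

# Template of 3D corner positions on the flat board (z = 0), in millimetres.
objp = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2)
objp *= SQUARE_SIZE_MM

obj_points, img_points = [], []
for path in glob.glob("calibration_images/*.png"):  # captured under controlled lighting
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD_SIZE)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Recover the camera matrix and lens distortion; the RMS reprojection error
# gives a quick check that the calibration is usable for depth alignment.
rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(f"RMS reprojection error: {rms:.3f} px")
```
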
Five three-dimensional (3D) models of the face and facial bones were
created from both patient imaging data and 3D-printed replicas,
reflecting diverse anatomical variations such as male, female, obese,
and slender facial structures. An AI-based automatic registration
algorithm aligned virtual and physical anatomy in real-time using
these models. System verification was performed using both anatomical
models and clinical data. Two otolaryngologists evaluated alignment
errors in augmented views. Average errors were evaluated for shape
sensing error (SSE), fiducial registration error (FRE), and target
registration error (TRE), all of which were approximately 2 mm.
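
The AI-based registration algorithm itself is not reproduced here; as a hedged
illustration of how the reported error metrics are conventionally defined, the
sketch below performs a standard point-based rigid alignment (Kabsch/SVD) and
computes FRE and TRE as root-mean-square distances over fiducial and withheld
target landmarks. The landmark arrays, noise level, and transform are synthetic
placeholders, not data from this study.

```python
# Hedged sketch: conventional point-based rigid registration and the standard
# definitions of FRE and TRE. All landmark data below are synthetic.
import numpy as np

def rigid_register(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def rms_error(a, b):
    """Root-mean-square point-to-point distance."""
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

rng = np.random.default_rng(0)
# Hypothetical CT-derived landmarks (mm); fiducials drive the alignment,
# targets are withheld and used only to score it.
fid_virtual = rng.uniform(0.0, 100.0, (6, 3))
tgt_virtual = rng.uniform(0.0, 100.0, (4, 3))

# Simulate the physical (camera-space) coordinates as a rotated, translated,
# noisy copy of the virtual ones.
angle = np.radians(12.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([8.0, -4.0, 2.5])
fid_physical = fid_virtual @ R_true.T + t_true + rng.normal(0.0, 0.4, (6, 3))
tgt_physical = tgt_virtual @ R_true.T + t_true + rng.normal(0.0, 0.4, (4, 3))

R, t = rigid_register(fid_virtual, fid_physical)
fre = rms_error(fid_virtual @ R.T + t, fid_physical)   # fiducial registration error
tre = rms_error(tgt_virtual @ R.T + t, tgt_physical)   # target registration error
print(f"FRE = {fre:.2f} mm, TRE = {tre:.2f} mm")
```
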
A skin deformation probability map was devised to predict areas of
potential movement during manipulation to address intraoperative
anatomical shifts. Real-time synchronization between the dual-sensor
system and augmented reality head-mounted display (HMD) was achieved
using continuous tracking algorithms. Wireless protocols enabled real-
time transmission of osteotomy guides, including instrument
trajectories, to the HMD. By combining precise 3D modeling, robust
calibration, AI-based alignment, and dynamic AR visualization, this
methodology demonstrates the potential to streamline surgical
workflows, reduce errors, and improve clinical outcomes.
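
The wireless protocol used to push osteotomy guides to the HMD is not named in
this abstract; a minimal sketch, assuming a JSON-over-UDP stream of
time-stamped instrument poses, is shown below. The endpoint address, update
rate, and message schema are hypothetical.

```python
# Minimal sketch only: streams time-stamped instrument poses to an assumed HMD
# endpoint over UDP. Address, port, rate, and message fields are hypothetical.
import json
import socket
import time

HMD_ADDRESS = ("192.168.0.42", 9050)   # assumed HMD endpoint on the OR network
UPDATE_HZ = 60                          # assumed tracking/display rate

def stream_poses(pose_source, duration_s=5.0):
    """Send one pose message per tracking frame for `duration_s` seconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    period = 1.0 / UPDATE_HZ
    end = time.time() + duration_s
    while time.time() < end:
        position, quaternion = pose_source()   # instrument tip pose from the navigation software
        message = {
            "timestamp": time.time(),
            "position_mm": position,            # [x, y, z]
            "orientation_quat": quaternion,     # [qx, qy, qz, qw]
        }
        sock.sendto(json.dumps(message).encode("utf-8"), HMD_ADDRESS)
        time.sleep(period)
    sock.close()

# Example with a dummy pose source standing in for the real tracker.
if __name__ == "__main__":
    stream_poses(lambda: ([120.0, 85.5, 40.2], [0.0, 0.0, 0.0, 1.0]), duration_s=1.0)
```
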
Results: The system demonstrated consistent accuracy in facial bone osteotomy
procedures throughout the validation experiments. The system
achieved an average shape sensing error (SSE) of 2.14 ± 0.40 mm,
fiducial registration error (FRE) of 2.10 ± 0.45 mm, and target
registration error (TRE) of 1.81 ± 0.33 mm. These results confirm the
system's reliability and precision, indicating its potential to enhance
surgical guidance, reduce errors, and improve patient outcomes in
complex osteotomy procedures.

Conclusion: This study successfully developed and clinically validated a dual-
sensor registration system, overcoming limitations in accuracy and
shadowing effects of conventional navigation systems. Based on these
results, the methodology has the potential for expansion to various
surgical fields, including neurosurgery, orthopedic spinal surgery,
and oral and maxillofacial surgery, enhancing precision and surgical
outcomes.