Automatically detect, classify, and measure wound dimensions from clinical images using deep learning to support faster and more accurate medical assessments.

Client Overview
A hospital wound care centre managing chronic wound patients — including diabetic foot ulcers, pressure injuries, and post-surgical wounds — relied entirely on manual clinical assessment to measure and document wound progression. Nurses and wound care specialists used paper rulers, photographic comparison, and clinical judgement to estimate wound dimensions at each patient visit. This process was time-consuming, introduced significant inter-assessor variability, and made it difficult to objectively track whether a wound was improving, stable, or deteriorating across weekly review appointments.

The wound photography workflow added further administrative burden. Clinical staff photographed wounds at each visit using standardised protocols, but the images were stored in an unstructured format within the electronic health record, with dimensions and stage classifications entered manually from the clinician's bedside notes. There was no automated linkage between an image and its quantitative measurement data, making retrospective analysis of patient cohort trends nearly impossible without labour-intensive manual data extraction.

Clinical leadership knew that standardised, objective wound measurement was directly associated with better treatment decisions and earlier identification of deteriorating wounds requiring escalation. They wanted a system that could analyse wound photographs at the point of care, automatically detect the wound boundary, classify the wound stage, calculate area and perimeter dimensions, and update the electronic health record — reducing documentation time, improving consistency, and building a structured image dataset to support future clinical research.
Our Approach
Zentric Solutions developed a deep learning wound assessment system using YOLOv8 for wound region detection and a secondary segmentation model for precise boundary delineation. The system was trained on a curated clinical image dataset annotated by wound care specialists, covering the full range of wound types, sizes, tissue classifications, and image capture conditions present in the hospital's actual photography workflow. TensorFlow and OpenCV were used throughout the model development and image processing pipeline.

At the point of care, clinical staff photographed the wound using the standard clinical protocol — including a calibration reference marker in frame — and uploaded the image through a simple interface accessible from a tablet or workstation. The system automatically detected the wound region, segmented the wound boundary with pixel-level precision, and used the calibration marker to convert pixel measurements into real-world area (square centimetres) and perimeter (centimetres). Tissue composition within the wound boundary was classified across the standard clinical categories — granulation, slough, eschar, and epithelialisation — with percentage coverage estimates for each tissue type.

All assessment outputs — dimensions, tissue classification, wound stage, and a processed image with the detected boundary overlaid — were written directly to the patient's record via a DICOM-compatible integration with the hospital's clinical imaging system, so clinicians could view the automated assessment alongside the uploaded image in the EHR within seconds of upload. Longitudinal tracking charts were generated automatically for each patient, showing the wound area trend over time and flagging wounds whose measurements indicated deterioration beyond a configurable threshold. Assessment consistency across clinical staff improved significantly, and documentation time per wound assessment fell by more than 70%.
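The calibration step described above can be sketched as follows. Assuming the reference marker's physical width is known (the 2 cm default here is a hypothetical value, not from the project) and its width in pixels has been measured in the frame, pixel counts in the segmented wound mask convert directly to real-world area. The function name and the NumPy mask representation are illustrative, not the production implementation.

```python
import numpy as np

def wound_area_cm2(wound_mask: np.ndarray, marker_px: float, marker_cm: float = 2.0) -> float:
    """Convert a binary wound mask to area in square centimetres using a
    calibration marker of known physical size captured in the same frame.

    wound_mask -- boolean array, True for pixels inside the wound boundary
    marker_px  -- measured width of the calibration marker, in pixels
    marker_cm  -- physical width of the marker (assumed value, for illustration)
    """
    cm_per_px = marker_cm / marker_px                 # linear scale factor
    return float(wound_mask.sum()) * cm_per_px ** 2   # area scales with the square
```

For example, a 100 × 100-pixel wound region photographed with a 2 cm marker spanning 50 pixels gives a scale of 0.04 cm/px and an area of 10,000 × 0.04² = 16 cm².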
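The per-tissue percentage estimates can be derived from a per-pixel tissue class map restricted to the wound mask. The class-ID encoding below is an assumption chosen for illustration; only the four tissue categories come from the project description.

```python
import numpy as np

# Illustrative label encoding -- the actual class IDs are an assumption.
TISSUE_CLASSES = {1: "granulation", 2: "slough", 3: "eschar", 4: "epithelialisation"}

def tissue_composition(class_map: np.ndarray, wound_mask: np.ndarray) -> dict:
    """Percentage coverage of each tissue type within the wound boundary.

    class_map  -- integer array of per-pixel tissue class predictions
    wound_mask -- boolean array, True inside the segmented wound boundary
    """
    wound_px = wound_mask.sum()
    if wound_px == 0:
        return {name: 0.0 for name in TISSUE_CLASSES.values()}
    return {
        name: 100.0 * float(((class_map == cls) & wound_mask).sum()) / float(wound_px)
        for cls, name in TISSUE_CLASSES.items()
    }
```

Restricting the count to the wound mask matters: tissue pixels predicted outside the detected boundary (e.g. surrounding skin) should not dilute the percentages.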
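The configurable deterioration threshold might work like this minimal sketch, which flags a wound when the latest area exceeds the previous visit's measurement by more than a given percentage. The growth-over-previous-visit rule and the 10% default are assumptions; the source states only that the threshold is configurable.

```python
def flag_deterioration(area_history: list[float], threshold_pct: float = 10.0) -> bool:
    """Return True if the most recent wound area exceeds the previous
    measurement by more than threshold_pct percent.

    area_history  -- chronological wound area measurements in cm^2
    threshold_pct -- allowed growth before flagging (assumed default)
    """
    if len(area_history) < 2:
        return False  # at least two visits are needed to compute a trend
    previous, latest = area_history[-2], area_history[-1]
    growth_pct = (latest - previous) / previous * 100.0
    return growth_pct > threshold_pct
```

A rolling-average or regression-slope rule over the full history would be a reasonable alternative for noisy measurements; this visit-to-visit comparison is just the simplest form of the idea.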