Multi-camera fault detection in fused filament fabrication printing

Journal Article (2026)
Author(s)

Shanthalakshmi Kilambi (TU Delft - Group Masania)

Aster Tournoy (Student TU Delft)

Muhamad Amani (TU Delft - Group Masania)

Jovana Jovanova (TU Delft - Transport Engineering and Logistics)

Baris Caglar (TU Delft - Group Çaglar)

Kunal Masania (TU Delft - Group Masania)

Research Group
Group Masania
DOI related publication
https://doi.org/10.1016/j.addlet.2026.100360
Publication Year
2026
Language
English
Volume number
17
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Fused filament fabrication is a popular extrusion 3D printing technology because of its affordability and accessibility. However, the approach often suffers from printing errors that waste time, material and energy. Convolutional neural networks can be trained to recognise a wide spectrum of printing anomalies from image data in real time, but past work has been limited to a few defect classifications at a time. Here, we introduce a fault detection system designed to identify a range of errors without interrupting the printing process. Real-time detection is achieved using a pre-trained convolutional neural network (CNN) for image and pattern recognition, with two cameras mounted on the print bed and a nozzle camera. Two CNN models are developed, one per camera system, to classify images into common 3D printing errors. The nozzle camera model achieves a high validation accuracy of 97.7%, and the side camera model achieves comparable performance with a validation accuracy of 97.6%. To integrate the two CNNs into one unified system, a logic-based priority framework was used to improve reliability beyond the individual model accuracies by resolving conflicting predictions and leveraging the complementary viewing angles of the two camera types to detect a broader range of defects. The data fusion framework identifies 12 common errors and significantly improves the robustness of error classification, in situ and in real time, with inference times as low as 220 milliseconds. The results demonstrate the feasibility of a robust multi-input fault detection system to advance the reliability of extrusion 3D printing.
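The abstract describes fusing the two per-camera CNN predictions through a logic-based priority framework. The following is a minimal sketch of how such a priority rule could look; the defect class names, severity ordering, confidence threshold, and function names are illustrative assumptions, not the authors' actual taxonomy or implementation.

```python
# Hypothetical priority-based fusion of two camera classifier outputs.
# Class names and severity values are illustrative only.

# Higher value = more severe; a severe fault seen by either camera wins.
PRIORITY = {
    "normal": 0,
    "stringing": 1,
    "under_extrusion": 2,
    "warping": 3,
    "spaghetti": 4,
    "nozzle_clog": 5,
}

def fuse_predictions(nozzle_pred, side_pred,
                     nozzle_conf, side_conf, conf_threshold=0.8):
    """Resolve possibly conflicting per-camera predictions.

    Low-confidence predictions are discarded; among the remaining
    candidates, the class with the highest severity priority is
    reported. If neither camera is confident, report "uncertain".
    """
    candidates = []
    if nozzle_conf >= conf_threshold:
        candidates.append(nozzle_pred)
    if side_conf >= conf_threshold:
        candidates.append(side_pred)
    if not candidates:
        return "uncertain"
    return max(candidates, key=lambda c: PRIORITY.get(c, 0))
```

Under this rule, a confident "warping" call from the side camera overrides a confident "normal" from the nozzle camera, which is one simple way conflicting predictions from complementary viewing angles could be resolved.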

Files

Taverne

File under embargo until 27-07-2026