Magnetic Resonance Imaging (MRI) is a powerful tool for visualizing internal body structures and is widely used in clinical practice. However, MRI's long scanning times and high computational demands for post-processing pose challenges, especially in resource-limited environments. Recent advances in machine learning, in particular model compression techniques, offer a way to accelerate MRI post-processing and make it more accessible.
This thesis systematically investigates several model compression methods, including low-rank factorization, knowledge distillation, and quantization, to improve the efficiency of a baseline MR reconstruction neural network. By exploring multiple variants of each technique, the study evaluates their impact on key performance metrics: inference speed, model size, and reconstruction accuracy. Extensive experiments reveal clear trade-offs between image fidelity and computational efficiency, providing insight into the practical feasibility of deploying compressed models in clinical workflows.
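To make one of these compression paths concrete, the following is a minimal sketch of post-training dynamic quantization in PyTorch. The ToyReconNet module and its layer sizes are illustrative placeholders, not the baseline reconstruction network evaluated in this thesis.

```python
import torch
import torch.nn as nn

# Toy stand-in for a reconstruction network; the actual baseline model is not shown here.
class ToyReconNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(256, 512),
            nn.ReLU(),
            nn.Linear(512, 256),
        )

    def forward(self, x):
        return self.body(x)

model = ToyReconNet().eval()

# Post-training dynamic quantization: Linear weights are stored as int8 and
# dequantized on the fly, shrinking the model and often speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 256)
print(quantized(x).shape)  # torch.Size([1, 256])
```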
Among the techniques tested, low-rank factorization implemented via Tucker decomposition proved the most effective: it achieved a threefold reduction in inference time while maintaining high reconstruction quality, highlighting its potential to significantly shorten MRI processing times in real-world applications.
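For readers unfamiliar with the approach, the sketch below illustrates, under simplified assumptions, how a single convolutional layer can be replaced by a Tucker-2 factorized sequence (1x1 input projection, small core convolution, 1x1 output expansion), with factors obtained from truncated SVDs of the kernel's mode unfoldings. The layer sizes, ranks, and the helper name tucker2_decompose_conv are illustrative and not taken from the thesis implementation.

```python
import torch
import torch.nn as nn

def tucker2_decompose_conv(conv: nn.Conv2d, rank_in: int, rank_out: int) -> nn.Sequential:
    """Approximate a Conv2d with a Tucker-2 factorized sequence:
    1x1 input projection -> KxK core convolution -> 1x1 output expansion."""
    W = conv.weight.data  # (C_out, C_in, kH, kW)
    C_out, C_in, kH, kW = W.shape

    # Mode-0 unfolding (C_out, C_in*kH*kW): left singular vectors give the output factor.
    U_out, _, _ = torch.linalg.svd(W.reshape(C_out, -1), full_matrices=False)
    U_out = U_out[:, :rank_out]  # (C_out, r_out)

    # Mode-1 unfolding (C_in, C_out*kH*kW): left singular vectors give the input factor.
    U_in, _, _ = torch.linalg.svd(W.permute(1, 0, 2, 3).reshape(C_in, -1), full_matrices=False)
    U_in = U_in[:, :rank_in]  # (C_in, r_in)

    # Core tensor: project the kernel onto the two factor subspaces.
    core = torch.einsum('oihw,or,is->rshw', W, U_out, U_in)  # (r_out, r_in, kH, kW)

    first = nn.Conv2d(C_in, rank_in, 1, bias=False)
    first.weight.data = U_in.t().reshape(rank_in, C_in, 1, 1)

    middle = nn.Conv2d(rank_in, rank_out, (kH, kW),
                       stride=conv.stride, padding=conv.padding, bias=False)
    middle.weight.data = core

    last = nn.Conv2d(rank_out, C_out, 1, bias=conv.bias is not None)
    last.weight.data = U_out.reshape(C_out, rank_out, 1, 1)
    if conv.bias is not None:
        last.bias.data = conv.bias.data

    return nn.Sequential(first, middle, last)

# Example: factorize one layer and compare parameter counts (sizes are arbitrary).
conv = nn.Conv2d(64, 64, 3, padding=1)
factorized = tucker2_decompose_conv(conv, rank_in=16, rank_out=16)
x = torch.randn(1, 64, 32, 32)
print(conv(x).shape, factorized(x).shape)
print(sum(p.numel() for p in conv.parameters()),
      sum(p.numel() for p in factorized.parameters()))
```

In practice, factorized networks are typically fine-tuned after decomposition to recover the reconstruction quality lost to the rank truncation.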