Large Language Models (LLMs) are becoming more commonplace in today's society. However, their adoption, especially in the fact-checking field, is slowed by distrust in their reasoning process and in the rationales behind their results. In critical situations, the justification behind a verdict matters more than the verdict itself. Yet LLMs often produce explanations that are not grounded in the provided evidence, leading to hallucinations and reduced trust in their outputs. This paper examines how far LLMs have come in both the faithfulness of their explanations to the provided evidence and the correctness of their verdicts. To investigate this, multiple LLMs are asked to assign a label to a claim based on evidence drawn from two datasets of varying complexity: HoVer and QuanTemp. The outputs are then assessed both manually and by another LLM to determine how closely each model's explanation relates to the evidence and whether the model hallucinates in parts of its response. The results reveal that while some models demonstrate high correctness in label assignment, the faithfulness of explanations varies significantly across models and evidence types. The outcomes of this experiment aim to inform both LLM developers and fact-checking researchers about the current limitations of LLMs in response quality, while also showing which areas require further improvement before such systems can become mainstream.