The rise of large language models for client-facing conversational search in healthcare necessitates evaluation frameworks that enable the assessment and comparison of these tools. Most such frameworks centre on the automated calculation of performance metrics and benchmarks. Though these measures are necessary, they fail to account for the human factors that shape the development, use, and adoption of these systems, as well as the factors specific to the healthcare context. Human evaluation frameworks attempt to address these shortcomings, but few have been developed to date, and fewer still are grounded in expert insight. In this work, we conduct semi-structured interviews with eleven healthcare professionals working in lifestyle healthcare. From these interviews, we contribute a two-part expert evaluation framework for the healthcare domain, (K) Knowledge and (I) Interaction, which organises seven evaluation metrics. Our results reveal key understudied evaluation metrics such as (I1) Context-Seeking, (I2) Empathy, and (I3) Trustworthiness.