Automating Research Synthesis with Domain-Specific Large Language Model Fine-Tuning

This research pioneers the use of fine-tuned Large Language Models (LLMs) to automate Systematic Literature Reviews (SLRs), presenting a significant and novel contribution in integrating AI to enhance academic research methodologies. Our study employed the latest fine-tuning methodologies together with open-sourced LLMs, and demonstrated a practical and efficient approach to automating the final execution stages of an SLR process that involve knowledge synthesis. The results maintained high fidelity in the factual accuracy of LLM responses, and were validated through the replication of an existing PRISMA-conforming SLR. Our research proposed solutions for mitigating LLM hallucination, together with mechanisms for tracing LLM responses back to their sources of information, thus demonstrating how this approach can meet the rigorous demands of scholarly research. The findings ultimately confirmed the potential of fine-tuned LLMs to streamline various labor-intensive processes of conducting literature reviews. Given the potential of this approach and its applicability across all research domains, this foundational study also advocated for updating PRISMA reporting guidelines to incorporate AI-driven processes, ensuring methodological transparency and reliability in future SLRs. This study broadens the appeal of AI-enhanced tools across academic and research fields, setting a new standard for conducting comprehensive and accurate literature reviews more efficiently in the face of ever-increasing volumes of academic studies.