Enhancing Large Language Models for Text-to-Testcase Generation

Context: Test-driven development (TDD) is a widely employed software development practice that involves developing test cases based on requirements prior to writing the code. Although various methods for automated test case generation have been proposed, they are not specifically tailored for TDD, where requirements instead of code serve as input. Objective: In this paper, we introduce a text-to-testcase generation approach based on a large language model (GPT-3.5) that is fine-tuned on our curated dataset with an effective prompt design. Method: Our approach enhances the capabilities of the basic GPT-3.5 model for the text-to-testcase generation task by fine-tuning it on our curated dataset with an effective prompt design. We evaluated the effectiveness of our approach using five large-scale open-source software projects. Results: Our approach generated 7k test cases for open-source projects, achieving 78.5
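As a rough illustration of the text-to-testcase setting described above, the sketch below assembles a prompt that asks an LLM to produce a test case from a natural-language requirement. The template wording, function name, and inputs are illustrative assumptions, not the paper's actual prompt design.

```python
# Hypothetical sketch: build a text-to-testcase prompt from a requirement.
# The template text and names here are assumptions for illustration only;
# the paper's actual prompt design may differ.
def build_testcase_prompt(requirement: str, method_signature: str) -> str:
    """Assemble a prompt asking an LLM to write a JUnit test case
    that verifies a natural-language requirement."""
    return (
        "You are given a software requirement and a method signature.\n"
        "Write a JUnit test case that verifies the requirement.\n\n"
        f"Requirement: {requirement}\n"
        f"Method signature: {method_signature}\n"
        "Test case:"
    )

prompt = build_testcase_prompt(
    "Return the absolute value of an integer.",
    "public static int abs(int x)",
)
print(prompt)
```

In a full pipeline, this prompt would be sent to the fine-tuned model and the returned test case compiled and executed against the implementation once it is written.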
