As the availability of historical biodiversity data continues to grow, ensuring its usability through adherence to the FAIR principles (Findable, Accessible, Interoperable, and Reusable) has become increasingly important. This study addresses key challenges in interpreting biodiversity data from historical texts, particularly in identifying common species names and aligning them with their modern scientific counterparts. We address five main challenges: spelling variations, the invention of new terms, semantic broadening, semantic narrowing, and the renaming or reclassification of historical terms. To tackle these issues, we tested a range of large language models (LLMs), namely GPT-4o, LLaMA3-405B, Mistral-8B, and Qwen3-30B-A3B, for their ability to resolve these challenges and support terminology alignment. Initial entity detection was performed with GPT-4o, which achieved a 92% success rate in detecting historical common names and correctly identified 98% of scientific terms on a test dataset. A comparative evaluation of the models' ability to match historical common names with modern equivalents showed that GPT-4o consistently delivered the most accurate and nuanced outputs on four of the five challenges, demonstrating strong contextual understanding. These results highlight the potential of advanced LLMs not only to identify entities but also to interpret historical naming conventions, thereby enhancing the reusability and interoperability of biodiversity data in line with the FAIR principles.