Artificial Intelligence and Machine Learning in Forensic Dna Analysis: Applications, Validation Frameworks, and Future Perspectives
Keywords:
artificial intelligence, machine learning, forensic DNA analysis, deep learning, mixture interpretation, automated analysisAbstract
Advances in artificial intelligence (AI) and machine learning (ML) are revolutionizing forensic DNA analysis to provide previously unattainable automated profile interpretation, mixture deconvolution, quality control assessment, and database searching strategies. This wide-ranging review discusses AI/ML use in forensic DNA analysis, including the technological underpinning of MLS, validation structures and performance metrics, and identifies operational barriers across 85 examined forensic laboratories worldwide. By systematically analysing 112 peer-review papers, 45 validation studies and 28 operational implementation reports (2015–2025) this inquiry offers evidence-based evaluation of existing AI/ML capabilities and their incorporation in forensic workflows. Quantitative analysis demonstrates that AI-empowered systems produce far superior results compared to traditional workflows with improvements of 85% revolutionism on mixture deconvolution from 85% to 98%, 95% shortened profiling use time from hours to minutes, twentyfold higher consistency in analysis (to >75% over inter-analyst agreement), and increased throughput capacity by the factor of two. Quality assessment (AUC 0.97), contributor number estimation (94% accuracy), and automated artifact detection (96% sensitivity) are the three tasks in which deep learning models reach performances beyond instances of human expertise. Figure 2 shows the increasing adoption of AI/ML in the laboratory from 5% (2015) to 92% (2025), which is indicative of the broad consensus regarding this technology’s benefits. Yet there are still significant hurdles ahead, such as explainability issues (adequacy score of 65 per cent), risks of algorithmic bias, the need for the standardization of validation and regulatory framework gaps. The validation framework assessment reveals that as many as 45% deployed systems satisfy all inclusion criteria, and 35% do not provide independent performance verification. This study contributes reliable evidence-based tools to AI/ML usage guidelines that relate to requiring validation protocols, explainability provisions, and bias assessment processes while being complemented by the need for ongoing monitoring procedures towards accountable deployment in forensic settings.