The false promise of firearms examination validation studies: Lay controls, simplistic comparisons, and the failure to soundly measure misidentification rates

This article uses a lay control group, specifically 82 attorneys, as a post hoc control and compares their performance on a set of cartridge case image comparisons from one commonly cited study (Duez et al., J Forensic Sci. 2018;63:1069-1084) with that of the original participant pool of trained professionals. Despite lacking the formalized training and experience common to the latter, our lay participants were generally able to distinguish between cartridge cases fired by the same versus different guns across the 327 comparisons they performed. While their accuracy rates lagged substantially behind those of the professionals on same-source comparisons, their performance on different-source comparisons was essentially indistinguishable from that of trained examiners. This indicates that although the study we vetted may provide useful information about professional accuracy on same-source comparisons, it offers little toward measuring examiners' ability to distinguish between cartridge cases fired by different guns. If similar issues pervade other accuracy studies, there is little reason to rely on the false-positive rates those studies have generated.
Source: Journal of Forensic Sciences. PMID: 38684627 | DOI: 10.1111/1556-4029.15531
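
Note: the two performance measures at issue are conventionally defined as below. This is a generic sketch of the definitions, not figures or notation taken from either study:

same-source accuracy (sensitivity) = correct identifications / total same-source comparisons
false-positive rate = erroneous identifications / total different-source comparisons

Because the false-positive rate is computed only over different-source comparisons, a study whose different-source items can be sorted correctly even by untrained participants says little about examiners' ability on that half of the task.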