Training benchmarks based on validated composite scores for the RobotiX robot-assisted surgery simulator on basic tasks

This study aimed to establish validity evidence on multiple levels for the RobotiX simulator for basic skills. Participants were divided into a novice, a laparoscopically experienced, or a robotically experienced group based on their minimally invasive surgical experience. Two basic tasks were performed: wristed manipulation (Task 1) and vessel energy dissection (Task 2). Performance scores and a questionnaire on realism, didactic value, and usability were gathered (content). Composite scores (0–100), pass/fail values, and alternative benchmark scores were calculated. Twenty-seven novice, 21 laparoscopically experienced, and 13 robotically experienced participants were recruited. Content validity evidence was scored positively overall. Statistically significant differences between novice and robotically experienced participants (construct) were found for movements left (Task 1, p = 0.009), movements right (Task 1, p = 0.009; Task 2, p = 0.021), path length left (Task 1, p = 0.020), and time (Task 1, p = 0.040; Task 2, p < 0.001). Composite scores differed statistically significantly between robotically experienced and novice participants for Task 1 (85.5 versus 77.1, p = 0.044) and Task 2 (80.6 versus 64.9, p = 0.001). The pass/fail analysis yielded cutoff scores with false-positive/false-negative percentages of 75/100 (46%/9.1%) for Task 1 and 71/100 (39%/7.0%) for Task 2. Only a minority of novices passed the calculated benchmark scores on multiple parameters. Validity evidence on ...
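As a minimal sketch of the pass/fail logic described above: given composite scores for a novice and an expert group and a candidate cutoff, the false-positive percentage is the share of novices who pass (score at or above the cutoff) and the false-negative percentage is the share of experts who fail. The scores and the cutoff below are illustrative placeholders, not the study's data, and the exact standard-setting method used in the paper is not specified here.

```python
def fp_fn_rates(novice_scores, expert_scores, cutoff):
    """Return (false-positive %, false-negative %) for a pass/fail cutoff.

    False positive: a novice who scores at or above the cutoff (wrongly passes).
    False negative: an expert who scores below the cutoff (wrongly fails).
    """
    fp = sum(s >= cutoff for s in novice_scores) / len(novice_scores) * 100
    fn = sum(s < cutoff for s in expert_scores) / len(expert_scores) * 100
    return fp, fn


# Illustrative composite scores (0-100), not the study's data
novices = [60, 65, 70, 72, 74, 76, 78, 80, 82, 90]
experts = [70, 78, 80, 84, 86, 88, 90, 92]

fp, fn = fp_fn_rates(novices, experts, cutoff=75)
```

In practice the cutoff would be chosen to balance these two error rates, as the study does when reporting a cutoff alongside its false-positive and false-negative percentages for each task.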
Source: Journal of Robotic Surgery - Category: Surgery Source Type: research