SARAS, to be fully implemented and operational, requires the development of new technologies in four areas: Perception & Decisional Autonomy, Cognitive Control, Advanced Planning & Navigation, and Human–Robot Interaction.
- The Perception module makes sense of the complexity of the surgical area by reconstructing, labelling and tracking all of its elements as observed by the available sensors;
- The Cognitive module learns the structure of complex laparoscopic procedures from real intervention data in order to identify anomalies, understand the surgeon’s actions (situation awareness), and anticipate his/her future needs (decision making);
- The Planning module translates the autonomous decisions made by the Cognitive module into appropriate trajectories for the laparoscopic tools mounted on the SARAS assistive robotic arms;
- The Human–Robot Interface gives the surgeon full control over the procedure through a multi-modal interface with novel capabilities.
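The flow through the four modules can be pictured as a perception–decision–planning loop in which the surgeon retains veto power via the interface. The sketch below is purely illustrative: all type names, function names, and the toy decision rule are assumptions, not part of any published SARAS API.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical, simplified data types; the real SARAS modules are far richer.

@dataclass
class Scene:
    """Reconstructed surgical scene: labelled, tracked elements."""
    labels: List[str]

@dataclass
class Decision:
    """Autonomous decision produced by the Cognitive module."""
    action: str

@dataclass
class Trajectory:
    """Motion for the assistive robotic arms."""
    waypoints: List[Tuple[float, float, float]]

def perceive(sensor_frames: List[str]) -> Scene:
    # Placeholder: a real Perception module would reconstruct, label
    # and track scene elements from the available sensors.
    return Scene(labels=sensor_frames)

def decide(scene: Scene) -> Decision:
    # Placeholder: situation awareness plus anticipation of the
    # surgeon's needs; here a trivial rule stands in for learning.
    return Decision(action="hold_retractor" if "tissue" in scene.labels else "wait")

def plan(decision: Decision) -> Trajectory:
    # Placeholder: translate the decision into arm trajectories.
    return Trajectory(waypoints=[(0.0, 0.0, 0.0)] if decision.action != "wait" else [])

def surgeon_override(trajectory: Trajectory, abort: bool) -> Trajectory:
    # The Human-Robot Interface keeps the surgeon in full control:
    # any planned motion can be vetoed.
    return Trajectory(waypoints=[]) if abort else trajectory

# One cycle of the pipeline, surgeon approving the motion.
traj = surgeon_override(plan(decide(perceive(["tissue", "tool"]))), abort=False)
```

The point of the sketch is the ordering of responsibilities (perceive, decide, plan, with a final human gate), not the internals of any stage.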
SARAS will develop a robotic system with three levels of increasing complexity:
- MULTIROBOTS-SURGERY platform: the main surgeon will use a commercial robotic system whereas the assistant surgeon will teleoperate the SARAS assistive robotic arms.
- SOLO-SURGERY platform: the SARAS system will act autonomously, playing the role of the assistant and helping the main surgeon, who performs the surgical procedure at the da Vinci console. The main surgeon's da Vinci console will be enhanced with force/tactile feedback and a speech recognition module for interacting with the SARAS system.
- LAPARO2.0-SURGERY platform: the SARAS system will play the role of the assistant as in the SOLO-SURGERY case, but with the main surgeon using standard handheld laparoscopic tools.
[Figure] The SARAS SOLO-SURGERY platform
[Figure] The SARAS LAPARO2.0-SURGERY platform