Samuel, Dinesh Jackson; Cuzzolin, Fabio: SVD-GAN for Real-Time Unsupervised Video Anomaly Detection. Conference, The British Machine Vision Conference (BMVC), 2021. Tags: depth-wise separable convolutions, gan convergence, gan reconstruction, lightweight gan model, minimized kl divergence, singular value decomposition loss, spatiotemporal features, svd-gan, unsupervised anomaly detection
Khan, Salman; Cuzzolin, Fabio: Spatiotemporal Deformable Scene Graphs for Complex Activity Detection. Conference, The British Machine Vision Conference (BMVC), 2021. Tags: Action detection, activity detection, autonomous driving, complex activity detection, deformable pooling, graph convolutional network, parts deformation, scene graph, Surgical robotics
Falezza, Fabio; Piccinelli, Nicola; De Rossi, Giacomo; Roberti, Andrea; Kronreif, Gernot; Setti, Francesco; Fiorini, Paolo; Muradore, Riccardo: Modeling of Surgical Procedures Using Statecharts for Semi-Autonomous Robotic Surgery. Journal Article, IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS, 3 (4), pp. 888-899, 2021, ISSN: 2576-3202. Tags: autonomous robotics, statecharts, supervisory controller, Surgical robotics
Samuel, Dinesh Jackson; Cuzzolin, Fabio: Unsupervised anomaly detection for a Smart Autonomous Robotic Assistant Surgeon (SARAS) using a deep residual autoencoder. Journal Article, IEEE Robotics and Automation Letters, 6 (4), pp. 7256-7261, 2021. Tags: Computer Vision for Medical Robotics, Multi-Robot systems, Surgical robotics
Piccinelli, Nicola; Muradore, Riccardo: A bilateral teleoperation with interaction force constraint in unknown environment using non linear model predictive control. Journal Article, European Journal of Control, 62 (November 2021), pp. 185-191, 2021, ISSN: 0947-3580. Tags: Bilateral teleoperation, Model Predictive Control, Optimal control, Robotics
Giacomo De Rossi; Marco Minelli; Serena Roin; Fabio Falezza; Alessio Sozzi; Federica Ferraguti; Francesco Setti; Marcello Bonfè; Cristian Secchi; Riccardo Muradore: A First Evaluation of a Multi-Modal Learning System to Control Surgical Assistant Robots via Action Segmentation. Journal Article, IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS, 3 (3), pp. 714-724, 2021, ISSN: 2576-3202. Tags: Action segmentation, Cognitive robotics, Medical robotics, Model-predictive control, R-MIS
Piccinelli, Nicola; Muradore, Riccardo: A Passivity-Based Bilateral Teleoperation Architecture using Distributed Nonlinear Model Predictive Control. Conference, 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2021. Tags: bilateral teleoperation algorithm, passivity
Piccinelli, Nicola; Roberti, Andrea; Tagliabue, Eleonora; Setti, Francesco; Kronreif, Gernot; Muradore, Riccardo; Fiorini, Paolo: Rigid 3D Registration of Pre-operative Information for Semi-Autonomous Surgery. Conference, 2020 International Symposium on Medical Robotics (ISMR), IEEE, 2021. Tags: R-MIS, RARP, semi-autonomous robotics
De Rossi, Giacomo; Roin, Serena; Setti, Francesco; Muradore, Riccardo: A Multi-Modal Learning System for On-Line Surgical Action Segmentation. Conference, 2020 International Symposium on Medical Robotics (ISMR), IEEE, 2021. Tags: Deep learning model, Surgical Action Recognition, Surgical Action Segmentation
Andrea Roberti; Nicola Piccinelli; Daniele Meli; Riccardo Muradore; Paolo Fiorini: Improving Rigid 3-D Calibration for Robotic Surgery. Journal Article, IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS, 2 (4), pp. 569-573, 2020, ISSN: 2576-3202. Tags: Calibration, Medical robotics, Minimally invasive surgery, multi arm calibration, Robot, Robot vision systems, Surgery, Surgical robotics, Three-dimensional displays
Marco Minelli; Alessio Sozzi; Giacomo De Rossi; Federica Ferraguti; Francesco Setti; Riccardo Muradore; Marcello Bonfè; Cristian Secchi: Integrating Model Predictive Control and Dynamic Waypoints Generation for Motion Planning in Surgical Scenario. Conference, 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020. Tags: Collision avoidance, Minimally invasive surgery, Planning, Predictive models, Robot kinematics, Robots, Tools
Sayols, Narcís; Sozzi, Alessio; Piccinelli, Nicola; Hernansanz, Albert; Casals, Alicia; Bonfè, Marcello; Muradore, Riccardo: A hFSM based cognitive control architecture for assistive task in R-MIS. Conference, 10th Conference on New Technologies for Computer/Robot Assisted Surgery (CRAS), 2020. Tags: Cognitive control, hierarchical finite state machine, Robotic surgery
Pieras, Tomàs; Hernansanz, Albert; Sayols, Narcís; Parra, Johanna; Eixarch, Elisenda; Gratacós, Eduard; Casals, Alícia: Multi-task control strategy exploiting redundancy in RMIS. Conference, 10th Conference on New Technologies for Computer/Robot Assisted Surgery, 2020. Tags: human-robot collisions, human-robot interaction, R-MIS, surgical tasks
Amat, Josep; Casals, Alícia; Frigola, Manel: Bitrack: a friendly four arms robot for laparoscopic surgery. Conference, 10th Conference on New Technologies for Computer/Robot Assisted Surgery, 2020. Tags: hybrid surgery, robotic laparoscopic surgery
Falezza, Fabio; Piccinelli, Nicola; Roberti, Andrea; Setti, Francesco; Muradore, Riccardo; Fiorini, Paolo: A supervisory controller for semi-autonomous surgical interventions. Conference, 10th Conference on New Technologies for Computer/Robot Assisted Surgery (CRAS 2020), 2020. Tags: hierarchical finite state machine, R-MIS, semi-autonomous robot
Narcís Sayols; Alessio Sozzi; Nicola Piccinelli; Albert Hernansanz; Alicia Casals; Marcello Bonfè; Riccardo Muradore: Global/local motion planning based on Dynamic Trajectory Reconfiguration and Dynamical Systems for autonomous surgical robots. Conference, 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020. Tags: assistive tasks, autonomous execution, autonomous surgical, Collision avoidance, collision free connections, collision-free trajectories, desired task, developed motion planner, dynamical systems based obstacle avoidance, final target, geometric constraints, global level computes smooth spline-based trajectories, Medical robotics, mobile robots, motion control, moving obstacles, realistic surgical scenario, Robots, splines (mathematics), Surgery, robotic minimally invasive surgery, Task analysis, Tools, Trajectory, two-layer architecture
Inna Skarga-Bandurova; Rostislav Siriak; Tetiana Biloborodova; Fabio Cuzzolin; Vivek Singh Bawa; Mohamed Ibrahim Mohamed; R Dinesh Jackson: Surgical Hand Gesture Prediction for the Operating Room. Journal Article, Studies in Health Technology and Informatics, 273, pp. 97-103, 2020. Tags: ConvLSTM, GestureConvLSTM, Hand gesture, operating room, prediction, surgeon
V. Singh Bawa; G. Singh; F. Kaping’A; I. Skarga-Bandurova; A. Leporini; C. Landolfo; A. Stabile; F. Setti; R. Muradore; E. Oleari; F. Cuzzolin: ESAD: Endoscopic Surgeon Action Detection Dataset. Online, arXiv, 2020, visited: 25.06.2020. Tags: Action detection, endoscopic video, surgeon action detection, Surgery
Alice Leporini; Elettra Oleari; Carmela Landolfo; Alberto Sanna; Alessandro Larcher; Giorgio Gandaglia; Nicola Fossati; Fabio Muttin; Umberto Capitanio; Francesco Montorsi; Andrea Salonia; Marco Minelli; Federica Ferraguti; Cristian Secchi; Saverio Farsoni; Alessio Sozzi; Marcello Bonfè; Narcís Sayols; Albert Hernansanz; Alicia Casals; Sabine Hertle; Fabio Cuzzolin; Andrew Dennison; Andreas Melzer; Gernot Kronreif; Salvatore Siracusano; Fabio Falezza; Francesco Setti; Riccardo Muradore: Technical and Functional Validation of a Teleoperated Multirobots Platform for Minimally Invasive Surgery. Journal Article, IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS, 2 (2), pp. 148-156, 2020, ISSN: 2576-3202. Tags: functional evaluation, Instruments, Manipulators, Protocols, Robot kinematics, robotic end effector task metrics, Surgery, surgical-related tasks, tele-operated surgical robotic system, Tools, Validation protocol
Giacomo De Rossi; Marco Minelli; Alessio Sozzi; Nicola Piccinelli; Federica Ferraguti; Francesco Setti; Marcello Bonfè; Cristian Secchi; Riccardo Muradore: Cognitive Robotic Architecture for Semi-Autonomous Execution of Manipulation Tasks in a Surgical Environment. Conference, 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2020, ISBN: 978-1-7281-4004-9. Tags: Artificial Intelligence, Collision avoidance, Manipulators, Medical robotics, mobile robots, predictive control, Robot, Robot vision, Surgery, trajectory control, uncertain systems
Casals, Alicia; Hernansanz, Albert; Sayols, Narcís; Amat, Josep: Assistance Strategies for Robotized Laparoscopy. Conference, Robot 2019: Fourth Iberian Robotics Conference, 2019, ISBN: 978-3-030-36149-5. Tags: Cooperative robotics, Laparoscopy, Robot, Safety, Surgery, Surgical robots, Virtual feedback
Sayols, Narcís; Hernansanz, Albert; Parra, Johanna; Eixarch, Elisenda; Gratacós, Eduard; Amat, Josep; Casals, Alícia: Vision Based Robot Assistance in TTTS Fetal Surgery. Journal Article, 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2019. Tags: Coagulation, Image processing, Laser, Robot, Surgery, Three-dimensional displays, Visualization
Sozzi, Alessio; Bonfè, Marcello; Farsoni, Saverio; De Rossi, Giacomo; Muradore, Riccardo: Dynamic Motion Planning for Autonomous Assistive Surgical Robots. Journal Article, Electronics, 8 (9), 957, 2019. Tags: Dynamical systems, Motion planning, Obstacle avoidance, Robot, Surgical robots
Sartori, Enrico; Tadiello, Carlo; Secchi, Cristian; Muradore, Riccardo: Tele-Echography Using a Two-Layer Teleoperation Algorithm with Energy Scaling. Conference, 2019 (2019 International Conference on Robotics and Automation (ICRA), Palais des congres de Montreal, Montreal, Canada, May 20-24, 2019). Tags: Energy Scaling, Tele-Echography, Teleoperation, Two-Layer Algorithm
Minelli, Marco; Ferraguti, Federica; Piccinelli, Nicola; Muradore, Riccardo; Secchi, Cristian: Energy-Shared Two-Layer Approach for Multi-Master-Multi-Slave Bilateral Teleoperation Systems. Conference, 2019. Tags: Control architecture, Control programming, Laparoscopy, Robot, Surgical robots, Teleoperation, Telerobotics
Setti, Francesco; Oleari, Elettra; Leporini, Alice; Trojaniello, Diana; Sanna, Alberto; Capitanio, Umberto; Montorsi, Francesco; Salonia, Andrea; Muradore, Riccardo: A Multirobots Teleoperated Platform for Artificial Intelligence Training Data Collection in Minimally Invasive Surgery. 2019, ISBN: 978-1-5386-7825-1. Tags: Artificial Intelligence, Cognitive control, Computer Science, Laparoscopy, machine learning, Robot, Robotic surgery, Surgery, Teleoperation
Oleari, Elettra; Leporini, Alice; Trojaniello, Diana; Sanna, Alberto; Capitanio, Umberto; Deho, Federico; Larcher, Alessandro; Montorsi, Francesco; Salonia, Andrea; Setti, Francesco; Muradore, Riccardo: Enhancing Surgical Process Modeling for Artificial Intelligence Development in Robotics: the SARAS Case Study for Minimally Invasive Procedures. Journal Article, pp. 1-6, 2019, ISBN: 978-1-7281-2342-4. Tags: Artificial Intelligence, Autonomy, Cognitive control, Cognitive functions, Decision making, Laparoscopes, Laparoscopy, learning systems, machine learning, Medical robotics, multirobots teleoperated platform, Robotic surgery, Surgery, Surgical robots, Teleoperation
Hernansanz, Albert; Martínez; Rovira; Casals, Alicia: A physical/virtual platform for hysteroscopy training. Conference, Proceedings of the 9th Joint Workshop on New Technologies for Computer/Robot Assisted Surgery, 2019. Tags: Computer Science, Endoscopy, Laparoscopy, Robot, Robotic surgery, Surgery, Surgical robots, Training
Hernansanz, Albert; Pieras; Ferrandiz; Moreno; Casals, Alicia: Sentisim: a hybrid training platform for sinb in local melanoma staging. Conference, CRAS 2019, 2019. Tags: Anatomical trainer, Biopsy, Melanoma, Simulator, Surgery, Surgical trainer
Roberti, Andrea; Muradore, Riccardo; Fiorini, Paolo; Cristani, Marco; Setti, Francesco: An energy saving approach to active object recognition and localization. Conference, Annual Conference of the IEEE Industrial Electronics Society (IECON), Washington, DC, USA, 2018. Tags: Active object recognition, Artificial Intelligence, Computer Science, Learning, Object recognition, Pattern Recognition, POMDP, Robotics
Roberti, Andrea; Carletti, Marco; Setti, Francesco; Castellani, Umberto; Fiorini, Paolo; Cristani, Marco: Recognition self-awareness for active object recognition on depth images. Conference, BMVC 2018, 2018. Tags: 3D object classifier, Artificial Intelligence, Computer Science, Object exploration, Object recognition, POMDP
Singh, Gurkirt; Saha, Suman; Cuzzolin, Fabio: Predicting action tubes. Journal Article, 2018 (Proceedings of the ECCV 2018 Workshop on Anticipating Human Behaviour (AHB 2018), Munich, Germany, Sep 2018). Tags: Artificial Intelligence, Computer Science, Computer vision, Object recognition, Pattern Recognition, Robot, Robotics
Marbán, Arturo; Srinivasan, Vignesh; Samek, Wojciech; Fernández, Josep; Casals, Alicia: 2018. Tags: Learning, Robot, Robotic surgery, Robotics, Surgery, Training
Singh, Gurkirt; Saha, Suman; Cuzzolin, Fabio: TraMNet - Transition Matrix Network for Efficient Action Tube Proposals. Proceeding, 2018. Tags: Computer Science, Computer vision, Electrical Engineering, Image processing, Pattern Recognition, Robot, Robotics, Systems Science, Visual processing
Behl, Harkirat Singh; Sapienza, Michael; Singh, Gurkirt; Saha, Suman; Cuzzolin, Fabio; Torr, Philip H S: Incremental Tube Construction for Human Action Detection. Proceeding, 2018. Tags: Action detection, Artificial Intelligence, Computer Science, Computer vision, Detection, Pattern Recognition, Robot
2021
title = {SVD-GAN for Real-Time Unsupervised Video Anomaly Detection},
author = {Dinesh Jackson Samuel and Fabio Cuzzolin},
url = {https://www.bmvc2021-virtualconference.com/assets/papers/1295.pdf},
year = {2021},
date = {2021-11-24},
booktitle = {2021 The British Machine Vision Conference (BMVC)},
abstract = {Real-time unsupervised anomaly detection from videos is challenging due to the uncertainty in occurrence and definition of abnormal events. To overcome this ambiguity, an unsupervised adversarial learning model is proposed to detect such unusual events. The proposed end-to-end system is based on a Generative Adversarial Network (GAN) architecture with spatiotemporal feature learning and a new Singular Value Decomposition (SVD) loss function for robust reconstruction and video anomaly detection. The loss employs efficient low-rank approximations of the matrices involved to drive the convergence of the model. During training, the model strives to learn the relevant normal data distribution. Anomalies are then detected as frames whose reconstruction error, based on such distribution, shows a significant deviation. The model is efficient and lightweight due to our adoption of depth-wise separable convolution. The complete system is validated upon several benchmark datasets and proven to be robust for complex video anomaly detection, in terms of both AUC and Equal Error Rate (EER).},
keywords = {depth-wise separable convolutions, gan convergence, gan reconstruction, lightweight gan model, minimized kl divergence, singular value decomposition loss, spatiotemporal features, svd-gan, unsupervised anomaly detection},
pubstate = {published},
tppubtype = {conference}
}
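A minimal sketch of the kind of scoring this abstract describes, not the authors' code: it blends a pixel-wise reconstruction error with a penalty on the top-k singular values of a frame and its GAN reconstruction (the low-rank approximation that an SVD-based loss works on). The frame size, rank k and weight alpha are invented for illustration; Python/NumPy is assumed.

# Illustrative sketch (not the paper's implementation): an SVD low-rank penalty
# plus a per-frame anomaly score from reconstruction error.
import numpy as np

def svd_lowrank_penalty(frame, recon, k=10):
    """Compare the top-k singular values of the input frame and its
    reconstruction; a small difference means the dominant (low-rank)
    structure of the frame was preserved."""
    s_in = np.linalg.svd(frame, compute_uv=False)[:k]
    s_out = np.linalg.svd(recon, compute_uv=False)[:k]
    return float(np.sum((s_in - s_out) ** 2))

def anomaly_score(frame, recon, alpha=0.5):
    """Blend pixel-wise reconstruction error with the SVD penalty."""
    mse = float(np.mean((frame - recon) ** 2))
    return alpha * mse + (1.0 - alpha) * svd_lowrank_penalty(frame, recon)

# Frames whose score exceeds a threshold fitted on normal data would be
# flagged as anomalous; the arrays below are stand-in values.
frame = np.random.rand(64, 64)
recon = frame + 0.01 * np.random.randn(64, 64)
print(anomaly_score(frame, recon))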
title = {Spatiotemporal Deformable Scene Graphs for Complex Activity Detection},
author = {Salman Khan and Fabio Cuzzolin},
url = {https://www.bmvc2021-virtualconference.com/assets/papers/0706.pdf},
year = {2021},
date = {2021-11-24},
booktitle = {2021 The British Machine Vision Conference (BMVC)},
abstract = {Long-term complex activity recognition and localisation can be crucial for decision making in autonomous systems such as smart cars and surgical robots. Here we address the problem via a novel deformable, spatiotemporal scene graph approach, consisting of three main building blocks: (i) action tube detection, (ii) the modelling of the deformable geometry of parts, and (iii) a graph convolutional network. Firstly, action tubes are detected in a series of snippets. Next, a new 3D deformable RoI pooling layer is designed for learning the flexible, deformable geometry of the constituent action tubes. Finally, a scene graph is constructed by considering all parts as nodes and connecting them based on different semantics such as order of appearance, sharing the same action label and feature similarity. We also contribute fresh temporal complex activity annotation for the recently released ROAD autonomous driving and SARAS-ESAD surgical action datasets and show the adaptability of our framework to different domains. Our method is shown to significantly outperform graph-based competitors on both augmented datasets.},
keywords = {Action detection, activity detection, autonomous driving, complex activity detection, deformable pooling, graph convolutional network, parts deformation, scene graph, Surgical robotics},
pubstate = {published},
tppubtype = {conference}
}
title = {Modeling of Surgical Procedures Using Statecharts for Semi-Autonomous Robotic Surgery},
author = {Fabio Falezza and Nicola Piccinelli and Giacomo De Rossi and Andrea Roberti and Gernot Kronreif and Francesco Setti and Paolo Fiorini and Riccardo Muradore},
editor = {IEEE},
url = {https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9530457},
doi = {10.1109/TMRB.2021.3110676},
issn = {2576-3202},
year = {2021},
date = {2021-09-06},
journal = {IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS},
volume = {3},
number = {4},
pages = {888-899},
abstract = {In this paper we propose a new methodology to model surgical procedures that is specifically tailored to semi-autonomous robotic surgery. We propose to use a restricted version of statecharts to merge the bottom-up approach, based on data-driven techniques (e.g., machine learning), with the top-down approach based on knowledge representation techniques. We consider medical knowledge about the procedure and sensing of the environment in two concurrent regions of the statecharts to facilitate re-usability and adaptability of the modules. Our approach allows producing a well-defined procedural model exploiting the hierarchy capability of the statecharts, while machine learning modules act as soft sensors to trigger state transitions. Integrating data-driven and prior knowledge techniques provides a robust, modular, flexible and re-configurable methodology to define a surgical procedure which is comprehensible by both humans and machines. We validate our approach on the three surgical phases of a Robot-Assisted Radical Prostatectomy (RARP) that directly involve the assistant surgeon: bladder mobilization, bladder neck transection, and vesicourethral anastomosis, all performed on synthetic manikins.},
keywords = {autonomous robotics, statecharts, supervisory controller, Surgical robotics},
pubstate = {published},
tppubtype = {article}
}
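To illustrate the statechart idea in the abstract above (two concurrent regions, one encoding prior procedural knowledge and one encoding sensing, with machine-learning modules acting as soft sensors that trigger transitions), a hedged Python sketch follows; the phase and event names are placeholders, not the paper's actual model.

# Minimal sketch, assuming invented phases/events: two concurrent regions of a
# restricted statechart, where data-driven "soft sensor" events trigger
# transitions defined by prior (top-down) knowledge.
class Region:
    def __init__(self, name, transitions, initial):
        self.name = name
        self.state = initial
        self.transitions = transitions  # {(state, event): next_state}

    def fire(self, event):
        nxt = self.transitions.get((self.state, event))
        if nxt is not None:
            print(f"[{self.name}] {self.state} --{event}--> {nxt}")
            self.state = nxt

procedure = Region(
    "procedure",
    {("bladder_mobilization", "phase_done"): "bladder_neck_transection",
     ("bladder_neck_transection", "phase_done"): "vesicourethral_anastomosis"},
    initial="bladder_mobilization")

sensing = Region(
    "sensing",
    {("observing", "tool_detected"): "tool_in_view",
     ("tool_in_view", "tool_lost"): "observing"},
    initial="observing")

# Events emitted by perception / action-recognition modules drive both regions.
for event in ["tool_detected", "phase_done", "tool_lost", "phase_done"]:
    for region in (procedure, sensing):
        region.fire(event)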
title = {Unsupervised anomaly detection for a Smart Autonomous Robotic Assistant Surgeon (SARAS) using a deep residual autoencoder},
author = {Dinesh Jackson Samuel and Fabio Cuzzolin},
editor = {IEEE},
doi = {10.1109/LRA.2021.3097244},
year = {2021},
date = {2021-07-14},
journal = { IEEE Robotics and Automation Letters },
volume = {6},
number = {4},
pages = {7256 - 7261},
abstract = {Anomaly detection in Minimally-Invasive Surgery (MIS) traditionally requires a human expert monitoring the procedure from a console. Data scarcity, on the other hand, hinders what would be a desirable migration towards autonomous robotic-assisted surgical systems. Automated anomaly detection systems in this area typically rely on classical supervised learning. Anomalous events in a surgical setting, however, are rare, making it difficult to capture data to train a detection model in a supervised fashion. In this work we thus propose an unsupervised approach to anomaly detection for robotic-assisted surgery based on deep residual autoencoders. The idea is to make the autoencoder learn the 'normal' distribution of the data and detect abnormal events deviating from this distribution by measuring the reconstruction error. The model is trained and validated upon both the publicly available Cholec80 dataset, provided with extra annotation, and on a set of videos captured on procedures using artificial anatomies ('phantoms') produced as part of the Smart Autonomous Robotic Assistant Surgeon (SARAS) project. The system achieves recall and precision equal to 78.4%, 91.5%, respectively, on Cholec80 and of 95.6%, 88.1% on the SARAS phantom dataset. The end-to-end system was developed and deployed as part of the SARAS demonstration platform for real-time anomaly detection with a processing time of about 25 ms per frame.},
keywords = {Computer Vision for Medical Robotics, Multi-Robot systems, Surgical robotics},
pubstate = {published},
tppubtype = {article}
}
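A small illustration of the detection rule this abstract describes (flagging frames whose reconstruction error deviates from the distribution learned on normal data); the threshold rule, the three-sigma margin and all numbers are assumptions made for the sketch, not the SARAS implementation.

# Hedged sketch: fit an anomaly threshold from autoencoder reconstruction
# errors on *normal* frames, then flag test frames that exceed it.
import numpy as np

def fit_threshold(normal_errors, n_sigma=3.0):
    mu, sigma = np.mean(normal_errors), np.std(normal_errors)
    return mu + n_sigma * sigma

def detect(test_errors, threshold):
    return test_errors > threshold  # True = frame flagged as anomalous

normal_errors = np.abs(np.random.randn(1000)) * 0.01            # stand-in values
test_errors = np.concatenate([np.abs(np.random.randn(50)) * 0.01, [0.2, 0.3]])
thr = fit_threshold(normal_errors)
print(detect(test_errors, thr).sum(), "frames flagged as anomalous")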
title = {A bilateral teleoperation with interaction force constraint in unknown environment using non linear model predictive control},
author = {Nicola Piccinelli and Riccardo Muradore},
doi = {10.1016/j.ejcon.2021.06.030},
issn = {0947-3580},
year = {2021},
date = {2021-07-10},
journal = {European Journal of Control},
volume = {62},
number = {November 2021},
pages = {185-191},
abstract = {In critical scenarios, the interaction forces between a robot and the environment could lead to damages and dangerous situations. Complex tasks like grasping fragile objects or physical human-robot interaction in collaborative robotics require the capability of controlling forces. In bilateral teleoperation, force feedback is used to provide telepresence to the operator. In such a situation, the force is commonly measured by a force/torque sensor at the end effector of the remote robot. Even if force feedback allows the operator to feel the interaction with the environment, this does not prevent unsafe motions. In this paper, we propose a model predictive control (MPC) based bilateral teleoperation scheme able to guarantee safe interaction with the environment by constraining the forces. The method does not assume any prior knowledge of the environment.},
keywords = {Bilateral teleoperation, Model Predictive Control, Optimal control, Robotics},
pubstate = {published},
tppubtype = {article}
}
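As a rough, hedged sketch of the underlying idea only (not the paper's NMPC formulation): a one-dimensional receding-horizon problem that tracks a master position while keeping the contact force, predicted through a simple spring-like environment model, below a bound. The double-integrator model, environment stiffness, horizon and limits are all invented; SciPy's SLSQP solver is used purely for illustration.

# Toy force-constrained receding-horizon step under invented parameters.
import numpy as np
from scipy.optimize import minimize

k_env, x_wall, f_max, dt, horizon = 500.0, 0.10, 2.0, 0.05, 10

def predict(x0, v0, accels):
    """Roll out a double-integrator model over the horizon."""
    xs, x, v = [], x0, v0
    for a in accels:
        v += a * dt
        x += v * dt
        xs.append(x)
    return np.array(xs)

def contact_force(x):
    """Spring-like environment: force grows once the wall is penetrated."""
    return k_env * np.maximum(0.0, x - x_wall)

def mpc_step(x0, v0, x_master):
    cost = lambda a: np.sum((predict(x0, v0, a) - x_master) ** 2) + 1e-3 * np.sum(a ** 2)
    cons = {"type": "ineq", "fun": lambda a: f_max - contact_force(predict(x0, v0, a))}
    res = minimize(cost, np.zeros(horizon), method="SLSQP", constraints=[cons])
    return res.x[0]  # apply only the first control input, then re-plan

print(mpc_step(x0=0.09, v0=0.0, x_master=0.2))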
title = {A First Evaluation of a Multi-Modal Learning System to Control Surgical Assistant Robots via Action Segmentation},
author = {Giacomo De Rossi and Marco Minelli and Serena Roin and Fabio Falezza and Alessio Sozzi and Federica Ferraguti and Francesco Setti and Marcello Bonfè and Cristian Secchi and Riccardo Muradore},
editor = {IEEE },
doi = {10.1109/TMRB.2021.3082210},
issn = {2576-3202},
year = {2021},
date = {2021-05-21},
journal = {IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS},
volume = {3},
number = {3},
pages = {714-724},
abstract = {The next stage for robotics development is to introduce autonomy and cooperation with human agents in tasks that require high levels of precision and/or that exert considerable physical strain. To guarantee the highest possible safety standards, the best approach is to devise a deterministic automaton that performs identically for each operation. Clearly, such approach inevitably fails to adapt itself to changing environments or different human companions. In a surgical scenario, the highest variability happens for the timing of different actions performed within the same phases. This paper presents a cognitive control architecture that uses a multi-modal neural network trained on a cooperative task performed by human surgeons and produces an action segmentation that provides the required timing for actions while maintaining full phase execution control via a deterministic Supervisory Controller and full execution safety by a velocity-constrained Model Predictive Controller.},
keywords = {Action segmentation, Cognitive robotics, Medical robotics, Model-predictive control, R-MIS},
pubstate = {published},
tppubtype = {article}
}
title = {A Passivity-Based Bilateral Teleoperation Architecture using Distributed Nonlinear Model Predictive Control},
author = {Nicola Piccinelli and Riccardo Muradore},
doi = {10.1109/IROS45743.2020.9341048},
year = {2021},
date = {2021-02-10},
booktitle = {2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
publisher = {IEEE},
abstract = {Bilateral teleoperation systems allow the telepresence of an operator while working remotely. Such ability becomes crucial when dealing with critical environments like space, nuclear plants, rescue, and surgery. The main properties of a teleoperation system are the stability and the transparency which, in general, are in contrast and they cannot be fully achieved at the same time. In this paper, we will present a novel model predictive controller that implements a passivity-based bilateral teleoperation algorithm. Our solution mitigates the chattering issue arising when resorting to the energy tank (or reservoir) mechanism by forcing the passivity as a hard constraint on the system evolution.},
keywords = {bilateral teleoperation algorithm, passivity},
pubstate = {published},
tppubtype = {conference}
}
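The abstract above refers to the energy-tank (reservoir) mechanism whose chattering the proposed controller mitigates; a toy sketch of the basic tank bookkeeping follows, with invented energy values, purely to illustrate the idea of releasing energy only while the tank stays above a minimum level.

# Minimal energy-tank sketch (invented numbers, not the paper's controller).
def tank_step(tank, harvested, requested, tank_min=0.05):
    """Add energy harvested/dissipated by the system, then release energy for
    the requested action only while the tank stays above tank_min."""
    tank += harvested
    released = min(requested, max(0.0, tank - tank_min))
    tank -= released
    return tank, released

tank = 1.0
for harvested, requested in [(0.02, 0.5), (0.0, 0.6), (0.1, 0.3)]:
    tank, released = tank_step(tank, harvested, requested)
    print(f"tank={tank:.2f} J, released={released:.2f} J")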
title = {Rigid 3D Registration of Pre-operative Information for Semi-Autonomous Surgery},
author = {Nicola Piccinelli and Andrea Roberti and Eleonora Tagliabue and Francesco Setti and Gernot Kronreif and Riccardo Muradore and Paolo Fiorini},
editor = {IEEE },
doi = {10.1109/ISMR48331.2020.9312949},
year = {2021},
date = {2021-01-11},
booktitle = {2020 International Symposium on Medical Robotics (ISMR)},
publisher = {IEEE},
abstract = {Autonomous surgical robotics is the new frontier of surgery. In recent years, several studies have analysed the feasibility of autonomy in the field of robotic minimally invasive surgery (R-MIS). One of the most important requisites for such a system is the capability of reconstructing patient's 3D anatomy in real-time and registering it with pre-operative data. A popular approach to address this problem is to use simultaneous localisation and mapping (SLAM) techniques. However, they suffer from the lack of a correct scaling factor for the 3D model when a monocular vision system is used. In this paper we register the sparse point cloud obtained with SLAM with the pre-operative model of the patient, in order to guide a robotic arm to perform some representative surgical tasks. To achieve this goal, we propose to recover the environment scaling factor for the SLAM point cloud exploiting the kinematics of the da Vinci ® Endoscopic Camera Manipulator (ECM). The proposed approach is tested in a real environment using an anatomically realistic phantom whose pre-operative model is extracted from the phantom's magnetic resonance imaging (MRI) scan. Validation is carried out by performing the bladder pushing task during a radical prostatectomy procedure.},
keywords = {R-MIS, RARP, semi-autonomous robotics},
pubstate = {published},
tppubtype = {conference}
}
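For orientation only, a NumPy sketch of a standard SVD-based (Umeyama-style) similarity alignment between two corresponding point sets, i.e. the kind of rigid registration with an unknown scale factor discussed in the abstract; correspondences are assumed known here, whereas the paper recovers the scale from the ECM kinematics.

# Hedged sketch: closed-form scale + rotation + translation between a SLAM
# point cloud and a pre-operative model, assuming known correspondences.
import numpy as np

def similarity_align(src, dst):
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    U, sigma, Vt = np.linalg.svd(D.T @ S / len(src))   # cross-covariance SVD
    sign = np.sign(np.linalg.det(U @ Vt))              # guard against reflections
    R = U @ np.diag([1.0, 1.0, sign]) @ Vt
    scale = (sigma * [1.0, 1.0, sign]).sum() / S.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

src = np.random.rand(100, 3)                            # stand-in SLAM points
dst = 2.5 * src @ np.eye(3).T + np.array([0.1, -0.2, 0.3])
scale, R, t = similarity_align(src, dst)
print(round(scale, 3), t.round(3))                      # recovers 2.5 and the offset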
title = {A Multi-Modal Learning System for On-Line Surgical Action Segmentation},
author = {Giacomo De Rossi and Serena Roin and Francesco Setti and Riccardo Muradore},
doi = {10.1109/ISMR48331.2020.9312950},
year = {2021},
date = {2021-01-11},
booktitle = {2020 International Symposium on Medical Robotics (ISMR)},
publisher = {IEEE},
abstract = {Surgical action recognition and temporal segmentation is a building block needed to provide some degrees of autonomy to surgical robots. In this paper, we present a deep learning model that relies on videos and kinematic data to output in real-time the current action in a surgical procedure. The proposed neural network architecture is composed of two sub-networks: a Spatial-Kinematic Network, which produces high-level features by processing images and kinematic data, and a Temporal Convolutional Network, which filters such features temporally over a sliding window to stabilize their changes over time. Since we are interested in applications to real-time supervisory control of robots, we focus on an efficient and causal implementation, i.e. the prediction at sample k only depends on previous observations. We tested our causal architecture on the publicly available JIGSAWS dataset, outperforming comparable state-of-the-art non-causal algorithms up to 8.6% in the edit score.},
keywords = {Deep learning model, Surgical Action Recognition, Surgical Action Segmentation},
pubstate = {published},
tppubtype = {conference}
}
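A tiny sketch of the causality constraint mentioned in the abstract (the prediction at sample k depends only on past observations), here reduced to a causal sliding-window average over per-frame action scores; the window length and the random scores are placeholders, not the paper's Temporal Convolutional Network.

# Minimal causal smoothing sketch for online action segmentation.
import numpy as np

def causal_smooth(frame_probs, window=8):
    """frame_probs: (T, n_actions) per-frame scores from a frame-level model."""
    smoothed = np.zeros_like(frame_probs)
    for k in range(len(frame_probs)):
        start = max(0, k - window + 1)       # never look into the future
        smoothed[k] = frame_probs[start:k + 1].mean(axis=0)
    return smoothed.argmax(axis=1)           # action label per frame

probs = np.random.rand(20, 4)                # stand-in scores
print(causal_smooth(probs))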
2020
title = {Improving Rigid 3-D Calibration for Robotic Surgery},
author = {Andrea Roberti and Nicola Piccinelli and Daniele Meli and Riccardo Muradore and Paolo Fiorini},
editor = {IEEE },
doi = {10.1109/TMRB.2020.3033670},
issn = {2576-3202},
year = {2020},
date = {2020-11-04},
journal = {IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS},
volume = {2},
number = {4},
pages = {569-573},
abstract = {Autonomy is the next frontier of research in robotic surgery and its aim is to improve the quality of surgical procedures in the near future. One fundamental requirement for autonomy is advanced perception capability through vision sensors. In this article, we propose a novel calibration technique for a surgical scenario with a da Vinci ® Research Kit (dVRK) robot. Camera and robotic arm calibration are necessary for precise positioning and to emulate the expert surgeon. The novel calibration technique is tailored for RGB-D cameras. Different tests performed on relevant use cases prove that we significantly improve precision and accuracy with respect to state-of-the-art solutions for similar devices on surgical-size setups. Moreover, our calibration method can be easily extended to the standard surgical endoscopes used in real surgical scenarios.},
keywords = {Calibration, Medical robotics, Minimally invasive surgery, multi arm calibration, Robot, Robot vision systems, Surgery, Surgical robotics, Three-dimensional displays},
pubstate = {published},
tppubtype = {article}
}
title = {Integrating Model Predictive Control and Dynamic Waypoints Generation for Motion Planning in Surgical Scenario},
author = {Marco Minelli and Alessio Sozzi and Giacomo De Rossi and Federica Ferraguti and Francesco Setti and Riccardo Muradore and Marcello Bonfè and Cristian Secchi},
editor = {IEEE },
doi = {10.1109/IROS45743.2020.9341673},
year = {2020},
date = {2020-10-24},
booktitle = {2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
abstract = {In this paper we present a novel strategy for motion planning of autonomous robotic arms in Robotic Minimally Invasive Surgery (R-MIS). We consider a scenario where several laparoscopic tools must move and coordinate in a shared environment. The motion planner is based on a Model Predictive Controller (MPC) that predicts the future behavior of the robots and allows to move them avoiding collisions between the tools and satisfying the velocity limitations. In order to avoid the local minima that could affect the MPC, we propose a strategy for driving it through a sequence of waypoints. The proposed control strategy is validated on a realistic surgical scenario.},
keywords = {Collision avoidance, Minimally invasive surgery, Planning, Predictive models, Robot kinematics, Robots, Tools},
pubstate = {published},
tppubtype = {conference}
}
title = {A hFSM based cognitive control architecture for assistive task in R-MIS },
author = {Narcís Sayols and Alessio Sozzi and Nicola Piccinelli and Albert Hernansanz and Alicia Casals and Marcello Bonfè and Riccardo Muradore},
url = {https://zenodo.org/record/5770464#.YbIo_VXMLIU},
year = {2020},
date = {2020-09-29},
booktitle = {10th Conference on New Technologies for Computer/Robot Assisted Surgery (CRAS)},
pages = {44-45},
abstract = {Nowadays, one of the most appealing and debated challenges in robotic surgery is the introduction of certain levels of autonomy in robot behaviour [1], implying technical advances in scene understanding and situation awareness, decision making, collision-free motion planning and environment interaction. The growth of R&D projects for autonomous surgical robotics (e.g. the EU funded I-SUR, MURAB and SARAS) demonstrates the confidence and the expectations of the medical community on the benefits of such technologies. SARAS aims to develop assistive surgical robots for laparoscopic MIS, autonomously operating in the same workspace as either a teleoperated surgical robot or a manually driven surgical tool. The auxiliary robots autonomously decide which task to perform to assist the main surgeon, planning motions for executing the task while considering the dynamics of human-driven tools and the patient's organs (predictable only within a short time horizon). This paper proposes a control architecture for surgical robotic assistive tasks in MIS using a hierarchical multi-level Finite State Machine (hFSM) as the cognitive control and a two-layered motion planner for the execution of the task. The hFSM models the operation starting from atomic actions to progressively build up more complex levels. The two-layer architecture of the motion planner merges the benefits of an offline geometric path construction method with those of online trajectory reconfiguration and reactive adaptation. At a global level, the path is built according to the initial knowledge of the operating scene and the requirements of the surgical tasks. Then, the path is reconfigured with respect to the dynamic environment using artificial potential fields [2]. Finally, a local level computes the robot trajectory, preserving the collision-free property even in the presence of obstacles with small diameter (i.e. the manually driven surgical instruments), by enforcing a velocity modulation technique derived from the Dynamical Systems (DS) based approach of [3].},
keywords = {Cognitive control, hierarchical finite state machine, Robotic surgery},
pubstate = {published},
tppubtype = {conference}
}
title = {Multi-task control strategy exploiting redundancy in RMIS},
author = {Tomàs Pieras and Albert Hernansanz and Narcís Sayols and Johanna Parra and Elisenda Eixarch and Eduard Gratacós and Alícia Casals},
url = {https://zenodo.org/record/5770371#.YbIhI1XMLIU},
year = {2020},
date = {2020-09-28},
booktitle = {10th Conference on New Technologies for Computer/Robot Assisted Surgery},
pages = {52-53},
abstract = {Intrauterine fetal surgery allows a fetal minimally invasive surgery (FMIS) approach to the treatment of congenital defects. This surgical technique allows the correction of the Twin-to-Twin Transfusion Syndrome (TTTS) [1]. TTTS is a severe complication in monochorionic twins' pregnancies that occurs when there is communication (anastomoses) between the fetuses' blood systems, which leads to cardiovascular disturbances and results in their death in 90% of cases. A minimally invasive approach is less harmful and allows the preservation of the tissues of the amniotic sac. Fetoscopic Laser Photocoagulation (FLP) is a MIS intervention to ablate all the intertwin anastomoses to make the twins' vascular systems independent from each other [2]. A single master single slave teleoperation platform was developed to assist the surgeon during FLP, Fig. 1. The master is composed of a 6DoF haptic device and an interactive user interface containing the fetoscopic view, an interactive navigation map, etc. The slave is composed of a 6DoF robot holding a fetoscope, an active trocar insertion depth control and an automated coagulation laser control system. The platform has been tested by 14 surgeons with different fetoscopic surgical experience, obtaining face validity. Two main issues have been detected. First, the need of a redundant robot to overcome the kinematic restrictions imposed by the Remote Center of Motion (RCM) and the workspace placement, defined by the placenta position. Second, the need of active human-robot interaction during pre and post-operative phases (insertion and extraction of the fetoscope) and during surgery to enable a safe shared workspace between medical staff (e.g. auxiliary surgeon with an echographer probe) and robot. Following the generalized framework for control of redundant manipulators in RMIS proposed in [3], this paper proposes a multi-task control strategy exploiting redundancy to improve dexterity and reachability as well as enable human-robot interaction to deal with human-robot collisions and co-manipulation while performing the surgical task. This work is based on a 7 DoF KUKA LWR 4, a redundant and collaborative robot.},
keywords = {human-robot collisions, human-robot interaction, R-MIS, surgical tasks},
pubstate = {published},
tppubtype = {conference}
}
title = {Bitrack: a friendly four arms robot for laparoscopic surgery},
author = {Josep Amat and Alícia Casals and Manel Frigola},
url = {https://zenodo.org/record/5770426#.YbIlY1XMLIU},
year = {2020},
date = {2020-09-28},
booktitle = {10th Conference on New Technologies for Computer/Robot Assisted Surgery},
pages = {96-97},
abstract = {For years, robotic laparoscopic surgery has motivated research and initiatives with the development of robotic systems offering different kinds of solutions, Bitrack being a new option. At present the market is dominated by DaVinci, which has become a benchmark in this speciality. Laparoscopic robots offer precision and accessibility since their instruments are endowed with 3 DoF for their orientation, which are missing in standard laparoscopy. This paper presents the Bitrack system, which is a new laparoscopic surgical robot, designed at UPC, and currently undergoing the process of certification by a spin-off created for its exploitation, RSS. This new robot aims to obtain the same benefits and accuracy as the current benchmark but overcoming some dependencies that current robotic surgery poses. Many reports on studies that evaluate the contribution of robotics do not doubt the improvements achieved with the use of robots in terms of surgical quality, and that more complex surgeries can be addressed than those performed by laparoscopy [1, 2]. Apart from these clear contributions of robotics, the Bitrack system also provides the concept of hybrid surgery. This concept implies the capability of performing an intervention alternating standard laparoscopy with other robot assisted phases according to the needs in each stage of the procedure. This performance has been achieved thanks to the Bitrack friendly design, which allows a quick interchange of robotized instruments with their manual counterparts, in less than a minute each, using the same conventional trocar. This paper presents the robot architecture and the experimental results achieved with the operational prototype.},
keywords = {hybrid surgery, robotic laparoscopic surgery},
pubstate = {published},
tppubtype = {conference}
}
title = {A supervisory controller for semi-autonomous surgical interventions},
author = {Fabio Falezza and Nicola Piccinelli and Andrea Roberti and Francesco Setti and Riccardo Muradore and Paolo Fiorini},
url = {https://cras-eu.org/wp-content/uploads/2020/09/CRAS_2020_proceedings.pdf},
year = {2020},
date = {2020-09-28},
booktitle = {10th Conference on New Technologies for Computer/Robot Assisted Surgery (CRAS 2020)},
pages = {50-51},
abstract = {Nowadays the main research interests in the field of Robotic Minimally Invasive Surgery (R-MIS) are related to robots’ autonomy. Techniques like trajectory planning, collision avoidance, decision making and scene understanding require technical advances in order to be applied to such an environment. In this paper, we propose a deterministic supervisory controller for a surgical semi-autonomous robotic platform. The proposed method uses a three-level Hierarchical Finite State Machine (HFSM) to define all the possible behaviours of the autonomous system. The transitions of the HFSM are triggered by the Observers, a set of functions fed with the state of the system (robot kinematics, anatomical structures, etc.) that output a logical description of the surgery state. We tested the supervisory controller performing the “bladder neck incision” phase of a Radical Prostatectomy (RARP) procedure.},
keywords = {hierarchical finite state machine, R-MIS, semi-autonomous robot},
pubstate = {published},
tppubtype = {conference}
}
title = {Global/local motion planning based on Dynamic Trajectory Reconfiguration and Dynamical Systems for autonomous surgical robots},
author = {Narcís Sayols and Alessio Sozzi and Nicola Piccinelli and Albert Hernansanz and Alicia Casals and Marcello Bonfè and Riccardo Muradore},
editor = {IEEE},
doi = {10.1109/ICRA40945.2020.9197525},
year = {2020},
date = {2020-09-15},
booktitle = {2020 IEEE International Conference on Robotics and Automation (ICRA)},
abstract = {This paper addresses the generation of collision-free trajectories for the autonomous execution of assistive tasks in Robotic Minimally Invasive Surgery (R-MIS). The proposed approach takes into account geometric constraints related to the desired task, like for example the direction to approach the final target and the presence of moving obstacles. The developed motion planner is structured as a two-layer architecture: a global level computes smooth spline-based trajectories that are continuously updated using virtual potential fields; a local level, exploiting Dynamical Systems based obstacle avoidance, ensures collision free connections among the spline control points. The proposed architecture is validated in a realistic surgical scenario.},
keywords = {assistive tasks, autonomous execution, autonomous surgical, Collision avoidance, collision free connections, collision-free trajectories, desired task, developed motion planner, dynamical systems based obstacle avoidance, final target, geometric constraints, global level computes smooth spline-based trajectories, Medical robotics, mobile robots, motion control, moving obstacles, realistic surgical scenario, Robots, splines (mathematics), Surgery, robotic minimally invasive surgery, Task analysis, Tools, Trajectory, two-layer architecture},
pubstate = {published},
tppubtype = {conference}
}
title = {Surgical Hand Gesture Prediction for the Operating Room},
author = {Inna Skarga-Bandurova and Rostislav Siriak and Tetiana Biloborodova and Fabio Cuzzolin and Vivek Singh Bawa and Mohamed Ibrahim Mohamed and R Dinesh Jackson},
editor = {IOS press},
url = {https://zenodo.org/record/4471560#.YBFNouhKiXI},
doi = {10.3233/SHTI200621},
year = {2020},
date = {2020-09-04},
journal = {Studies in Health Technology and Informatics },
volume = {273},
pages = {97-103},
abstract = {Technological advancements in smart assistive technology enable navigating and manipulating various types of computer-aided devices in the operating room through a contactless gesture interface. Understanding surgeon actions is crucial to natural human-robot interaction in the operating room, since it amounts to predicting human behavior so that the robot can foresee the surgeon's intention, choose the appropriate action early and reduce waiting time. In this paper, we present a new deep network based on Convolution Long Short-Term Memory (ConvLSTM) for gesture prediction, configured to provide natural interaction between the surgeon and the assistive robot and improve operating-room efficiency. The experimental results prove the capability of reliably recognizing unfinished gestures on videos. We quantitatively demonstrate the latter ability and the fact that GestureConvLSTM improves the baseline system performance on the LSA64 dataset.},
keywords = {ConvLSTM; GestureConvLSTM; Hand gesture; operating room; prediction; surgeon},
pubstate = {published},
tppubtype = {article}
}
title = {ESAD: Endoscopic Surgeon Action Detection Dataset},
author = {V. Singh Bawa and G. Singh and F. Kaping’A and I. Skarga-Bandurova and A. Leporini and C. Landolfo and A. Stabile and F. Setti and R. Muradore and E. Oleari and F. Cuzzolin},
editor = {arXiv},
url = {https://zenodo.org/record/4471476#.YBFMT-hKiXI},
year = {2020},
date = {2020-06-12},
urldate = {2020-06-25},
abstract = {In this work, we aim at increasing the effectiveness of surgical assistant robots. We intend to make assistant robots safer by making them aware of the actions of the surgeon, so that they can take appropriate assisting actions. In other words, we aim to solve the problem of surgeon action detection in endoscopic videos. To this end, we introduce a challenging dataset for surgeon action detection in real-world endoscopic videos. Action classes are picked based on the feedback of surgeons and annotated by medical professionals. Given a video frame, we draw a bounding box around the surgical tool performing an action and label it with the action label. Finally, we present a frame-level action detection baseline model based on recent advances in object detection. Results show that our dataset provides enough interesting challenges for future methods and can serve as a strong benchmark for corresponding research in surgeon action detection in endoscopic videos.},
keywords = {Action detection, endoscopic video, surgeon action detection, Surgery},
pubstate = {published},
tppubtype = {online}
}
title = {Technical and Functional Validation of a Teleoperated Multirobots Platform for Minimally Invasive Surgery},
author = {Alice Leporini and Elettra Oleari and Carmela Landolfo and Alberto Sanna and Alessandro Larcher and Giorgio Gandaglia and Nicola Fossati and Fabio Muttin and Umberto Capitanio and Francesco Montorsi and Andrea Salonia and Marco Minelli and Federica Ferraguti and Cristian Secchi and Saverio Farsoni and Alessio Sozzi and Marcello Bonfè and Narcís Sayols and Albert Hernansanz and Alicia Casals and Sabine Hertle and Fabio Cuzzolin and Andrew Dennison and Andreas Melzer and Gernot Kronreif and Salvatore Siracusano and Fabio Falezza and Francesco Setti and Riccardo Muradore},
editor = {IEEE},
doi = {10.1109/TMRB.2020.2990286},
issn = {2576-3202},
year = {2020},
date = {2020-05-18},
journal = {IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS},
volume = {2},
number = {2},
pages = {148-156},
abstract = {Nowadays Robotic-assisted Minimally Invasive Surgeries (R-MIS) are the elective procedures for treating pathologies in a highly accurate and scarcely invasive way, thanks to their ability to empower surgeons' dexterity and skills. Research on new Multi-Robots Surgery (MRS) platforms is cardinal to the development of the new SARAS surgical robotic platform, which aims at autonomously carrying out the assistant's tasks during R-MIS procedures. In this work, we will present the SARAS MRS platform validation protocol, framed in order to assess: (i) its technical performances in pure dexterity exercises, and (ii) its functional performances. The results obtained show a prototype able to put the users in the condition of accomplishing the tasks requested (both dexterity- and surgical-related), even if with reasonably lower performance with respect to the industrial standard. The main aspects needing further improvement are the stability of the end effectors, depth perception and the vision system, to be enriched with dedicated virtual fixtures. SARAS' aim is to reduce the main surgeon's workload through the automation of assistive tasks, which would benefit both surgeons and patients by facilitating the surgery and reducing the operation time.},
keywords = {functional evaluation, Instruments, Manipulators, Protocols, Robot kinematics, robotic end effector task metrics, Surgery, surgical-related tasks, tele-operated surgical robotic system, Tools, Validation protocol},
pubstate = {published},
tppubtype = {article}
}
title = {Cognitive Robotic Architecture for Semi-Autonomous Execution of Manipulation Tasks in a Surgical Environment},
author = {Giacomo De Rossi and Marco Minelli and Alessio Sozzi and Nicola Piccinelli and Federica Ferraguti and Francesco Setti and Marcello Bonfè and Cristian Secchi and Riccardo Muradore},
editor = {IEEE International Intelligent Robots and Systems (IROS)},
doi = {10.1109/IROS40897.2019.8967667},
isbn = {978-1-7281-4004-9},
year = {2020},
date = {2020-01-27},
booktitle = {2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
publisher = {IEEE},
abstract = {The development of robotic systems with a certain level of autonomy to be used in critical scenarios, such as an operating room, necessarily requires a seamless integration of multiple state-of-the-art technologies. In this paper we propose a cognitive robotic architecture that is able to help an operator accomplish a specific task. The architecture integrates an action recognition module to understand the scene, a supervisory control to make decisions, and a model predictive control to plan collision-free trajectory for the robotic arm taking into account obstacles and model uncertainty. The proposed approach has been validated on a simplified scenario involving only a da Vinci® surgical robot and a novel manipulator holding standard laparoscopic tools.},
keywords = {Artificial Intelligence, Collision avoidance, Manipulators, Medical robotics, mobile robots, predictive control, Robot, Robot vision, Surgery, trajectory control, uncertain systems},
pubstate = {published},
tppubtype = {conference}
}
2019
title = {Assistance Strategies for Robotized Laparoscopy},
author = {Alicia Casals and Albert Hernansanz and Narcís Sayols and Josep Amat},
editor = {Springer},
url = {https://link.springer.com/chapter/10.1007%2F978-3-030-36150-1_40},
doi = {10.1007/978-3-030-36150-1_40},
isbn = {978-3-030-36149-5},
year = {2019},
date = {2019-11-20},
booktitle = {Robot 2019: Fourth Iberian Robotics Conference},
pages = {485-496},
abstract = {Robotizing laparoscopic surgery not only allows achieving better accuracy to operate when a scale factor is applied between master and slave or thanks to the use of tools with 3 DoF, which cannot be used in conventional manual surgery, but also due to additional informatic support. Relying on computer assistance different strategies that facilitate the task of the surgeon can be incorporated, either in the form of autonomous navigation or cooperative guidance, providing sensory or visual feedback, or introducing certain limitations of movements. This paper describes different ways of assistance aimed at improving the work capacity of the surgeon and achieving more safety for the patient, and the results obtained with the prototype developed at UPC.},
keywords = {Cooperative robotics, Laparoscopy, Robot, Safety, Surgery, Surgical robots, Virtual feedback},
pubstate = {published},
tppubtype = {conference}
}
title = {Vision Based Robot Assistance in TTTS Fetal Surgery},
author = {Narcís Sayols and Albert Hernansanz and Johanna Parra and Elisenda Eixarch and Eduard Gratacós and Josep Amat and Alícia Casals},
editor = {IEEE},
doi = {10.1109/EMBC.2019.8856402},
year = {2019},
date = {2019-10-07},
journal = {2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
abstract = {This paper presents an accurate and robust tracking vision algorithm for Fetoscopic Laser Photo-coagulation (FLP) surgery for Twin-Twin Transfusion Syndrome (TTTS). The aim of the proposed method is to assist surgeons during anastomosis localization, coagulation and review using a tele-operated robotic system. The algorithm computes the relative position of the fetoscope tool tip with respect to the placenta, via local vascular structure registration. The algorithm uses image features (local superficial vascular structures of the placenta's surface) to automatically match consecutive fetoscopic images. It is composed of three sequential steps: image processing (filtering, binarization and vascular structures segmentation); relevant Points Of Interest (POIs) selection; and image registration between consecutive images. The algorithm has to deal with the low quality of fetoscopic images: the liquid and dirty environment inside the placenta, jointly with the thin diameter of the fetoscope optics and the low amount of environment light, reduces the image quality. The obtained images are blurred, noisy and have very poor color components. The tracking system has been tested using real video sequences of FLP surgery for TTTS. The computational performance enables real time tracking, locally guiding the robot over the placenta's surface with enough accuracy.},
keywords = {Coagulation, Image processing, Laser, Robot, Surgery, Three-dimensional displays, Visualization},
pubstate = {published},
tppubtype = {article}
}
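As a loose illustration of consecutive-frame registration of the kind the abstract outlines (feature extraction, matching and transform estimation between fetoscopic images), a short OpenCV sketch follows; ORB features, brute-force matching and a partial-affine model are stand-ins chosen for the sketch, not the authors' vessel-based method. It assumes the opencv-python package and uses random images as placeholders.

# Hedged sketch of consecutive-frame registration (not the paper's algorithm).
import cv2
import numpy as np

def register_pair(prev_gray, curr_gray):
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    if len(matches) < 4:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    M, _ = cv2.estimateAffinePartial2D(src, dst)   # RANSAC by default
    return M  # 2x3 transform mapping the previous frame into the current one

prev_img = (np.random.rand(240, 320) * 255).astype(np.uint8)   # stand-in frames
curr_img = np.roll(prev_img, 3, axis=1)
print(register_pair(prev_img, curr_img))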
title = {Dynamic Motion Planning for Autonomous Assistive Surgical Robots},
author = {Alessio Sozzi and Marcello Bonfè and Saverio Farsoni and Giacomo DeRossi and Riccardo Muradore},
doi = {10.3390/electronics8090957},
year = {2019},
date = {2019-07-26},
journal = {Electronics},
volume = {8},
number = {9},
pages = {957},
abstract = {The paper addresses the problem of the generation of collision-free trajectories for a robotic manipulator, operating in a scenario in which obstacles may be moving at non-negligible velocities. In particular, the paper aims to present a trajectory generation solution that is fully executable in real-time and that can reactively adapt to both dynamic changes of the environment and fast reconfiguration of the robotic task. The proposed motion planner extends the method based on a dynamical system to cope with the peculiar kinematics of surgical robots for laparoscopic operations, the mechanical constraint enforced by the fixed point of insertion into the abdomen of the patient being the most challenging aspect. The paper includes a validation of the trajectory generator in both simulated and experimental scenarios.},
keywords = {Dynamical systems, Motion planning, Obstacle avoidance, Robot, Robot, Surgical robots},
pubstate = {published},
tppubtype = {article}
}
title = {Tele-Echography Using a Two-Layer Teleoperation Algorithm with Energy Scaling},
author = {Enrico Sartori and Carlo Tadiello and Cristian Secchi and Riccardo Muradore},
doi = {10.5281/zenodo.3362881},
year = {2019},
date = {2019-05-24},
abstract = {Performing ultrasound procedures from a remote site is a challenging task since both a stable behavior, for the safety of the patient, and a high-level of usability, to exploit the sonographer’s expertise, need to be guaranteed. Furthermore, a teleoperation system that provides such requirements has to deal with communication delays as well. To address this issue, we use the two-layer algorithm: a passivity-based bilateral teleoperation architecture able to guarantee stability despite unknown and time-varying delay. Its flexibility allows to implement different kinds of control laws. In a Tele-Echography system, the slave manipulator has to apply significant forces needed by the procedure whereas the haptic device at the master side should be very light to avoid tiring the operator. Therefore, the energy needed by these two robots to perform their movements is very different and the energy injected into the system by the operator is often not sufficient to implement the desired action at the slave side. Methods to overcome this problem require to perfectly know the dynamical models of the robots. The solution proposed in this paper does not require such knowledge and is based on properly scaling the energy exchanged between the master and the slave side. We show the effectiveness of this approach in a real setup using a TOUCH haptic device and a WAM Barrett robot holding an ultrasound probe.},
note = {2019 International Conference on Robotics and Automation (ICRA) Palais des congres de Montreal, Montreal, Canada, May 20-24, 2019},
keywords = {Energy Scaling, Tele-Echography, Teleoperation, Two-Layer Algorithm},
pubstate = {published},
tppubtype = {conference}
}
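The energy-scaling idea described above can be sketched as simple tank bookkeeping; the scaling factor alpha, the tank thresholds and the update rule below are illustrative assumptions, not the paper's control laws.

# Minimal sketch of the energy bookkeeping in a two-layer teleoperation scheme
# with energy scaling between master and slave side.
import numpy as np

class EnergyTank:
    def __init__(self, e0=1.0, e_min=0.05):
        self.e, self.e_min = e0, e_min          # never let the tank fall below e_min

    def update(self, injected, extracted):
        self.e += injected - extracted

def transparency_step(tank, desired_force, velocity, dt, alpha=10.0):
    """Apply the desired slave force only if the (scaled) energy budget allows it;
    otherwise scale the command down so the extracted energy stays within the tank.
    alpha scales master-side energy to the much more power-hungry slave side."""
    de = float(np.dot(desired_force, velocity)) * dt / alpha  # scaled energy extracted
    if de <= 0.0:                               # the action injects energy: always passive
        tank.update(injected=-de, extracted=0.0)
        return desired_force
    budget = tank.e - tank.e_min
    scale = min(1.0, budget / (de + 1e-12))
    tank.update(injected=0.0, extracted=de * scale)
    return desired_force * scale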
title = {Energy-Shared Two-Layer Approach for Multi-Master-Multi-Slave Bilateral Teleoperation Systems},
author = {Marco Minelli and Federica Ferraguti and Nicola Piccinelli and Riccardo Muradore and Cristian Secchi},
url = {https://nxgsur-icra2019.sciencesconf.org/272549/document},
doi = {10.5281/zenodo.3362947 },
year = {2019},
date = {2019-05-20},
abstract = {In this paper, a two-layer architecture for the bilateral teleoperation of multi-arm systems with communication delay is presented. We extend the single-master-single-slave two-layer approach proposed in [1] by connecting multiple robots to a single energy tank. This allows us to minimize the conservativeness due to passivity preservation and to increase the level of transparency that can be achieved. The proposed approach is implemented on a realistic surgical scenario developed within the EU-funded SARAS project.},
keywords = {Control architecture, Control programming, Laparoscopy, Robot, Surgical robots, Teleoperation, Telerobotics},
pubstate = {published},
tppubtype = {conference}
}
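A minimal sketch of the single shared energy tank idea (several arms drawing from, and refilling, one common reservoir) is given below; the request/grant policy and all numbers are illustrative assumptions.

# Minimal sketch of a single shared energy tank: every arm draws from (and
# refills) the same reservoir, instead of each arm keeping its own tank.
class SharedTank:
    def __init__(self, e0=2.0, e_min=0.1):
        self.e, self.e_min = e0, e_min

    def request(self, de):
        """An arm asks to extract 'de' Joules; grant as much as passivity allows."""
        granted = max(0.0, min(de, self.e - self.e_min))
        self.e -= granted
        return granted

    def refill(self, de):
        self.e += max(0.0, de)

tank = SharedTank()
requests = {"master_1": 0.3, "master_2": 0.1, "slave_1": 0.8, "slave_2": 0.4}
for arm, de in requests.items():
    print(f"{arm}: requested {de:.2f} J, granted {tank.request(de):.2f} J")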
title = {A Multirobots Teleoperated Platform for Artificial Intelligence Training Data Collection in Minimally Invasive Surgery},
author = {Francesco Setti and Elettra Oleari and Alice Leporini and Diana Trojaniello and Alberto Sanna and Umberto Capitanio and Francesco Montorsi and Andrea Salonia and Riccardo Muradore},
editor = {IEEE},
url = {http://bmvc2018.org/contents/papers/0593.pdf},
doi = {10.1109/ISMR.2019.8710209},
isbn = {978-1-5386-7825-1},
year = {2019},
date = {2019-05-09},
pages = {1-7},
abstract = {Dexterity and perception capabilities of surgical robots may soon be improved by cognitive functions that can support surgeons in decision making and performance monitoring, and enhance the impact of automation within the operating rooms. Nowadays, the basic elements of autonomy in robotic surgery are still not well understood and their mutual interaction is unexplored. The current classification of autonomy encompasses six basic levels: Level 0: no autonomy; Level 1: robot assistance; Level 2: task autonomy; Level 3: conditional autonomy; Level 4: high autonomy; Level 5: full autonomy. The practical meaning of each level and the technologies necessary to move from one level to the next are the subject of intense debate and development. In this paper, we discuss the first outcomes of the European funded project Smart Autonomous Robotic Assistant Surgeon (SARAS). SARAS will develop a cognitive architecture able to make decisions based on pre-operative knowledge and on scene understanding via advanced machine learning algorithms. To achieve this ambitious goal, which will allow us to reach Levels 1 and 2, it is of paramount importance to collect reliable data to train the algorithms. We present the experimental setup used to collect the data for a complex surgical procedure (Robot-Assisted Radical Prostatectomy) on very sophisticated manikins (i.e. phantoms of the inflated human abdomen). The SARAS platform allows the main surgeon and the assistant to teleoperate two independent two-arm robots. The data acquired with this platform (videos, kinematics, audio) will be used in our project and will be released (with annotations) for research purposes.},
keywords = {Artificial Intelligence, Cognitive control, Computer Science, Laparoscopy, machine learning, Robot, Robotic surgery, Surgery, Teleoperation},
pubstate = {published},
tppubtype = {conference}
}
title = {Enhancing Surgical Process Modeling for Artificial Intelligence Development in Robotics: the SARAS Case Study for Minimally Invasive Procedures},
author = {Elettra Oleari and Alice Leporini and Diana Trojaniello and Alberto Sanna and Umberto Capitanio and Federico Deho and Alessandro Larcher and Francesco Montorsi and Andrea Salonia and Francesco Setti and Riccardo Muradore},
editor = {IEEE},
doi = {10.1109/ISMICT.2019.8743931},
isbn = {978-1-7281-2342-4},
year = {2019},
date = {2019-05-09},
pages = {1-6},
abstract = {Nowadays Minimally Invasive Surgery (MIS) is playing an increasingly major role in clinical practice, thanks also to the rapid evolution of the available medical technologies, especially surgical robotics. A new challenge in this respect is to equip robots with cognitive capabilities, in order to make them able to act autonomously and cooperate with human surgeons. In this paper we describe the methodological approach developed to comprehensively describe specific surgical knowledge, to be transferred to a complex Artificial Intelligence (AI) integrating Perception, Cognitive and Planning modules. Starting from desk research and close cooperation with expert surgeons, the surgical process is first framed from a high-level perspective and then deepened into a granular model through a Surgical Process Modelling approach, so as to embed all the information the AI needs to work properly. The model is eventually completed by adding the corresponding Process Risk Analysis. We present the results obtained by applying the aforementioned methodology to a Laparoscopic Radical Nephrectomy (LRN) procedure and discuss the next steps towards the technical implementation of this model.},
keywords = {Artificial Intelligence, Autonomy, Cognitive control, Cognitive functions, Decision making, Laparoscopes, Laparoscopy, learning systems, machine learning, Medical robotics, multirobots teleoperated platform, Robotic surgery, Surgery, Surgical robots, Teleoperation},
pubstate = {published},
tppubtype = {article}
}
title = {A physical/virtual platform for hysteroscopy training},
author = {Albert Hernansanz and Martínez and Rovira and Alicia Casals},
editor = {CRAS 2019},
doi = {10.5281/zenodo.3373297},
year = {2019},
date = {2019-03-21},
booktitle = {Proceedings of the 9th Joint Workshop on New Technologies for Computer/Robot Assisted Surgery},
abstract = {This work presents HysTrainer (HT), a training module for hysteroscopy, which is part of the generic endoscopic training platform EndoTrainer (ET). The platform merges both technologies, combining the benefits of a physical anatomic model with computer assistance for augmented reality and objective assessment. Beyond the functions of a surgical trainer, EndoTrainer provides an integral education, training and evaluation platform.},
keywords = {Computer Science, Endoscopy, Laparoscopy, Robot, Robotic surgery, Surgery, Surgical robots, Training},
pubstate = {published},
tppubtype = {conference}
}
title = {Sentisim: a hybrid training platform for SLNB in local melanoma staging},
author = {Albert Hernansanz and Pieras and Ferrandiz and Moreno and Alicia Casals},
doi = {10.5281/zenodo.3373320},
year = {2019},
date = {2019-03-21},
publisher = {CRAS 2019},
abstract = {This work presents a new training platform for SLNB (Sentinel Lymph Node Biopsy) in local melanoma staging. The new system solves the previous problems while maintaining a realistic scenario and, at the same time, measuring a series of important parameters to determine the quality of the surgery.},
keywords = {Anatomical trainer, Biopsy, Melanoma, Simulator, Surgery, Surgical trainer},
pubstate = {published},
tppubtype = {conference}
}
2018
title = {An energy saving approach to active object recognition and localization},
author = {Andrea Roberti and Riccardo Muradore and Paolo Fiorini and Marco Cristani and Francesco Setti},
editor = {IECON 2018},
doi = {10.1109/IECON.2018.8591411},
year = {2018},
date = {2018-10-21},
organization = {Annual Conference of the IEEE Industrial Electronics Society (IECON).Washington, DC, USA. },
abstract = {We propose an active object recognition (AOR) strategy explicitly suited to work with a real robotic arm. So far, AOR policies on robotic arms have focused on heterogeneous constraints, most of them related to classification accuracy, classification confidence, number of moves, etc., discarding the physical and energetic constraints a real robot has to fulfill. Our strategy addresses this discrepancy with a POMDP-based AOR algorithm that explicitly considers manipulability and energetic terms in the planning optimization. The manipulability term prevents the robotic arm from encountering singularities, which require expensive and straining backtracking steps; the energetic term deals with the arm's gravity compensation in static conditions, which is crucial in AOR policies where time is spent updating the classifier belief before making the next move. Several experiments have been carried out on a redundant, 7-DoF Panda arm manipulator on a multi-object recognition task. This allows us to appreciate the improvement of our solution with respect to competitors evaluated only in simulation.},
keywords = {Active object recognition, Artificial Intelligence, Computer Science, Learning, Object recognition, Pattern Recognition, POMDP, Robotics},
pubstate = {published},
tppubtype = {conference}
}
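A greedy, one-step approximation of the view-selection trade-off described above can be sketched as follows; the Yoshikawa manipulability measure, the entropy-based information term and the weights are illustrative assumptions, not the POMDP solved in the paper.

# Minimal sketch of view scoring that trades off classification gain against
# manipulability and static gravity-compensation energy.
import numpy as np

def yoshikawa_manipulability(J):
    """w = sqrt(det(J J^T)); low values mean the arm is close to a singularity."""
    return float(np.sqrt(max(np.linalg.det(J @ J.T), 0.0)))

def expected_entropy_drop(belief, predicted_belief):
    """Expected reduction of class uncertainty after observing from a view."""
    H = lambda p: -np.sum(p * np.log(p + 1e-12))
    return H(belief) - H(predicted_belief)

def score_view(belief, predicted_belief, J, hold_torques, hold_time,
               w_info=1.0, w_manip=0.5, w_energy=0.01):
    info = expected_entropy_drop(belief, predicted_belief)
    manip = yoshikawa_manipulability(J)
    energy = hold_time * float(np.sum(np.abs(hold_torques)))  # crude static-holding cost
    return w_info * info + w_manip * manip - w_energy * energy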
title = {Recognition self-awareness for active object recognition on depth images},
author = {Andrea Roberti and Marco Carletti and Francesco Setti and Umberto Castellani and Paolo Fiorini and Marco Cristani},
editor = {British Machine Vision Conference (BMVC). Newcastle-Upon-Tyne, UK. (bmvc2018.org). Spotlight, 2% acceptance rate.},
url = {http://bmvc2018.org/contents/papers/0593.pdf},
doi = {10.5281/zenodo.3362923},
year = {2018},
date = {2018-09-06},
organization = {BMVC 2018},
abstract = {We propose an active object recognition framework that introduces recognition self-awareness, an intermediate level of reasoning used to decide which views to cover during object exploration. This is built by first learning a multi-view deep 3D object classifier; subsequently, a 3D dense saliency volume is generated by fusing together single-view visualization maps, the latter obtained by computing the gradient map of the class label on different image planes. The saliency volume indicates which object parts the classifier considers more important for deciding a class. Finally, the volume is injected into the observation model of a Partially Observable Markov Decision Process (POMDP). In practice, the robot decides which views to cover depending on the expected ability of the classifier to discriminate an object class by observing a specific part. For example, the robot will look for the engine to discriminate between a bicycle and a motorbike, since the classifier has found that part to be highly discriminative. Experiments are carried out on depth images with both simulated and real data, showing that our framework predicts the object class with higher accuracy and lower energy consumption than a set of alternatives.},
keywords = {3D object classifier, Artificial Intelligence, Computer Science, Object exploration, Object recognition, POMDP},
pubstate = {published},
tppubtype = {conference}
}
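A minimal sketch of fusing per-view gradient (saliency) maps into a 3D saliency volume is given below; the orthographic back-projection along canonical axes is an illustrative simplification of the multi-view fusion described in the abstract.

# Minimal sketch of building a 3D saliency volume by accumulating per-view
# gradient-magnitude maps along their viewing axes.
import numpy as np

def saliency_volume(view_maps, size=32):
    """view_maps: dict axis -> (size x size) gradient-magnitude map, axis in {0, 1, 2}.
    Each 2D map is broadcast along its viewing axis and the results are averaged."""
    vol = np.zeros((size, size, size), dtype=np.float32)
    for axis, smap in view_maps.items():
        vol += np.expand_dims(smap, axis=axis)   # replicate the map along the view axis
    vol /= max(len(view_maps), 1)
    return vol / (vol.max() + 1e-12)             # normalise so it can act as a prior

# The normalised volume can then serve as an observation prior: views covering
# high-saliency voxels are expected to be more discriminative for the classifier.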
title = {Predicting action tubes},
author = {Gurkirt Singh and Suman Saha and Fabio Cuzzolin},
editor = {ECCV 2018 Workshop on Anticipating Human Behaviour (AHB 2018), Munich, Germany, Sep 2018},
url = {http://openaccess.thecvf.com/content_ECCVW_2018/papers/11131/Singh_Predicting_Action_Tubes_ECCVW_2018_paper.pdf},
doi = {10.5281/zenodo.3362942},
year = {2018},
date = {2018-08-23},
abstract = {In this work, we present a method to predict an entire `action tube' (a set of temporally linked bounding boxes) in a trimmed video just by observing a smaller subset of it. Predicting where an action is going to take place in the near future is essential to many computer vision based applications, such as autonomous driving or surgical robotics. Importantly, it has to be done in real time and in an online fashion. We propose a Tube Prediction network (TPnet) which jointly predicts the past, present and future bounding boxes along with their action classification scores. At test time TPnet is used in a (temporal) sliding window setting, and its predictions are put into a tube estimation framework to construct/predict the video-long action tubes not only for the observed part of the video but also for the unobserved part. Additionally, the proposed action tube predictor helps in completing action tubes for unobserved segments of the video. We quantitatively demonstrate the latter ability, and the fact that TPnet improves state-of-the-art detection performance, on one of the standard action detection benchmarks, the J-HMDB-21 dataset.},
note = {Proceedings of the ECCV 2018 Workshop on Anticipating Human Behaviour (AHB 2018), Munich, Germany, Sep 2018},
keywords = {Artificial Intelligence, Computer Science, Computer vision, Object recognition, Pattern Recognition, Robot, Robotics},
pubstate = {published},
tppubtype = {article}
}
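The tube-estimation step mentioned in the abstract (linking per-frame, and predicted future, boxes into an action tube) can be sketched with a simple greedy IoU linker; the scoring rule and the 0.3 threshold are illustrative assumptions, not the paper's exact procedure.

# Minimal sketch of turning per-frame (or predicted future) boxes into an
# action tube by greedy IoU linking.
import numpy as np

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-12)

def link_tube(frames, iou_thr=0.3):
    """frames: list of per-frame detections, each a list of (box, score);
    returns one greedily linked tube."""
    tube = [max(frames[0], key=lambda d: d[1])]
    for dets in frames[1:]:
        prev_box = tube[-1][0]
        best = max(dets, key=lambda d: d[1] + iou(prev_box, d[0]))
        tube.append(best)
    return tube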
title = {Estimation of interaction forces in robotic surgery using a semi-supervised deep neural network model},
author = {Marbán Arturo and Vignesh Srinivasan and Wojciech Samek and Josep Fernández and Alicia Casals},
editor = {IEEE},
url = {https://upcommons.upc.edu/bitstream/handle/2117/132610/iros2018_paper_26_07_2018.pdf?sequence=3&isAllowed=y},
doi = {10.1109/IROS.2018.8593701},
year = {2018},
date = {2018-08-09},
abstract = {Providing force feedback as a feature in current Robot-Assisted Minimally Invasive Surgery systems still remains a challenge. In recent years, Vision-Based Force Sensing (VBFS) has emerged as a promising approach to address this problem. Existing methods have been developed in a Supervised Learning (SL) setting. Nonetheless, most of the video sequences related to robotic surgery are not provided with ground-truth force data, which can be easily acquired in a controlled environment. A powerful approach to process unlabeled video sequences and find a compact representation for each video frame relies on using an Unsupervised Learning (UL) method. Afterward, a model trained in an SL setting can take advantage of the available ground-truth force data. In the present work, UL and SL techniques are used to investigate a model in a Semi-Supervised Learning (SSL) framework, consisting of an encoder network and a Long Short-Term Memory (LSTM) network. First, a Convolutional Auto-Encoder (CAE) is trained to learn a compact representation for each RGB frame in a video sequence. To facilitate the reconstruction of high and low frequencies found in images, this CAE is optimized using an adversarial framework and an L1 loss, respectively. Thereafter, the encoder network of the CAE is serially connected with an LSTM network and trained jointly to minimize the difference between ground-truth and estimated force data. Datasets addressing the force estimation task are scarce. Therefore, the experiments have been validated on a custom dataset. The results suggest that the proposed approach is promising.},
keywords = {Learning, Robot, Robotic surgery, Robotics, Surgery, Training},
pubstate = {published},
tppubtype = {conference}
}
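A minimal PyTorch sketch of the encoder + LSTM force regressor described above is given below; the layer sizes are illustrative, and the adversarial/L1 reconstruction branch of the CAE is omitted for brevity.

# Minimal sketch: a convolutional encoder produces a compact per-frame code,
# and an LSTM regresses the interaction force over the video sequence.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, code_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Flatten(), nn.Linear(64 * 8 * 8, code_dim))

    def forward(self, x):
        return self.net(x)

class ForceEstimator(nn.Module):
    def __init__(self, code_dim=64, hidden=128, force_dim=3):
        super().__init__()
        self.encoder = Encoder(code_dim)
        self.lstm = nn.LSTM(code_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, force_dim)

    def forward(self, video):                       # video: (B, T, 3, 64, 64)
        b, t = video.shape[:2]
        codes = self.encoder(video.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(codes)
        return self.head(out)                       # (B, T, 3) estimated forces

model = ForceEstimator()
loss = nn.MSELoss()(model(torch.randn(2, 8, 3, 64, 64)), torch.randn(2, 8, 3))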
title = {TraMNet - Transition Matrix Network for Efficient Action Tube Proposals},
author = {Gurkirt Singh and Suman Saha and Fabio Cuzzolin},
url = {https://arxiv.org/abs/1808.00297},
year = {2018},
date = {2018-08-01},
abstract = {Current state-of-the-art methods solve spatio-temporal action localisation by extending 2D anchors to 3D-cuboid proposals on stacks of frames, to generate sets of temporally connected bounding boxes called action micro-tubes. However, they fail to consider that the underlying anchor proposal hypotheses should also move (transition) from frame to frame, as the actor or the camera do. Assuming we evaluate n 2D anchors in each frame, then the number of possible transitions from each 2D anchor to the next, for a sequence of f consecutive frames, is in the order of O(n^f), expensive even for small values of f. To avoid this problem we introduce a Transition-Matrix-based Network (TraMNet) which relies on computing transition probabilities between anchor proposals while maximising their overlap with ground-truth bounding boxes across frames, and enforcing sparsity via a transition threshold. As the resulting transition matrix is sparse and stochastic, this reduces the proposal hypothesis search space from O(n^f) to the cardinality of the thresholded matrix. At training time, transitions are specific to cell locations of the feature maps, so that a sparse (efficient) transition matrix is used to train the network. At test time, a denser transition matrix can be obtained either by decreasing the threshold or by adding to it all the relative transitions originating from any cell location, allowing the network to handle transitions in the test data that might not have been present in the training data, and making detection translation-invariant. Finally, we show that our network is able to handle sparse annotations such as those available in the DALY dataset, while allowing for both dense (accurate) or sparse (efficient) evaluation within a single model. We report extensive experiments on the DALY, UCF101-24 and Transformed-UCF101-24 datasets to support our claims.},
keywords = {Computer Science, Computer vision, Electrical Engineering, Image processing, Pattern Recognition, Robot, Robotics, Systems Science, Visual processing},
pubstate = {published},
tppubtype = {proceedings}
}
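The core idea of estimating a sparse, stochastic transition matrix from ground-truth tubes can be sketched as follows; the IoU-based anchor assignment and the 0.01 sparsity threshold are illustrative assumptions.

# Minimal sketch: count how often the best-matching anchor moves from anchor i
# in one frame to anchor j in the next, then normalise and threshold.
import numpy as np

def best_anchor(box, anchors):
    def iou(a, b):
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        ar = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (ar(a) + ar(b) - inter + 1e-12)
    return int(np.argmax([iou(box, a) for a in anchors]))

def transition_matrix(gt_tubes, anchors, thr=0.01):
    n = len(anchors)
    T = np.zeros((n, n))
    for tube in gt_tubes:                                # tube: list of per-frame GT boxes
        ids = [best_anchor(b, anchors) for b in tube]
        for i, j in zip(ids[:-1], ids[1:]):
            T[i, j] += 1
    T /= np.maximum(T.sum(axis=1, keepdims=True), 1)     # row-stochastic
    T[T < thr] = 0.0                                      # enforce sparsity
    return T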
title = {Incremental Tube Construction for Human Action Detection},
author = {Harkirat Singh Behl and Michael Sapienza and Gurkirt Singh and Suman Saha and Fabio Cuzzolin and Philip H. S. Torr},
editor = {British Machine Vision Conference (BMVC). Newcastle-Upon-Tyne, UK},
url = {https://arxiv.org/abs/1704.01358},
year = {2018},
date = {2018-07-23},
abstract = {Current state-of-the-art action detection systems are tailored for offline batch-processing applications. However, for online applications like human-robot interaction, current systems fall short, either because they only detect one action per video, or because they assume that the entire video is available ahead of time. In this work, we introduce a real-time and online joint-labelling and association algorithm for action detection that can incrementally construct space-time action tubes on the most challenging action videos in which different action categories occur concurrently. In contrast to previous methods, we solve the detection-window association and action labelling problems jointly in a single pass. We demonstrate superior online association accuracy and speed (2.2ms per frame) as compared to the current state-of-the-art offline systems. We further demonstrate that the entire action detection pipeline can easily be made to work effectively in real-time using our action tube construction algorithm.},
keywords = {Action detection, Artificial Intelligence, Computer Science, Computer vision, Detection, Pattern Recognition, Robot},
pubstate = {published},
tppubtype = {proceedings}
}
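The online, single-pass association idea can be sketched with a greedy matcher; greedy matching by IoU and label agreement is an illustrative simplification of the joint labelling/association solved in the paper.

# Minimal sketch of online, incremental tube building: at each new frame,
# attach detections to existing tubes by IoU and label agreement, and start
# new tubes for unmatched detections.
import numpy as np

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-12)

def update_tubes(tubes, detections, iou_thr=0.3):
    """tubes: list of dicts {'boxes': [...], 'label': int};
    detections: list of (box, label, score) for the current frame."""
    used = set()
    for tube in tubes:
        last = tube['boxes'][-1]
        candidates = [(i, d) for i, d in enumerate(detections)
                      if i not in used and d[1] == tube['label']
                      and iou(last, d[0]) >= iou_thr]
        if candidates:
            i, best = max(candidates, key=lambda c: iou(last, c[1][0]) + c[1][2])
            tube['boxes'].append(best[0])
            used.add(i)
    for i, (box, label, score) in enumerate(detections):
        if i not in used:
            tubes.append({'boxes': [box], 'label': label})
    return tubes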