Roberti, Andrea; Piccinelli, Nicola; Meli, Daniele; Muradore, Riccardo; Fiorini, Paolo: Improving Rigid 3-D Calibration for Robotic Surgery. Journal Article, IEEE Transactions on Medical Robotics and Bionics, 2 (4), pp. 569-573, 2020, ISSN: 2576-3202.
De Rossi, Giacomo; Minelli, Marco; Sozzi, Alessio; Piccinelli, Nicola; Ferraguti, Federica; Setti, Francesco; Bonfè, Marcello; Secchi, Cristian; Muradore, Riccardo: Cognitive Robotic Architecture for Semi-Autonomous Execution of Manipulation Tasks in a Surgical Environment. Conference, 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2020, ISBN: 978-1-7281-4004-9.
Casals, Alicia; Hernansanz, Albert; Sayols, Narcís; Amat, Josep: Assistance Strategies for Robotized Laparoscopy. Conference, Robot 2019: Fourth Iberian Robotics Conference, pp. 485-496, 2019, ISBN: 978-3-030-36149-5.
Sozzi, Alessio; Bonfè, Marcello; Farsoni, Saverio; De Rossi, Giacomo; Muradore, Riccardo: Dynamic Motion Planning for Autonomous Assistive Surgical Robots. Journal Article, Electronics, 8 (9), pp. 957, 2019.
Minelli, Marco; Ferraguti, Federica; Piccinelli, Nicola; Muradore, Riccardo; Secchi, Cristian: Energy-Shared Two-Layer Approach for Multi-Master-Multi-Slave Bilateral Teleoperation Systems. Conference, 2019.
Setti, Francesco; Oleari, Elettra; Leporini, Alice; Trojaniello, Diana; Sanna, Alberto; Capitanio, Umberto; Montorsi, Francesco; Salonia, Andrea; Muradore, Riccardo: A Multirobots Teleoperated Platform for Artificial Intelligence Training Data Collection in Minimally Invasive Surgery. Conference, pp. 1-7, 2019, ISBN: 978-1-5386-7825-1.
Hernansanz, Albert; Martínez; Rovira; Casals, Alicia: A physical/virtual platform for hysteroscopy training. Conference, Proceedings of the 9th Joint Workshop on New Technologies for Computer/Robot Assisted Surgery, 2019.
Singh, Gurkirt; Saha, Suman; Cuzzolin, Fabio: Predicting action tubes. Journal Article, 2018 (Proceedings of the ECCV 2018 Workshop on Anticipating Human Behaviour (AHB 2018), Munich, Germany, Sep 2018).
Marbán, Arturo; Srinivasan, Vignesh; Samek, Wojciech; Fernández, Josep; Casals, Alicia: Estimation of interaction forces in robotic surgery using a semi-supervised deep neural network model. Conference, 2018.
Singh, Gurkirt; Saha, Suman; Cuzzolin, Fabio: TraMNet - Transition Matrix Network for Efficient Action Tube Proposals. Proceedings, 2018.
Behl, Harkirat Singh; Sapienza, Michael; Singh, Gurkirt; Saha, Suman; Cuzzolin, Fabio; Torr, Philip H. S.: Incremental Tube Construction for Human Action Detection. Proceedings, 2018.
2020
title = {Improving Rigid 3-D Calibration for Robotic Surgery},
author = {Andrea Roberti and Nicola Piccinelli and Daniele Meli and Riccardo Muradore and Paolo Fiorini},
editor = {IEEE },
doi = {10.1109/TMRB.2020.3033670},
issn = {2576-3202},
year = {2020},
date = {2020-11-04},
journal = {IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS},
volume = {2},
number = {4},
pages = {569-573},
abstract = {Autonomy is the next frontier of research in robotic surgery, and its aim is to improve the quality of surgical procedures in the near future. One fundamental requirement for autonomy is advanced perception capability through vision sensors. In this article, we propose a novel calibration technique for a surgical scenario with a da Vinci® Research Kit (dVRK) robot. Calibration of the camera and of the robotic arms is necessary to position instruments precisely and to emulate an expert surgeon. The novel calibration technique is tailored for RGB-D cameras. Different tests performed on relevant use cases prove that we significantly improve precision and accuracy with respect to state-of-the-art solutions for similar devices on a surgical-size setup. Moreover, our calibration method can easily be extended to the standard surgical endoscope used in real surgical scenarios.},
keywords = {Calibration, Medical robotics, Minimally invasive surgery, multi arm calibration, Robot, Robot vision systems, Surgery, Surgical robotics, Three-dimensional displays},
pubstate = {published},
tppubtype = {article}
}
title = {Cognitive Robotic Architecture for Semi-Autonomous Execution of Manipulation Tasks in a Surgical Environment},
author = {Giacomo De Rossi and Marco Minelli and Alessio Sozzi and Nicola Piccinelli and Federica Ferraguti and Francesco Setti and Marcello Bonfè and Christian Secchi and Riccardo Muradore},
editor = {IEEE},
doi = {10.1109/IROS40897.2019.8967667},
isbn = {978-1-7281-4004-9},
year = {2020},
date = {2020-01-27},
booktitle = {2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
publisher = {IEEE},
abstract = {The development of robotic systems with a certain level of autonomy to be used in critical scenarios, such as an operating room, necessarily requires a seamless integration of multiple state-of-the-art technologies. In this paper we propose a cognitive robotic architecture that is able to help an operator accomplish a specific task. The architecture integrates an action recognition module to understand the scene, a supervisory control to make decisions, and a model predictive control to plan a collision-free trajectory for the robotic arm, taking into account obstacles and model uncertainty. The proposed approach has been validated on a simplified scenario involving only a da Vinci® surgical robot and a novel manipulator holding standard laparoscopic tools.},
keywords = {Artificial Intelligence, Collision avoidance, Manipulators, Medical robotics, mobile robots, predictive control, Robot, Robot vision, Surgery, trajectory control, uncertain systems},
pubstate = {published},
tppubtype = {conference}
}
2019
title = {Assistance Strategies for Robotized Laparoscopy},
author = {Alicia Casals and Albert Hernansanz and Narcís Sayols and Josep Amat},
editor = {Springer},
url = {https://link.springer.com/chapter/10.1007%2F978-3-030-36150-1_40},
doi = {10.1007/978-3-030-36150-1_40},
isbn = {978-3-030-36149-5},
year = {2019},
date = {2019-11-20},
booktitle = {Robot 2019: Fourth Iberian Robotics Conference},
pages = {485-496},
abstract = {Robotizing laparoscopic surgery allows achieving better accuracy not only when a scale factor is applied between master and slave, or thanks to the use of tools with 3 DoF which cannot be used in conventional manual surgery, but also through additional computer support. Relying on computer assistance, different strategies that facilitate the task of the surgeon can be incorporated, either in the form of autonomous navigation or cooperative guidance, providing sensory or visual feedback, or introducing certain limitations of movement. This paper describes different ways of assistance aimed at improving the work capacity of the surgeon and achieving more safety for the patient, together with the results obtained with the prototype developed at UPC.},
keywords = {Cooperative robotics, Laparoscopy, Robot, Safety, Surgery, Surgical robots, Virtual feedback},
pubstate = {published},
tppubtype = {conference}
}
title = {Dynamic Motion Planning for Autonomous Assistive Surgical Robots},
author = {Alessio Sozzi and Marcello Bonfè and Saverio Farsoni and Giacomo DeRossi and Riccardo Muradore},
doi = {10.3390/electronics8090957},
year = {2019},
date = {2019-07-26},
journal = {Electronics},
volume = {8},
number = {9},
pages = {957},
abstract = {The paper addresses the problem of the generation of collision-free trajectories for a robotic manipulator, operating in a scenario in which obstacles may be moving at non-negligible velocities. In particular, the paper aims to present a trajectory generation solution that is fully executable in real-time and that can reactively adapt to both dynamic changes of the environment and fast reconfiguration of the robotic task. The proposed motion planner extends the method based on dynamical systems to cope with the peculiar kinematics of surgical robots for laparoscopic operations, the mechanical constraint enforced by the fixed point of insertion into the abdomen of the patient being the most challenging aspect. The paper includes a validation of the trajectory generator in both simulated and experimental scenarios.},
keywords = {Dynamical systems, Motion planning, Obstacle avoidance, Robot, Surgical robots},
pubstate = {published},
tppubtype = {article}
}
title = {Energy-Shared Two-Layer Approach for Multi-Master-Multi-Slave Bilateral Teleoperation Systems},
author = {Marco Minelli and Federica Ferraguti and Nicola Piccinelli and Riccardo Muradore and Cristian Secchi},
url = {https://nxgsur-icra2019.sciencesconf.org/272549/document},
doi = {10.5281/zenodo.3362947},
year = {2019},
date = {2019-05-20},
abstract = {In this paper, a two-layer architecture for the bilateral teleoperation of multi-arm systems with communication delay is presented. We extend the single-master-single-slave two-layer approach proposed in [1] by connecting multiple robots to a single energy tank. This allows us to minimize the conservativeness due to passivity preservation and to increase the level of transparency that can be achieved. The proposed approach is implemented on a realistic surgical scenario developed within the EU-funded SARAS project.},
keywords = {Control architecture, Control programming, Laparoscopy, Robot, Surgical robots, Teleoperation, Telerobotics},
pubstate = {published},
tppubtype = {conference}
}
title = {A Multirobots Teleoperated Platform for Artificial Intelligence Training Data Collection in Minimally Invasive Surgery},
author = {Francesco Setti and Elettra Oleari and Alice Leporini and Diana Trojaniello and Alberto Sanna and Umberto Capitanio and Francesco Montorsi and Andrea Salonia and Riccardo Muradore},
editor = {IEEE},
url = {http://bmvc2018.org/contents/papers/0593.pdf},
doi = {10.1109/ISMR.2019.8710209},
isbn = {978-1-5386-7825-1},
year = {2019},
date = {2019-05-09},
pages = {1-7},
abstract = {Dexterity and perception capabilities of surgical robots may soon be improved by cognitive functions that can support surgeons in decision making and performance monitoring, and enhance the impact of automation within the operating rooms. Nowadays, the basic elements of autonomy in robotic surgery are still not well understood and their mutual interaction is unexplored. The current classification of autonomy encompasses six basic levels: Level 0: no autonomy; Level 1: robot assistance; Level 2: task autonomy; Level 3: conditional autonomy; Level 4: high autonomy; Level 5: full autonomy. The practical meaning of each level and the necessary technologies to move from one level to the next are the subject of intense debate and development. In this paper, we discuss the first outcomes of the European-funded project Smart Autonomous Robotic Assistant Surgeon (SARAS). SARAS will develop a cognitive architecture able to make decisions based on pre-operative knowledge and on scene understanding via advanced machine learning algorithms. To reach this ambitious goal, which will allow us to reach Levels 1 and 2, it is of paramount importance to collect reliable data to train the algorithms. We will present the experimental setup to collect the data for a complex surgical procedure (Robotic Assisted Radical Prostatectomy) on very sophisticated manikins (i.e., phantoms of the inflated human abdomen). The SARAS platform allows the main surgeon and the assistant to teleoperate two independent two-arm robots. The data acquired with this platform (videos, kinematics, audio) will be used in our project and will be released (with annotations) for research purposes.},
keywords = {Artificial Intelligence, Cognitive control, Computer Science, Laparoscopy, machine learning, Robot, Robotic surgery, Surgery, Teleoperation},
pubstate = {published},
tppubtype = {conference}
}
title = {A physical/virtual platform for hysteroscopy training},
author = {Albert Hernansanz and Martínez and Rovira and Alicia Casals},
editor = {CRAS 2019},
doi = {10.5281/zenodo.3373297},
year = {2019},
date = {2019-03-21},
booktitle = {Proceedings of the 9th Joint Workshop on New Technologies for Computer/Robot Assisted Surgery},
abstract = {This work presents HysTrainer (HT), a training module for hysteroscopy, which is part of the generic endoscopic training platform EndoTrainer (ET). This platform merges both technologies, with the benefits of having a physical anatomic model and computer assistance for augmented reality and objective assessment. Further to the functions of a surgical trainer, EndoTrainer provides an integral education, training and evaluation platform.},
keywords = {Computer Science, Endoscopy, Laparoscopy, Robot, Robotic surgery, Surgery, Surgical robots, Training},
pubstate = {published},
tppubtype = {conference}
}
2018
title = {Predicting action tubes},
author = {Gurkirt Singh and Suman Saha and Fabio Cuzzolin},
editor = {ECCV 2018 Workshop on Anticipating Human Behaviour (AHB 2018), Munich, Germany, Sep 2018},
url = {http://openaccess.thecvf.com/content_ECCVW_2018/papers/11131/Singh_Predicting_Action_Tubes_ECCVW_2018_paper.pdf},
doi = {10.5281/zenodo.3362942},
year = {2018},
date = {2018-08-23},
abstract = {In this work, we present a method to predict an entire `action tube' (a set of temporally linked bounding boxes) in a trimmed video just by observing a smaller subset of it. Predicting where an action is going to take place in the near future is essential to many computer vision based applications such as autonomous driving or surgical robotics. Importantly, it has to be done in real-time and in an online fashion. We propose a Tube Prediction network (TPnet) which jointly predicts the past, present and future bounding boxes along with their action classification scores. At test time TPnet is used in a (temporal) sliding window setting, and its predictions are put into a tube estimation framework to construct/predict video-long action tubes not only for the observed part of the video but also for the unobserved part. Additionally, the proposed action tube predictor helps in completing action tubes for unobserved segments of the video. We quantitatively demonstrate the latter ability, and the fact that TPnet improves state-of-the-art detection performance, on one of the standard action detection benchmarks - the J-HMDB-21 dataset.},
note = {Proceedings of the ECCV 2018 Workshop on Anticipating Human Behaviour (AHB 2018), Munich, Germany, Sep 2018},
keywords = {Artificial Intelligence, Computer Science, Computer vision, Object recognition, Pattern Recognition, Robot, Robotics},
pubstate = {published},
tppubtype = {article}
}
title = {Estimation of interaction forces in robotic surgery using a semi-supervised deep neural network model},
author = {Arturo Marbán and Vignesh Srinivasan and Wojciech Samek and Josep Fernández and Alicia Casals},
editor = {IEEE},
url = {https://upcommons.upc.edu/bitstream/handle/2117/132610/iros2018_paper_26_07_2018.pdf?sequence=3&isAllowed=y},
doi = {10.1109/IROS.2018.8593701},
year = {2018},
date = {2018-08-09},
abstract = {Providing force feedback as a feature in current Robot-Assisted Minimally Invasive Surgery systems still remains a challenge. In recent years, Vision-Based Force Sensing (VBFS) has emerged as a promising approach to address this problem. Existing methods have been developed in a Supervised Learning (SL) setting. Nonetheless, most of the video sequences related to robotic surgery are not provided with ground-truth force data, which can be easily acquired in a controlled environment. A powerful approach to process unlabeled video sequences and find a compact representation for each video frame relies on using an Unsupervised Learning (UL) method. Afterward, a model trained in an SL setting can take advantage of the available ground-truth force data. In the present work, UL and SL techniques are used to investigate a model in a Semi-Supervised Learning (SSL) framework, consisting of an encoder network and a Long Short-Term Memory (LSTM) network. First, a Convolutional Auto-Encoder (CAE) is trained to learn a compact representation for each RGB frame in a video sequence. To facilitate the reconstruction of high and low frequencies found in images, this CAE is optimized using an adversarial framework and an L1 loss, respectively. Thereafter, the encoder network of the CAE is serially connected with an LSTM network and trained jointly to minimize the difference between ground-truth and estimated force data. Datasets addressing the force estimation task are scarce. Therefore, the experiments have been validated in a custom dataset. The results suggest that the proposed approach is promising.},
keywords = {Learning, Robot, Robotic surgery, Robotics, Surgery, Training},
pubstate = {published},
tppubtype = {conference}
}
title = {TraMNet - Transition Matrix Network for Efficient Action Tube Proposals},
author = {Gurkirt Singh and Suman Saha and Fabio Cuzzolin},
url = {https://arxiv.org/abs/1808.00297},
year = {2018},
date = {2018-08-01},
abstract = {Current state-of-the-art methods solve spatio-temporal action localisation by extending 2D anchors to 3D-cuboid proposals on stacks of frames, to generate sets of temporally connected bounding boxes called action micro-tubes. However, they fail to consider that the underlying anchor proposal hypotheses should also move (transition) from frame to frame, as the actor or the camera do. Assuming we evaluate n 2D anchors in each frame, then the number of possible transitions from each 2D anchor to the next, for a sequence of f consecutive frames, is in the order of O(n^f), expensive even for small values of f. To avoid this problem we introduce a Transition-Matrix-based Network (TraMNet) which relies on computing transition probabilities between anchor proposals while maximising their overlap with ground truth bounding boxes across frames, and enforcing sparsity via a transition threshold. As the resulting transition matrix is sparse and stochastic, this reduces the proposal hypothesis search space from O(n^f) to the cardinality of the thresholded matrix. At training time, transitions are specific to cell locations of the feature maps, so that a sparse (efficient) transition matrix is used to train the network. At test time, a denser transition matrix can be obtained either by decreasing the threshold or by adding to it all the relative transitions originating from any cell location, allowing the network to handle transitions in the test data that might not have been present in the training data, and making detection translation-invariant. Finally, we show that our network is able to handle sparse annotations such as those available in the DALY dataset, while allowing for both dense (accurate) or sparse (efficient) evaluation within a single model. We report extensive experiments on the DALY, UCF101-24 and Transformed-UCF101-24 datasets to support our claims.},
keywords = {Computer Science, Computer vision, Electrical Engineering, Image processing, Pattern Recognition, Robot, Robotics, Systems Science, Visual processing},
pubstate = {published},
tppubtype = {proceedings}
}
title = {Incremental Tube Construction for Human Action Detection},
author = {Harkirat Singh Behl and Michael Sapienza and Gurkirt Singh and Suman Saha and Fabio Cuzzolin and Philip H. S. Torr},
editor = {British Machine Vision Conference (BMVC). Newcastle-Upon-Tyne, UK},
url = {https://arxiv.org/abs/1704.01358},
year = {2018},
date = {2018-07-23},
abstract = {Current state-of-the-art action detection systems are tailored for offline batch-processing applications. However, for online applications like human-robot interaction, current systems fall short, either because they only detect one action per video, or because they assume that the entire video is available ahead of time. In this work, we introduce a real-time and online joint-labelling and association algorithm for action detection that can incrementally construct space-time action tubes on the most challenging action videos in which different action categories occur concurrently. In contrast to previous methods, we solve the detection-window association and action labelling problems jointly in a single pass. We demonstrate superior online association accuracy and speed (2.2ms per frame) as compared to the current state-of-the-art offline systems. We further demonstrate that the entire action detection pipeline can easily be made to work effectively in real-time using our action tube construction algorithm.},
keywords = {Action detection, Artificial Intelligence, Computer Science, Computer vision, Detection, Pattern Recognition, Robot},
pubstate = {published},
tppubtype = {proceedings}
}