The semi-autonomous robotic movement system is intended to assist the surgeon in manoeuvring the robot around the patient and hospital staff in a user-friendly way by providing virtual-axis control, while avoiding collisions with the patient, staff, or devices in the operating room. The system consists of anti-collision sensors, a 3D virtualization unit and a robotic control unit. It detects humans and devices and plans its movements within a 3D model of the room environment.
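The anti-collision behaviour described above can be illustrated with a minimal sketch: a planned robot waypoint is vetoed when it comes closer than a safety clearance to any tracked obstacle (patient, staff member, device) in the 3D room model. All names, geometry and the clearance value are illustrative assumptions, not taken from the actual system.

```python
import math

SAFETY_CLEARANCE_M = 0.30  # assumed minimum clearance to any obstacle

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((ax - bx) ** 2 for ax, bx in zip(a, b)))

def waypoint_is_safe(waypoint, obstacles, clearance=SAFETY_CLEARANCE_M):
    """Return True if the waypoint keeps the required clearance to all
    obstacles, each modelled as a sphere (centre_xyz, radius_m)."""
    return all(distance(waypoint, c) - r >= clearance for c, r in obstacles)

obstacles = [((0.0, 0.0, 1.0), 0.4),   # patient on the table
             ((1.5, 0.5, 1.7), 0.3)]   # staff member

print(waypoint_is_safe((0.2, 0.1, 1.2), obstacles))  # False: too close to patient
print(waypoint_is_safe((3.0, 2.0, 1.5), obstacles))  # True: clear of both
```

A real planner would of course use the full robot geometry and swept volumes rather than single waypoints, but the veto principle is the same.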
MODEL-BASED TESTING: This demonstrator shows the specification of User Interaction Design (UID) models at a high abstraction level, using the UID domain-specific language developed for this purpose. The UID models then serve as the source for generating models used by model-based testing tools (i.e. TorXakis); the demo thus shows automated test generation in the very first design phase. These test models are used for system/component verification in the integration phase. The UID models are also used to generate executable models, which are displayed in the 3D Visualization component of the Virtual Cathlab. This enables an interactive virtual environment in which the UID designer and/or the stakeholders can experience the new system movements early in the design phase, eliminating specification miscommunication between development phases. Last but not least, the UID models are used for automated generation of up-to-date documentation, thereby replacing document-driven development with model-based development, and word-processing tools with UID modelling tools.
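The core idea of generating tests from an interaction model can be sketched as follows. This is a toy labelled transition system, not the project's UID DSL or TorXakis's input format: from a hypothetical model of robot-movement interaction, all accepted input sequences of a given length are enumerated automatically as abstract test cases.

```python
# Hypothetical UID-style interaction model: (state, input) -> next state
transitions = {
    ("idle", "press_move"): "moving",
    ("moving", "release"): "idle",
    ("moving", "obstacle"): "halted",
    ("halted", "ack"): "idle",
}

def valid_sequences(start, depth):
    """Enumerate every input sequence of the given length that the
    model accepts, starting from the given state."""
    found = []
    def walk(state, seq):
        if len(seq) == depth:
            found.append(tuple(seq))
            return
        for (s, inp), nxt in transitions.items():
            if s == state:
                walk(nxt, seq + [inp])
    walk(start, [])
    return found

for case in valid_sequences("idle", 3):
    print(case)  # two abstract test cases of length 3
```

Tools like TorXakis additionally generate expected outputs and verdicts on the fly against the model; this sketch only shows the input-enumeration side.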
VIRTUAL CATHLAB SIMULATION: A video of this demonstrator will be presented, showing a hybrid Virtual Cathlab consisting of a virtual C-arm and half a virtual table with a virtual patient on top, shown on a projection screen using a projector. The other half of the table and patient is part of the physical world; the two halves are seamlessly connected. A Kinect tracker is used to adjust the perspective of the projection to the position of the user. A physical control box (TSO) allows the user to move the virtual system, and a physical foot switch allows the user to create Fluoro (low-dose) or Exposure (high-dose) runs using software that creates virtual X-ray images. This demonstrator includes the sounds of the mechatronic movements. A simplified version of this demo will be shown live, using an HTC Vive instead of the projector.
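The head-coupled perspective that the Kinect tracking enables can be sketched in a few lines: a virtual point is projected onto the screen plane along the ray from the tracked viewer's eye, so the same point lands on a different screen position as the viewer moves. The geometry and values here are purely illustrative, not the demonstrator's actual rendering code.

```python
def project_to_screen(eye, point, screen_z=0.0):
    """Intersect the eye->point ray with the screen plane z = screen_z,
    returning the 2D screen position (x, y)."""
    ex, ey, ez = eye
    px, py, pz = point
    t = (screen_z - ez) / (pz - ez)   # ray parameter at the screen plane
    return (ex + t * (px - ex), ey + t * (py - ey))

virtual_point = (0.5, 0.0, 1.0)       # a point 1 m behind the screen

# As the tracked viewer moves sideways, the projected position shifts,
# which is what keeps the virtual half aligned with the physical half.
print(project_to_screen((0.0, 0.0, -2.0), virtual_point))
print(project_to_screen((0.5, 0.0, -2.0), virtual_point))
```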
This use case will focus on autonomous control towards optimal settings for X-ray acquisition and contrast-medium injection, using specific disease characteristics and a model of the relevant patient anatomy. The goal is to achieve sufficient image quality with a minimum of potentially harmful contrast medium and X-ray dose. For this use case, the virtual product development framework needs various models to simulate the system and its environment, covering:
- X-ray absorption and scattering
- a human phantom, including the image-influencing characteristics of various pathologies
- blood flow and contrast medium flow
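The simplest ingredient of such an X-ray simulation model is Beer-Lambert attenuation through layered materials, I = I0 * exp(-sum(mu_i * d_i)). The sketch below uses rough illustrative attenuation coefficients, not calibrated data, and ignores scattering entirely.

```python
import math

def transmitted_intensity(i0, layers):
    """Beer-Lambert attenuation of a narrow X-ray beam.
    layers: list of (mu_per_cm, thickness_cm) along the beam path."""
    return i0 * math.exp(-sum(mu * d for mu, d in layers))

# e.g. soft tissue plus bone in the beam path (assumed coefficients)
layers = [(0.2, 15.0),   # soft tissue: mu [1/cm], thickness [cm]
          (0.5, 2.0)]    # bone
print(transmitted_intensity(1.0, layers))  # exp(-4) ~ 0.018
```

A full image-chain model would add energy dependence of mu, scatter, and detector response on top of this attenuation core.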
X-RAY IMAGE CHAIN OPTIMIZATION AND IMAGE QUALITY TUNING: There are many degrees of freedom in the input parameter space of an interventional X-ray system. Using smart analytical models, optimal X-ray parameters can be calculated for any situation (depending on patient and procedure). The challenge is to define what "optimal" means in order to decide which parameter set is better than another, and to understand how these parameters affect the contrast-to-noise ratio and the patient dose.
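The trade-off described above can be made concrete with a toy grid search over acquisition settings, scoring contrast-to-noise ratio (CNR) per unit dose. The signal and noise formulas below are placeholder models chosen only to show the shape of such an optimization, not the project's analytical models.

```python
import math

def cnr(contrast, noise):
    """Contrast-to-noise ratio."""
    return contrast / noise

def score(kv, dose_mgy):
    """Toy objective: CNR per unit dose for a given tube voltage and dose.
    Assumed models: contrast falls with kV, quantum noise falls as
    1/sqrt(dose)."""
    contrast = 1.0 / kv
    noise = 1.0 / math.sqrt(dose_mgy)
    return cnr(contrast, noise) / dose_mgy

# exhaustive search over a small hypothetical parameter grid
best = max(((kv, d) for kv in (60, 80, 100) for d in (0.5, 1.0, 2.0)),
           key=lambda p: score(*p))
print(best)  # (60, 0.5) under these toy models
```

With realistic image-chain models the objective would include patient thickness, procedure type, and regulatory dose limits as constraints, but the structure of the search is the same.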