Medical Image and Sensor Data Processing

Diagnostic imaging is one of the two main pillars in the detection and diagnosis of diseases. The first task of radiologists is to acquire medical images using MRI, CT and other modalities. Depending on the clinical question, the images are then either examined for abnormalities (detection) or the presenting clinical picture is characterized (diagnosis). Given the large amounts of data involved, this process is time-consuming and tedious. In addition, planning minimally invasive interventions in therapy often requires very time-consuming manual image-processing steps.

Therefore, the research field of medical image processing has for several years been dominated by machine learning, in particular by deep neural networks (deep learning).

In this area, the research group focuses on the generation of artificial data using simulation and synthesis methods. These data are used to explore how algorithms react to changes in the data, or to extend training and test sets.

NeuroTEST (ZIM) - AI stress tests using artificial MRI data

Contact: Prof. Dr. Stefanie Remmele
Period: 2021-2022 (2 years)
Partner: deepc GmbH, Munich
Associate Partners: LAKUMED Krankenhaus Landshut-Achdorf, Prof. Dr. Tobias Schäffter (PTB Berlin, TU Berlin)

Research Assistant: Christiane Posselt
Students: E. Gramotke, J. Fischer, S. Khajarian, A. Paulik, M. Kaiser, I. Marchl, Y. Mündler, N. Erl, M. Bischof, B. Rechenmacher, F. Kollmannsberger, M. Kottermair

Machine learning methods, especially those based on deep neural networks, are achieving ever new successes in the analysis and classification of medical images. In many applications, the algorithms already reach at least human-level decision accuracy. However, it is still unclear how networks trained on one particular data situation behave in another, for example when imaging hardware or imaging parameters differ from one radiology department to the next.

In the BMWi-funded ZIM project NeuroTEST, the Medical Technology Research Group (project leader Prof. Remmele) is developing methods for the systematic validation of neural networks (use case: segmentation of MS lesions) together with the Munich-based startup deepc. To this end, methods of statistical experimental design are explored, with AI algorithms systematically tested on data from different acquisition protocols. The work at HAW focuses on the simulation and synthesis of these data in order to provide arbitrary data domains for the stress tests (more).
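
Conceptually, such a stress test can be thought of as evaluating a fixed model on synthetic variants of the same cases, one variant per acquisition protocol, and comparing the per-domain scores. The following minimal sketch illustrates this idea; the simulator, the model and all protocol parameters are illustrative stand-ins and not the project's actual code.

```python
# Sketch of a protocol stress test: evaluate a fixed segmentation model on
# synthetic variants of the same cases, one variant per acquisition protocol.
# `simulate_case` and `model` are illustrative stand-ins, not project code.
import numpy as np

rng = np.random.default_rng(0)

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / max(pred.sum() + truth.sum(), 1)

def simulate_case(protocol: dict) -> tuple[np.ndarray, np.ndarray]:
    """Stand-in for an MRI simulator: returns (image, lesion mask) for one protocol."""
    truth = rng.random((64, 64)) > 0.97                      # sparse 'lesions'
    noise = rng.normal(0.0, protocol["noise_sd"], (64, 64))
    image = truth * protocol["lesion_contrast"] + noise
    return image, truth

def model(image: np.ndarray) -> np.ndarray:
    """Stand-in for the trained network: a simple intensity threshold."""
    return image > 0.5

# Factors of the experimental design: each protocol is one data domain.
protocols = {
    "baseline":     {"noise_sd": 0.05, "lesion_contrast": 1.0},
    "low_contrast": {"noise_sd": 0.05, "lesion_contrast": 0.6},
    "high_noise":   {"noise_sd": 0.30, "lesion_contrast": 1.0},
}

for name, protocol in protocols.items():
    scores = [dice(model(img), truth)
              for img, truth in (simulate_case(protocol) for _ in range(20))]
    print(f"{name:13s} mean Dice = {np.mean(scores):.3f}")
```

Comparing the per-domain scores in this way makes it visible how quickly the model degrades once the data drift away from the training protocol.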

Artificial Data for AI-supported and AR-guided interventions

Contact: Prof. Dr. Stefanie Remmele
Period: 2021-2024 (3 years)
Partner: LAKUMED Krankenhaus Landshut-Achdorf

Research Assistant: Serouj Khajarian

"Especially if there are several liver tumors, it is difficult to assign them to the major hepatic vessels during surgery," reports Prof. Dr. Johannes Schmidt, Chief Physician and Medical Director of LAKUMED Kliniken, who supports HAW as a member of the University Council and the Advisory Board for Medical Technology and Health Management, among other things. Depending on the position and number of lesions, this poses a major challenge, as one can only estimate the optimal access and location of the tumor(s) from the outside. Only about 8% of liver tumors can be palpated. To reduce bleeding during liver surgery, it is possible to cut off the blood supply to the organ for about 15 minutes. However, reliable detection of the vessels would eliminate the need for this procedure, which is harmful to the liver.  In medical technology, methods are already being researched that superimpose virtual models of the anatomy on the surgeon's real operating room view (augmented reality, AR). Machine learning methods also exist to create virtual models from medical MR or CT images (segmentation) or to position them correctly in the real world (registration). But solutions for applying these methods in AR applications are searched for almost in vain.Now, the Medical Engineering Research Group has decided to tackle the problem in collaboration with LAKUMED KH. Professors from the faculties of ETWI, computer science and mechanical engineering want to contribute to the project with project and final theses in order to lay the foundation for joint funding applications and third-party projects.

3D prints for orthopedic surgery planning

Contact: Prof. Dr. N. Babel, Prof. Dr. S. Remmele
Bachelor thesis: Edith Gramotke (2021)

Currently, a wide variety of methods are being researched in medical technology to preoperatively adapt implants to the anatomy either on virtual bone structures or by means of 3D-printed models.

In her bachelor thesis in the Medical Technology Laboratory (BSc BMT, supervisor Prof. Dr. Remmele), Edith Gramotke investigated possibilities for workflow-efficient segmentation of bone structures in preoperative CT data. She is currently comparing different approaches for 3D printing of these structures together with the physicians, supported by Prof. Dr.-Ing. Babel, who has made his additive manufacturing laboratory available for the prints and advised the clinic on the selection of a suitable printer. If everything goes according to plan, the first operations using 3D-printed models will be planned before the bachelor's thesis is completed.

Visualization methods for CNN validation

Contact: Prof. Dr. Stefanie Remmele
Bachelor Thesis: Jakob Dexl (2018)

In summer 2018, various visualization approaches for CNNs were implemented. They are intended to make the decision-making of a network comprehensible and thus help, among other things, to identify weak points of the algorithms and to optimize the methods, and in the long term to direct attention to detected anomalies during reporting. The work was awarded the 2019 VDE-Preis in the Science category.
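
One widely used family of such visualization techniques are gradient-based saliency maps, which highlight the input pixels that most influence a network's decision. The following PyTorch sketch is purely illustrative and does not reproduce the methods or models used in the thesis.

```python
# Minimal gradient-based saliency map (one of several possible visualization
# techniques; illustrative only, not the method used in the thesis).
import torch
import torch.nn as nn

# Toy CNN standing in for a trained classifier.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

image = torch.randn(1, 1, 64, 64, requires_grad=True)  # stand-in input image

# Forward pass and gradient of the predicted class score w.r.t. the input.
scores = model(image)
scores[0, scores.argmax()].backward()

# Pixel-wise relevance: magnitude of the input gradient.
saliency = image.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([64, 64]) -> overlay on the image for inspection
```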

Deep learning for segmentation and object detection in medical images

Contact: Prof. Dr. Eduard Kromer
Partners: Fraunhofer Institut für Verfahrenstechnik und Verpackung IVV (Dr. Thilo Bauer, Satnam Singh), precipoint GmbH (Ludwig Wildner)
Bachelor theses: Katharina Bauer (BMT), Lena Kinzel (BMT), 2021

In various projects, some in cooperation with industrial partners, the university investigates and develops deep learning models for regression, semantic segmentation and object detection on two- and three-dimensional medical image data. The focus is on the comprehensibility of the results, the type and comprehensiveness of the output, and the generalizability of the models (also to non-medical data).

Embedded Systems & Edge AI

Contact: Prof. Dr. Andreas Breidenassel

Edge AI combines AI methods with classic edge computing: the collected data is processed directly on site. This decentralization offers several advantages over cloud computing. Among other things, decoupling from the cloud removes the otherwise compulsory Internet connection, so systems can be used in rural areas as well as in regions with extreme weather conditions. Data security also plays a central role, especially in the medical technology environment: the data remains on the system and is processed locally, so there is no risk from insufficient anonymization or from data being traceable. In addition, Edge AI is tailored to the specific use case and does not depend on permanent communication with a cloud. The energy efficiency of mobile systems is another factor: immediate, cloud-decoupled processing eliminates the need for large storage media and constant data transfer.

Edge AI

Classification of ECG data using deep learning on an Nvidia Jetson Nano System
Leo Hurzlmeier, Christiane Huber (project work)

Diseases of the cardiovascular system are the most frequent cause of death worldwide. They include cardiac arrhythmias, which can be diagnosed with the help of an ECG. Choosing the appropriate treatment for the right type of arrhythmia requires an exact diagnosis, which is time-consuming and tedious and which computer-assisted diagnostic programmes aim to make easier. In the winter semester of 2019, a project group successfully classified ECG data using deep learning. Three different clinical pictures could be categorised with a reliability of 98.06%. In addition, the influence of noise was investigated and the system was implemented on the Nvidia Jetson Nano.
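
For illustration, a compact 1D convolutional network for classifying fixed-length ECG segments into three classes might look like the sketch below; the architecture, input length and framework are assumptions, not the project's actual implementation.

```python
# Illustrative 1D CNN for classifying fixed-length ECG segments into three
# classes; architecture and shapes are assumptions, not the project's network.
import torch
import torch.nn as nn

class EcgNet(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, samples), e.g. a few seconds of single-lead ECG
        return self.classifier(self.features(x).flatten(1))

model = EcgNet()
logits = model(torch.randn(8, 1, 1000))   # 8 dummy segments of 1000 samples
print(logits.shape)                       # torch.Size([8, 3])
```

A model of this size is also small enough to run comfortably on an embedded GPU platform such as the Jetson Nano.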

Mole Identifier - Mobile classification of skin lesions
Maximilian Reiser, Gabriel Horkovics-Kovats (students)

The risk of developing skin cancer has increased from 0.17% to between 1.00% and 1.33% since the 1960s. A decisive factor in reducing the mortality rate of 14.11% (as of 2010) is early detection before metastasis. In a student project, a method for classifying skin lesions with deep learning and a mobile phone camera was developed. In the application, different trained models can be selected; the user then takes a picture of the skin lesion with the phone camera and has it evaluated. The segmented skin lesions could be classified with an accuracy of 85.11%, and the trained network was ported to an Android smartphone and controlled via an app.
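
The exact toolchain used for the port is not detailed here; one common route for running a trained classifier on an Android phone is a TensorFlow Lite conversion, sketched below under that assumption with a stand-in model.

```python
# Sketch of one possible path from a trained Keras classifier to an Android
# app: convert to TensorFlow Lite and bundle the .tflite file with the app.
# The model here is a stand-in; the project's actual toolchain may differ.
import tensorflow as tf

# Toy image classifier standing in for the trained skin-lesion network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),   # e.g. benign / suspicious
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # e.g. weight quantization
tflite_model = converter.convert()

with open("mole_identifier.tflite", "wb") as f:        # loaded by the Android app
    f.write(tflite_model)
```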

Deep-PPG ("Strukturimpuls Forschungseinstieg" Programme)

Contact: Prof. Dr. Andreas Breidenassel
Duration: 2021-2023 (3 years)
Partner: OSRAM Opto Semiconductors GmbH, Regensburg
Associate Partner: Prof. Dr. O. Amft (FAU Erlangen)

Research Assistant: Maximilian Reiser

Cardiovascular diseases are the most common cause of death in Germany. Medical wearables that measure vital parameters such as blood pressure, heart rate and blood oxygen levels in real time could help detect these diseases at an early stage and treat them preventively. In everyday life and during sports, the small, portable mini-computers in the form of fitness bracelets or smartwatches are already very popular, and they are now also being used more and more frequently in medicine. The problem, however, is that these mobile systems are not always free of errors. In most wearables, vital parameters are measured using photoplethysmography (PPG), which can suffer from signal interference if, for example, the sensors slip during movement. Medicine, however, depends on reliable measurements.

This is precisely where the new research project "Deep-PPG" at Landshut University of Applied Sciences, headed by Prof. Dr. Andreas Breidenassel, comes in. Its goal is to reduce the susceptibility of the PPG signal to interference and thus enable more accurate measurements by wearables in medical applications. The company OSRAM Opto Semiconductors is involved in the project, which is funded by the Bavarian State Ministry of Science and Arts (more).
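
As a point of reference, a purely classical PPG pipeline estimates heart rate by band-pass filtering the raw signal and detecting pulse peaks, as in the sketch below (all values are illustrative). The project investigates learning-based methods that go beyond such classical processing when the signal is disturbed by motion.

```python
# Baseline PPG processing for comparison: band-pass filter the raw signal and
# estimate heart rate from peak intervals. Values are illustrative; the project
# targets learning-based methods on top of / instead of this.
import numpy as np
from scipy.signal import butter, sosfiltfilt, find_peaks

fs = 100.0                                    # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)                  # 30 s of synthetic data
ppg = np.sin(2 * np.pi * 1.2 * t)             # ~72 bpm pulse component
ppg += 0.5 * np.sin(2 * np.pi * 0.25 * t)     # baseline wander (e.g. breathing)
ppg += 0.1 * np.random.default_rng(0).normal(size=t.size)  # sensor noise

# Band-pass 0.5-4 Hz: keeps heart rates of roughly 30-240 bpm.
sos = butter(3, [0.5, 4.0], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, ppg)

# Heart rate from the median inter-beat interval.
peaks, _ = find_peaks(filtered, distance=int(0.4 * fs))
ibi = np.diff(peaks) / fs                     # inter-beat intervals in seconds
print(f"estimated heart rate: {60 / np.median(ibi):.1f} bpm")
```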

Embedded Systems

EMG Biofeedback System
Michael Hartl, Michael Gröber, Matthias Laber, Lisa-Maria Kirchner, Paul Wingert (students)

Partner: Gerald Gradl (Texas Instruments)

A mobile biofeedback system was developed as part of a project. With the help of the system, patients can focus their attention on paralysed areas, for example, in order to train them in a targeted manner. It can also be used as a warning system, e.g. for people with a tense posture (office work at a computer), or as a training system for athletes. A signal path for evaluating muscle potentials was designed, then simulated and measured using a test setup.

In the subsequent project work, a galvanically isolated system based on a Raspberry Pi was developed: an isolated filter board and an ADC board for measuring muscle activity (EMG), with subsequent digital signal processing on the single-board computer.
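
A typical digital processing chain for such an EMG signal is band-pass filtering, rectification and envelope smoothing, followed by a threshold decision for the feedback. The sketch below illustrates this chain; all parameters are assumptions and not the project's actual settings.

```python
# Typical EMG envelope chain as a sketch: band-pass, rectify, low-pass smooth,
# then compare against a biofeedback threshold. All parameters are assumed.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1000.0                                          # ADC sampling rate in Hz
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(1)
emg = rng.normal(size=t.size) * (0.2 + (t > 2.5))    # 'contraction' after 2.5 s

# 1) Band-pass 20-450 Hz: typical surface-EMG band, removes drift and motion artefacts.
sos_bp = butter(4, [20, 450], btype="bandpass", fs=fs, output="sos")
emg_f = sosfiltfilt(sos_bp, emg)

# 2) Full-wave rectification and 3) low-pass envelope (6 Hz cut-off).
sos_env = butter(2, 6, btype="lowpass", fs=fs, output="sos")
envelope = sosfiltfilt(sos_env, np.abs(emg_f))

# 4) Simple biofeedback decision: activity above threshold triggers feedback.
threshold = 0.4
print("muscle active:", bool(envelope[-1] > threshold))
```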



Conception and prototypical implementation of a tonometer

G. Witte, M. Tesik, V. Eilers (students)

Intraocular pressure is decisive for the survival of the eye: if it is permanently elevated, this can lead to damage to the optic nerve later in life. The project group built a mobile prototype of a tonometer. In addition to the control board and coil system, a power management system was developed. A PIC24 was used as the microcontroller, and the housing was made from polylactide using 3D printing (fused deposition modelling, FDM).

AR, VR and Web Simulations

Contact: Prof. Dr. Stefanie Remmele

In the laboratory for digital medical technology, VR, AR and 3D web applications are developed for teaching as part of student projects and theses. The main focus is on the representation of complex subject matter, such as electric or magnetic fields, and on the virtualization of technologies that are difficult to access, such as the rooms and equipment in radiology or radiation oncology.

Virtual Reality

Planning Rooms in Radiology
Artur Maleta (Master's thesis)

The application was developed in cooperation with Philips DXR in Hamburg and supports the early phases of a space planning process, where often only paper sketches or manual digital sketches are available as a basis for discussion. It is also intended to enable the creation of detailed plans as a project progresses. Ideas for possible room layouts can be sketched directly in virtual space, with real equipment models at real scale. In addition to 2D room layout planning (object arrangement) and 3D verification (visual control) of the layout, the room layout can be experienced in VR to make it more tangible for the user and to reduce planning errors (e.g. regarding distances or ceiling height). (pdf)

Gamification & VR-Apps for education
L. Rottmaier, F. Köhr, C. Steinberger, C. Huber, T. Held, J. Reindl, T. Diermeier, T. Schubert, A. Hanke, T. Huber (Students)
New applications for virtual excursions into medical rooms are created every year as part of the biomedical engineering project work. Virtual MRI devices can be assembled, X-ray images can be taken, and the therapy settings of linear accelerators for tumor irradiation can be changed, some of this in multi-user applications. The lab is equipped with Pico Neo 2, HTC Focus 3, Google Cardboard and Daydream hardware.

Augmented Reality

Tools for the development of AR Apps
T. Feulner, M. Kaiser, R. Stolz (students)

Augmented Reality applications are playing an increasingly important role in many industrial sectors. At the same time, a growing number of development tools, assets and libraries are entering the market.
In summer 2019, a project group compared different strategies for developing augmented reality apps (including Unity, Vuforia, Wikitude, A-Frame, AR.js, ...). For the comparison, the project team developed an interactive simulation of an X-ray tube and compared development effort, robustness of the application and user satisfaction. Feel free to try it out for yourself (pdf).

3D Web Simulations

Project work / student assistantships / theses: Artur Maleta, Roland Stolz, Julian Fischer

As part of a smartVHB grant and various student assistantships in the Medical Technology Laboratory, several 3D web simulations have been developed that enable experimentation at home in digitally supported teaching. The influence of electrical parameters on the characteristics of electric and magnetic fields can be investigated, and patients can be examined in MRI, X-ray and CT scanners. The simulations are freely accessible (Link).