Medical image processing for diagnosis and therapy

contact: see projects

Medical care goes hand in hand with the analysis and use of a very large and ever-growing amount of medical data. Diagnostic imaging, for example, is one of the two main pillars in the detection and diagnosis of diseases. The first task of radiologists is to create medical images using MRI, CT and similar modalities. Then, depending on the problem, the images are either examined for abnormalities (detection) or the presenting clinical picture is characterized (diagnosis). This process of diagnosis is time-consuming and tedious in view of the large amounts of data. In addition, very time-consuming manual image processing steps are often necessary for the planning and guidance of minimally invasive interventions in therapy.

Therefore, the research field of medical image processing has been dominated by machine learning topics for several years, especially with the help of deep neural networks (Deep Learning).

NeuroTEST (ZIM) - AI stress tests using artificial MRI data

Contact: Prof. Dr. Stefanie Remmele
Period: 2021-2022 (2 years)
Partner: deepc GmbH, Munich
Associate Partners: LAKUMED Krankenhaus Landshut-Achdorf, Prof. Dr. Tobias Schäffter (PTB Berlin, TU Berlin)

Research Assistant: Christiane Posselt
Students: E. Gramotke, J. Fischer, S. Khajarian, A. Paulik, M. Kaiser, I. Marchl, Y. Mündler, N. Erl, M. Bischof, B. Rechenmacher, F. Kollmannsberger, M. Kottermair

Machine learning methods, especially deep neural networks, are celebrating ever new successes in the analysis and classification of medical images. In many applications, the algorithms already achieve at least human-like decision-making accuracy. However, it is still unclear how networks trained on one particular data situation behave in another, for example when imaging hardware or imaging parameters differ from one radiology department to another. In the BMWi-funded ZIM project NeuroTEST, the Medical Technology Research Group (project leader Prof. Remmele) is developing methods for the systematic validation of neural networks (use case: segmentation of MS lesions) together with the Munich-based startup deepc. For this purpose, methods of statistical experimental design are explored, with AI algorithms systematically tested on data from different acquisition protocols. The project at HAW focuses on the simulation and synthesis of these data in order to provide arbitrary data domains for the stress tests (more).
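The idea of generating shifted data domains can be illustrated with a heavily simplified, hypothetical sketch: the actual project uses far more elaborate MR simulation, but already a smooth intensity bias field plus Rician noise changes the "acquisition characteristics" of an image enough to probe a model's robustness.

```python
import numpy as np

def simulate_acquisition_shift(image, noise_sigma=0.05, bias_strength=0.3, seed=0):
    """Apply a smooth multiplicative bias field and Rician noise to a 2D slice.

    Very crude stand-in for how images from different scanners/protocols
    can differ; a trained model can then be stress-tested on the output.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # Low-frequency intensity variation across the field of view
    y, x = np.mgrid[0:h, 0:w]
    bias = 1.0 + bias_strength * np.sin(np.pi * x / w) * np.sin(np.pi * y / h)
    biased = image * bias
    # Rician noise: magnitude of a complex Gaussian perturbation of the signal
    real = biased + rng.normal(0.0, noise_sigma, image.shape)
    imag = rng.normal(0.0, noise_sigma, image.shape)
    return np.sqrt(real**2 + imag**2)

# Toy "anatomy": a bright square on a dark background
phantom = np.zeros((64, 64))
phantom[20:44, 20:44] = 1.0
shifted = simulate_acquisition_shift(phantom)
```

A stress test would then compare the model's segmentation quality on the original versus many such shifted versions, sweeping the simulation parameters systematically.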

Christiane Posselt, Edith Gramotke, Abhijeet Parida, Mehmet Yiğitsoy, Stefanie Remmele, "Novel concept for systematic testing of AI models for MRI acquisition shifts with simulated data", Proceedings Volume 12467, Medical Imaging 2023, 124671B (2023)

AI supported and AR guided interventions (AIARLiver)

Prof. Dr. Stefanie Remmele (medical image processing, data synthesis and simulation)

Period: 2021-2024 (3 years)
Partner: LAKUMED Krankenhaus Landshut-Achdorf

Prof. Dr. Christopher Auer (Hololens App)
Prof. Dr. Eduard Kromer (AI Algorithms)
Prof. Dr. Norbert Babel (3D print)
Prof. Dr. Aida Anetsberger (medical support)
Research Assistant: Serouj Khajarian

"Especially if there are several liver tumors, it is difficult to assign them to the major hepatic vessels during surgery," reports Prof. Dr. Johannes Schmidt, Chief Physician and Medical Director of LAKUMED Kliniken, who supports HAW as a member of the University Council and the Advisory Board for Medical Technology and Health Management, among other things. Depending on the position and number of lesions, this poses a major challenge, as the optimal access and the location of the tumor(s) can only be estimated from the outside. Only about 8% of liver tumors can be palpated. To reduce bleeding during liver surgery, it is possible to cut off the blood supply to the organ for about 15 minutes. However, reliable detection of the vessels would eliminate the need for this procedure, which is harmful to the liver.

In medical technology, methods are already being researched that superimpose virtual models of the anatomy on the surgeon's real operating room view (augmented reality, AR). Machine learning methods also exist to create virtual models from medical MR or CT images (segmentation) or to position them correctly in the real world (registration). Solutions that bring these methods together in AR applications, however, are scarce. The Medical Engineering Research Group has therefore decided to tackle the problem in collaboration with LAKUMED KH. Professors from the faculties of ETWI, computer science and mechanical engineering want to contribute to the project with project and final theses in order to lay the foundation for joint funding applications and third-party projects.

Visualization methods for CNN validation

Contact: Prof. Dr. Stefanie Remmele
Bachelor Thesis: Jakob Dexl (2018)

In summer 2018, various visualization approaches for CNNs were implemented. These are intended to make the network's decision-making comprehensible and thus help, among other things, to identify weak points of the algorithms and to optimize the methods, as well as, in the long term, to direct attention to detected anomalies during reporting. The work was awarded the 2019 VDE-Preis in the Science category.
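One family of such visualization approaches are class activation maps (CAM), which highlight the image regions that contributed most to a class decision. The thesis itself may have used different methods; the sketch below only illustrates the general CAM idea: weight the feature maps of the last convolutional layer by the classifier weights of the class of interest.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """CAM heatmap: weighted sum of the last conv layer's feature maps,
    using the classifier weights of the class under inspection.

    feature_maps: array of shape (n_channels, H, W)
    class_weights: array of shape (n_channels,)
    """
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))
    cam = np.maximum(cam, 0)          # keep only positive evidence
    if cam.max() > 0:
        cam /= cam.max()              # normalize to [0, 1] for display
    return cam

# Toy example: 4 feature maps of size 8x8 with made-up classifier weights
rng = np.random.default_rng(0)
fmaps = rng.random((4, 8, 8))
weights = np.array([0.5, -0.2, 0.8, 0.1])
heatmap = class_activation_map(fmaps, weights)
```

In practice, the heatmap is upsampled to the input resolution and overlaid on the image, so a reader can check whether the network focused on the actual anomaly.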

Deep learning for segmentation and object detection in medical images

Contact: Prof. Dr. Eduard Kromer
Partners: Fraunhofer Institut für Verfahrenstechnik und Verpackung IVV (Dr. Thilo Bauer, Satnam Singh), precipoint GmbH (Ludwig Wildner)
Bachelor Theses: Katharina Bauer (BMT), Lena Kinzel (BMT), 2021

In various projects, also in cooperation with industrial partners, the university investigates and develops Deep Learning models for regression, semantic segmentation and object detection for processing two- and three-dimensional medical image data. The focus is on the comprehensibility of the results, the type and comprehensiveness of the output, and the generalizability of the models (also to non-medical data).
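For semantic segmentation, a standard way to make the "comprehensiveness of the output" measurable is the Dice coefficient, which quantifies the overlap between a predicted mask and a reference mask. A minimal sketch (the projects' actual evaluation protocols are not specified here):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary segmentation masks.

    1.0 means perfect agreement, 0.0 means no overlap; eps guards
    against division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Two 4x4 squares shifted by one pixel: 9 of 16 pixels overlap
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
score = dice_coefficient(a, b)   # 2*9 / (16+16) = 0.5625
```

The same metric works unchanged on 3D volumes, which makes it convenient for comparing model generalizability across datasets.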

Medical Signal and Vital Sign Processing

contact: Prof. Dr. Andreas Breidenassel

The ongoing digital transformation is currently leading to major changes in many areas of society. In healthcare, for example, we perceive an increased use of telemedicine and e-health solutions as well as a sharp rise in the spread of mobile systems / wearables for recording and processing vital data. The health data available in real time, as well as the possibility of evaluating trend data, enables a wide range of applications from permanent patient monitoring to accident prognoses and extended fitness tracking.
Challenges for the development of such mobile systems arise mainly from two aspects: On the one hand, there are limitations in the usability of the sensor data due to a lack of robustness of the signal under motion / physical activity. On the other hand, many mobile systems have only limited resources available for correction and further processing of the data. In order to reduce artifacts and to further process vital data, methods from the field of machine learning have been increasingly used in recent years, in addition to classical signal processing techniques.
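As a classical-signal-processing baseline of the kind mentioned above, the dominant heart rate can be extracted from a drifting, noisy pulse signal by restricting a spectral peak search to the physiological band. This is a generic illustrative sketch, not a method from the group's projects.

```python
import numpy as np

def estimate_heart_rate(signal, fs):
    """Estimate heart rate (bpm) from a PPG/ECG-like signal.

    Removes the mean, then picks the dominant spectral peak inside
    the physiological band 0.7-3.5 Hz (42-210 bpm), which suppresses
    baseline drift and most high-frequency noise.
    """
    sig = signal - np.mean(signal)
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(sig))
    band = (freqs >= 0.7) & (freqs <= 3.5)
    f_peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * f_peak

# Synthetic pulse at 1.2 Hz (72 bpm) with slow baseline drift and noise
fs = 100
t = np.arange(0, 10, 1.0 / fs)
rng = np.random.default_rng(1)
sig = (np.sin(2 * np.pi * 1.2 * t)
       + 0.5 * np.sin(2 * np.pi * 0.1 * t)   # motion/baseline drift
       + 0.1 * rng.normal(size=t.size))
bpm = estimate_heart_rate(sig, fs)
```

Such simple estimators fail under strong motion artifacts, which is exactly the gap the machine-learning approaches described above are meant to close.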

Edge AI

Classification of ECG data using deep learning on an Nvidia Jetson Nano System
Leo Hurzlmeier, Christiane Huber (project work)

Diseases of the cardiovascular system (CVS) are the most frequent cause of death worldwide. These include cardiac arrhythmias, which can be diagnosed with the help of an ECG. In order to choose the appropriate treatment for the right type of arrhythmia, an exact diagnosis is necessary. This is time-consuming and tedious, and computer-assisted diagnostic programmes are intended to reduce the effort. In the winter semester of 2019, a project group successfully classified ECG data using Deep Learning. Three different clinical pictures could be categorised with an accuracy of 98.06%. In addition, the influence of noise was investigated and the system was implemented on the Nvidia Jetson Nano.
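The forward pass of such a 1-D CNN classifier can be sketched in a few lines. The architecture below (one convolution, ReLU, global average pooling, linear layer, softmax) is a hypothetical minimal example with random weights, not the project's actual trained network.

```python
import numpy as np

def conv1d(x, kernels):
    """Valid 1-D convolution of a signal with a bank of kernels."""
    k = kernels.shape[1]
    out_len = len(x) - k + 1
    windows = np.stack([x[i:i + k] for i in range(out_len)])
    return windows @ kernels.T            # shape: (out_len, n_kernels)

def tiny_ecg_classifier(beat, kernels, dense_w):
    """Minimal 1-D CNN: conv -> ReLU -> global average pooling
    -> linear layer -> softmax over arrhythmia classes."""
    feats = np.maximum(conv1d(beat, kernels), 0)   # ReLU activation
    pooled = feats.mean(axis=0)                    # global average pooling
    logits = pooled @ dense_w
    e = np.exp(logits - logits.max())              # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
beat = rng.normal(size=200)            # one heartbeat window (200 samples)
kernels = rng.normal(size=(8, 15))     # 8 learned filters of length 15
dense_w = rng.normal(size=(8, 3))      # 3 arrhythmia classes
probs = tiny_ecg_classifier(beat, kernels, dense_w)
```

Training replaces the random weights by learned ones; on an embedded target like the Jetson Nano, the same forward pass runs via an optimized inference runtime instead of plain numpy.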

Mole Identifier - Mobile classification of skin lesions
Maximilian Reiser, Gabriel Horkovics-Kovats (students)

The risk of developing skin cancer has increased from 0.17% to between 1.00% and 1.33% since the 1960s. A decisive factor in reducing the mortality rate of 14.11% (as of 2010) is early detection before metastasis. In a project, a method for the classification of skin lesions was developed using Deep Learning and a mobile phone camera. In the application, different trained models can be selected. The user can then take a picture of the skin lesion to be classified with the mobile phone camera and have it evaluated. The segmented skin lesions could be classified with an accuracy of 85.11%; the trained network was ported to an Android smartphone and controlled with an app.

Deep-PPG ("Strukturimpuls Forschungseinstieg" Programme)

Contact: Prof. Dr. Andreas Breidenassel
Duration: 2021-2023 (3 years)
Partner: OSRAM Opto Semiconductors GmbH, Regensburg
Associate Partner: Prof. Dr. O. Amft (FAU Erlangen)

Research Assistant: Maximilian Reiser

Cardiovascular diseases are the most common cause of death in Germany. Medical wearables that measure vital parameters such as blood pressure, heart rate and blood oxygen levels in real time could help detect these diseases at an early stage and treat them preventively. In everyday life or during sports, the small, portable mini-computers in the form of fitness bracelets or smartwatches are already very popular, and they are now also being used more and more frequently in medicine. The problem, however, is that the mobile systems are not always free of errors. In most wearables, vital parameters are measured using the so-called PPG (photoplethysmography) method. This can lead to signal interference if, for example, the sensors slip during movements. Medicine, however, depends on reliable measurements. This is precisely where the "Deep-PPG" research project at Landshut University of Applied Sciences, headed by Prof. Dr. Andreas Breidenassel, comes in. Its goal is to reduce the susceptibility of the PPG signal to interference and thus enable more accurate measurements by wearables in medical applications. The company OSRAM Opto Semiconductors is involved in the project, which is funded by the Bavarian State Ministry of Science and Arts (more).


  • M. Reiser, A. Breidenassel, O. Amft, Simulation framework for reflective PPG signal analysis depending on sensor placement and wavelength, 2022 IEEE 18th International Conference on Wearable and Implantable Body Sensor Networks (Link)



Embedded Systems

EMG Biofeedback-System
Michael Hartl, Michael Gröber, Matthias Laber, Lisa-Maria Kirchner, Paul Wingert (students)

Partner: Gerald Gradl (Texas Instruments)


A mobile biofeedback system was developed as part of a project. With the help of the system, patients can focus their attention on paralysed areas, for example, in order to train them in a targeted manner. It can also be used as a warning system, e.g. for people with a tense posture (office work at the computer), or as a training system for athletes. A signal path for evaluating muscle potentials was designed, then simulated and measured using a test setup.

In the subsequent project work, a galvanically decoupled system was developed based on a Raspberry Pi. A galvanically decoupled filter board and an ADC board for measuring muscle tension (EMG) with subsequent digital signal processing on the single-board computer were developed.
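After digitization, the core of the digital signal processing in such a biofeedback system is typically an activation envelope of the raw EMG, which can then be compared against a feedback threshold. The moving-RMS sketch below is a generic illustration with made-up parameters, not the project's exact processing chain.

```python
import numpy as np

def emg_envelope(emg, fs, window_ms=100):
    """Moving-RMS envelope of a raw EMG signal.

    The envelope tracks muscle activation level and can drive a
    biofeedback threshold (e.g. vibrate when a muscle tenses up).
    """
    n = max(1, int(fs * window_ms / 1000))
    kernel = np.ones(n) / n
    return np.sqrt(np.convolve(emg**2, kernel, mode="same"))

# Synthetic EMG: low-amplitude noise, muscle "switches on" at t = 1 s
fs = 1000
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(2)
amplitude = np.where(t > 1.0, 1.0, 0.05)
emg = amplitude * rng.normal(size=t.size)

env = emg_envelope(emg, fs)
active = env > 0.3          # hypothetical biofeedback threshold
```

On a single-board computer like the Raspberry Pi, this runs comfortably in real time; the threshold would be calibrated per user and muscle group.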

Conception and prototypical implementation of a tonometer

G. Witte, M. Tesik, V. Eilers (students)

Intraocular pressure is decisive for the survival of the eye. If it is permanently elevated, this can lead to damage to the optic nerve later in life. The project group built a mobile prototype of a tonometer. In addition to the control board and coil system, a power management system was developed. A PIC24 was used as the microcontroller, and the housing was made from polylactide using 3D printing (Fused Deposition Modelling, FDM).

AR, VR and Web Simulations

contact: see projects

In recent years numerous VR, AR and 3D web applications have been developed for teaching purposes as part of teaching projects and theses. The main focus here is on the representation of complex facts such as electric or magnetic fields or on the virtualization of technologies that are difficult to access, such as the rooms and equipment in radiology or radiation oncology.

Rapid Room Planning in Radiology

contact: Prof. Dr. S. Remmele
Arthur Maleta (Masterthesis)

The application was developed in cooperation with Philips DXR in Hamburg and supports the early phases of a room planning process, where often only paper sketches or manual digital sketches are available as a basis for discussion. It is also intended to enable the creation of detailed plans in the further course of a project. Ideas for possible room layouts can be sketched directly in virtual form, with real equipment models and at real scale. In addition to 2D room layout planning (object arrangement) and 3D verification (visual control) of the layout, the room layout can be experienced in VR to make it more tangible for the user and to reduce planning errors (e.g. regarding distances or ceiling height). (pdf)

Tools for the development of AR Apps

Contact: Prof. Dr. Stefanie Remmele
Project work 2019: T. Feulner, M. Kaiser, R. Stolz (students)

Augmented Reality applications are playing an increasingly important role in many industrial sectors. At the same time, a growing number of development tools, assets and libraries are entering the market.
In summer 2019, a project group compared different strategies for developing augmented reality apps (including Unity, Vuforia, Wikitude, A-Frame, AR.js, ...). For the comparison, the project team developed an interactive simulation of an X-ray tube and compared development effort, robustness of the application and user satisfaction. Feel free to try it out for yourself (pdf).

3D Web Simulations

Contact: Prof. Dr. Stefanie Remmele

Project, student assistant and final theses: Artur Maleta, Roland Stolz, Julian Fischer

Within the framework of a smartVHB grant and various student assistantships in the Medical Technology Laboratory, various 3D web simulations have been developed that enable experimentation at home in digitally supported teaching. Here, the influence of electrical parameters on the characteristics of electric and magnetic fields can be investigated or patients can be examined in MRI, X-ray and CT scanners. The simulations are freely accessible under Link.

Cardboard-Excursion in Medical Rooms

contact: Prof. Dr. Christopher Auer
student project computer science

Large medical devices such as linear accelerators, CT, MRI or X-ray machines take up a lot of space and are expensive to purchase. At the same time, it is important to understand how they work in order to evaluate their usage. Two-dimensional representations, e.g. on lecture slides, even if they are well made, can hardly convey the dimensions and spatial arrangements of the components of the devices.

In a student project, a team of computer science students developed a VR application that allows for the exploration of the functionality and components of an X-ray machine, LINAC, CT and MRI, adding a new perspective to lecture topics. Special emphasis was placed on user-friendliness to ensure a quick introduction to the application. In addition, the smartphone VR glasses used ("Cardboard VR") enable uncomplicated use in the lecture hall, without any cables or complicated set-up. For the project, the company EBM Papst in Landshut sponsored over 40 Cardboard headsets!

Design and Manufacturing Methods

contact: see projects

3D printing has taken an important place in medical technology, as it enables complex anatomical structures and components for medical devices to be produced quickly and precisely. One example of the use of 3D printing in medical technology is the production of patient-specific orthoses that are perfectly adapted to the patient's individual anatomy. In addition, anatomical prints can help in the fitting of implants. The research group is also investigating other manufacturing processes, such as silicone casting, for the production of phantoms for the validation of image-guided navigation procedures.

3D prints for orthopedic surgery planning

contact: Prof. Dr. Norbert Babel, Prof. Dr. S. Remmele
Partner: Agatharied Hospital
Bachelorarbeit: Edith Gramotke

Currently, various methods are being investigated to preoperatively adapt implants to a patient's anatomy, either on virtual bone structures or by means of 3D-printed models.
In this project, a process was developed to generate patient-specific 3D prints on which implant adjustments can be made preoperatively. Preoperative planning can shorten surgical time by eliminating the need to optimize the implant shape multiple times during surgery. This means less trauma for the patient and, consequently, a faster recovery and a shorter hospital stay. Both the process for CT image segmentation and the one for 3D printing were optimized and evaluated in collaboration with the Agatharied Hospital. To date, several operations have already been planned this way; according to the doctors, this saves around 45 minutes of operating time. Edith Gramotke and the University of Applied Sciences Landshut were awarded the Innovation Prize of the Wissenschaftliche Gesellschaft für Krankenhaustechnik 2022 (more).
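The first step of such a pipeline, extracting a bone mask from the CT volume, often starts from a simple Hounsfield-unit threshold before refinement and mesh generation for printing. The sketch below illustrates only that initial step on a synthetic volume; the actual project workflow and its parameters are not described here.

```python
import numpy as np

def segment_bone(ct_volume, threshold_hu=300):
    """Threshold-based bone segmentation of a CT volume (values in HU).

    Cortical bone typically lies well above ~300 HU; real pipelines
    refine this rough mask before generating a printable surface mesh.
    """
    return ct_volume >= threshold_hu

def segmented_volume_ml(mask, voxel_spacing_mm):
    """Physical volume of a binary mask in millilitres (1 ml = 1000 mm^3)."""
    voxel_ml = np.prod(voxel_spacing_mm) / 1000.0
    return mask.sum() * voxel_ml

# Synthetic CT: air background (-1000 HU) with a 6x6x6 "bone" cube (800 HU)
ct = np.full((10, 10, 10), -1000.0)
ct[2:8, 2:8, 2:8] = 800.0
mask = segment_bone(ct)
volume_ml = segmented_volume_ml(mask, (1.0, 1.0, 1.0))
```

From such a mask, a surface mesh (e.g. via marching cubes) is exported as STL and sent to the 3D printer.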