Ron Heiman

Research interests:

Machine Learning/Deep Learning

Computer Vision

Medical Imaging

Signal Processing



Video-based gait phase estimation

Gait analysis is an important aspect of the clinical assessment of certain neurological diseases, such as hemiparesis, Parkinson's disease, or multiple sclerosis (MS). The pathology of these diseases typically manifests as a characteristic gait disorder. The measurement of predefined gait parameters, e.g. certain ankle, knee, and hip angles, can provide an estimate of the progression of the disease.
Nowadays, patients with gait anomalies need to visit a gait laboratory from time to time to have their gait assessed with the help of special (usually marker-based) software tools and human experts (doctors and physiotherapists) who are available only in the laboratory. Moreover, the assessment of the relevant gait parameters is usually carried out manually by analyzing the video streams, which is a time-consuming task.

The human gait can be divided into several gait phases, the most important of which are initial contact, loading response, and terminal stance.
Within the scope of this project, we aim for an automatic, marker-free, video-based gait analysis system that works under real-world conditions. The system should be capable of detecting the relevant gait phases and deriving the crucial gait parameters from them.
We plan to use machine learning/deep learning methods to extract the relevant features from the videos. Such methods have been shown to outperform classical approaches in the relevant subareas of time-series analysis and classification, object detection, and pose estimation.
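As a minimal illustration of how gait parameters can follow from a pose estimator's output, the sketch below computes a joint angle (e.g. the knee angle from hip, knee, and ankle positions) out of 2D keypoints. The keypoint coordinates are hypothetical placeholders, not output from our actual pipeline.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (in degrees) at joint b formed by points a-b-c,
    e.g. the knee angle from hip, knee, and ankle keypoints."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical (x, y) keypoints from a pose estimator for one video frame:
hip, knee, ankle = (0.0, 1.0), (0.0, 0.5), (0.2, 0.1)
print(round(joint_angle(hip, knee, ankle), 1))  # ≈ 153.4, i.e. a slightly bent knee
```

Tracking this angle over the frames of a video yields a time series whose extrema and slopes can then be related to the gait phases.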


Cell Segmentation from Microscopy Images of Growing Microbial Populations


In malignant tumors and microbial infections, cells commonly grow under confinement due to rapid proliferation in limited space. Nonetheless, this effect is poorly documented, despite its influence on drug efficacy. Budding yeast grown in space-limited microenvironments is a great model system for investigating this effect, provided that a robust cell instance segmentation is available. Due to the confinement, cells become densely packed, which impairs traditional segmentation methods. In this research project, we therefore aim to develop a deep-learning-based cell segmentation algorithm to tackle this problem.
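To illustrate why dense packing is the hard part, the sketch below implements the simplest classical instance-labeling baseline, connected-component labeling of a binary foreground mask (pure NumPy, illustrative only, not our method): it assigns one label per isolated blob and therefore merges touching cells into a single instance, which is exactly the failure mode a learned segmentation model must overcome.

```python
import numpy as np

def label_components(mask):
    """Label 4-connected foreground components in a binary mask.
    Touching cells end up in the same component -- the classical
    baseline's weakness on densely packed colonies."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue  # pixel already belongs to a component
        count += 1
        stack = [start]
        while stack:  # flood fill from the seed pixel
            r, c = stack.pop()
            if (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]
                    and mask[r, c] and not labels[r, c]):
                labels[r, c] = count
                stack += [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return labels, count

# Toy binary mask with three isolated "cells":
mask = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 1, 0, 1]])
labels, n = label_components(mask)
print(n)  # 3
```

As soon as two of these blobs touch, the count drops to one instance, whereas a deep model can learn to separate them from appearance cues.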

Gait improvement based on an intelligent orthosis

Children with cerebral palsy (CP) suffer from movement disorders, e.g. a deviant and harmful gait. Video-based gait analysis combined with an orthosis worn in everyday life is one approach to improving a child's gait. However, the child might fall back into their habitual gait when there is no corrective feedback. Therefore, we equip the orthosis with different sensors (e.g. pressure, bending, and acceleration) and analyze the data to differentiate normal from disordered gait. The analysis software is based on machine learning: the gait data are acquired in the lab and used to train a neural network. Finally, an app collects the sensor data, and the classification can be performed directly on the smartphone, enabling direct feedback to the patient.
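A minimal sketch of the classification step, with synthetic random windows standing in for the lab recordings and a nearest-centroid classifier in place of the actual neural network (all values and distributions here are illustrative assumptions, not measured sensor data):

```python
import numpy as np

rng = np.random.default_rng(0)

def window_features(window):
    """Mean and standard deviation per sensor channel for one time window."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

# Synthetic stand-ins for lab recordings: (time steps, channels) windows of
# pressure/bending/acceleration signals for the two gait classes.
normal = [rng.normal(0.0, 1.0, (50, 3)) for _ in range(20)]
disordered = [rng.normal(1.5, 2.0, (50, 3)) for _ in range(20)]

X = np.array([window_features(w) for w in normal + disordered])
y = np.array([0] * 20 + [1] * 20)

# "Training": one prototype feature vector (centroid) per class.
centroids = np.array([X[y == k].mean(axis=0) for k in (0, 1)])

def classify(window):
    """Assign a new sensor window to the class with the nearest centroid."""
    f = window_features(window)
    return int(np.argmin(np.linalg.norm(centroids - f, axis=1)))

print(classify(rng.normal(1.5, 2.0, (50, 3))))  # expected: 1 (disordered)
```

The same windowing-plus-features structure carries over when the centroid rule is replaced by a trained neural network running on the smartphone.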

Automatic solutions for production line analysis

In cooperation with our industrial partner STIHL, we are developing an image-based algorithm that automatically and reliably detects the number and type of objects on a production line at any given point in time.

Ultra low-dose tumor tracking based on single photon counting detectors

Respiratory motion is one of the major challenges in radiation therapy of tumors in the lung, liver, or upper abdomen. Tracking the tumor during irradiation with the conventional CBCT unit mounted on the linac gantry would require a high dose, which makes this approach unfeasible. The goal of this project is to use an ultra-low-dose (ULD) source and a detector based on single-photon-counting technology to allow for continuous imaging during irradiation. From a 4D-CT of the patient, several breathing phases can be defined and serve as references. A maximum likelihood approach is used to differentiate between the breathing phases. To potentially improve the robustness of this estimation, an image-feature-based approach is currently being investigated.
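Under a simple i.i.d. Gaussian noise assumption, the maximum likelihood phase is the reference with the smallest sum of squared differences to the observed ULD frame. A minimal sketch of this idea, with tiny random arrays standing in for the 4D-CT reference images (the noise model and image sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the phase references derived from a 4D-CT:
# one reference image per breathing phase.
references = [rng.random((8, 8)) for _ in range(5)]

def ml_phase(frame, refs):
    """Maximum-likelihood phase under an i.i.d. Gaussian noise model.
    Up to a constant, the log-likelihood of the frame given phase k is
    -||frame - refs[k]||^2 / (2 sigma^2), so the ML estimate is the
    phase with the smallest sum of squared differences."""
    sq_err = [np.sum((frame - r) ** 2) for r in refs]
    return int(np.argmin(sq_err))

# A noisy low-dose observation of breathing phase 3:
noisy_frame = references[3] + rng.normal(0.0, 0.05, (8, 8))
print(ml_phase(noisy_frame, references))  # recovers phase 3
```

An image-feature-based variant would replace the raw pixel differences with distances between feature descriptors, which is one way to make the estimate more robust at very low dose.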