Autonomous Vehicles and Sensor Fusion

Topics

- Cooperative fusion for tracking of pedestrians from a moving vehicle
- Radar-video sensor fusion
- LIDAR-radar-video sensor fusion
- Radar pedestrian detection using deep learning
- Multimodal vision
- 2D multi-modal video fusion for wide-angle environment perception
- Visible-thermal video enhancement for detection of road users
- Automotive High Dynamic Range (HDR) imaging
- Classic multi-exposure HDR reconstruction
- Intelligent HDR tone mapping for traffic applications
- Learning-based HDR video reconstruction and tone mapping
- Efficient multi-sensor data annotation tool
- Point-cloud processing
- Fast low-level point-cloud processing
- Point-cloud based object detection and tracking
- Environment mapping and odometry
- Liborg - LIDAR-based mapping
- LIDAR-based odometry
- Monocular visual odometry
- Automotive occupancy mapping
- Obstacle detection based on 3D imaging
- Real-time sensor data processing for autonomous vehicles using Quasar - demo video

Cooperative fusion for tracking of pedestrians from a moving vehicle

Autonomous vehicles need to be able to detect other road users and roadside hazards at all times and in all conditions.
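The idea behind detection-level sensor fusion can be illustrated with a minimal sketch (our own illustration, not IPI's published method): when several sensors independently report a confidence that the same candidate is a pedestrian, the confidences can be combined by summing log-odds relative to a common prior. The function name `fuse_confidences` and the naive independence assumption are ours.

```python
import math

def fuse_confidences(p_sensors, prior=0.5):
    """Fuse per-sensor detection probabilities for one candidate.

    Assumes sensors are conditionally independent; each sensor's
    evidence is its log-odds shift away from the shared prior.
    """
    def logit(p):
        return math.log(p / (1.0 - p))

    total = logit(prior) + sum(logit(p) - logit(prior) for p in p_sensors)
    return 1.0 / (1.0 + math.exp(-total))  # back to a probability
```

Two agreeing sensors reinforce each other (0.8 and 0.7 fuse to about 0.90), while a disagreeing sensor pulls the fused confidence back toward the prior. Real cooperative fusion additionally has to handle association, calibration, and correlated failure modes, which this sketch ignores.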

Image and video quality enhancement

Topics

- High Dynamic Range imaging
- Denoising of time-of-flight depth images and sequences
- Multicamera image fusion
- Non-local image reconstruction
- Multiframe super-resolution
- Demosaicing
- Error concealment
- Wavelet-based denoising of images
- Non-local means denoising of images
- Restoration of historical videos
- Joint removal of blocking artifacts and resolution enhancement

High Dynamic Range Imaging

Conventional display and image-capture technology is limited to a narrow range of luminances, and images associated with such systems have retroactively been called Low Dynamic Range (LDR) images. A High Dynamic Range (HDR) image, on the other hand, encodes a greater range of brightness and luminosity than a reference LDR image. HDR images find use in near-future, more capable display and camera systems: they match the visual impression of a scene to human vision more faithfully than LDR images can. Recent developments in so-called HDR televisions have yielded prototypes that can display peak luminances in the range of 6000 nits.
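Classic multi-exposure HDR reconstruction, one of the topics listed above, can be sketched as follows (a minimal illustration under simplifying assumptions, not IPI's implementation): given several LDR frames of the same scene taken with different exposure times and a linear camera response, each pixel's radiance is estimated as a weighted average of `pixel_value / exposure_time`, with a hat-shaped weight that discounts under- and over-exposed samples. The function name `merge_exposures` is ours.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge differently exposed LDR frames into one HDR radiance map.

    Assumes a linear camera response with pixel values in [0, 255].
    Each pixel's radiance estimate is a weighted average of
    (normalised value / exposure time); the hat weight favours
    well-exposed samples and suppresses clipped ones.
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        z = img.astype(np.float64) / 255.0       # normalise to [0, 1]
        w = 1.0 - np.abs(2.0 * z - 1.0)          # hat weighting: peak at mid-grey
        acc += w * z / t
        wsum += w
    return acc / np.maximum(wsum, 1e-8)          # avoid division by zero
```

A bright region that saturates in the long exposure still gets a valid estimate from the short exposure, which is exactly how the reconstructed map exceeds the dynamic range of any single LDR frame. Practical pipelines (e.g. Debevec-style merging) additionally estimate the camera response curve instead of assuming linearity.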

Intelligent surveillance and sensor networks

Topics

- Networked sensors
- Sensor networks and methods for wellness monitoring of the elderly
- Collaborative tracking in smart camera networks
- Distributed camera networks
- Multi-camera networks
- 3D reconstruction using multiple cameras
- Real-time video mosaicking
- Scene and human behaviour analysis
- Foreground/background segmentation for dynamic camera viewpoints
- Foreground/background segmentation
- Automatic analysis of the worker's behaviour
- Gesture recognition
- Behaviour analysis
- Immersive communication by means of computer vision (iCocoon)
- Material analysis using image processing

Sensor networks and methods for wellness monitoring of the elderly

Addressing the challenges of a rapidly ageing population has become a priority for many Western countries. Our aim is to relieve the pressure on nursing homes' limited capacity by developing an affordable, round-the-clock monitoring solution that can be used in assisted living facilities. This intelligent solution empowers older people to live (semi-)autonomously for a longer period of time by alerting their caregivers when assistance is required.

Real-Time Industrial Inspection

Topics

- Real-time monitoring in Additive Manufacturing

Real-time monitoring in Additive Manufacturing

In additive manufacturing, items are 3D-printed layer by layer using materials like plastics, polymers, and metals. Unfortunately, instabilities in the printing process can produce defects like cracks, warping, and pores/voids within the printed item. Our goal at IPI is to develop computer vision systems that identify the creation of these defects in real time, then provide the 3D printer with sufficient information to intervene in a way that corrects, or avoids, the defect. Since these defects can occur within a very short time (< 1 ms), our monitoring systems need to operate at very high speeds while also providing accurate results.

Melt pool monitoring: GPU-based real-time detection of pore defects using dynamic features and machine learning

Using high-speed cameras and photodiodes (sampling rates > 20 kHz), we are exploring AI models that can highlight defect creation.
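The dynamic-feature idea behind such monitoring can be sketched in a few lines (a toy illustration, not IPI's GPU pipeline): slide a short window over the photodiode trace, extract simple dynamic features per window, and flag windows whose variability is abnormally high. Here a fixed threshold on the median variability stands in for the trained machine-learning model; the function names and the threshold are ours.

```python
import numpy as np

def window_features(signal, win=64):
    """Slide a non-overlapping window over a photodiode trace and
    extract simple dynamic features (mean intensity, short-term std)."""
    feats = []
    for start in range(0, len(signal) - win + 1, win):
        w = signal[start:start + win]
        feats.append((w.mean(), w.std()))
    return np.array(feats)

def flag_anomalies(feats, k=5.0):
    """Flag windows whose short-term variability exceeds k times the
    median variability (a crude stand-in for a trained classifier)."""
    std_col = feats[:, 1]
    return std_col > k * np.median(std_col)
```

At a 20 kHz sampling rate a 64-sample window spans about 3 ms, so in practice the windows would overlap and the features would be richer (spectral content, photodiode/camera cross-cues), but the structure - per-window features feeding a fast decision rule - is the same.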

IPI researcher Danilo Babin contributes to AI algorithm that detects erosions, ankylosis with high accuracy in patients with sacroiliitis

The work has been published in European Radiology and covered by the press

Intelligent super-fast camera prevents errors in 3D printing

IPI researcher Brian Booth discusses the icon Vision-in-the-Loop project results - KanaalZ video, press release, and news articles.

Join IPI on 19 and 20 May 2022 at two IOF-platform events within the UGent Industrial Liaison Network

iMatch (Image-based Material Characterization platform) and AM Platform (Additive Manufacturing platform)

IPI joins Additive Manufacturing R&D Day on 21 June 2022

The must-attend networking event if you need, offer, or are interested in R&D on Additive Manufacturing

Webinar: The Future of Solar Site Management

IPI researcher Michiel Vlaminck and partners showcase the results of the icon Analyst-PV project with a webinar and article

IPI Additive Manufacturing Research Highlighted in Recent YouTube Video

Demonstration of a real-time monitoring system for 3D metal printing based on AI and active learning

Online Talk: Cooperative sensor fusion for detection and tracking

Watch IPI researcher David Van Hamme talk about Cooperative Sensor Fusion research

IPI joins Industry Leaders in AI for Manufacturing Webinars

Brian Booth joined industry leaders earlier this year to speak on the use of AI in additive manufacturing workflows

Flanders AI

Groundbreaking artificial intelligence research enabling a meaningful impact on people, industry and society. IPI researches real-time and power-efficient AI at the edge for various applications. National project (Flemish EWI), 7/2019 – present


ACHIEVE

Researchers in ACHIEVE are designing highly integrated hardware-software components for the implementation of ultra-efficient embedded vision systems as the basis for innovative distributed vision applications. For IPI, the first goal of this project is to design algorithms for distributed multi-target tracking through a decentralized approach. The second goal is to improve object detection and tracking using a multi-sensor approach: thermal cameras have promising potential in surveillance applications, especially when combined with optical cameras. The third goal of the project is to provide solutions for behaviour analysis and action recognition. The research will use high-level analysis to automatically determine which cameras observe the same or similar action, such as pedestrians waiting to cross the street; deep learning is a promising approach here. H2020-MSCA-ITN, 10/2017 – 9/2021