IPI: Image Processing and Interpretation

Welcome to IPI, the Image Processing and Interpretation Research Group of the TELIN department of the Faculty of Engineering and Architecture at Ghent University (Universiteit Gent).

Projects

IPI actively engages in collaborative research with other institutes and companies in the context of National and European projects. We are currently active in the following collaborative projects:


DistriMuSe

DistriMuSe will develop intelligent systems with a variety of distributed and unobtrusive sensors to monitor people, support human health, and improve safety in driving and traffic situations and in factory environments involving robots. The DistriMuSe project continues and extends the work started in the NextPerception project.

IPI investigates people and vehicle detection based on multi-sensor fusion, as well as multi-purpose continuous adaptive learning for AI models deployed in object recognition.
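Late fusion of detections from multiple sensors can be illustrated with a minimal sketch: detections from two sensors are matched greedily by intersection-over-union (IoU), matched boxes are averaged and their confidences combined, and unmatched detections are kept. All names, thresholds, and the confidence-combination rule below are illustrative assumptions, not the project's actual pipeline.

```python
# Hypothetical late-fusion sketch: merging bounding-box detections from
# two sensors (e.g. a camera and a radar) by greedy IoU matching.

def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) form."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def fuse_detections(dets_a, dets_b, iou_thr=0.5):
    """Merge two lists of (box, confidence): matched pairs are averaged
    and their confidences combined; unmatched detections are kept."""
    fused, used_b = [], set()
    for box_a, conf_a in dets_a:
        best_j, best_iou = None, iou_thr
        for j, (box_b, _) in enumerate(dets_b):
            if j not in used_b and iou(box_a, box_b) >= best_iou:
                best_j, best_iou = j, iou(box_a, box_b)
        if best_j is not None:
            box_b, conf_b = dets_b[best_j]
            used_b.add(best_j)
            merged = tuple((p + q) / 2 for p, q in zip(box_a, box_b))
            # "noisy-or" combination: both sensors agreeing raises confidence
            fused.append((merged, 1 - (1 - conf_a) * (1 - conf_b)))
        else:
            fused.append((box_a, conf_a))
    fused += [d for j, d in enumerate(dets_b) if j not in used_b]
    return fused
```

Real systems typically replace the greedy matching with Hungarian assignment and calibrate the combined confidences, but the structure is the same.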

Chips JU, 5/2024 – 4/2027

Vertiports

The goal of the Vertiports project, funded by EFRO GTI West-Vlaanderen, is to develop a functional airlift (including workflow and test flights) from Ostend to a wind farm, so that infrastructure inspection at sea can be carried out by unmanned aircraft and goods can be transported to and from the wind farms with drones instead of vessels.

The project focuses on the issues that must be resolved to establish an operational airlift: making drones fly further and longer (is hydrogen a viable energy source for drones?), making drones operate in all weather conditions (what works in rain and wind?), and doing all of this in a regulatory-compliant, safe, and efficient manner.

IPI's role in the project is to develop a hyperspectral payload and software workflow for measurements in the North Sea.

EFRO, 1/2024 - 12/2026

LivingLAPT

Future apt LIVING Lab for Autonomous Public Transport. LivingLAPT will deliver sustainable driverless shuttle and logistics services in various European cities by phasing out the need for safety drivers in shuttles and moving towards remote operators who oversee a number of services simultaneously.

IPI will evaluate safety near autonomous public transport. This applied research builds on our sensor fusion technology, whose ongoing development is co-financed by the Flanders AI research program.

EIT Urban Mobility, 1/2022 - 12/2023

FARAD2SORT

The goal of the Fast Deep Learning and Deployment for Products Sorting (FARAD2SORT) project is to realize a technological framework that helps engineers who have a general understanding of deep learning, but are not experts in it, to design, develop, and deploy deep-learning vision based on 2D images for applications requiring industrial object detection, object recognition, and surface defect / anomaly detection and classification. The FARAD2SORT results will build on existing open-source deep learning software by adding tools that make implementation easier, cheaper, more accurate, and more robust.

FM-ICON project, 10/2022 - 9/2024

MultipLICITY

The Multiple Lasers and Integrated Cameras for Increasing Trustworthy Yields (MultipLICITY) project aims to address the challenges of 3D printing, which is used to create a variety of products, from car parts to medical implants and custom-made tools. Unfortunately, a sizable percentage of these components still show defects, caused by insufficiently sophisticated quality monitoring of the printing process. The MultipLICITY project aims to improve the quality of 3D printed products, reduce waste, and save energy.

ICON project, 9/2022 - 8/2024

SafeNav

The SafeNav maritime safety project promises a path towards safer and more secure navigation, for the navigator on the bridge today and, in a next step, for remote-operated and autonomous shipping. One key to boosting maritime safety is accurate and efficient detection and tracking of vessels, floating objects, and marine mammals, in order to avoid navigational hazards such as collisions and subsequent damage to ships, crews, and the marine environment.

In SafeNav, IPI will use innovative sensor setups, including cameras, to address the main challenge: improving detection performance in difficult conditions such as distant or semi-submerged marine animals or containers, wave crests and sun glitter, and poor weather.

EU-HORIZON, 9/2022 - 8/2025

SeaDetect

The SeaDetect project, part of Europe's LIFE initiative, aims to halt the biodiversity loss caused by collisions between ships and cetaceans by developing and implementing new technologies. To considerably reduce this risk of collision and protect marine life and biodiversity, the SeaDetect project aims to develop two innovative, complementary systems. The first is a ship-borne detection system composed of multiple highly sensitive sensors, whose data will be fused and processed with artificial intelligence to detect cetaceans up to 1 km away. The second is a network of Passive Acoustic Monitoring (PAM) buoys that will detect and triangulate the position of cetaceans in real time, in order to prevent collisions for all vessels on common maritime routes.

IPI's role in the project is to develop novel detection algorithms based on raw data fusion to improve the detection capabilities of the on-board systems.

EU-LIFE, 9/2022 - 8/2026

BoB

Building Information Modelling (BIM) allows making elaborate, information-rich models of building designs. However, there is no easy way yet to couple these models to what is actually happening on-site during construction. The BoB project aims to create a two-way link between BIM models and the actual building, improving building efficiency and avoiding costly errors.

IPI researches camera networks and cross-modal 3D matching to link on-site camera images with 4D BIM models to estimate construction progress in highly challenging environments.

ICON project, 1/2022 – 3/2024

HoloWrist

Holographic Skeletons for Wrist Surgery: The goal of HoloWrist is to use augmented reality to accurately show the location of a patient's wrist bones during surgery. This will be done by creating a hologram of the patient's wrist bones (e.g. from their CT scan) and using augmented reality headsets (e.g. Microsoft's HoloLens) to display the hologram on the patient's wrist.

IPI's focus is on motion tracking to ensure the hologram is aligned to the patient throughout the surgery.

FWO, 1/2022 – 12/2024

RELAI

The Realtime AI for Industrial Applications (RELAI) project aims to develop techniques for creating smart systems on single or multiple edge devices, without requiring extensive AI or edge-programming knowledge from the developer. The project focuses on GPU (graphics processing unit) and FPGA (field-programmable gate array) devices, which run AI models that derive real-time solutions from sensor data such as cameras, 3D point clouds, and virtual sensors. Particular attention goes to minimal data-transfer delay (latency) and energy-efficient computation.

AI-ICON project, 4/2022 - 3/2024

VISION2REUSE

The goal of VISION2REUSE is to demonstrate the potential of smart cameras for automatic quality monitoring of reusable packaging in the food and packaging industry. Based on these camera technologies and state-of-the-art machine learning, the project will measure accurately and quickly whether the packaging material in question is still suitable for a new reuse cycle or should go to a dedicated end-of-life stream (e.g. recycling).

REACT-EU EFRO, 1/2022 - 12/2023

ANALYST PV

IPI researches integrated sensors and data-analysis tools for fault detection in photovoltaic plants. See the project results.

ICON project, 10/2019 – 9/2022

CHARAMBA

CHARAMBA aims to solve the problem of costly and labour-intensive sampling and chemical analysis of complex material/waste streams.

EIT RawMaterials, 1/2020 - 12/2021

Press release on the project results

COSMO

In the COSMO project, researchers are designing, developing, and evaluating effective, personalized, and scalable AR and VR technologies for support and training in manufacturing operations.

ICON project, 5/2019 - 4/2021

Comp4Drones

Framework of Key Enabling Technologies for Safe and Autonomous Drones

IPI contributes to a novel framework of key enabling technologies for safe and autonomous drones. We focus on hyperspectral imaging from manually flown drones for inspection of offshore turbine structures to detect imperfections such as corrosion or paint deterioration.

ECSEL JU, 10/2019 – 1/2023

EXIST

Demand for image sensors in automobiles, the Internet of Things, medicine or security and surveillance applications is challenging businesses to improve the performance of their integrated systems. EXIST is developing breakthrough image sensors that will expand the functionality of future vision systems.

ECSEL JU, 5/2015 - 12/2018

NextPerception

NextPerception aims to develop next-generation smart perception sensors and enhance the distributed intelligence paradigm to build versatile, secure, reliable, and proactive human monitoring solutions for the health, wellbeing, and automotive domains.

IPI investigates cooperative sensor fusion in support of safety and comfort at road intersections, especially for vulnerable road users such as pedestrians and cyclists.

ECSEL JU, 5/2020 – 7/2023

PANORAMA

PANORAMA will deliver solutions for applications in medical imaging, broadcasting systems and security & surveillance, all of which face similar challenging issues in the real time handling and processing of large volumes of image data.

ENIAC JU, 4/2012 - 12/2015

SONOPA

The SONOPA project - SOcial Networks for Older adults to Promote an Active Life - employs technologies to develop an end-to-end solution for stimulating and supporting activities at home.

EU, 5/2013 - 4/2016

Vision-in-the-loop

The VIL project aims to improve print quality, reduce waste, and cut the cost of additive metal manufacturing.

ICON project, 4/2020 - 3/2022

MoCCha-CT

This project develops tools for X-ray imaging of dynamic processes in materials, with the goal of helping materials scientists to develop better and more sustainable materials.

IPI's focus is on developing algorithms capable of handling large 3D datasets, including research on GPU processing, region-of-interest (ROI) CT reconstruction, and efficient optimization algorithms, with emphasis on very large data volumes.
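The core operation behind CT reconstruction can be illustrated with a toy parallel-beam projector and unfiltered backprojector in NumPy. This nearest-neighbor sketch is purely illustrative, and far simpler than the GPU-scale, ROI-aware algorithms the project targets.

```python
import numpy as np

def project(image, angles):
    """Toy parallel-beam forward projection of a square image:
    for each angle, pixel values are accumulated onto a 1D detector
    via nearest-neighbor resampling (sketch only, not production code)."""
    n = image.shape[0]
    c = (n - 1) / 2
    ys, xs = np.mgrid[0:n, 0:n] - c
    sino = []
    for theta in angles:
        # detector bin hit by each pixel at this view angle
        t = xs * np.cos(theta) + ys * np.sin(theta) + c
        idx = np.clip(np.round(t).astype(int), 0, n - 1)
        proj = np.zeros(n)
        np.add.at(proj, idx, image)  # unbuffered scatter-add
        sino.append(proj)
    return np.array(sino)

def backproject(sinogram, angles):
    """Unfiltered backprojection: smear each 1D projection back
    across the image along its view direction and average."""
    n = sinogram.shape[1]
    c = (n - 1) / 2
    ys, xs = np.mgrid[0:n, 0:n] - c
    recon = np.zeros((n, n))
    for proj, theta in zip(sinogram, angles):
        t = xs * np.cos(theta) + ys * np.sin(theta) + c
        idx = np.clip(np.round(t).astype(int), 0, n - 1)
        recon += proj[idx]
    return recon / len(angles)
```

Projecting a single bright pixel and backprojecting the resulting sinogram yields a blurred spot peaking at the original location; practical pipelines add ramp filtering (FBP) or iterative optimization on top of these two operators.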

FWO-SBO, 1/2018 – 12/2022

Past ICON projects

iPlay: Interactive sports platform for comprehensive, 3D monitoring of athletes, 2017-2019

ARIA: Augmented Reality for Industrial Maintenance Procedures, 2016-2018

BAHAMAS: A toolset for the processing, compression and analysis of big data in life and materials science applications, 2015-2016

HD2R: Creating Images with Higher Dynamic Range and Richer Colors for Cinemas and Living Rooms, 2015-2017

wE-MOVE: Games to improve children’s rehabilitation and fitness level, 2015-2016

GIPA: Laying the foundation for a generic, state-of-the-art augmented reality (AR) platform, 2014-2015

FIAT: Unlocking the potential of ‘functional imaging’ to quantify tumor response to treatment earlier and more accurately, 2014-2015

Past National Projects

AVCON, 2018-2019

EMDAS (Autonomous vehicles), 2015-2016

3DLicorneA, Brussels Institute for Research and Innovation, 2015-2018

Flanders AI

Groundbreaking artificial intelligence research enabling a meaningful impact on people, industry, and society. IPI researches real-time and power-efficient AI at the edge for various applications.

National project (Flemish EWI), 7/2019 – present

ACHIEVE

Researchers in ACHIEVE are designing highly integrated hardware/software components for the implementation of ultra-efficient embedded vision systems as the basis for innovative distributed vision applications.

For IPI, the first goal of this project is to design algorithms for distributed multi-target tracking through a decentralized approach. The second goal is to improve object detection and tracking using a multi-sensor approach; thermal cameras have promising potential in surveillance applications, especially when combined with optical cameras. The third goal is to provide solutions for behaviour analysis and action recognition: the research will use high-level analysis to automatically determine which cameras observe the same or a similar action, such as pedestrians waiting to cross the street, with deep learning as a promising approach.
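The decentralized idea behind the first goal can be illustrated with a minimal average-consensus sketch: each camera node repeatedly averages its local estimate of a target's position with its neighbors', and the network converges to a common estimate without any central fusion node. The topology, step size, and node names below are hypothetical assumptions, not the project's actual algorithm.

```python
# Illustrative average-consensus sketch for decentralized estimation.
# Each node holds a local (x, y) estimate of the same target and only
# exchanges values with its direct neighbors.

def consensus(estimates, neighbors, rounds=50, eps=0.2):
    """estimates: {node: (x, y)} local position estimates.
    neighbors: {node: [adjacent nodes]} communication graph.
    eps must be small enough for the graph (< 1/max_degree)."""
    est = dict(estimates)
    for _ in range(rounds):
        new = {}
        for i, (x, y) in est.items():
            # move toward the neighbors' estimates (distributed averaging)
            dx = sum(est[j][0] - x for j in neighbors[i])
            dy = sum(est[j][1] - y for j in neighbors[i])
            new[i] = (x + eps * dx, y + eps * dy)
        est = new
    return est
```

On a connected graph this iteration preserves the network-wide mean and drives every node to it, which is why it is a common building block for decentralized tracking; real trackers combine such consensus steps with per-node Kalman or information filters.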

H2020-MSCA-ITN, 10/2017 – 9/2021

News

1) Edge/cloud deep learning systems for industrial applications; 2) Real-time 3D view synthesis using Neural Radiance Field models

From the IEEE Signal Processing Society, for outstanding editorial board service for the IEEE Transactions on Image Processing

Topic: Hyperspectral data processing and sensor fusion for scene understanding

Listen to them discuss sensor fusion, sensors, weather and environmental conditions, and difficult corner cases

Thermal imaging to the rescue, radar advancing at rapid pace and handling the complexity of automotive sensor fusion

Cooperative Sensor Fusion for Autonomous Driving

Contact