Information: Accepted papers of ICMVA 2024 (ACM proceedings, ISBN 979-8-4007-1655-3) are now online.

KEYNOTE SPEAKERS


Keynote Speaker I

Prof. Gunther Notni
Fraunhofer IOF, Jena, Germany

Speech Title: Applications of Multimodal 3D Sensor Technologies
Abstract:
Multimodal sensors capture and integrate diverse characteristics of a scene to maximize information gain. Here, multimodal imaging means combining 3D sensor data with image data from different wavelength ranges, such as colour (RGB), NIR and SWIR (short-wave infrared) images, thermographic images, and polarization and multispectral image data.
Combining multimodal camera data with shape data from 3D sensors is a challenging task. Multimodal cameras, e.g., hyperspectral cameras, or cameras operating outside the visible spectrum, such as thermal or short-wave infrared cameras, differ in resolution, sensitivity and image quality.
The presentation will examine the various aspects that need to be considered when setting up such systems. These include in particular the calibration of multi-camera/multi-sensor systems and the pixel-accurate superposition of multimodal image data onto 3D data.
It will be discussed how multimodal 3D imaging is used in various areas, such as medical technology for the estimation of vital parameters, trust-based human-robot collaboration, and material recognition for recycling and forestry. Furthermore, it will be shown how this technology and the use of thermal cameras enable the 3D scanning of surfaces that are optically uncooperative in the VIS and NIR spectral range, e.g., transparent or specular materials, without object preparation.

Biodata: Gunther Notni studied physics at the Friedrich Schiller University in Jena from 1983 to 1988 and obtained his doctorate there in 1992. He has worked at the Fraunhofer Institute for Applied Optics and Precision Engineering IOF in Jena since 1992, where he headed the Optical Systems department from 1994 to 2020 and has been responsible for the Optical Sensors & Metrology business unit on the Fraunhofer IOF Board of Directors since 2020. In October 2014, he was also appointed to a W3 professorship at TU Ilmenau, where he heads the 'Quality Assurance and Industrial Image Processing' department. His work focuses on the development of optical 3D sensors and the principles of multimodal and multispectral image processing, with applications in human-machine interaction, quality assurance, medicine and forestry.

Keynote Speaker II

Prof. Jun Zhou
Griffith University, Australia

Speech Title: Hyperspectral Computer Vision and Its Applications
Abstract:
Hyperspectral imagery contains rich information on the spectral and spatial distribution of object materials in a scene. Traditional hyperspectral remote sensing methods mainly focus on pixel-level spectral analysis. In contrast, computer vision has exploited colour, texture, and various spatial and structural features of objects, but not spectral information. It is necessary to bridge the gap between spectral and spatial analysis to develop new tools for effective image analysis and understanding. This talk gives an overview of hyperspectral imaging technology and material-based spectral-spatial analysis techniques, and shows how they can be used to address challenges in computer vision tasks. Several case studies in object detection, image classification and video tracking will be presented.

Biodata: Jun Zhou is a Professor and Deputy Head of School for the School of ICT at Griffith University, Australia. He received his Ph.D. degree from the University of Alberta, Canada, in 2006. Before joining Griffith University, he held research positions at the Australian National University and NICTA. His research interests include pattern recognition, hyperspectral imaging and computer vision, with applications to remote sensing, environment, agriculture and medicine. He was awarded the Australian Research Council Discovery Early Career Researcher Award in 2012. Prof. Zhou has published more than 300 papers in leading image processing, computer vision and remote sensing journals and conferences. He is an associate editor of five international journals, including IEEE TGRS and Pattern Recognition, and is the President of the Australian Pattern Recognition Society.

Keynote Speaker III

Dr. Auxi Padron
Instituto de Astrofísica de Canarias (IAC), Spain
(On behalf of Prof. Jeff Kuhn, University of Hawaii, USA)

Speech Title: Revolutionizing the Next Generation of Ground-Based Telescopes with Machine Learning
Abstract:
Telescopes are our windows to the universe, and ensuring that they capture clear, sharp images is essential for exploring distant worlds. At the Laboratory for Innovation in Opto-Mechanics (LIOM) of the Instituto de Astrofísica de Canarias, we are developing the Small ExoLife Finder (SELF), a 3.5-meter prototype telescope. SELF serves as a testbed for innovative technologies that will eventually be implemented in the future ExoLife Finder (ELF) — a planned 30-meter-class telescope designed to detect signs of life on exoplanets.

A major innovation in SELF is the application of machine learning (ML) to enable autonomous telescope alignment. We generate synthetic datasets from a detailed optical model of the telescope and use these to train deep neural networks. These networks learn to precisely infer the positions of the mirror actuators from telescope images, effectively teaching the system how to adjust the mirrors for optimal image resolution. This method not only helps us determine the ideal number and arrangement of actuators, but also allows us to test whether a neural network trained on simulated data can successfully align the telescope under real-world conditions.

In addition, ML plays a key role in correcting distortions caused by Earth’s turbulent atmosphere. One strategy uses real-time images from the telescope to provide feedback to fast-moving secondary mirrors for coarse corrections. A complementary approach for fine corrections employs integrated photonic devices — such as photonic lanterns and multimode fibers — which act as sensors to capture the complex effects of atmospheric aberrations. Advanced ML algorithms are used to decode the nonlinear behavior of these devices in real time, enabling precise and rapid corrections.

We are also applying neural networks to reconstruct surface maps of exoplanets. By learning to recognize distinct reflective features from simulated data, our models aim to identify biosignatures as well as features like vegetation, deserts, and ice caps.

Together, these advances will enable us to build larger and more powerful optical telescopes, opening new pathways for discovering and studying distant planets in our quest to find signs of life.

Biodata: Dr. Auxiliadora Padrón Brito holds a Bachelor's degree in Physics (2012) and a Master's in Astrophysics (2015) from the University of La Laguna, and a PhD in Photonics (2021) from the Institute of Photonic Sciences (ICFO), where she also worked as a postdoctoral researcher. Her research focused on quantum optics experiments with cold Rydberg atoms for quantum communication protocols. She is currently a postdoctoral researcher at the Instituto de Astrofísica de Canarias (IAC), where she works on integrated photonics for cophasing and wavefront sensing of the Small ExoLife Finder (SELF). Her work also involves collaborating with machine learning experts to interpret the behavior of photonic systems. Auxiliadora is passionate about making science more accessible and inclusive. She has organized outreach talks, developed interactive learning experiences, and helped coordinate a summer school on quantum communication. She is actively involved in content creation and communication strategy at the Laboratory for Innovation in Optomechanics (LIOM), and also holds a Master’s in Teacher Training.

  • © 2025 8th ICMVA - International Conference on Machine Vision and Applications