Within the framework of the research project “Innovative systems and services for transport and production” (IDEX/I-SITE CAP 20-25, Challenge 2) and the LabEx IMobS3, and thanks to a FrenchTech chair program, a 3-year PhD grant is offered to highly motivated candidates interested in completing a PhD thesis on Neuromorphic Vision, Robotics, and Computational Neuroscience.
Autonomous learning in a neuromorphic vision system
This PhD is funded through the FrenchTech chair program. The candidate will prepare their PhD at the Université Clermont Auvergne (UCA) in Clermont-Ferrand, France, and will join the Image, Perception Systems and Robotics group of Institut Pascal, which has long-standing experience in computer vision and mobile robotics. The research will be conducted in the context of an ongoing collaboration between Institut Pascal and Prof. Jochen Triesch from the Frankfurt Institute for Advanced Studies (FIAS) in Frankfurt, Germany, who will stay at Institut Pascal for at least three months per year. The candidate will also have the opportunity to gain research experience at FIAS.
In this PhD project at the intersection of Neuromorphic Vision, Robotics, and Computational Neuroscience, we aim to explore the benefits of a new class of biologically inspired vision sensors for mobile robots. These sensors mimic the operation of the primate retina and represent the visual scene as asynchronous event streams that rapidly signal changes in luminance at every pixel. They offer several decisive advantages over traditional video cameras. First, they are drastically faster (equivalent to >10,000 frames per second), because pixels operate and communicate independently and asynchronously; this matters for any application where short reaction times are critical. Second, they transmit information with massively reduced bandwidth, because they mostly register and communicate luminance changes in the image. Third, they operate over a much greater dynamic range of light intensities (>120 dB). Fourth, they have substantially lower power consumption (<10 mW). These advantages could offer important benefits for mobile robots. However, because the data format (asynchronous streams of events rather than sequences of images) is fundamentally different, classic image and video processing techniques are either not applicable or do not exploit the unique advantages of these sensors. While first systems for stereo vision and object tracking with these sensors have been demonstrated (e.g. Everding & Conradt, 2018), this project focuses on active perception for mobile robots, where the robot moves through its environment and samples the scene with movable cameras. The goal is to develop an active vision system that can autonomously self-calibrate based on principles from information theory.
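To make the event-based data format concrete, the following minimal Python sketch simulates a batch of events and integrates them into a 2-D frame, one simple way to bridge event streams and frame-based processing. All names, dimensions, and the event layout are illustrative assumptions and are not tied to any particular sensor's API:

```python
import numpy as np

# Hypothetical event batch: each row is (x, y, timestamp_us, polarity).
# Real event cameras deliver such events asynchronously, pixel by pixel;
# here we simply simulate a sorted batch for illustration.
rng = np.random.default_rng(0)
n_events, width, height = 1000, 64, 48
events = np.column_stack([
    rng.integers(0, width, n_events),             # x coordinate
    rng.integers(0, height, n_events),            # y coordinate
    np.sort(rng.integers(0, 10_000, n_events)),   # microsecond timestamps
    rng.choice([-1, 1], n_events),                # polarity: luminance up/down
])

def accumulate(events, width, height):
    """Integrate signed events into a frame over a time window.

    This discards the fine temporal structure that makes event cameras
    fast, so it is only a bridge to frame-based algorithms, not a
    substitute for genuinely event-driven processing.
    """
    frame = np.zeros((height, width))
    np.add.at(frame, (events[:, 1], events[:, 0]), events[:, 3])
    return frame

frame = accumulate(events, width, height)
```

Genuinely event-driven pipelines instead process each event (or small packets of events) as it arrives, which is what preserves the latency and bandwidth advantages described above.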
Specifically, in recent years we have developed a theoretical framework called Active Efficient Coding (AEC) that describes how robots can autonomously learn such sensor movements (Zhao et al., 2012; Lonini et al., 2013; Teulière et al., 2015). Here we propose to build the world’s first event-based binocular vision system that self-calibrates pursuit and vergence eye movements using AEC. To this end, we will combine unsupervised learning of compact representations of the binocular event streams with reinforcement learning that generates the motor commands for the cameras so as to maximize coding efficiency. The system will be built with a pair of commercially available event-based cameras (e.g. Prophesee Onboard, https://www.prophesee.ai).
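As a toy illustration of the AEC intuition (not the system to be built in this PhD), the following Python sketch simulates a 1-D binocular input and a simple epsilon-greedy learner. The reward is the negative residual between the left and re-aligned right views, a crude stand-in for coding efficiency: when the vergence command cancels the disparity, the two views match and a shared code is maximally compact. The reward definition and all names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy world: a 1-D random texture; the right eye sees it shifted by a
# fixed (unknown to the learner) disparity.
texture = rng.normal(size=64)
true_disparity = 3
actions = np.arange(-5, 6)   # candidate vergence commands (pixel shifts)

def binocular_error(vergence):
    """Proxy for coding cost: residual between the two views. It is zero
    exactly when the vergence command cancels the disparity."""
    left = texture[10:40]
    right = np.roll(texture, true_disparity - vergence)[10:40]
    return np.mean((left - right) ** 2)

# Epsilon-greedy bandit: learn which vergence command maximizes coding
# efficiency (i.e. minimizes the residual). The full AEC framework
# instead learns a state-dependent policy jointly with an unsupervised
# sparse code of the binocular input.
q = np.zeros(len(actions))
counts = np.zeros(len(actions))
for step in range(500):
    a = rng.integers(len(actions)) if rng.random() < 0.1 else int(np.argmax(q))
    reward = -binocular_error(actions[a])
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]   # incremental mean update

best = int(actions[np.argmax(q)])
```

The point of the sketch is that no external teacher labels the correct vergence: the self-generated coding-efficiency signal alone is enough to calibrate the motor behavior, which is the core idea behind AEC.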
References:
Everding, L., & Conradt, J. (2018). Low-latency line tracking using event-based dynamic vision sensors. Frontiers in Neurorobotics, 12, 4.
Lonini, L., Forestier, S., Teulière, C., Zhao, Y., Shi, B. E., & Triesch, J. (2013). Robust active binocular vision through intrinsically motivated learning. Frontiers in Neurorobotics, 7, 20.
Teulière, C., Forestier, S., Lonini, L., Zhang, C., Zhao, Y., Shi, B., & Triesch, J. (2015). Self-calibrating smooth pursuit through active efficient coding. Robotics and Autonomous Systems, 71, 3-12.
Zhao, Y., Rothkopf, C. A., Triesch, J., & Shi, B. E. (2012, November). A unified model of the joint development of disparity selectivity and vergence control. In ICDL 2012 (pp. 1-6).
Supervisors:
- Prof. Jochen Triesch (Frankfurt Institute for Advanced Studies)
- Dr. Céline Teulière (Institut Pascal, UCA)
- Prof. Vincent Barra (LIMOS, UCA)
Research Group: Institut Pascal
University: Université Clermont Auvergne (UCA), Clermont-Ferrand, France
Contact: firstname.lastname@example.org
|Title||PhD position - Autonomous learning in a neuromorphic vision system|
|Job location||IMobS3 Laboratoire d'Excellence, Université Clermont Auvergne, 4 Avenue Blaise Pascal, TSA 60026, 63178 Aubière cedex, France|
|Published||April 12, 2019|
|Application deadline||Not specified|
|Job type||PhD|
|Research fields||Neuroscience, Programming languages, Robotics, Computational biology, Machine learning, Computer vision|