PhD Studentships available!
We have two PhD studentships available (application deadline 31st January 2022).
- Human Activity Analysis In Smart Environments
- Area of Research: Assisted Living, Machine Learning, Artificial Intelligence
- Aim and Objectives: Analysing human activity has a wide variety of applications, including but not limited to human-robot interaction, security and assisted living. Although limited by some technological, economic, ethical and privacy constraints, the impact of these applications is no longer a promise for the future but a reality. Chairs and glasses fitted with simple sensors such as fall detectors are already being used to monitor older people’s safety and promote healthy independent living. Autonomous cars and robots analyse pedestrian behaviour to assess risk and comfort. The main goal of the project will be to analyse behaviour using vision sensors, which allow for non-invasive analysis. When considering vision-based data acquisition methods, non-invasiveness, accuracy and privacy form a triangle in which one corner is sacrificed: methods based on markers or wearables are accurate and gather only the necessary data, but are invasive, whereas non-invasive methods are either of limited accuracy or prone to generate serious privacy concerns. Processing the acquired data has similar limitations in terms of accuracy, privacy and response time. The aim of the PhD will be to remove, or mitigate as far as possible, these limitations. The student will design and develop models for data acquisition and activity analysis, and evaluate how these models perform in terms of accuracy, response time, privacy and non-invasiveness.
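As a purely illustrative sketch of what non-invasive, marker-less data acquisition from a vision sensor can look like, the short Python example below extracts body landmarks from a single camera frame using the publicly available MediaPipe Pose library; the library choice and the placeholder image path are assumptions for illustration and do not prescribe the methods the student will use.

```python
# Hypothetical sketch of marker-less (non-invasive) vision-based data acquisition:
# extracting body landmarks from a single image with MediaPipe Pose.
# The library choice and the image path are illustrative assumptions only.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

# "frame.jpg" is a placeholder path for one frame from a camera in the environment.
image = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)

with mp_pose.Pose(static_image_mode=True) as pose:
    results = pose.process(image)

if results.pose_landmarks:
    # Each landmark is a normalised (x, y, z) position plus a visibility score;
    # sequences of such skeletons, rather than raw video, could feed an
    # activity-analysis model while limiting the personal data that is stored.
    for idx, lm in enumerate(results.pose_landmarks.landmark):
        print(idx, round(lm.x, 3), round(lm.y, 3), round(lm.visibility, 3))
```

Working with extracted skeletons rather than raw video is one common way to trade some accuracy for reduced invasiveness and better privacy.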
- Self-supervised Monocular Depth Estimation
- Area of Research: Computer Vision
- Project Summary, Aim and Objectives: This PhD project is a collaboration between Aston University and Aurrigo Ltd, a global leader in autonomous vehicle technology and a manufacturer of autonomous passenger transportation systems. A detailed understanding of the 3D structure of the environment and of the motion of dynamic objects is essential for autonomous navigation. This is usually achieved through a mixture of sensors, such as LiDAR, RADAR and cameras, located around the vehicle. Recently there has been a major drive by some of the major Autonomous Vehicle (AV) manufacturers to remove their reliance on LiDAR sensors; the reasons cited include their large size and high cost, as well as a lack of flexibility arising from the need for pre-mapped environments for LiDAR localisation to work. In the last two years, several deep learning systems have been proposed that automatically convert a simple camera feed into a depth-map feed. However, despite promising results, these systems are not yet drop-in replacements for LiDAR because there is a significant gap in accuracy and reliability.
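As a rough illustration of the camera-to-depth conversion described above, the Python sketch below runs a publicly available pretrained monocular depth network (MiDaS, loaded via torch.hub) on a single camera frame; the model choice and the placeholder image path are assumptions for illustration and are not part of the project specification.

```python
# Minimal sketch (not part of the project specification): running a publicly
# available pretrained monocular depth model (MiDaS, via torch.hub) on a
# single RGB frame to obtain a dense depth-map-like output.
import cv2
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the small MiDaS variant and its matching input transforms.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small").to(device).eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.small_transform

# "frame.jpg" is a placeholder path for a single camera frame.
img = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)

with torch.no_grad():
    batch = transform(img).to(device)   # shape: (1, 3, H', W')
    prediction = midas(batch)           # relative inverse depth
    # Resize the prediction back to the original image resolution.
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze().cpu().numpy()

print(depth.shape, depth.min(), depth.max())
```

Note that the output of such a network is relative inverse depth, which illustrates part of the gap to LiDAR: metric scale and per-pixel reliability still have to be recovered.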
Our goal is to develop a monocular depth estimation technology that convincingly competes with state-of-the-art supervised methods of depth estimation as well as with expensive sensor equipment (LiDAR). To achieve this, we will leverage Aurrigo’s experience with autonomous vehicle perception systems as well as Aston’s leading research in 3D reconstruction from visual data. Alongside this main goal, the PhD student will also make contributions towards the following secondary goals:
- estimating the motion of dynamic objects in an image as well as the motion of the camera itself; the analysis will focus mainly on self-driving vehicles and on developing a camera-based method of perception and understanding.
- exploring the developing area of self-supervised learning to enable more efficient learning of scenes.
- using monocular depth estimates for scene reconstruction and scene understanding.
- making a detailed comparison between human perception and camera perception.