Abstract
In the past few years, mobile augmented reality (AR) has attracted a great deal of attention. AR presents a live, direct or indirect view of a real-world environment whose elements are augmented (or supplemented) by computer-generated sensory inputs such as sound, video, graphics or GPS data, and deep learning has the potential to improve the performance of current AR systems. In this paper, we propose a distributed mobile logo detection framework. Our system consists of mobile AR devices and a back-end server. The mobile AR devices capture real-time video and locally decide which frames should be sent to the back-end server for logo detection; the server then schedules all detection jobs so as to minimise the maximum latency. We implement our system on the Google Nexus 5 and a desktop with a wireless network interface. Evaluation results show that our system detects view-change activity with an accuracy of 95.7% and successfully processes 40 image-processing jobs before their deadlines.
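The abstract does not spell out the paper's scheduling algorithm, but "schedule all detection jobs to minimise the maximum latency" is the classic makespan-minimisation problem. A minimal sketch of one standard heuristic for it, longest-processing-time-first (LPT) assignment across parallel workers; the function name, job times, and worker count here are illustrative assumptions, not the authors' implementation:

```python
import heapq

def lpt_schedule(job_times, n_workers):
    """Greedily assign jobs, longest first, to the least-loaded worker.

    This is the LPT heuristic for minimising the maximum completion
    time (makespan) over parallel identical workers. Returns the
    per-worker job assignment and the resulting makespan.
    """
    # Min-heap of (current load, worker id) so we can always pick
    # the least-loaded worker in O(log n_workers).
    loads = [(0.0, w) for w in range(n_workers)]
    heapq.heapify(loads)
    assignment = {w: [] for w in range(n_workers)}

    # Sort jobs by descending processing time (LPT order).
    for job_id, t in sorted(enumerate(job_times), key=lambda x: -x[1]):
        load, w = heapq.heappop(loads)
        assignment[w].append(job_id)
        heapq.heappush(loads, (load + t, w))

    makespan = max(load for load, _ in loads)
    return assignment, makespan

# Example: four detection jobs on two workers.
assignment, makespan = lpt_schedule([4.0, 3.0, 2.0, 1.0], n_workers=2)
print(makespan)  # -> 5.0 (jobs {4,1} on one worker, {3,2} on the other)
```

LPT is a simple offline baseline with a known worst-case approximation bound; an online system like the one described would additionally have to account for job arrival times and per-job deadlines.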
| Original language | English |
| --- | --- |
| Pages (from-to) | 99-115 |
| Number of pages | 17 |
| Journal | Cyber-Physical Systems |
| Volume | 4 |
| Issue number | 2 |
| DOIs | |
| State | Published - 3 Apr 2018 |
Keywords
- Augmented reality applications
- mobile computing
- mobile offloading