Towards Trustworthy Outsourced Deep Neural Networks

Louay Ahmad, Boxiang Dong, Bharath Samanthula, Ryan Yang Wang, Bill Hui Li

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


The rising complexity of deep neural networks has placed stringent demands on computational hardware and deployment expertise. As an alternative, outsourcing a pre-trained model to a third-party server has become increasingly prevalent. However, this creates opportunities for attackers to interfere with the prediction outcomes of the deep neural network. In this paper, we focus on integrity verification of the prediction results from outsourced deep neural models and make a series of contributions. We propose a new attack based on steganography that enables the server to generate wrong prediction results in a command-and-control fashion. Following that, we design a homomorphic encryption-based authentication scheme to detect wrong predictions made by any attack. Our extensive experiments on benchmark datasets demonstrate the invisibility of the attack and the effectiveness of our authentication approach.
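To give a flavor of the kind of channel the abstract describes, the sketch below shows a classic least-significant-bit (LSB) steganography scheme: a command byte is hidden in the low bits of pixel intensities, leaving the image visually unchanged. This is a minimal, hypothetical illustration of steganographic command embedding in general, not the paper's actual attack; the function names and the 8-pixel payload are assumptions for the example.

```python
# Illustrative LSB steganography sketch (NOT the paper's attack):
# a command byte is hidden in the least significant bits of the
# first 8 pixel values, so each pixel changes by at most 1.

def embed_command(pixels, command):
    """Hide one command byte in the LSBs of the first 8 pixels."""
    stego = list(pixels)
    for i in range(8):
        bit = (command >> i) & 1
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract_command(pixels):
    """Recover the hidden command byte from the first 8 pixels."""
    command = 0
    for i in range(8):
        command |= (pixels[i] & 1) << i
    return command

# Example: hide command 0x2A in a row of grayscale intensities.
cover = [120, 121, 119, 200, 198, 64, 65, 66, 70, 71]
stego = embed_command(cover, 0x2A)
assert extract_command(stego) == 0x2A
# Each pixel changes by at most 1 -- visually invisible tampering.
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```

Because the perturbation is bounded by one intensity level per pixel, such a payload is imperceptible to a human observer, which is what makes steganographic command-and-control channels hard to spot by inspection.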

Original language: English
Title of host publication: Proceedings - 2021 IEEE Cloud Summit, Cloud Summit 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 6
ISBN (Electronic): 9781665425827
State: Published - 2021
Event: 2021 IEEE Cloud Summit, Cloud Summit 2021 - Virtual, Online, United States
Duration: 21 Oct 2021 - 22 Oct 2021

Publication series

Name: Proceedings - 2021 IEEE Cloud Summit, Cloud Summit 2021


Conference: 2021 IEEE Cloud Summit, Cloud Summit 2021
Country/Territory: United States
City: Virtual, Online


Keywords

  • Deep neural network
  • Homomorphic encryption
  • Outsourcing
  • Steganography
  • Verification


