
Transactions on
Data Privacy
Foundations and Technologies

http://www.tdp.cat



Volume 19 Issue 1


An Adaptive Technique for Neural Network Training with Private Features and Public Labels

Islam A. Monir(a), Muhamad I. Fauzan(a), Gabriel Ghinita(b),(*), Mohamed M. Abdallah(a)

Transactions on Data Privacy 19:1 (2026) 1 - 28


(a) College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar.

(b) Computer Science Department, University of Massachusetts Boston, MA, USA.

e-mail: ismo58166@hbku.edu.qa; mufa68183@hbku.edu.qa; moabdallah@hbku.edu.qa; gabriel.ghinita@umb.edu


Abstract

Differentially private stochastic gradient descent (DP-SGD) represents the de facto standard for privacy-preserving training of neural networks (NNs) under the differential privacy (DP) model. Its canonical formulation assumes that both the input features and the corresponding labels of training instances require protection. Newer developments explore scenarios in which only the labels are private, while the features are public. Doing so decreases the amount of required noise, leading to improved model accuracy. We investigate a complementary and underexplored setting where labels are non-sensitive, but the input features contain private information. Instead of perturbing gradients, our proposed methodology for training private NNs adds noise at a designated sanitization layer within the network. We analyze key architectural and algorithmic trade-offs inherent in this design and demonstrate how modifying the network architecture to reflect these considerations can lead to improved predictive performance. We also devise two adaptive algorithm optimizations: the first identifies early stopping conditions in the learning process in order to save privacy budget and boost protection strength; the second customizes the clipping threshold at each learning iteration in order to improve accuracy. Extensive experiments on real data show that our approach significantly outperforms the DP-SGD baseline.
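The abstract does not give the paper's exact mechanism, but a sanitization layer of the kind described is commonly built from the Gaussian mechanism: each activation vector is clipped to a bounded L2 norm (so its sensitivity is known) and Gaussian noise calibrated to that bound is added before the signal propagates further. The following minimal numpy sketch illustrates that idea only; the function name `sanitize` and the parameters `clip_norm` and `sigma` are illustrative, not taken from the paper.

```python
import numpy as np

def sanitize(activations, clip_norm, sigma, rng):
    """Illustrative sanitization layer (Gaussian mechanism).

    Each row of `activations` is scaled down so its L2 norm is at most
    `clip_norm`, bounding the layer's sensitivity; Gaussian noise with
    standard deviation `sigma * clip_norm` is then added elementwise.
    """
    norms = np.linalg.norm(activations, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = activations * scale
    noise = rng.normal(0.0, sigma * clip_norm, size=clipped.shape)
    return clipped + noise

# Example: sanitize a batch of 4 activation vectors of width 8.
rng = np.random.default_rng(0)
acts = rng.normal(size=(4, 8))
out = sanitize(acts, clip_norm=1.0, sigma=0.5, rng=rng)
```

Because noise is injected once per forward pass at a fixed layer, rather than into every per-example gradient as in DP-SGD, the clipping threshold directly trades off signal distortion against noise magnitude, which is the trade-off the paper's adaptive clipping optimization targets.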

* Corresponding author.


ISSN: 1888-5063; ISSN (Digital): 2013-1631; Web Site: http://www.tdp.cat/
Contact: Transactions on Data Privacy; Vicenç Torra; Umeå University; 90187 Umeå (Sweden); e-mail:tdp@tdp.cat
Note: TDP's web site does not use cookies; TDP keeps no information on either IP addresses or browsers. The privacy policy is available on the web site.

 


Vicenç Torra. Last modified: 22:42, January 31, 2026.