Secure Audio: Privacy Preserving Speech Processing
Speech processing systems require complete access to the speech signal, yet speech is often considered among the most private forms of human communication. Users who would otherwise be willing to use a voice-processing system may therefore find it unacceptable that their voices can be recorded and listened to by third parties in the process.
The problem extends beyond the unwillingness of active users to expose their speech. Call centers and voice data warehouses hold large quantities of voice recordings that could be mined for useful information, but they cannot do so without violating the privacy of the users whose voices are stored. Similarly, security agencies that monitor conversations to protect citizens from terrorist or other malicious activity necessarily also invade the privacy of the innocent citizens whose conversations they overhear.
In this project, we are developing secure and private mechanisms that allow voice to be processed without exposing its contents. Using these methods a user may, for example, contribute voice recordings to a mining or learning system secure in the knowledge that nobody, neither eavesdroppers nor the recipient of the data, will be able to discern any useful information beyond what the user is willing to reveal. A small illustrative sketch of this idea appears below.
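Many of the publications below build such guarantees from cryptographic tools such as additively homomorphic encryption and secure multiparty computation. As a minimal sketch only, and not a description of the project's actual protocols, the toy Python below uses a textbook Paillier-style cryptosystem with deliberately small fixed primes and made-up feature and weight vectors: a server scores a user's encrypted features against its own private model weights without ever seeing the features in the clear.

```python
import math
import secrets

def lcm(a, b):
    return a * b // math.gcd(a, b)

def keygen(p=104729, q=104723):
    # Toy parameters for illustration; real deployments use primes thousands of bits long.
    n = p * q
    lam = lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)              # modular inverse of lambda mod n (Python 3.8+)
    return (n,), (n, lam, mu)         # (public key, private key)

def encrypt(pub, m):
    (n,) = pub
    n2 = n * n
    while True:
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:
            break
    # c = (1 + n)^m * r^n mod n^2
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    n, lam, mu = priv
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def he_add(pub, c1, c2):
    """Multiplying ciphertexts adds the underlying plaintexts."""
    (n,) = pub
    return (c1 * c2) % (n * n)

def he_scale(pub, c, k):
    """Raising a ciphertext to the power k multiplies the underlying plaintext by k."""
    (n,) = pub
    return pow(c, k, n * n)

# Hypothetical scenario: a server scores a user's encrypted feature vector
# against its own model weights, which are never shared with the user.
pub, priv = keygen()
user_features = [3, 1, 4, 1, 5]       # user's data; leaves the user only in encrypted form
server_weights = [2, 7, 1, 8, 2]      # server's model; never revealed to the user

enc_features = [encrypt(pub, x) for x in user_features]

# The server computes sum_i w_i * x_i entirely on ciphertexts.
acc = encrypt(pub, 0)
for c, w in zip(enc_features, server_weights):
    acc = he_add(pub, acc, he_scale(pub, c, w))

# Only the key owner can decrypt the final score.
score = decrypt(priv, acc)
assert score == sum(w * x for w, x in zip(server_weights, user_features))
print("encrypted inner product decrypts to", score)
```

In a real system the primes would be far larger, the "features" would be quantized speech parameters rather than toy integers, and the resulting score itself might be further protected; the sketch only illustrates why ciphertext arithmetic lets one party compute on another party's data without ever seeing it.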
Publications
- Manas Pathak and Bhiksha Raj.
Large Margin Gaussian Mixture Models with Differential Privacy. IEEE Transactions on Dependable and Secure Computing, 2012.
- Manas Pathak and Bhiksha Raj.
Privacy Preserving Speaker Verification as Password Matching.
ICASSP 2012. [pdf]
- Jose Portelo, Bhiksha Raj and Isabel Trancoso.
Attacking a Privacy Preserving Music Matching Algorithm.
ICASSP 2012.
- Manas Pathak, Mehrbod Sharifi and Bhiksha Raj.
Privacy Preserving Spam Filtering.
[arxiv] [pdf]
- Manas Pathak and Bhiksha Raj.
Efficient Protocols for Principal Eigenvector Computation over Private Data.
Transactions on Data Privacy, 2011. [pdf]
A preliminary version of this article appeared in the PSDML Workshop at ECML/PKDD, 2010.
- Manas Pathak and Bhiksha Raj.
Privacy Preserving Speaker Verification using Adapted GMMs.
Interspeech 2011. [pdf]
- Jose Portelo, Alberto Abad, Bhiksha Raj and Isabel Trancoso.
On the Implementation of a Secure Musical Database Matching.
19th European Signal Processing Conference (EUSIPCO) 2011.
- Manas Pathak, Shantanu Rane, Wei Sun and Bhiksha Raj.
Privacy Preserving Probabilistic Inference with Hidden Markov Models.
ICASSP 2011. [pdf]
- Manas Pathak, Shantanu Rane, and Bhiksha Raj.
Multiparty Differential Privacy via Aggregation of Locally Trained Classifiers.
Neural Information Processing Systems 2010. [pdf]
- Manas Pathak and Bhiksha Raj.
Large Margin Multiclass Gaussian Classification with Differential Privacy.
PSDML Workshop at ECML/PKDD, 2010.
[pdf] [arxiv]
- Paris Smaragdis and Madhusudhan Shashanka.
A framework for secure speech recognition.
IEEE Transactions on Audio, Speech, and Language Processing, Vol. 15, 1404-1413, 2007.
[pdf]
- Madhusudhan Shashanka and Paris Smaragdis.
Privacy-preserving musical database matching.
IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2007.
[pdf]
Support
Prior support: National Science Foundation Grant No. 1017256, Funded Proposal: Privacy-Preserving Techniques for Speech Processing. [pdf]
Foundational Papers
- Andrew Yao
Protocols for Secure Computations.
IEEE Symposium on Foundations of Computer Science (FOCS), 1982.
[pdf]
- Michael Ben-Or, Shafi Goldwasser, and Avi Wigderson.
Completeness Theorems for Non-Cryptographic Fault-Tolerant Distributed Computation.
ACM Symposium on Theory of Computing (STOC), 1988. [pdf]