Accessibility, transparency, and reproducibility of privacy algorithms and their performance.
DPComp is a web-based system intended to support a broad set of users: data analysts who are novice users of privacy mechanisms, privacy researchers who develop new privacy algorithms, and data owners who manage sensitive data.
The main goals of DPComp are:
- To improve the accessibility and transparency of privacy algorithms by allowing easy browsing and performance comparison;
- To provide a reproducible methodology for evaluating the performance of privacy algorithms; and
- To help guide future research by highlighting cases for which existing privacy algorithms do not provide acceptable accuracy.
We welcome contributions and feedback. We hope visitors to this site will contribute to DPComp by submitting new datasets, new algorithms, or new task descriptions. For now this is a manual process; please contact us at this address.
- Principled Evaluation of Differentially Private Algorithms using DPBench
Michael Hay, Ashwin Machanavajjhala, Gerome Miklau, Yan Chen, and Dan Zhang.
ACM Conference on Management of Data (SIGMOD) 2016
- Exploring Privacy-Accuracy Tradeoffs using DPCOMP
Michael Hay, Ashwin Machanavajjhala, Gerome Miklau, Yan Chen, Dan Zhang, and George Bissias.
Demonstration, ACM Conference on Management of Data (SIGMOD) 2016
Open source repository
The differential privacy tools used to generate the results on this site are available in the dpcomp_core open source repository. The repository contains the datasets, workloads, and algorithms used in DPBench. With dpcomp_core, a user can reproduce previous evaluations, compare the provided algorithms on new data, or evaluate new algorithms.
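To illustrate the style of evaluation DPBench performs, the sketch below measures the average per-bin L1 error of a simple differentially private histogram release (the classic Laplace mechanism) at several privacy levels. This is a minimal, self-contained illustration only; it does not use dpcomp_core's actual API, and all function names here are hypothetical.

```python
import numpy as np

def laplace_histogram(counts, epsilon, rng):
    """Release a histogram under epsilon-differential privacy by adding
    Laplace(1/epsilon) noise to each bin (each count has sensitivity 1)."""
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon, size=len(counts))
    return counts + noise

def avg_l1_error(counts, epsilon, trials=100, seed=0):
    """Average per-bin L1 error over repeated noisy releases -- the kind
    of accuracy metric used to compare private algorithms."""
    rng = np.random.default_rng(seed)
    errs = [np.abs(laplace_histogram(counts, epsilon, rng) - counts).mean()
            for _ in range(trials)]
    return float(np.mean(errs))

if __name__ == "__main__":
    true_counts = np.array([120.0, 45.0, 30.0, 5.0, 0.0, 200.0])
    for eps in (0.01, 0.1, 1.0):
        print(f"epsilon={eps}: avg L1 error per bin = "
              f"{avg_l1_error(true_counts, eps):.2f}")
```

As expected, error shrinks as epsilon (the privacy budget) grows, which is exactly the privacy-accuracy tradeoff the site lets users explore across many algorithms and datasets.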
This project is supported by the NSF, DARPA, and the Center for Data Science at the University of Massachusetts Amherst College of Information and Computer Sciences.
DPComp was inspired in part by MLComp.org, which was designed for “objectively comparing machine learning programs across various datasets.”