add annotation
This commit is contained in:
parent 21ecc1a9c7
commit 91cc1bb0b7
1 changed file with 10 additions and 1 deletion
@@ -31,4 +31,13 @@ Most of the concepts I write about seem to come from the 70's and 80's, but diff
The paper introduces the idea of adding noise to data to achieve privacy. Of course, adding noise to the dataset reduces its accuracy. Ɛ defines the amount of noise added to the dataset: a small Ɛ means more privacy but less accurate data, and vice versa. Ɛ is also referred to as the "privacy loss parameter".
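To make that tradeoff concrete, here is a minimal sketch of the Laplace mechanism, the textbook way of calibrating noise to Ɛ. The function and example numbers are my own illustration, not code from the paper: noise is drawn with scale sensitivity/Ɛ, so a smaller Ɛ means wider noise and more privacy.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a query result with Laplace noise calibrated to epsilon."""
    scale = sensitivity / epsilon  # the noise scale grows as epsilon shrinks
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# A counting query has sensitivity 1: adding or removing one person
# changes the true count by at most 1.
true_count = 1000
print(laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.1))   # very noisy, very private
print(laplace_mechanism(true_count, sensitivity=1.0, epsilon=10.0))  # close to 1000
```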
Importantly, differential privacy adds noise to the data *before* it's analyzed. k-anonymity (1) relies on trying to anonymize data *after* it's collected, so it leaves the possibility that not enough parameters are removed to ensure each individual cannot be identified.
{ .annotate }

1. k-anonymity means that for each row, at least k-1 other rows are identical in their identifying attributes.
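As a toy illustration of that definition (a hypothetical example of mine, not from the post), the check below verifies that every combination of identifying attributes appears in at least k rows:

```python
from collections import Counter

def is_k_anonymous(rows: list[dict], quasi_identifiers: list[str], k: int) -> bool:
    """True if each combination of quasi-identifier values occurs in at least k rows."""
    groups = Counter(tuple(row[col] for col in quasi_identifiers) for row in rows)
    return all(count >= k for count in groups.values())

# Hypothetical records where ZIP code and age are the identifying attributes.
records = [
    {"zip": "12345", "age": 34, "diagnosis": "A"},
    {"zip": "12345", "age": 34, "diagnosis": "B"},
    {"zip": "67890", "age": 52, "diagnosis": "A"},
]
print(is_k_anonymous(records, ["zip", "age"], k=2))  # False: the last row is unique
```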
### Google RAPPOR
In 2014, Google introduced [Randomized Aggregatable Privacy-Preserving Ordinal Response](https://arxiv.org/pdf/1407.6981) (RAPPOR), their [open source](https://github.com/google/rappor) implementation of differential privacy, with a few improvements.
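RAPPOR builds on the classic randomized-response survey technique. As a rough sketch of that underlying idea in its simplest one-bit form (with parameters I chose for illustration; Google's implementation applies randomized response to Bloom filter bits, in both a permanent and a per-report round), each client lies with some probability, yet the aggregate remains estimable:

```python
import random

def randomized_response(truth: bool, p_truth: float = 0.75) -> bool:
    """Report the true answer with probability p_truth, otherwise a fair coin flip."""
    if random.random() < p_truth:
        return truth
    return random.random() < 0.5

# E[reported rate] = p_truth * true_rate + (1 - p_truth) * 0.5,
# so the collector can invert the noise in aggregate.
true_answers = [random.random() < 0.3 for _ in range(100_000)]  # true rate: 30%
reports = [randomized_response(t) for t in true_answers]
observed = sum(reports) / len(reports)
p = 0.75
print(round((observed - (1 - p) * 0.5) / p, 3))  # ≈ 0.3
```

Any single report is plausibly deniable, while the population-level statistic survives; that is the property RAPPOR scales up.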