Data anonymization

From HandWiki

Data anonymization is a type of information sanitization whose intent is privacy protection. It is the process of removing personally identifiable information from data sets, so that the people whom the data describe remain anonymous.

Data anonymization has been defined as a "process by which personal data is altered in such a way that a data subject can no longer be identified directly or indirectly, either by the data controller alone or in collaboration with any other party."[1] Data anonymization may enable the transfer of information across a boundary, such as between two departments within an agency or between two agencies, while reducing the risk of unintended disclosure, and in certain environments in a manner that enables evaluation and analytics post-anonymization.

In the context of medical data, anonymized data refers to data from which the patient cannot be identified by the recipient of the information. The name, address, and full postcode must be removed, together with any other information which, in conjunction with other data held by or disclosed to the recipient, could identify the patient.[2]

There will always be a risk that anonymized data may not stay anonymous over time. Pairing the anonymized dataset with other data sources, applying clever inference techniques, and simply applying raw computing power are some of the ways previously anonymous data sets have become de-anonymized, leaving the data subjects identifiable once again.
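The linkage risk described above can be sketched with a toy example. The records, names, and field values below are invented for illustration: an "anonymized" medical dataset that retains quasi-identifiers (ZIP code, birth date, sex) is joined against a hypothetical public voter roll, and any unique match re-identifies a subject.

```python
# Hypothetical linkage attack: all records and names below are invented.
anonymized_medical = [
    {"zip": "02138", "birth": "1945-07-21", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth": "1971-03-02", "sex": "M", "diagnosis": "asthma"},
]

public_voter_roll = [
    {"name": "A. Example", "zip": "02138", "birth": "1945-07-21", "sex": "F"},
    {"name": "B. Sample", "zip": "02139", "birth": "1990-12-12", "sex": "M"},
]

def link(medical, voters):
    """Re-identify medical records whose quasi-identifiers match exactly one voter."""
    reidentified = []
    for rec in medical:
        matches = [v for v in voters
                   if (v["zip"], v["birth"], v["sex"]) ==
                      (rec["zip"], rec["birth"], rec["sex"])]
        if len(matches) == 1:  # a unique match re-identifies the subject
            reidentified.append((matches[0]["name"], rec["diagnosis"]))
    return reidentified

print(link(anonymized_medical, public_voter_roll))
# The first record is re-identified together with its diagnosis.
```

Even though no record contains a name, the combination of quasi-identifiers is unique enough to act as one, which is the core weakness the surrounding text describes.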

De-anonymization is the reverse process, in which anonymous data is cross-referenced with other data sources to re-identify the anonymous data source.[3] Generalization and perturbation are two popular anonymization approaches for relational data.[4] The process of obscuring data while retaining the ability to re-identify it later is also called pseudonymization, and is one way companies can store data in a manner that is HIPAA compliant.
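Pseudonymization as described above can be sketched as replacing a direct identifier with a keyed token. This is a minimal illustration, not a compliance recipe; the key name, identifier format, and record fields are invented, and in practice the key would live in a separate, access-controlled store.

```python
import hashlib
import hmac

# Illustrative secret; in practice this would be held in a separate secure store.
SECRET_KEY = b"example-key-kept-elsewhere"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a deterministic keyed token (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"patient_id": "NHS-1234567", "diagnosis": "asthma"}  # invented example
stored = {"patient_token": pseudonymize(record["patient_id"]),
          "diagnosis": record["diagnosis"]}
# The stored record carries no raw identifier, but whoever holds SECRET_KEY
# can recompute the token for a known identifier and re-link the record.
```

Because the mapping is keyed rather than a plain hash, an outsider cannot rebuild it by hashing guessed identifiers, while the key holder can still re-identify records, which is exactly what distinguishes pseudonymization from irreversible anonymization.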

However, according to the Article 29 Data Protection Working Party, Recital 26 of Directive 95/46/EC "signifies that to anonymise any data, the data must be stripped of sufficient elements such that the data subject can no longer be identified. More precisely, that data must be processed in such a way that it can no longer be used to identify a natural person by using “all the means likely reasonably to be used” by either the controller or a third party. An important factor is that the processing must be irreversible. The Directive does not clarify how such a de-identification process should or could be performed. The focus is on the outcome: that data should be such as not to allow the data subject to be identified via “all” “likely” and “reasonable” means. Reference is made to codes of conduct as a tool to set out possible anonymisation mechanisms as well as retention in a form in which identification of the data subject is “no longer possible”."[5]

There are five types of data anonymization operations: generalization, suppression, anatomization, permutation, and perturbation.[6]
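Two of these operations can be shown concretely. The sketch below (field names and values invented) applies generalization, which coarsens a quasi-identifier such as an exact age into a band, and suppression, which removes values such as the name and full postcode entirely.

```python
# Illustrative sketch of generalization and suppression; data is invented.

def generalize_age(age: int, band: int = 10) -> str:
    """Coarsen an exact age into a band, e.g. 37 -> '30-39'."""
    low = (age // band) * band
    return f"{low}-{low + band - 1}"

def suppress(record: dict, fields: tuple) -> dict:
    """Drop direct identifiers from a record entirely."""
    return {k: v for k, v in record.items() if k not in fields}

raw = {"name": "J. Doe", "age": 37, "postcode": "SW1A 1AA", "condition": "flu"}
out = suppress(raw, ("name", "postcode"))
out["age"] = generalize_age(out.pop("age"))
# out now holds only {"condition": "flu", "age": "30-39"}
```

Anatomization, permutation, and perturbation follow the same spirit: rather than deleting or coarsening values, they respectively separate, shuffle, or add noise to them so that records can no longer be tied to individuals.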

GDPR requirements

The European Union's General Data Protection Regulation (GDPR) requires that stored data on people in the EU undergo either an anonymization or a pseudonymization process.[7] GDPR Recital (26) establishes a very high bar for what constitutes anonymous data, thereby exempting such data from the requirements of the GDPR, namely “…information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable.” The European Data Protection Supervisor (EDPS) and the Spanish Agencia Española de Protección de Datos (AEPD) have issued joint guidance on the requirements for anonymity and exemption from GDPR requirements. According to the EDPS and AEPD, no one, including the data controller, should be able to re-identify data subjects in a properly anonymized dataset.[8] Research by data scientists at Imperial College London and UCLouvain in Belgium,[9] as well as a ruling by Judge Michal Agmon-Gonen of the Tel Aviv District Court,[10] highlight the shortcomings of "anonymisation" in today's big data world. On this view, anonymisation reflects an outdated approach to data protection, developed when data processing was limited to isolated (siloed) applications, before big data processing made the widespread sharing and combining of data commonplace.[11]

References


  1. ISO 25237:2017 Health informatics -- Pseudonymization. ISO. 2017. p. 7. 
  2. "Data anonymization". The Free Medical Dictionary. 
  3. "De-anonymization". 
  4. Bin Zhou; Jian Pei; WoShun Luk (December 2008). "A brief survey on anonymization techniques for privacy preserving publishing of social network data". ACM SIGKDD Explorations Newsletter 10 (2): 12–22. doi:10.1145/1540276.1540279. 
  5. "Opinion 05/2014 on Anonymisation Techniques". EU Commission. 10 April 2014. 
  6. Eyupoglu, Can; Aydin, Muhammed; Zaim, Abdul; Sertbas, Ahmet (2018-05-17). "An Efficient Big Data Anonymization Algorithm Based on Chaos and Perturbation Techniques". Entropy 20 (5): 373. doi:10.3390/e20050373. ISSN 1099-4300. PMID 33265463. Bibcode: 2018Entrp..20..373E. Text was copied from this source, which is available under a Creative Commons Attribution 4.0 International License.
  7. Skiera, Bernd (2022). The impact of the GDPR on the online advertising market. Klaus Miller, Yuxi Jin, Lennart Kraft, René Laub, Julia Schmitt. Frankfurt am Main. ISBN 978-3-9824173-0-1. OCLC 1303894344. 
  8. "Introduction to the Hash Function as a Personal Data Pseudonymisation Technique". Spanish Data Protection Authority. October 2019. 
  9. Kolata, Gina (23 July 2019). "Your Data Were 'Anonymized'? These Scientists Can Still Identify You". The New York Times. 
  10. "Attm (TA) 28857-06-17 Nursing Companies Association v. Ministry of Defense" (in Hebrew). Pearl Cohen. 2019. 
  11. Solomon, S. (31 January 2019). "Data is up for grabs under outdated Israeli privacy law, think tank says". 
