What can be riskful?
If you hold information about a person who has never consented to your sharing it, that information is considered riskful data if it contains enough to identify that particular person.
Is it bad?
Riskful data can be useful, even vital, for your business. It is still a liability:
- Unintentional sharing of this data with others may be punishable by law (intentional sharing too, by the way).
- Others with access to this data may want to harm you by sharing it.
- The person the data is about may request things from you, such as fixing errors or erasing the data altogether. That may screw up more than you're willing to admit.
All of the above can be addressed by making the data as harmless as possible, without violating functionality.
How to make data harmless?
It's not easy, but you can do a lot at hardly any cost. You can't make all your data harmless; save the complicated security measures for the last 5%.
The painful aspect of making data harmless is that although the database may be under your control, the applications are a different story. Making the data harmless therefore means staying backwards compatible with what the applications expect of the RDBMS in question.
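One way to stay backwards compatible (a sketch, not a method prescribed by this text) is to rename the real table and expose a view under the old name with the riskful columns masked; applications keep issuing the same queries against the same name. A minimal sqlite3 example with hypothetical table and column names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- The real table, renamed out of the applications' sight.
    CREATE TABLE persons_raw (
        id    INTEGER PRIMARY KEY,
        name  TEXT,
        email TEXT
    );
    INSERT INTO persons_raw VALUES (1, 'Alice Smith', 'alice@example.com');

    -- A view under the old table name: same columns, riskful data masked.
    CREATE VIEW persons AS
        SELECT id,
               substr(name, 1, 1) || '.' AS name,  -- keep only an initial
               'hidden'                  AS email  -- never expose the address
        FROM persons_raw;
""")

# Existing applications still run "SELECT ... FROM persons" unchanged.
print(con.execute("SELECT name, email FROM persons").fetchall())
# → [('A.', 'hidden')]
```

The applications see the same schema as before; only the database side knows that the riskful columns never leave the raw table.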
Anonymising riskful data may be enough to make it harmless. The preamble of the GDPR states:
“The principles of data protection should therefore not apply to … personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable.”
As for pseudonymisation, it states:
“To ascertain whether means are reasonably likely to be used to identify the natural person, account should be taken of all objective factors, such as the costs of and the amount of time required for identification, taking into consideration the available technology at the time of the processing and technological developments.”
This means that anonymous data is harmless, but one-way hashes only qualify as anonymisation if they are hard enough to crack.
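Why a plain hash may not be hard enough to crack: when the input space is small (phone numbers, dates of birth), an attacker can simply hash every candidate value and compare. A keyed hash (HMAC) with the key stored separately resists that. A sketch in Python; the key handling shown is an assumption, not something prescribed by the text:

```python
import hashlib
import hmac

phone = "555-0142"

# Plain one-way hash: crackable by brute force over all phone numbers.
weak = hashlib.sha256(phone.encode()).hexdigest()

# Keyed hash: useless to an attacker who does not hold the key.
# In practice the key would live outside the database (config, vault).
secret_key = b"store-me-somewhere-else"  # hypothetical key
strong = hmac.new(secret_key, phone.encode(), hashlib.sha256).hexdigest()

# The same input always yields the same pseudonym, so joins still work,
# but without the key the mapping cannot be rebuilt by enumeration.
print(weak != strong)  # → True
```

Whether the result counts as anonymous or merely pseudonymous then hinges on the recital's test above: the cost and time required for identification.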
In the title “anonymise and tag simultaneously” we try to do just that. Recital 29 gives way for a solution, which is presented in the above-mentioned title:
“The application of pseudonymisation to personal data can reduce the risks to the data subjects concerned and help controllers and processors to meet their data-protection obligations. The explicit introduction of ‘pseudonymisation’ in this Regulation is not intended to preclude any other measures of data protection.
In order to create incentives to apply pseudonymisation when processing personal data, measures of pseudonymisation should, whilst allowing general analysis, be possible within the same controller when that controller has taken technical and organisational measures necessary to ensure, for the processing concerned, that this Regulation is implemented, and that additional information for attributing the personal data to a specific data subject is kept separately. The controller processing the personal data should indicate the authorised persons within the same controller.”
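The recital's requirement that additional information for attributing the data to a subject be “kept separately” translates naturally into a split store: pseudonymised records in the working database, and the pseudonym-to-identity mapping held apart, readable only by the authorised persons. A minimal sketch; the class, role names, and store layout are illustrative assumptions, not taken from the text:

```python
import secrets

class PseudonymVault:
    """Holds the pseudonym -> identity mapping, kept apart from the
    working data and accessible only to authorised persons."""

    def __init__(self):
        self._mapping = {}

    def pseudonymise(self, identity):
        pseudonym = secrets.token_hex(8)
        self._mapping[pseudonym] = identity
        return pseudonym

    def reidentify(self, pseudonym, user):
        # Only the authorised persons indicated by the controller may look up.
        if user not in {"dpo", "auditor"}:  # hypothetical role names
            raise PermissionError("not an authorised person")
        return self._mapping[pseudonym]

vault = PseudonymVault()  # lives with the controller, not in the app database
working_db = []           # what general analysis runs against

p = vault.pseudonymise("alice@example.com")
working_db.append({"person": p, "purchases": 3})  # no direct identifiers

# General analysis uses the pseudonymised data alone; re-identification
# needs both the separate vault and an authorised role.
print(vault.reidentify(p, "dpo"))  # → alice@example.com
```

The point of the split is exactly the recital's incentive: general analysis stays possible on the working data, while attribution requires crossing an organisational boundary.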
About this title
The first four or five titles were written in a kind of rage after visiting the Big Data Expo in Utrecht, 2017. By then I knew about the GDPR and had implemented various mechanisms to avoid running risks. The commercial hard sell at that expo was terrible. People should be informed about the GDPR without FUD.