Recommendation 1: Reduce the impact of data from its storage and access
Is sensitive user data secure?
What safeguards are in place for sensitive data?
The GDPR requires a Privacy Impact Assessment (PIA), which must be effectively monitored and integrated into the quality processes. Beforehand, all solutions for securing sensitive data should be deployed and validated.
Operational issues related to the project
Rule for assessing the level of compliance of the criterion
Number of sensitive data items analyzed / Total number of sensitive data items
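As an illustration, the compliance level for this criterion is a simple ratio; the figures below are hypothetical, not taken from the guideline:

```python
# Illustrative sketch: compute the compliance level for this criterion.
# The counts are hypothetical example values.
sensitive_items_analyzed = 8   # sensitive data items whose safeguards were analyzed
sensitive_items_total = 10     # total sensitive data items identified

compliance_level = sensitive_items_analyzed / sensitive_items_total
print(f"Compliance level: {compliance_level:.0%}")  # prints "Compliance level: 80%"
```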
19 other criteria related to the recommendation: Reduce the impact of data from its storage and access
Is the number of requests kept to a minimum (no looping)?
Is an alternative to SQL queries used when possible (local storage or similar)?
Do implemented queries use joins rather than multiple queries?
Can data be backed up incrementally?
Is the removal of obsolete data managed?
Are database indexes consistent with operations?
Is an alternative to the relational model being considered?
Is a NoSQL solution more efficient than its relational equivalent?
Have the different data access solutions (queries, triggers, stored procedures) been tested?
Are EXPLAIN clauses used on slow queries to optimize indexes?
Are the slow-query detection thresholds set effectively?
Are "live" and "dead" data handled differently (e.g., slow storage for "dead" data)?
Is frequently accessed data available in RAM?
Are data replications between multiple database engine (cluster) instances appropriate for sensitivity and availability requirements?
Does the data have an expiration date at which it is deleted?
Is sensitive data collected?
Is the collected data really useful?
Does the API provide limits, filters, and the list of fields to return?
Does regulated data (personal, health, financial) comply with the recommendations for structuring these categories of data?
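Two of the criteria above, avoiding query loops in favor of joins and using EXPLAIN to check index usage, can be sketched together. This is a minimal illustration only: the schema, table names, and data are hypothetical, and SQLite stands in for whatever database engine the project actually uses.

```python
import sqlite3

# Hypothetical schema: users and their orders, with an index on the join key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    CREATE INDEX idx_orders_user ON orders(user_id);
    INSERT INTO users  VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.5);
""")

# Anti-pattern: one query per user (N+1 round trips to the database).
totals_loop = {}
for user_id, name in conn.execute("SELECT id, name FROM users"):
    (total,) = conn.execute(
        "SELECT COALESCE(SUM(total), 0) FROM orders WHERE user_id = ?",
        (user_id,),
    ).fetchone()
    totals_loop[name] = total

# Preferred: a single join that aggregates everything in one round trip.
totals_join = dict(conn.execute("""
    SELECT u.name, COALESCE(SUM(o.total), 0)
    FROM users u LEFT JOIN orders o ON o.user_id = u.id
    GROUP BY u.id
"""))

assert totals_loop == totals_join  # same result, far fewer queries

# EXPLAIN QUERY PLAN reveals whether the index on orders(user_id) is used.
for row in conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE user_id = 1"
):
    print(row)
```

The plan output should mention the `idx_orders_user` index; a plan showing a full table scan on a slow query is the signal, per the criteria above, that indexes need adjusting.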