Recommendation: 1. Reduce the impact of data from its storage and access
Does the API provide limits, filters, and the list of fields to return?
Data processing
People: B
Planet: A
Prosperity: B
Difficulty
**
Priority
High
Recurrence
OnUpdate
Tests
What is the maximum output returned by APIs?
Precisions
User behavior and needs are very difficult to anticipate. There is a tendency to provide the front end with as much data as possible to cover the maximum number of use cases. This practice is inefficient from a Sustainable IT point of view: collecting, processing and routing data generates load, and therefore significant energy consumption, without any certainty that all of this data will actually be used. The service's footprint should be reduced by collecting only the necessary data.
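As an illustration, the sketch below shows how limits, filters and field selection can be exposed so that a call only transfers the data the client will actually use. It assumes a Flask-style REST endpoint; the endpoint name, the products dataset, the field names and the limit values are all invented for the example.

```python
# Minimal sketch, assuming Flask and a hypothetical in-memory dataset.
# Endpoint, field names and the default/maximum limits are illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)

PRODUCTS = [
    {"id": 1, "name": "Lamp", "price": 25, "category": "home", "description": "..."},
    {"id": 2, "name": "Desk", "price": 140, "category": "office", "description": "..."},
]

MAX_LIMIT = 50  # hard cap so no call can return an unbounded payload


@app.get("/products")
def list_products():
    # Pagination: the client asks for a page size, the server caps it.
    limit = min(int(request.args.get("limit", 20)), MAX_LIMIT)
    offset = int(request.args.get("offset", 0))

    # Filtering: only return the rows the client actually needs.
    category = request.args.get("category")
    rows = [p for p in PRODUCTS if category is None or p["category"] == category]

    # Field selection: only return the columns the client asked for.
    fields = request.args.get("fields")
    if fields:
        wanted = set(fields.split(","))
        rows = [{k: v for k, v in p.items() if k in wanted} for p in rows]

    return jsonify(rows[offset:offset + limit])
```

A call such as GET /products?category=home&fields=id,name&limit=10 then transfers only the rows and columns that will actually be rendered.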
Use Case
Source code reviews check API call parameters to avoid unlimited returns. Development tools and the browser console are used to track network flows, their size and their frequency (Chrome: Console / Network panels).
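In addition to manual inspection in the browser's network panel, such checks can be automated. The sketch below is a minimal example, assuming the `requests` library and a hypothetical paginated /products endpoint; the base URL and the thresholds are placeholders, not values from the guide.

```python
# Sketch of an automated check, assuming the `requests` library and a
# hypothetical paginated /products endpoint; the thresholds are illustrative.
import requests

BASE_URL = "https://api.example.org"   # hypothetical service
MAX_ITEMS = 50                          # expected server-side page cap
MAX_BYTES = 100 * 1024                  # arbitrary payload budget (100 KiB)


def test_products_endpoint_is_bounded():
    response = requests.get(f"{BASE_URL}/products", params={"limit": 1000}, timeout=10)
    response.raise_for_status()

    payload = response.json()
    assert len(payload) <= MAX_ITEMS, "server did not cap an oversized 'limit' value"
    assert len(response.content) <= MAX_BYTES, "response payload exceeds the size budget"
```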
Opquast 73, 74 / GreenIT 74, 75
Additional elements
Operational issues related to the project
Rule for assessing the level of compliance of the criterion
Number of API data collections analyzed / Total number of API data collections
Life cycle
Implementation
19 other criteria related to the recommendation: Reduce the impact of data from its storage and access
Data
Is the number of requests kept to a minimum (no looping)?
Data
Are the slow query detection thresholds set effectively?
Data processing
Does regulated data (personal, health, financial) comply with the recommendations for structuring these categories of data?
Security
Is sensitive user data secure?
Data processing
Is the data collected really useful?
Data processing
Is sensitive data collected?
Data processing
Does the data have an expiration date after which it is deleted?
Data
Are data replications between multiple database engine (cluster) instances appropriate for the sensitivity and availability requirements?
Data
Is frequently accessed data available in RAM?
Data
Are "live" and "dead" data handled differently (eg: Slow storage for "dead" data) ?
Data
Are EXPLAIN clauses used on slow queries to optimize indexes (see the EXPLAIN sketch after this list)?
Data
Have the different data access solutions (queries, triggers, stored procedures) been tested?
Data
Is a NoSQL solution more efficient than its relational equivalent?
Data
Is an alternative to the relational model being considered?
Data
Are database indexes consistent with operations?
Data
Is the removal of obsolete data being managed?
Data
Can data be backed up incrementally?
Data
Do implemented queries use joins rather than multiple queries (see the join sketch after this list)?
Data
Is an alternative to SQL queries used when possible (local storage or similar)?
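For the criterion on EXPLAIN clauses above, here is a minimal sketch using the standard-library `sqlite3` module; the `orders` table and the index are hypothetical, and other engines (PostgreSQL, MySQL) expose the same idea through their own EXPLAIN or EXPLAIN ANALYZE statements.

```python
# Sketch, assuming a hypothetical `orders` table; run against an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")

query = "SELECT total FROM orders WHERE customer_id = ?"

# Before indexing: the reported plan is a full table scan.
for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)):
    print("without index:", row)

# After adding an index consistent with the query, the plan uses it instead.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)):
    print("with index:   ", row)
```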
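For the criterion on joins versus multiple queries, a sketch of the difference between an N+1 access pattern and a single join, again with `sqlite3` and hypothetical `customers` and `orders` tables.

```python
# Sketch comparing an N+1 access pattern with a single join; tables and data are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Linus');
    INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 12.5), (12, 2, 40.0);
""")

# Anti-pattern: one query per customer (N+1 round trips).
per_customer = {}
for customer_id, name in conn.execute("SELECT id, name FROM customers"):
    per_customer[name] = conn.execute(
        "SELECT id, total FROM orders WHERE customer_id = ?", (customer_id,)
    ).fetchall()

# Preferred: a single join returns the same information in one round trip.
joined = conn.execute("""
    SELECT c.name, o.id, o.total
    FROM customers AS c
    JOIN orders AS o ON o.customer_id = c.id
""").fetchall()

print(per_customer)
print(joined)
```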