For large, repeated queries whose results change infrequently, it's better to cache the results than to re-evaluate them each time. We can use node-cache as a lightweight caching solution.
REQUIREMENT
Implement a FastCache class in services that uses node-cache internally and makes it easy to use
Where to look
src/services/ implement here
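A rough sketch of what the proposed FastCache wrapper could look like. This is only an illustration of the interface; the method names (`set`, `get`, `getOrSet`), the `stdTTL` option, and the internals are assumptions, not an agreed design. The real class would delegate to node-cache; a plain Map with expiry timestamps is used here so the example stays dependency-free.

```javascript
// Hypothetical sketch -- the actual class would wrap node-cache instead
// of a Map, but the caller-facing shape would look roughly like this.
class FastCache {
  constructor({ stdTTL = 60 } = {}) {
    this.stdTTL = stdTTL; // default time-to-live, in seconds
    this.store = new Map();
  }

  set(key, value, ttl = this.stdTTL) {
    this.store.set(key, { value, expiresAt: Date.now() + ttl * 1000 });
  }

  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // lazily evict expired entries
      return undefined;
    }
    return entry.value;
  }

  // Evaluate `producer` only on a cache miss -- the repeated-query case
  // the issue describes.
  async getOrSet(key, producer, ttl = this.stdTTL) {
    const cached = this.get(key);
    if (cached !== undefined) return cached;
    const value = await producer();
    this.set(key, value, ttl);
    return value;
  }
}
```

Usage might then look like `cache.getOrSet('users:all', () => runQuery())`, where `runQuery` stands in for whatever MongoDB call is being cached.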
PS: Currently the admins are working on the class design and reading up on the docs. Anybody who wants to pitch in can work on it. 😃
rajatkb changed the title from "[Feat.] add utility class responsible for provided in memory cache" to "[Feat.] add FastCache class responsible for providing in memory cache" on Mar 20, 2020
Redis would be a better fit for this case, because your server needs to scale.
For example, during peak load your backend might scale out to a four-server cluster. With the node-cache package, each node server would have its own cache, defeating the purpose of caching. It's better to have a dedicated Redis server handle this.
Redis would be overkill for storing a few frequently repeated query results in memory (the goal is to reduce the network overhead of contacting MongoDB).
The current implementation can be abstracted behind an interface, so if Redis is needed later it can be introduced by swapping out the existing implementation.
Scaling is important, but for now a lightweight implementation suffices for this use case.
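One way to keep the Redis option open, as suggested above: callers depend on a minimal cache shape and the backing store is injected. This is a sketch of the idea, not the project's design; `RedisCache` is a hypothetical wrapper around some client (e.g. an ioredis instance), not a tested binding.

```javascript
// Minimal in-memory backend. Async methods so the Redis version is a
// drop-in replacement.
class InMemoryCache {
  constructor() { this.map = new Map(); }
  async get(key) { return this.map.get(key); }
  async set(key, value) { this.map.set(key, value); }
}

// Hypothetical Redis-backed variant with the same shape. `client` would
// be a real Redis client; values are JSON-serialized since Redis stores
// strings.
class RedisCache {
  constructor(client) { this.client = client; }
  async get(key) {
    const v = await this.client.get(key);
    return v === null ? undefined : JSON.parse(v);
  }
  async set(key, value) { await this.client.set(key, JSON.stringify(value)); }
}

// Services receive "a cache", not a concrete package, so swapping
// node-cache for Redis later only touches the wiring.
async function cachedQuery(cache, key, runQuery) {
  const hit = await cache.get(key);
  if (hit !== undefined) return hit;
  const result = await runQuery();
  await cache.set(key, result);
  return result;
}
```

With this shape, the node-cache implementation lands now and a `RedisCache` can be substituted later without changing any call sites.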