System design vol.2

1. System design basics

1.1 Caching

1.1.1 Application server cache


Placing a cache directly on a request layer node enables the local storage of
response data. Each time a request is made to the service, the node will
quickly return local cached data if it exists. If it is not in the cache, the
requesting node will query the data from disk. The cache on one request
layer node could also be located both in memory (which is very fast) and on
the node’s local disk (faster than going to network storage).
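
A minimal sketch of this per-node lookup in Python, assuming a plain dict as the in-memory tier and a hypothetical load_from_disk() helper standing in for the slow path:

cache = {}

def load_from_disk(key):
    # Hypothetical slow path: the node's local disk or network storage.
    return "value-for-" + key

def get(key):
    if key in cache:                 # cache hit: return local data immediately
        return cache[key]
    value = load_from_disk(key)      # cache miss: fall back to slower storage
    cache[key] = value               # populate the cache for the next request
    return value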
What happens when you expand this to many nodes? If the request layer is
expanded to multiple nodes, it’s still quite possible to have each node host
its own cache. However, if your load balancer randomly distributes requests
across the nodes, the same request will go to different nodes, thus increasing
cache misses. 

Two choices for overcoming this hurdle are global caches and
distributed caches.

 

That is, use either a global cache or a distributed cache, e.g. Redis or Memcached.
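
A minimal read-through sketch of a shared cache, assuming a Redis instance on localhost:6379 and the redis-py client; the key scheme and TTL are illustrative, and fetch_from_db is a placeholder for the real database call:

import redis

r = redis.Redis(host="localhost", port=6379)   # assumed shared Redis instance

def get_user_profile(user_id, fetch_from_db):
    key = "user:%s" % user_id        # illustrative key scheme
    cached = r.get(key)
    if cached is not None:           # hit: every app node sees the same cache
        return cached
    value = fetch_from_db(user_id)   # miss: load from the database
    r.setex(key, 300, value)         # cache for 5 minutes (illustrative TTL)
    return value

Because all request-layer nodes share the same cache, a request that lands on any node can reuse data cached by the others, regardless of how the load balancer routes it.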


1.1.2 Content Distribution Network (CDN)

CDNs are a kind of cache that comes into play for sites serving large amounts
of static media. In a typical CDN setup, a request will first ask the CDN for a
piece of static media; the CDN will serve that content if it has it locally
available. If it isn’t available, the CDN will query the back-end servers for the
file, cache it locally, and serve it to the requesting user.
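The pull-through behavior above fits in a few lines of Python; the origin URL and the in-memory edge store are assumptions for illustration, not any real CDN's API:

import requests

edge_cache = {}   # stands in for the CDN edge node's local store
ORIGIN = "https://origin.yourservice.com"   # hypothetical back-end

def serve_static(path):
    if path in edge_cache:               # serve locally if available
        return edge_cache[path]
    resp = requests.get(ORIGIN + path)   # otherwise pull from the back-end
    edge_cache[path] = resp.content      # cache locally for future requests
    return resp.content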
If the system we are building isn’t yet large enough to have its own CDN, we
can ease a future transition by serving the static media off a separate 
subdomain (e.g. static.yourservice.com) using a lightweight HTTP server like
Nginx, and then cut the DNS over from your servers to a CDN later.

1.1.3 Cache Invalidation

Write-through cache: data is written to the cache and to the backing store at the same time. Nothing is lost if the cache node fails, and reads of recently written data hit the cache, but every write pays the latency of both writes.

Write-around cache: data is written directly to the backing store, bypassing the cache. This keeps the cache from being flooded with write data that may never be re-read, but a read of recently written data will be a cache miss.

Write-back cache: data is written to the cache alone, and the write to the backing store is deferred and done later, possibly in batches. This gives low write latency and high write throughput, at the risk of losing data if the cache fails before the deferred write happens. The sketch below contrasts the three policies.
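
A minimal sketch in Python, with one plain dict standing in for the cache and another for permanent storage (all names are illustrative):

cache = {}
store = {}     # stands in for the database / permanent storage
dirty = set()  # keys written to the cache but not yet persisted

def write_through(key, value):
    cache[key] = value    # write goes to the cache...
    store[key] = value    # ...and to permanent storage before returning

def write_around(key, value):
    store[key] = value    # bypass the cache; the next read of key will miss
    cache.pop(key, None)  # drop any stale cached copy

def write_back(key, value):
    cache[key] = value    # acknowledge after the cache write alone
    dirty.add(key)        # remember to persist later

def flush():
    # Deferred persistence for write-back, e.g. run periodically or in batches.
    for key in dirty:
        store[key] = cache[key]
    dirty.clear()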

