Blog Post by Kyle Carpenter, Senior Solutions Engineer
Delivering content at scale always comes with challenges. In this post I will discuss how requests are made back to a CDN origin, and how to avoid the performance and end-user experience problems that can crop up.
When a user requests content, if it’s found in cache – a cache hit – Limelight will deliver the content from cache. If the content isn’t found in cache – a cache miss – Limelight will retrieve the object from origin and deliver it, and will also replicate it into cache in the Point of Presence (PoP) that the user accessed the network from. This is known as a “pull” model: user demand determines where and when we store your objects in cache. However, relying on this behavior alone can leave performance gains on the table for certain workflows.
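The pull model above can be sketched in a few lines. This is an illustrative simulation only; the class and function names (`PopCache`, `fetch_from_origin`) are hypothetical and not part of any Limelight API.

```python
# Minimal sketch of the "pull" caching model: serve hits from the PoP
# cache, and on a miss pull from origin and replicate into the PoP.

class PopCache:
    """In-memory cache for a single Point of Presence (PoP)."""

    def __init__(self, name, fetch_from_origin):
        self.name = name
        self._store = {}
        self._fetch_from_origin = fetch_from_origin

    def get(self, key):
        if key in self._store:              # cache hit: serve from the PoP
            return self._store[key], "HIT"
        obj = self._fetch_from_origin(key)  # cache miss: pull from origin
        self._store[key] = obj              # replicate into this PoP
        return obj, "MISS"


# Hypothetical origin holding one object
origin = {"/video/intro.mp4": b"...bytes..."}
pop = PopCache("los-angeles", origin.__getitem__)

_, first = pop.get("/video/intro.mp4")   # first request is a miss
_, second = pop.get("/video/intro.mp4")  # repeat request is served from cache
```

After the first miss populates the cache, every later request for the same object is served locally and never touches origin.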
Specifically, if each CDN PoP independently makes requests to origin, it is possible to overutilize resources on the origin and create a negative user experience. This can lead to slowdowns, timeouts, errors, and increased bandwidth costs at origin.
Here are a couple of solutions for improving delivery performance while maintaining stability.
In instances where protecting the CDN origin is of utmost importance or where the highest cache efficiency is desired, CDN behavior can be tuned specifically to provide extra layers to the caching model described above. The following technique of intelligent cache management is called origin shield (some may know it as peer grouping).
For example, let’s say a user connected to the Los Angeles PoP requests a piece of content. Under standard operations, the Los Angeles PoP will check itself before requesting the content from origin storage, and once retrieved will cache the content in Los Angeles for future requests. While this is better than all requests going to origin, it still means that requests arriving at other PoPs -- potentially hundreds if not thousands of concurrent requests -- will go to origin if a piece of content suddenly becomes popular, or, in the case of chunked streaming, is constantly being updated. Fortunately there is a way to mitigate this.
With origin shield, in addition to having the Los Angeles PoP check itself for a given piece of content, Limelight can also systematically check the PoPs en route back to origin. If the origin is in Baltimore, in addition to checking Los Angeles, we would also check the Phoenix, Dallas and New York PoPs, and would only request the content from origin storage if none of these other PoPs have the content.
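The shielded lookup order can be sketched as follows. The route, the counter, and the function names are illustrative assumptions, not Limelight internals; real origin shield operates at the HTTP layer across distributed caches.

```python
# Sketch of origin shield: a miss in Los Angeles checks the PoPs en
# route to a Baltimore origin before any request reaches origin storage.

origin_hits = {"count": 0}  # track how often origin is actually contacted

def fetch_origin(key):
    origin_hits["count"] += 1
    return f"object:{key}"

# PoP caches along the route from the edge back toward origin
route = {"los-angeles": {}, "phoenix": {}, "dallas": {}, "new-york": {}}

def shielded_get(key):
    # Check each PoP along the route; the first hit short-circuits.
    for cache in route.values():
        if key in cache:
            return cache[key]
    # No PoP has it: a single request to origin, cached at the edge PoP.
    obj = fetch_origin(key)
    route["los-angeles"][key] = obj
    return obj

shielded_get("/live/chunk42.ts")  # first request reaches origin
shielded_get("/live/chunk42.ts")  # repeat request never leaves the CDN
```

However many times the object is requested afterward, origin is contacted exactly once, which is the funneling effect described above.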
The advantage here is that we reduce the number of connections back to origin from a potential pool of all Limelight PoPs down to just one, effectively shielding the origin from an influx of requests.
In multi-CDN environments this benefit can also be extended to all CDNs, where Limelight would be the last point of presence before hitting the origin server.
Another way to shield origin and increase the performance of assets on the CDN is offloading to Limelight Origin Storage. With this method, assets are migrated from your current origin server to a delivery-optimized origin, reducing the exposure of your current origin and infrastructure while improving performance.
Limelight Origin Storage offers a built-in distribution method that is not present in other storage solutions by default. In its Standard access policies, Limelight storage will automatically replicate three or more copies of an asset geographically. Locations are determined by geographic policies you choose based on where your audience is located. If you have globally dispersed audiences, a global policy will replicate content to storage locations in the Americas, Europe and Asia. If your user base is concentrated in one or two regions, a regional policy may be preferable. The origin storage is colocated directly in delivery PoPs on Limelight’s private network. This means that rather than having to rely on scaling your current origin server or adjusting peer grouping, resilience and performance are automatically built in. Requests will be served from the closest storage node, improving performance while preventing a funnel effect against any origin.
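The policy-driven replica selection described above can be sketched like this. The region names, the policy table, and both function names are hypothetical illustrations, not actual product configuration.

```python
# Sketch of global vs. regional replication policies and serving each
# request from the replica closest to the user.

POLICIES = {
    "global":   ["americas", "europe", "asia"],  # three geographic copies
    "regional": ["americas", "europe"],          # audience in two regions
}

def replica_locations(policy):
    """Regions that hold a copy of the asset under a given policy."""
    return POLICIES[policy]

def nearest_replica(policy, user_region):
    """Serve from the replica in the user's region when one exists,
    otherwise fall back to the first configured location."""
    locations = replica_locations(policy)
    return user_region if user_region in locations else locations[0]

nearest_replica("global", "asia")    # Asia users hit the Asia replica
nearest_replica("regional", "asia")  # no Asia copy: falls back to Americas
```

The trade-off is the one the post describes: a global policy keeps every audience close to a copy, while a regional policy stores fewer copies at the cost of longer paths for out-of-region users.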
As a side note, there is one use case where peer groups and Limelight Origin Storage can be used together. If you utilize an Infrequent Access storage policy at Limelight, content is stored in one location rather than three, with redundancy within that location. Because of this, performance will vary more depending on user location, and could suffer if a piece of content suddenly became popular. This type of setup might make sense for something like a long-tail VOD library, where content is infrequently accessed but still needs good performance if it becomes popular again.
While not an exhaustive list, this post should shed some light on methods available for making content highly available and performant without jeopardizing origin health.