The purpose of sfManagedCachePlugin is to automatically refresh cached data in the background without interrupting delivery of the existing cached data. This is a natural but often neglected part of caching: the whole point of caching is that delivering cached data is faster and less resource-intensive than creating it on the fly. With existing cache solutions, when cached data expires it has to be refreshed while the user waits. In environments that rely heavily on caching, this behavior can cause thread pileups and other cascading failures.
Currently, the plugin supports one mode of operation, which works best with heavily used cached data. It is used much like sfFunctionCache: you select a storage method, and the data is produced by a callable and an array of arguments. A hash of the callable and the arguments serves as the cache key. The cache storage, the callable, and the arguments are stored in a database along with a refresh time that is shorter than the cache expiry time.
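To make the key scheme concrete, here is a minimal sketch in Python of deriving a cache key from a callable and its arguments. The function name, hash algorithm, and serialization are illustrative assumptions, not the plugin's actual implementation:

```python
import hashlib

def cache_key(callable_name, arguments):
    """Derive a cache key from a callable and its arguments.

    Illustrative only: the plugin's real hashing scheme may differ.
    """
    payload = repr((callable_name, arguments)).encode("utf-8")
    return hashlib.md5(payload).hexdigest()

# The same callable plus the same arguments always map to the same key...
key_a = cache_key("RemoteService::fetch", ["region-1", 30])
key_b = cache_key("RemoteService::fetch", ["region-1", 30])
# ...while any variation in the arguments produces a new key (and entry).
key_c = cache_key("RemoteService::fetch", ["region-2", 30])
```

Note the consequence spelled out in the warning at the end of this post: every distinct callable/argument combination becomes its own entry.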
I have a remote service call that takes about 10 seconds, on average, to return a result. It needs to be called regularly with 120 different sets of parameters, and the data each call returns is supposed to be refreshed every 30 minutes.
In such a situation, data expires, on average, about every 15 seconds (120 entries spread over a 30-minute window). With a normal cache, the user would have to wait 10 seconds every time the data of one of those 120 calls expires, and that adds up quickly. Without a grace period for expired data, you also face two other problems. If the cache is locked while refreshing, you risk a thread pileup as more and more requests for the expired data arrive during the refresh. If the cache is not locked, every request for the expired data will itself try to refresh it, which can slow down the overall refresh or even overload the data source.
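The arithmetic behind these figures is quick to check (the numbers are the ones from the example above):

```python
entries = 120          # distinct parameter sets
refresh_minutes = 30   # each entry is refreshed every 30 minutes
call_seconds = 10      # average duration of one service call

# If expiries are spread evenly, one entry expires every
# (refresh window / number of entries) seconds.
seconds_between_expiries = refresh_minutes * 60 / entries  # 15.0 s

# Total recompute work per refresh window:
# 120 calls x 10 s = 1200 s = 20 minutes of every 30-minute window.
work_per_window_minutes = entries * call_seconds / 60      # 20.0 min
```

Two thirds of every refresh window is spent recomputing data, which is exactly why pushing that work into the background matters.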
These problems are avoided with a managed cache.
In the above example, I use an sfFileCache object with an expiry time of one day as the underlying storage. I then use a static callable to make the individual service calls, marking the returned data to be refreshed every 30 minutes. That's it, code-wise. A cron job takes care of refreshing the data. The one-day expiry time serves as the grace period: once the refresh time is up, the old data can still be served for up to one day while the new data comes in.
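The overall mechanism can be sketched as follows. This is a minimal in-memory Python model of the managed-cache idea, not sfManagedCachePlugin's real API: class and method names are invented for illustration, and the real plugin persists entries in a database and uses a symfony cache backend for storage:

```python
import time

class ManagedCache:
    """Sketch of a managed cache: each entry carries a refresh time that
    is shorter than its expiry time, reads never recompute, and a
    separate job (run from cron) refreshes entries that are due."""

    def __init__(self):
        self._entries = {}

    def register(self, key, func, args, refresh_every, expire_after):
        """Store the callable, its arguments, and both time windows."""
        now = time.time()
        self._entries[key] = {
            "value": func(*args),
            "func": func,
            "args": args,
            "refresh_every": refresh_every,
            "expire_after": expire_after,
            "refresh_at": now + refresh_every,
            "expire_at": now + expire_after,
        }

    def get(self, key):
        """Serve the cached value, possibly stale, until the grace
        period (the expiry time) runs out. Reads never block."""
        entry = self._entries[key]
        if time.time() > entry["expire_at"]:
            raise KeyError(key)  # grace period exhausted
        return entry["value"]

    def refresh_due(self):
        """Meant to run from a cron job: recompute every entry whose
        refresh time has passed and push both deadlines forward."""
        now = time.time()
        for entry in self._entries.values():
            if now >= entry["refresh_at"]:
                entry["value"] = entry["func"](*entry["args"])
                entry["refresh_at"] = now + entry["refresh_every"]
                entry["expire_at"] = now + entry["expire_after"]
```

The key property is in `get`: a read past the refresh time still returns the old value immediately, so no user ever waits on the 10-second service call; only the cron-driven `refresh_due` pays that cost.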
Warning: this is not a fire-and-forget solution. Take care with the items you store this way: since the cache key is derived from a hash of the callable and the arguments, every variation of those two values creates a new, permanent cache entry (as of now).