Using LazyCache for clean and simple .NET Core in-memory caching

I’m continuing to use .NET Core 2.1 to power my Podcast Site, and I’ve done a series of posts on some of the experiments I’ve been doing if you want to catch up. I also upgraded to the .NET Core 2.1 RC that came out this week.

Having a blast, if I may say so.

I’ve been trying a number of ways to cache locally. I have an expensive call to a backend (7-8 seconds or more, without deserialization) so I want to cache it locally for a few hours until it expires. I have a way that works very well using a SemaphoreSlim. There are some issues to be aware of, but it has been rock solid. However, in the comments of the last caching post a number of people suggested I use “LazyCache.”
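For context, the SemaphoreSlim approach looks roughly like this. This is an illustrative sketch with made-up names, not the exact code from the earlier post:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class ManualCache<T>
{
    // One gate so only a single caller runs the expensive factory at a time.
    private static readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);

    public async Task<T> GetOrCreateAsync(IMemoryCache cache, string key,
        TimeSpan expiration, Func<Task<T>> factory)
    {
        if (cache.TryGetValue(key, out T result)) return result;

        await _gate.WaitAsync();
        try
        {
            // Double-check: another caller may have filled the cache while we waited.
            if (!cache.TryGetValue(key, out result))
            {
                result = await factory();
                cache.Set(key, result, expiration);
            }
        }
        finally
        {
            _gate.Release();
        }
        return result;
    }
}
```

It works, but every call site has to get the lock-then-double-check dance right, which is exactly the hassle LazyCache is meant to remove.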

Alastair from the LazyCache team said this in the comments:

LazyCache wraps your “build stuff I want to cache” func in a Lazy<> or an AsyncLazy<> before passing it into MemoryCache to ensure the delegate only gets executed once as you retrieve it from the cache. It also allows you to swap between sync and async for the same cached thing. It is just a very thin wrapper around MemoryCache to save you the hassle of doing the locking yourself. A netstandard 2 version is in pre-release.
Since you asked the implementation is in CachingService.cs#L119 and proof it works is in CachingServiceTests.cs#L343
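The core idea Alastair describes can be sketched like this. This is a simplified illustration, not LazyCache’s actual source: you cache the Lazy<Task<T>> wrapper itself, and Lazy<T> guarantees the factory delegate executes at most once per wrapper (LazyCache adds its own locking around the add so the whole operation is atomic):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public static class LazyCacheSketch
{
    // Store the Lazy wrapper in the cache, not the value. Racing callers
    // all get the same Lazy, so the expensive delegate runs only once.
    public static Task<T> GetOrAddAsync<T>(IMemoryCache cache, string key,
        Func<Task<T>> factory, DateTimeOffset expires)
    {
        var cached = cache.GetOrCreate(key, entry =>
        {
            entry.AbsoluteExpiration = expires;
            return new Lazy<Task<T>>(factory);
        });
        return cached.Value;
    }
}
```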

Nice! Sounds like it’s worth trying out. Most importantly, it’ll allow me to “refactor via subtraction.”

I want to have my “GetShows()” method go off and call the backend “database,” which is a REST API over HTTP hosted by SimpleCast. That backend call is expensive and doesn’t change often. I publish new shows every Thursday, so ideally SimpleCast would have a standard WebHook and I’d cache the result forever until they called me back. For now I will just cache it for 8 hours – a long but mostly arbitrary number. I really want that WebHook as that’s the correct model, IMHO.

LazyCache was added in ConfigureServices in my Startup.cs:
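With the LazyCache.AspNetCore package referenced, it’s a single registration call:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Registers IAppCache, backed by the standard in-memory cache.
    services.AddLazyCache();
}
```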


Kind of anticlimactic. 😉

Then I just make a method that knows how to populate my cache. That’s just a “Func” that returns a Task of List of Shows as you can see below. Then I call IAppCache’s “GetOrAddAsync” from LazyCache that either GETS the List of Shows out of the Cache OR it calls my Func, does the actual work, then returns the results. The results are cached for 8 hours. Compare this to my previous code and it’s a lot cleaner.

public class ShowDatabase : IShowDatabase
{
    private readonly IAppCache _cache;
    private readonly ILogger _logger;
    private SimpleCastClient _client;

    public ShowDatabase(IAppCache appCache,
            ILogger<ShowDatabase> logger,
            SimpleCastClient client)
    {
        _client = client;
        _logger = logger;
        _cache = appCache;
    }

    public async Task<List<Show>> GetShows()
    {
        Func<Task<List<Show>>> showObjectFactory = () => PopulateShowsCache();
        var retVal = await _cache.GetOrAddAsync("shows", showObjectFactory, DateTimeOffset.Now.AddHours(8));
        return retVal;
    }

    private async Task<List<Show>> PopulateShowsCache()
    {
        List<Show> shows = await _client.GetShows();
        _logger.LogInformation($"Loaded {shows.Count} shows");
        return shows.Where(c => c.PublishedAt < DateTime.UtcNow).ToList();
    }
}

It’s always important to point out there’s a dozen or more ways to do this. I’m not selling a prescription here or The One True Way, but rather exploring the options and edges and examining the trade-offs.

  • As mentioned before, my using “shows” as a magic string for the key here makes no guarantee that another co-worker isn’t also using “shows” as their key.
    • Solution? Depends. I could have a function-specific unique key but that only ensures this function is fast twice. If someone else is calling the backend themselves I’m losing the benefits of a centralized (albeit process-local – not distributed like Redis) cache.
  • I’m also caching the full list and then doing a where/filter every time.
    • A little sloppiness on my part, but also because I’m still feeling this area out. Do I want to cache the whole thing and then let the callers filter? Or do I want to have GetShows() and GetActiveShows()? Dunno yet. But worth pointing out.
  • There are layers to caching. Do I cache the HttpResponse but not the deserialization? Here I’m caching the List<Show>, complete. I like caching List<T> because a caller can query it, although I’m sending back just active shows (see above).
    • Another perspective is to use the <cache> TagHelper in Razor and cache Razor’s resulting rendered HTML. There is value in caching the object graph, but I need to think about perhaps caching both List<T> AND the rendered HTML.
    • I’ll explore this next.
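On the magic-string point above, one illustrative option (the key name here is mine, not from the post) is to namespace the key by type and member so accidental collisions become unlikely, even though this doesn’t fix the deeper problem of other callers hitting the backend directly:

```csharp
// Sketch: derive the cache key from the class and method names.
string cacheKey = $"{nameof(ShowDatabase)}:{nameof(IShowDatabase.GetShows)}";
var retVal = await _cache.GetOrAddAsync(cacheKey, showObjectFactory, DateTimeOffset.Now.AddHours(8));
```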
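For the Razor side of that experiment, the built-in <cache> TagHelper caches the rendered HTML of a fragment. A sketch, with a hypothetical model and markup:

```cshtml
@* Caches the rendered HTML of this fragment in memory for 8 hours. *@
<cache expires-after="@TimeSpan.FromHours(8)">
    <ul>
        @foreach (var show in Model.Shows)
        {
            <li>@show.Title</li>
        }
    </ul>
</cache>
```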

I’m enjoying myself though. 😉

Go explore LazyCache! I’m using beta2, but there are a number of releases going back years and it’s been quite stable so far.

Lazy cache is a simple in-memory caching service. It has a developer friendly generics based API, and provides a thread safe cache implementation that guarantees to only execute your cachable delegates once (it’s lazy!). Under the hood it leverages ObjectCache and Lazy to provide performance and reliability in heavy load scenarios.

For ASP.NET Core it’s quick to experiment with LazyCache and get it set up. Give it a try, and share your favorite caching techniques in the comments.

Tai Chi photo by Luisen Rodrigo used under Creative Commons Attribution 2.0 Generic (CC BY 2.0), thanks!

Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!
