Storing your own data in EPiServer’s blob store

One of the things that you need to consider when deciding where to store your data is the performance of accessing it. Will it be accessed rarely? Will you be updating it frequently?

We recently created access to around 18k virtual, non-EPiServer pages which are retrieved from a third party system. We did this by feeding them through a single EPiServer page using custom routing. Each page gets its own slug based on the current page title, as sketched below.
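As a hypothetical illustration (this helper is not from our actual implementation), a slug can be derived from a page title with a small amount of plain .NET code:

// Hypothetical helper, not from the actual implementation: turns a page title
// like "Contact & Support" into a URL-friendly slug like "contact-support".
// Requires using System.Text.RegularExpressions;
public static class SlugHelper
{
  public static string ToSlug(string title)
  {
    if (string.IsNullOrWhiteSpace(title))
    {
      return string.Empty;
    }

    // Lower-case, keep letters and digits, collapse everything else into single hyphens.
    string slug = Regex.Replace(title.ToLowerInvariant(), "[^a-z0-9]+", "-");
    return slug.Trim('-');
  }
}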

After some discussion we landed on a solution using EPiServer’s blob store rather than the Dynamic Data Store, mainly due to performance. To minimize problems with the single blob file being accessed by multiple servers at the same time, we are using a NuGet package called DistributedLock (which in turn employs the standard locking functionality of the SQL Server database).

You may also be interested in the article Fault tolerant file blob provider for EPiServer websites.

Using EPiServer’s blob storage

To demonstrate, here is a slightly modified version of our slug repository implementation. The guid part of the ContainerId Uri object will be the name of the directory in your blob storage on disk, and the guid in the BlobId is the name of the file holding the actual data. Note that it has to be a valid guid in the format below. We needed guids that all servers know about and that never change.

using System;
using System.Collections.Generic;
using System.Configuration;
using System.Globalization;
using System.IO;
using EPiServer.Framework.Blobs;
using Medallion.Threading.Sql; // SqlDistributedReaderWriterLock; the namespace may differ between DistributedLock package versions
using Newtonsoft.Json;

public class SlugRepository : ISlugRepository
{
  private readonly IBlobFactory _blobFactory;

  public SlugRepository(IBlobFactory blobFactory)
  {
    _blobFactory = blobFactory ?? throw new ArgumentNullException(nameof(blobFactory));
  }

  // The guid is the name of the container (directory) in the blob storage.
  private static Uri ContainerId =>
    new Uri(string.Format(CultureInfo.InvariantCulture, "{0}://{1}/{2}", "epi.fx.blob", "default", "963c7f4132884b409910e5760c96205a"));

  // The guid is the name of the file (blob) holding the actual JSON data.
  private static Uri BlobId =>
    new Uri(ContainerId, string.Concat(ContainerId.AbsolutePath, "/", "d212913f0c3a4c6b92ddc6bdc303e987", ".json"));

  // Distributed reader/writer lock backed by the EPiServer (SQL Server) database.
  private readonly SqlDistributedReaderWriterLock _blobLock =
    new SqlDistributedReaderWriterLock("SlugBlob", ConfigurationManager.ConnectionStrings[Names.EPiServerDb].ConnectionString);

The _blobLock object is used by DistributedLock for setting the actual lock in the EPiServer database.

Reading and writing JSON to EPiServer blob storage

In our solution we have created a serializable grouping object for our URL slug to ID mappings. Our goal was to serialize all 18k mapping groupings into JSON and write them to the blob storage.
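The exact shape of that object doesn’t matter for this post, but as a hypothetical sketch (the properties below are assumptions, not our actual model) it could look something like this:

// Hypothetical, serializable slug-to-ID mapping grouping; the real model may differ.
public class SlugMapGrouping
{
  // Whatever the mappings are grouped by, e.g. a category in the third party system.
  public string Key { get; set; }

  // Slug (e.g. "my-page-title") mapped to the ID of the page in the third party system.
  public IDictionary<string, string> Mappings { get; set; }
}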

For added readability we’ve split the actual reading and writing into separate methods. The public methods are rather straightforward: they just get the proper blob, manage the lock and do some error handling.

public void WriteMappings(IEnumerable<SlugMapGrouping> groupings)
{
  Blob blob = _blobFactory.GetBlob(BlobId);
  // Take the distributed write lock so that only one server writes the blob at a time.
  using (_blobLock.AcquireWriteLock())
  {
    WriteGroupingsTo(blob, groupings);
  }
}

public IEnumerable<SlugMapGrouping> ReadMappingsOrEmpty()
{
  Blob blob = _blobFactory.GetBlob(BlobId);
  IEnumerable<SlugMapGrouping> mappings;
  try
  {
    using (_blobLock.AcquireReadLock())
    {
      mappings = ReadMappingFrom(blob);
    }
  }
  catch (IOException)
  {
    // The blob file doesn't exist yet, i.e. no mappings have been written.
    mappings = new List<SlugMapGrouping>();
  }
  return mappings;
}

Normally it’s not a good idea to rely on exceptions when they can be avoided. In this case, however, the IOException is only thrown when the blob file doesn’t exist (meaning there are no mappings yet), so it won’t happen very often.

The actual reading and writing of JSON data is performed using the Newtonsoft Json.NET serializer, as seen below.

private static IEnumerable<SlugMapGrouping> ReadMappingFrom(Blob blob)
{
  using (Stream stream = blob.OpenRead())
  using (StreamReader streamReader = new StreamReader(stream))
  {
    var serializer = new JsonSerializer();
    var type = typeof(IEnumerable<SlugMapGrouping>);
    return (IEnumerable<SlugMapGrouping>) serializer.Deserialize(streamReader, type);
  }
}

private static void WriteGroupingsTo(Blob blob, IEnumerable<SlugMapGrouping> groupings)
{
  using (Stream stream = blob.OpenWrite())
  using (StreamWriter streamWriter = new StreamWriter(stream))
  {
    var serializer = new JsonSerializer();
    serializer.Serialize(streamWriter, groupings);
  }
}

If the storage directory or file does not exist, they will be created automatically.
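As a hypothetical usage example, for instance from a scheduled job that imports the third party pages, the repository could be used like this (ServiceLocator is only used here for brevity; constructor injection works just as well):

// Hypothetical usage, e.g. from a scheduled job importing the third party pages.
// Requires using EPiServer.ServiceLocation;
var repository = ServiceLocator.Current.GetInstance<ISlugRepository>();

// groupings is assumed to have been built from the imported third party data.
repository.WriteMappings(groupings);

// Later, e.g. when routing an incoming request, read the mappings back.
IEnumerable<SlugMapGrouping> mappings = repository.ReadMappingsOrEmpty();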

Note that if you cache the deserialized content (and you probably should), don’t forget to propagate the cache clearing to all involved servers when needed. One way of doing that is sketched below.
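A minimal sketch of such a cache, assuming EPiServer’s ISynchronizedObjectInstanceCache from EPiServer.Framework.Cache (its Remove call is propagated to the other servers). The cache key, class name and eviction policy are assumptions and not part of our actual solution:

// Hypothetical caching wrapper around the repository. Assumes EPiServer's
// ISynchronizedObjectInstanceCache, whose Remove is propagated to the other servers.
// Requires using System.Linq; and using EPiServer.Framework.Cache;
public class CachedSlugRepository
{
  private const string CacheKey = "SlugMappings"; // assumed cache key

  private readonly ISlugRepository _inner;
  private readonly ISynchronizedObjectInstanceCache _cache;

  public CachedSlugRepository(ISlugRepository inner, ISynchronizedObjectInstanceCache cache)
  {
    _inner = inner;
    _cache = cache;
  }

  public IEnumerable<SlugMapGrouping> GetMappings()
  {
    var cached = _cache.Get(CacheKey) as IEnumerable<SlugMapGrouping>;
    if (cached != null)
    {
      return cached;
    }

    // Materialize the mappings before caching them.
    List<SlugMapGrouping> mappings = _inner.ReadMappingsOrEmpty().ToList();
    // The eviction policy to use (and its constructor overloads) depends on your EPiServer version.
    _cache.Insert(CacheKey, mappings, CacheEvictionPolicy.Empty);
    return mappings;
  }

  public void UpdateMappings(IEnumerable<SlugMapGrouping> groupings)
  {
    _inner.WriteMappings(groupings);
    // Remove is broadcast to the other servers, so they re-read the blob on the next request.
    _cache.Remove(CacheKey);
  }
}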