
Query Caching with Selectors

TL; DR;

Building a flexible SOQL query result cache usable with Enterprise Patterns’ Selectors. Skip to code here.

Problem

We had an existing static cache implementation that I wasn’t very happy with. It was a fairly simple approach, but very repetitive. Imagine a class full of static Maps (one for each cached object) with methods to add and remove items. Some maps were related to others, so those methods got complicated (clearing the cache of child records when a parent was changed).

I wanted to slim the class down and generalise it somewhat. I still need to be able to clear the cache from the outside and keep the option of using non-cached versions, but ideally the caller should not necessarily need to know which queries are cached and which are not.

Since we are also gradually adopting the Enterprise Patterns approach, I wanted to explore hiding this caching behind Selectors. Naturally, I looked for existing solutions first, but without much success. The only thing I found was this Cross Cutting Concerns repository, but to be honest it was a bit overwhelming. I decided to build it up myself; it would be a good exercise anyway.

Building the Cache

If the approach is general enough, I will be able to easily add different implementations later, for instance replacing static variables with Platform Cache. I will need that very soon, as there are some frequent queries against data that doesn’t change much (think Products and prices).

Obvious Parts of a Cache

The first thing is getting records out of the cache, and the interface for that is quite obvious. Using Object types lets me keep the same interface whether I need to return Lists or Maps, while still being able to hold a collection of related caches that may need to be cleared at the same time.

public interface IRecordCache {
    Object getRecords(Set<Object> keys);
    void clear();
    void remove(Set<Object> keys);
}

There also has to be a generalised way to fetch records that are not currently cached; otherwise every Selector that wants to use the cache would have to know exactly how the data is stored in it and access it that way. So the cache needs a way to know what query to run, or rather, a way to ask for more records without knowing where they come from.

public interface IRecordCacheProvider {
    Object fetchRecords(Set<Object> keys);
}

Each cache implementation will simply get an instance of a Cache Data Provider, so a generic implementation can be created without dependencies on any other code. I looked at Platform Cache’s CacheBuilder for inspiration here, but it seems to handle just one record at a time.
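For comparison, a CacheBuilder implementation loads exactly one value per key through its doLoad(String) method, roughly like the sketch below (illustrative only, and it also needs a Platform Cache partition set up):

// Illustrative only: Platform Cache's CacheBuilder loads one value per key via doLoad(String)
public class OpportunityCacheLoader implements Cache.CacheBuilder {
    public Object doLoad(String opportunityId) {
        // one query per individual key - there is no way to hand it a whole Set of keys
        return [SELECT Id, Name, Amount FROM Opportunity WHERE Id = :opportunityId];
    }
}

// Usage: List<Opportunity> opps = (List<Opportunity>) Cache.Org.get(OpportunityCacheLoader.class, opportunityId);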

A Few Extras from Experimenting

I will demonstrate the reasons shortly, but my interfaces evolved slightly. Being completely generic about key and return types would be great, but it’s very hard in Apex. I also found that I need to work explicitly with maps in order to cache multiple results under the same key. And there needs to be a unique key method to identify the provider in the static “cache store” too.

public interface IRecordCache {
    List<SObject> getRecords(Set<String> keys);
    Map<String, List<SObject>> getRecordsMap(Set<String> keys);
    void clear();
    void remove(Set<String> keys);
    Set<String> getKeys();
}
 
public interface IRecordCacheProvider {
    List<SObject> fetchRecords(Set<String> keys);
    Map<String, List<SObject>> fetchRecordsMap(Set<String> keys);
    SObjectField getKeyField();
    String getUniqueCacheIdentifier();
}

How Does It Work

This time I want to focus only on static variable caching, but the code I’ll show here is already prepared to work with other implementations too. They will all have quite a bit in common, which is why there is an abstract “base” implementation that handles the decision between using the cache and fetching data through a registered provider. It also serves as a factory and holder of all cache instances, which is useful when resetting everything (in unit tests).

public abstract inherited sharing class RecordCache implements IRecordCache {
    public interface IConstructable {
        IRecordCache construct(IRecordCacheProvider provider);
    }
 
    private static Map<String, IRecordCache> cacheMap = new Map<String, IRecordCache>();
 
    protected IRecordCacheProvider provider;
    protected SObjectField keyField;
 
    protected RecordCache(IRecordCacheProvider provider) {
        this.provider = provider;
        this.keyField = provider.getKeyField();
    }
 
    public static IRecordCache getRecordCache(System.Type cacheType, IRecordCacheProvider provider) {
        String uniqueCachekey = cacheType.toString() + provider.getUniqueCacheIdentifier();
        if (!cacheMap.containsKey(uniqueCachekey)) {
            cacheMap.put(uniqueCachekey, newInstance(cacheType, provider));
        }
        return cacheMap.get(uniqueCachekey);
    }
 
    private static IRecordCache newInstance(System.Type cacheType, IRecordCacheProvider provider) {
        String constructorTypeName = String.valueOf(cacheType) + '.Constructor';
        Object constructorObject = Type.forName(constructorTypeName).newInstance();
        return ((RecordCache.IConstructable) constructorObject).construct(provider);
    }
 
    public static void clearAll() {
        cacheMap = new Map<String, IRecordCache>();
    }
 
    /** 
    * Specifics to be filled in by different cache implementations
    */
    public abstract void clear();
    public abstract void remove(Set<String> keys);
    public abstract Set<String> getKeys();
 
    public List<SObject> getRecords(Set<String> keys) {
        List<SObject> matchingResults = new List<SObject>();
        for (List<SObject> cachedResult : getRecordsMap(keys).values()) {
            matchingResults.addAll(cachedResult);
        }
        return matchingResults;
    }
 
    public Map<String, List<SObject>> getRecordsMap(Set<String> keys) {
        Map<String, List<SObject>> cachedResults = getFromCache(keys);
        Set<String> keysNotInCache = new Set<String>(keys);
        keysNotInCache.removeAll(cachedResults.keySet());
        if (!keysNotInCache.isEmpty()) {
            Map<String, List<SObject>> queriedRecords = ListUtil.mapToListByString(
                (List<SObject>) this.provider.fetchRecords(keysNotInCache),
                this.keyField
            );
            keysNotInCache.removeAll(queriedRecords.keySet());
            queriedRecords.putAll(getEmptyListsForKeysWithoutResults(keysNotInCache));
            cacheFetched(queriedRecords);
            cachedResults.putAll(queriedRecords);
        }
        return cachedResults;
    }
 
    /** 
    * Specifics to be filled in by different cache implementations
    */
    protected abstract Map<String, List<SObject>> getFromCache(Set<String> keys);
    protected abstract void cacheFetched(Map<String, List<SObject>> fetchedRecords);
 
    private Map<String, List<SObject>> getEmptyListsForKeysWithoutResults(Set<String> keys) {
        Map<String, List<SObject>> emptyResults = new Map<String, List<SObject>>();
        for (String key : keys) {
            emptyResults.put(key, new List<SObject>());
        }
        return emptyResults;
    }
 
}

Notice the bit about putting in empty lists for keys that were not found in the database. This is needed because otherwise those keys would not actually end up in the cache, and the next time they are requested the cache would not know them and would have to query again!

keysNotInCache.removeAll(queriedRecords.keySet());
queriedRecords.putAll(getEmptyListsForKeysWithoutResults(keysNotInCache));
cacheFetched(queriedRecords);
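By the way, ListUtil is a small internal helper class that I’m not including in full here. Based on how it is used above, mapToListByString just groups records into lists keyed by the given field, along these lines (a simplified sketch, not the real class):

// Simplified sketch of the ListUtil helper used above (mapToListById is the Id-keyed twin)
public inherited sharing class ListUtil {
    public static Map<String, List<SObject>> mapToListByString(List<SObject> records, SObjectField keyField) {
        Map<String, List<SObject>> recordsByKey = new Map<String, List<SObject>>();
        for (SObject record : records) {
            String key = String.valueOf(record.get(keyField));
            if (!recordsByKey.containsKey(key)) {
                recordsByKey.put(key, new List<SObject>());
            }
            recordsByKey.get(key).add(record);
        }
        return recordsByKey;
    }
}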

Version 1 – Static Variables

The static variable version of the Record Cache is now fairly simple. The only extra bit is the IConstructable builder that helps create an instance of it dynamically.

public inherited sharing class RecordCacheStaticVariable extends RecordCache {
    private Map<String, List<SObject>> cache;
 
    private RecordCacheStaticVariable(IRecordCacheProvider provider) {
        super(provider);
        this.cache = new Map<String, List<SObject>>();
    }
 
    public override void clear() {
        this.cache.clear();
    }
 
    public override void remove(Set<String> keys) {
        for (String key : keys) {
            this.cache.remove(key);
        }
    }
 
    public override Set<String> getKeys() {
        return this.cache.keySet();
    }
 
    protected override Map<String, List<SObject>> getFromCache(Set<String> keys) {
        Map<String, List<SObject>> matchingResults = new Map<String, List<SObject>>();
        for (String requiredKey : keys) {
            List<SObject> matchingItems = (List<SObject>) this.cache.get(requiredKey);
            if (matchingItems != null) {
                matchingResults.put(requiredKey, matchingItems);
            }
        }
        return matchingResults;
    }
 
    protected override void cacheFetched(Map<String, List<SObject>> fetchedRecords) {
        // every fetched key already has a (possibly empty) list, so it can be stored directly
        this.cache.putAll(fetchedRecords);
    }
 
    public class Constructor implements RecordCache.IConstructable {
        public IRecordCache construct(IRecordCacheProvider provider) {
            return new RecordCacheStaticVariable(provider);
        }
    }
}
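This also makes the empty-list behaviour easy to verify. Here is a rough sketch of a test (the CountingProvider fake is made up purely for illustration, and it assumes ListUtil.mapToListByString returns an empty map for an empty input list):

@IsTest
private class RecordCacheStaticVariableTest {
    // made-up fake provider that counts how often the cache falls back to it
    private class CountingProvider implements IRecordCacheProvider {
        public Integer fetchCount = 0;
        public List<SObject> fetchRecords(Set<String> keys) {
            fetchCount++;
            return new List<SObject>(); // pretend nothing matches in the database
        }
        public Map<String, List<SObject>> fetchRecordsMap(Set<String> keys) {
            return new Map<String, List<SObject>>();
        }
        public SObjectField getKeyField() {
            return Schema.Opportunity.Id;
        }
        public String getUniqueCacheIdentifier() {
            return 'CountingProvider';
        }
    }
 
    @IsTest
    static void missingKeysAreOnlyFetchedOnce() {
        CountingProvider provider = new CountingProvider();
        IRecordCache cache = RecordCache.getRecordCache(RecordCacheStaticVariable.class, provider);
 
        cache.getRecords(new Set<String>{ 'someKey' });
        cache.getRecords(new Set<String>{ 'someKey' });
 
        // the cached empty list for 'someKey' prevents a second round-trip to the provider
        System.assertEquals(1, provider.fetchCount, 'Missing keys should be fetched only once');
    }
}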

Getting Records from the Database

Each of the provider classes will need to be initialised somehow. It’s a good idea to unify this (we have a common Selector base type anyway) and again save some repeated code. For that I’ve added an abstract provider base class with a factory method.

I have made my life a little bit easier by changing the interfaces to work with String instead of Object. Still, I had to add some helper methods that let the real providers work with Ids instead of Strings. I am planning to add similar conversions for Objects and Numbers too, but I don’t need them right now.

The main thing to note here is that fetchRecordsMap calls fetchRecords instead of also being left unimplemented. This way the extending implementations only have to implement one of them, and it will still work even if the source can only return a list. I always need to work with maps here, as that’s how I store multiple results for the same key in the cache.

public abstract with sharing class RecordCacheProvider implements IRecordCacheProvider {
    protected fflib_SObjectSelector selector;
 
    private void setSelector(fflib_SObjectSelector selector) {
        this.selector = selector;
    }
 
    public static IRecordCacheProvider newInstance(Selector s, Type providerType) {
        RecordCacheProvider provider = (RecordCacheProvider) providerType.newInstance();
        provider.setSelector(s);
        return provider;
    }
 
    public static Map<String, List<SObject>> convertToStringMap(Map<Id, List<SObject>> recordsMap) {
        return (Map<String, List<SObject>>) JSON.deserialize(JSON.serialize(recordsMap), Map<String, List<SObject>>.class);
    }
 
    public virtual List<SObject> fetchRecords(Set<String> keys) {
        return fetchRecords(convertStringKeysToIds(keys));
    }
 
    public virtual List<SObject> fetchRecords(Set<Id> keys) {
        throw new RecordCacheProviderException('Unsupported call: fetchRecords; Not yet implemented!');
    }
 
    public virtual Map<String, List<SObject>> fetchRecordsMap(Set<String> keys) {
        return convertToStringMap(fetchRecordsMap(convertStringKeysToIds(keys)));
    }
 
    public virtual Map<Id, List<SObject>> fetchRecordsMap(Set<Id> keys) {
        return ListUtil.mapToListById(fetchRecords(keys), this.getKeyField());
    }
 
    public virtual String getUniqueCacheIdentifier() {
        String className = String.valueOf(this).split(':')[0];
        return className.left(10) + Math.abs(className.hashCode()); //key must be alphanumeric, hash is sometimes negative and toString adds "-"
    }
 
    public abstract SObjectField getKeyField();
 
    private static Set<Id> convertStringKeysToIds(Set<String> keys) {
        Set<Id> ids = new Set<Id>();
        for (String key : keys) {
            ids.add((Id) key);
        }
        return ids;
    }
}

Cache Meets Selector

So now that I have the actual cache, I need to make sure that using it is easy too. The provider interface needs to be implemented for each query that has to be cached, which doesn’t sound great: we could end up with a large number of classes that each contain just one query. But we are using Selectors, so technically there is already a place where queries live, and I want to explore an inner-class approach to hide these extra classes away.

Should I just add caching to the Selectors where appropriate, or is it better to create alternative versions of the Selectors? I prefer the latter; I find it adheres better to the single-responsibility principle.

Let’s consider this simplified Opportunity Selector. 

public virtual inherited sharing class OpportunitySelector extends Selector {
    public virtual List<Opportunity> selectById(Set<Id> ids) {
        return [SELECT Id, Name, AccountId, Amount, StageName FROM Opportunity WHERE Id IN :ids];
    }
 
    public virtual Map<Id, List<Opportunity>> selectByAccountId(Set<Id> accountIds) {
        return (Map<Id, List<Opportunity>>) ListUtil.mapToListById(
            [SELECT Id, Name, AccountId, Amount, StageName FROM Opportunity WHERE AccountId IN :accountIds],
            Schema.Opportunity.AccountId
        );
    }
}

An extension of it can override whatever needs to be cached, like in the example below. The calling code can happily continue to work with the parent type everywhere, and the decision about which version to use can be left to the framework.

public inherited sharing class OpportunityCachedSelector extends OpportunitySelector {
    public override List<Opportunity> selectById(Set<Id> ids) {
        return ((RecordCache) OpportunityCache.getOpportunityCache(
                RecordCacheStaticVariable.class,
                RecordCacheProvider.newInstance(this, OpportunityByIdProvider.class)
            ))
            .getRecords(ids);
    }
 
    private class OpportunityByIdProvider extends RecordCacheProvider {
        public override List<SObject> fetchRecords(Set<Id> keys) {
            OpportunityCachedSelector selector = (OpportunityCachedSelector) this.selector;
            return selector.selectById(keys);
        }
 
        public override SObjectField getKeyField() {
            return Schema.Opportunity.Id;
        }
    }
}

Well done if you spotted the problem right away; I had to run a test first. Obviously this is an infinite loop: we call selectById, which calls the cache, which calls the provider, which is the same selector’s selectById, which calls the cache, … ::FACEPALM::

The Super Methods

I guess the normal and cached selectors need to be siblings rather than parent and extension. But then they would not really share a type: they are both Selectors, but not the same specific Selector with the same Opportunity-specific methods. In a proper fflib implementation this is not a problem, as there is an IOpportunitySelector interface they could both share. I haven’t adopted that approach though (maybe I should), so instead I did this:

public with sharing class OpportunityCachedSelector extends OpportunitySelector {
    public override List<Opportunity> selectById(Set<Id> ids) {
        return ((RecordCache) OpportunityCache.getOpportunityCache(
                RecordCacheStaticVariable.class,
                RecordCacheProvider.newInstance(this, OpportunityByIdProvider.class)
            ))
            .getRecords(ids);
    }
 
    private List<Opportunity> selectByIdSuper(Set<Id> ids) {
        return super.selectById(ids);
    }
 
    private class OpportunityByIdProvider extends RecordCacheProvider {
        public override List<SObject> fetchRecords(Set<Id> keys) {
            OpportunityCachedSelector selector = (OpportunityCachedSelector) this.selector;
            return selector.selectByIdSuper(keys);
        }
 
        public override SObjectField getKeyField() {
            return Schema.Opportunity.Id;
        }
    }
}
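For completeness, the interface-based sibling approach I mentioned would look roughly like this (IOpportunitySelector is purely illustrative; it does not exist in my code):

// Purely illustrative - this interface is not part of my codebase (yet?)
public interface IOpportunitySelector {
    List<Opportunity> selectById(Set<Id> ids);
    Map<Id, List<Opportunity>> selectByAccountId(Set<Id> accountIds);
}
 
// Both selectors (each in its own class file) would implement it, so callers depend on
// the interface rather than on either concrete class:
// public inherited sharing class OpportunitySelector extends Selector implements IOpportunitySelector { ... }
// public inherited sharing class OpportunityCachedSelector extends Selector implements IOpportunitySelector { ... }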

The super-method workaround kind of goes against the idea of having the cache-using classes add as little code as possible, but it means I don’t have to have the cached Selector obtain an instance of the non-cached one just to hand it to the provider. I feel that could make the Class Factory a bit more complicated, and could even become an issue if Selector instances ever turn into long-lived objects with state of their own. Maybe not a great reason, since that’s not the case at the moment, but this is where I am at right now.
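As for letting the framework decide which version gets used: if Selectors are created through an fflib-style Application factory, swapping the cached Selector in or out becomes a one-line change that calling code never sees. A sketch (the Application class and its mapping are illustrative, and it assumes our Selector base extends fflib_SObjectSelector):

// Illustrative sketch of an fflib-style Application factory deciding which Selector to serve
public class Application {
    public static final fflib_Application.SelectorFactory Selector =
        new fflib_Application.SelectorFactory(
            new Map<SObjectType, Type>{
                // swap in OpportunitySelector.class here to switch caching off
                // without touching any of the calling code
                Opportunity.SObjectType => OpportunityCachedSelector.class
            }
        );
}
 
// Calling code keeps working with the parent type:
// OpportunitySelector opportunities =
//     (OpportunitySelector) Application.Selector.newInstance(Opportunity.SObjectType);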

Contextual Cache Group

One final mystery to clear up. You probably noticed the Selector is getting the Cache from a class called OpportunityCache instead of RecordCache. Just like the latter is meant to hold all caches so they can be cleared together when needed, the former does the same for the Opportunity context.

For example, once I have cached the second method in my Opportunity Selector as well, I can remove affected records from both caches during updates without having to iterate over the different instances related to different records. Something like this:

public with sharing class OpportunityCache {
    private static Map<String, IRecordCache> cacheMap = new Map<String, IRecordCache>();
 
    public static IRecordCache getOpportunityCache(System.Type cacheType, IRecordCacheProvider provider) {
        String uniqueCachekey = cacheType.toString() + provider.getUniqueCacheIdentifier();
        if (!cacheMap.containsKey(uniqueCachekey)) {
            cacheMap.put(uniqueCachekey, RecordCache.getRecordCache(cacheType, provider));
        }
        return cacheMap.get(uniqueCachekey);
    }
 
    public static void clear() {
        for (IRecordCache opportunityCache : cacheMap.values()) {
            opportunityCache.clear();
        }
    }
 
    public static void remove(Set<Id> ids) {
        for (IRecordCache cache : cacheMap.values()) {
            cache.remove(new Set<String>((List<String>) new List<Id>(ids)));
        }
    }
 
    public static void removeBasedOnTriggerContext(Map<Id, Opportunity> triggerRecords) {
        Set<Id> accountIds = ListUtil.getUniqueIds(triggerRecords.values(), Opportunity.AccountId);
        remove(triggerRecords.keySet());
        remove(accountIds);
    }
}
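And this is roughly how the invalidation could be wired up. The bare trigger below is only a sketch; in an Enterprise Patterns setup it would live in the Opportunity trigger handler or Domain class instead:

// Sketch only - in reality this would sit in a trigger handler / Domain class
trigger OpportunityTrigger on Opportunity (after update, after delete) {
    if (Trigger.isUpdate) {
        OpportunityCache.removeBasedOnTriggerContext((Map<Id, Opportunity>) Trigger.newMap);
    } else {
        OpportunityCache.removeBasedOnTriggerContext((Map<Id, Opportunity>) Trigger.oldMap);
    }
}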

Some Final Thoughts

Next time I want to talk about the Platform Cache version of this cache. It’s been quite a lot of fun trying to get it to work (I am still finding surprises). Also, I have to keep thinking about how to efficiently solve a problem this whole approach introduces to the otherwise neat Enterprise Patterns Selectors: query ordering is not respected!

Thanks for reading. Do let me know what you (would have) done differently. If you want to give my version a try, you can find the full source here. And as I said, next time I’ll talk about using the Platform Cache with this.
