
Async Apex Job

TL;DR

A unified Async Apex base class for executing batch record processing asynchronously without having to worry (too much) about limits and how it actually executes. It is intended for simpler jobs that take pressure off trigger handlers rather than for true big-batch tasks. Just keep calling executeAsync and let the class worry about the how (ideally).

Shortcut to the full source

Waffle about Context (a.k.a. Async Apex 101 recap)

One of the recommended best practices in Apex is to defer any logic that is not strictly required to complete instantaneously to some sort of asynchronous context. This helps keep the transaction lean and quick, and can also make debugging easier by keeping logs for different pieces of business logic separate.

Obviously this is an oversimplification; one has to carefully consider the context. Sometimes things have to finish in the same transaction, or their errors have to be fatal to it. Then there is the other extreme, where dozens of different log files appear when we open a single page. Finding the right one there can also be quite the challenge.

Primarily I’m looking at the Trigger context. Most Salesforce orgs I have seen over the years tend to rely heavily (if not exclusively) on record operations to execute any business logic and automations. There is usually an awful lot happening in the trigger cycle of the orgs’ most utilised objects. 

The pattern is fairly repetitive: 

  • Go through the trigger records evaluating some criteria and building a list of relevant records or Ids
  • Do some thing(s) with the identified records
  • Save the records

The second step in particular can be quite complex. We may have to query a number of related records, build maps and sets, compare and transform... all sorts. The load grows, the save operation slows down, until eventually it blows up. Then comes the time to spend a sprint or two on refactoring. If we want to improve the situation but don't want to completely rebuild everything, pushing some parts off to asynchronous execution contexts is one of the options. The pattern for handling such situations asynchronously is also fairly similar each time: keep the first step in the trigger and move the other two to some async context.
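
As a rough illustration of the synchronous starting point (the class, object and method names here are made up), the shape is usually something like this:

// Hypothetical trigger handler showing the three-step pattern described above
public with sharing class OrderTriggerHandler {

    public void afterUpdate(List<Order> newRecords, Map<Id, Order> oldMap) {
        // 1. Evaluate criteria and collect the relevant records
        List<Order> relevant = new List<Order>();
        for (Order o : newRecords) {
            if (o.Status != oldMap.get(o.Id).Status) {
                relevant.add(o);
            }
        }
        if (relevant.isEmpty()) {
            return;
        }
        // 2. Do the (potentially expensive) work: queries, maps, transformations
        List<OrderItem> itemsToUpdate = recalculateTotals(relevant);
        // 3. Save the results - still inside the trigger transaction
        update itemsToUpdate;
    }

    private List<OrderItem> recalculateTotals(List<Order> orders) {
        // queries and processing omitted for brevity
        return new List<OrderItem>();
    }
}

Steps two and three are the candidates for pushing into an async context; step one stays in the trigger.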

Async Apex Options

We have a number of options in Apex: Future, Queueable, Batch and Schedulable.

Future is great at handling simple tasks, but it only allows primitive parameters and is also not great when it comes to processing batches of things. The processing may be quite complex at times, and we may not be able to handle all the records in one go before the Governor shuts us down. Schedulable has the second issue as well. That is why we would usually use Queueable or Batch.

Both of the above options are fairly simple to implement. We choose between them somehow, we implement the interface and launch our job when appropriate. Done. 

But we have to choose first. Or alternatively we can implement both and decide at the call site which option to go with. Either one has very little boilerplate code around it, but if we want to support both it adds up. We should check limits before launching too. So the complexity grows quickly, and the trigger handler, which previously spoke fairly business-like language, becomes quite cluttered.

I wanted to create something where I could really just provide “the way” to handle a batch of records and nothing else at all. I came up with an AsyncJob class that does just that. It handles the following for me:

  • Choosing between Batch and Queueable based on limits or the total number of records
  • Re-queuing the job if going via the Queueable route
  • Logging (to some extent)

Implementing Async Jobs

The one method required of a “participating” class is handleBatch(). This is where the job’s actual work happens (the equivalent of Batchable’s execute() method, except it serves the Queueable route too). A separate interface seemed like a good idea, so here it is:

public interface IAsyncHandler {

    void handleBatch(List<SObject> records);

    /**
     * Must override the standard Object toString() to return the full
     * name of the class including any outer class, e.g. AsyncJobTest.TestHandler
     *
     * @return the full name of the class implementing the interface
     */
    String toString();
}

Ok so not strictly only one. I’m forcing the implementing class to override toString() as well to return the fully qualified name of the class. It’s one of the things I ran into as I went. I need to be able to construct an instance of the class from the type name (more on that later) and the known approach* to get it from the instance doesn’t play nice with inner classes.

* String.valueOf(myInstanceVariable).split(':')[0]
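
A minimal implementation might look like the sketch below (the handler class and what it does to the records are invented); the inner class is exactly the situation where the explicit override matters:

// Hypothetical inner-class handler; the class name and field update are illustrative only
public class AsyncJobTest {

    public class TestHandler implements IAsyncHandler {

        public void handleBatch(List<SObject> records) {
            for (SObject record : records) {
                record.put('Description', 'Processed asynchronously');
            }
            update records;
        }

        // The String.valueOf(...).split(':')[0] trick would only yield 'TestHandler'
        // here, so the fully qualified name is returned explicitly
        public override String toString() {
            return 'AsyncJobTest.TestHandler';
        }
    }
}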

Inheritance over Composition

Yes, this does not sound right, I know. But here it actually makes a lot of sense. I did start the other way round with AsyncJob doing the work using a provided “handler instance” implementing the above Interface. It took exactly 1 day in Production to notice that the Apex Jobs menu in Setup then becomes completely unreadable. Just an endless list of “AsyncJob” entries.

[Image: Setup > Apex Jobs list showing an endless column of “AsyncJob” entries]

So in this case it is actually a lot better to have the implementing classes extend AsyncJob instead. They then become the job class listed in Setup, which admins can make sense of much more easily. So no interface after all, but rather an abstract method.
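
A rough sketch of what an implementing class looks like with the abstract-class design (the class name and logic are made up):

// Hypothetical subclass; Setup > Apex Jobs now shows 'AccountRescoringJob'
// instead of a generic 'AsyncJob' entry
public class AccountRescoringJob extends AsyncJob {

    public override void handleBatch(List<SObject> records) {
        for (SObject record : records) {
            // scoring logic omitted; Rating is just a stand-in field
            record.put('Rating', 'Hot');
        }
        update records;
    }

    public override String toString() {
        return 'AccountRescoringJob';
    }
}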

Running Async Jobs

This is what needs to be called in order to “schedule” an async job. I need the records to act on and the type name of the handler. I started with an option of passing in an instantiated handler, which allowed it to be as complex to initialise as I needed it to be. In an ideal scenario though, the “handler” can be instantiated via an empty constructor. But again, more on that later.

public static Id executeAsync(List<SObject> records, String className)
public static Id executeAsync(List<SObject> records, String className, Integer batchSize, Strategy asyncStrategy)

There is an overloaded version of this method which gives us a little bit more flexibility and hints at one of the core features of the AsyncJob utility. We can control the Batch Size and influence the type of Async Job.
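
From a trigger handler the call site then stays nice and short (the object name, batch size and the PREFER_BATCH value here are illustrative):

// Let AsyncJob pick Queueable or Batch based on limits and record count
AsyncJob.executeAsync(relevantContacts, 'ContactRescoringJob');

// Or take control: batch size of 50 and a preference for the Batch route
AsyncJob.executeAsync(relevantContacts, 'ContactRescoringJob', 50, AsyncJob.Strategy.PREFER_BATCH);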

Queue or Batch

Queueable jobs are fast and simple. I believe they are the best choice for processing a relatively small number of records. It’s not normally an issue, but if there are many large batch jobs to be run, we want to keep the limits on concurrent batch jobs and the Flex Queue open. Also, each Batch job comes with two extra async executions (start and finish) that can add up in an extremely busy org and seem like overkill if we only have one or two batches to get through. Then again, with a large number of batches it can be a lot faster, and definitely more readable in your Apex Jobs list, to run a Batch instead.

So the default setting I went with is what I call “Prefer Queueable”. This means that as long as Transaction Limits permit we run the jobs as Queueable. But if the number of batches is higher than say 5 we run a Batch job instead. I can choose to “Prefer Batch” and essentially reverse the selection, or I can force Batch or Queueable and nothing else. If that can’t be done because of the transaction context we get an Exception.
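
The strategies map to an enum along these lines; this is only a sketch listing the values referenced in this post, so the names in the full source may differ slightly:

public enum Strategy {
    PREFER_QUEUEABLE,   // default: Queueable while limits allow, Batch for bigger jobs
    PREFER_BATCH,       // reverse the preference
    QUEUEABLE_ONLY,     // fail if a Queueable cannot be enqueued
    BATCH_ONLY,         // fail if a Batch cannot be started
    NO_QUEUEABLE,       // anything except Queueable
    NO_BATCH,           // anything except Batch
    NO_EVENT            // never fall back to the Platform Event
}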

private static Id executeAsync(AsyncJob job, Strategy asyncStrategy) {
    if (job.records == null || job.records.isEmpty()) {
        System.debug(LoggingLevel.DEBUG, 'No records provided. Not running async job!');
        return null;     
    }
 
    // divide as a Decimal first, otherwise integer division truncates before rounding up
    Long noOfBatches = (Decimal.valueOf(job.records.size()) / job.batchSize).round(System.RoundingMode.UP);
    if (
        asyncStrategy == Strategy.QUEUEABLE_ONLY ||
        asyncStrategy == Strategy.NO_BATCH ||
        (noOfBatches < BATCHABLE_PREFERRED_SIZE &&
        asyncStrategy != Strategy.BATCH_ONLY &&
        asyncStrategy != Strategy.NO_QUEUEABLE)
    ) {
        return runPreferQueueable(job, asyncStrategy);
    }     
    return runPreferBatch(job, asyncStrategy);
}

Queueable Batches

When the Queueable job is supposed to handle more records than are allowed in a single batch, it has to re-queue itself. AsyncJob handles this using the same instance of the handler, so as long as that is initialised with all the records in mind, it has no problem handling the next batch in a completely new job and transaction.

private void handleBatchQueueable() {
    List<SObject> currentBatch = new List<SObject>();
    while (currentBatch.size() < this.batchSize && !this.recordsToProcess.isEmpty()) {
        currentBatch.add(this.recordsToProcess.remove(0));
    }
    this.handleBatch(currentBatch);
    if (!this.recordsToProcess.isEmpty()) {
        reQueueNotYetProcessedRecords(this.recordsToProcess);     
    }
}
 
private void reQueueNotYetProcessedRecords(List<SObject> recordsToStillProcess) {
    if (Test.isRunningTest()) {
        System.debug(LoggingLevel.DEBUG, 'Re-queuing unprocessed records/batches blocked in Unit Tests because of limits!');
        return;
    }
    System.enqueueJob(this.setRecords(recordsToStillProcess));
}
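
The setRecords() call used when re-queuing is assumed to be nothing more than a chaining setter along these lines (a sketch, not the exact source):

// Assumed shape: swap the remaining records onto the same instance and
// return it so it can be passed straight to System.enqueueJob()
private AsyncJob setRecords(List<SObject> remainingRecords) {
    this.recordsToProcess = remainingRecords;
    return this;
}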

Governor Limits

In order for this to be truly useful, I have to not worry about Governor Limits. That’s why the AsyncJob contains methods like isQueueableAvailable() and isBatchAvailable().

For the former it’s just using the Limits class to see how many Queueable jobs are still available.

private static Boolean isQueueableAvailable() {
    Integer queueableJobsLimit = Limits.getLimitQueueableJobs();
    Integer queueableJobsUsed = Limits.getQueueableJobs();
    return queueableJobsLimit > queueableJobsUsed;
}

The latter is not so easy, and for now the class is just being a bit silly and checking it’s not running a batch already. I didn’t find an efficient way to keep checking the Flex Queue, or any way to find out I’m inside a Batch finish() method (which is the only place within a batch context where it’s possible to execute another one). Lots to improve here.

The isNotQueueableInTest check is another hurdle I fell over during testing. For some reason, running Database.executeBatch() from a Queueable inside a unit test is a no-go. Not yet sure why.

private static Boolean isBatchAvailable() {
    Boolean isNotExecutingBatchOrFuture = !System.isBatch() && !System.isFuture();     
    Boolean isNotQueueableInTest = !System.isQueueable() || !Test.isRunningTest();
    return isNotExecutingBatchOrFuture && isNotQueueableInTest;
}

Ultimate Backup – Event

If we can’t run a Queueable or a Batch, the AsyncJob falls back to a Platform Event. Those should always be available. The Event serialises the records (so yes, there is a limit to this) and a trigger on its insert retries the job via the Queueable-first route.

EventBus.publish(
    new AsyncJob__e(
        Payload__c = JSON.serialize(job.records),
        HandlerTypeName__c = job.handlerClassName,
        BatchSize__c = job.batchSize
    )
);
trigger AsyncJob on AsyncJob__e(after insert) {
    for (AsyncJob__e event : Trigger.new) {
        String handlerClassName = event.HandlerTypeName__c;
        // the event stores the batch size as a Number, so convert it back to an Integer
        Integer batchSize = event.BatchSize__c == null ? null : event.BatchSize__c.intValue();
        List<SObject> records = (List<SObject>) JSON.deserialize(event.Payload__c, List<SObject>.class);
        if (records == null || records.isEmpty()) {
            continue;
        }
        AsyncJob.executeAsync(records, handlerClassName, batchSize, AsyncJob.Strategy.PREFER_QUEUEABLE);
    }
}

In order for this to work, the implementing class must provide a public empty constructor (as hinted at earlier) so that it can be initialised from the Type name. If the handler requires complex setup, I have to use the “No Event” async strategy. And of course there needs to be a suitable overloaded version of the executeAsync() method that takes an already initialised AsyncJob as an argument instead of its name, to work with more complex constructors.
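
The construction from the type name is assumed to boil down to Type.forName() plus newInstance(), which is exactly why the empty constructor is needed (sketch only, the helper name is invented):

// Sketch: resolve the class by its fully qualified name and construct it.
// This fails at runtime if there is no public zero-argument constructor.
private static AsyncJob newHandlerInstance(String className) {
    Type handlerType = Type.forName(className);
    if (handlerType == null) {
        throw new IllegalArgumentException('Unknown AsyncJob class: ' + className);
    }
    return (AsyncJob) handlerType.newInstance();
}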

OnFinish

Recently I also added an onFinish() method to trigger any subsequent processing or notifications. I can’t collect data during the processing quite so easily though, which brings me nicely to the final section.

/**
 * Executed from Batchable finish method or after last handleBatch in Queueable. Sub-classes
 * should maintain their own supporting data structures.
 *
 * !! Does not support Database.Stateful. If member variable persistence across "batches"
 * is required use Queueable. Ideally NO_BATCH strategy which will fall back to Event when
 * initialised from context without Queueable jobs left - from there Queueable is available.
 */
public virtual void onFinish() {
}
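
A subclass override can be as small as this (the follow-up action is invented):

// Hypothetical override in a subclass: log completion and kick off follow-up work
public override void onFinish() {
    System.debug(LoggingLevel.INFO, 'All batches processed, scheduling follow-up.');
    // e.g. publish a Platform Event, send a notification or enqueue the next job
}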

Limitations and Lessons

A big limitation is the inability to monitor state. Queueables do this fairly easily, but one has to be careful when using the Platform Event route as the state can be lost when serialising. Batchable provides the Database.Stateful interface, but the major issue I found is that a single class cannot implement it at the same time as also implementing Queueable. My AsyncJob class now has a NoBatch strategy for this purpose, but that’s far from ideal.

AsyncJob can currently handle only lists of SObjects, and those had better have all the fields you need already queried. You can of course re-query to be safe, but that seems unnecessary as long as you have control over launching the job and/or use the Domain Layer consistently. Another option is to work on a List or Set of Ids to define a batch. That way you can force all the data collection to be done in the job itself, making it safer; it does make the Batch route more awkward though (the source is not always a query).
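
A sketch of that Id-based variant (the object and fields are stand-ins): the records passed in carry nothing but an Id, and handleBatch() re-queries everything it needs.

// Hypothetical Id-based handler: data collection happens inside the job itself
public override void handleBatch(List<SObject> records) {
    Set<Id> ids = new Map<Id, SObject>(records).keySet();
    List<Contact> contacts = [
        SELECT Id, Email, Account.Name
        FROM Contact
        WHERE Id IN :ids
    ];
    // processing omitted for brevity
    update contacts;
}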

If you go a little overboard with this approach there could potentially be many async jobs competing to update the same records. So care needs to be taken here to not run into record locking issues. 

There may be situations where we have two potential candidate tasks for AsyncJob, but one depends on the other being completed. I previously implemented a solution for that in the predecessor to AsyncJob, which allowed different “handlers” to be registered with the job within a transaction in a given order, each with their own list of records to deal with. Once the first one finished all of its batches, it launched the next in line. It worked great with Queueables based on a Set of Ids, but there will be a couple of extra challenges here, so I will tackle that another time.

Next Steps

Well done if you got this far. Thanks for reading, and if you have any questions or improvement suggestions, let me know. Here’s a link to my AsyncJob class on GitHub. I intend to follow up with progress on addressing the limitations I mentioned, so if you’re interested make sure to check back again in some time.
