
Pipeline resolvers

AppSync resolvers come in two varieties: unit and pipeline. Unit resolvers are the "normal" resolvers: you define a data source, write a request and a response mapping template, and finally attach the resolver to a field. When AppSync resolves the field, it transforms the request using the template, calls the data source, then transforms the response.

The main drawback of unit resolvers is that they allow only a single call to a data source. How do you implement a resolver that needs to call the database multiple times? Or one that needs to read from one system and write data to another? For example, if users are managed by Cognito but each of them also needs to be inserted into a database, you'd need two data sources: an HTTP data source that interacts with the Cognito API, and a second one to put an item into DynamoDB.

With only unit resolvers, this is possible only with a Lambda function that the resolver calls and that performs both operations. We've covered this in the Direct Lambda resolver chapter.

A Lambda resolver can call other services

Pipeline resolvers provide a way to solve this without opting out of the AppSync resolver architecture. In practice, they extend a unit resolver into a pipeline of steps, so you only need to resort to Lambda functions when the resolver becomes too complicated. Note though that there is a hard limit of 10 functions per pipeline.

A pipeline is made up of individual functions, each behaving like a unit resolver with a data source, a request, and a response mapping template. Each function receives the result of the previous one, allowing the pipeline to build the response in steps. Finally, the whole pipeline also has its own request and response mapping templates, acting as transformers before the first and after the last function.
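As a sketch (assuming the common setup where the pipeline itself only forwards data), the pipeline's own templates are often minimal:

```
## Pipeline's before mapping template: nothing to prepare
{}

## Pipeline's after mapping template: return the last function's result
$util.toJson($ctx.result)
```

All the real work then happens in the functions between these two templates.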

As each function is a full-blown resolver, they can interact with different data sources. The first might make an HTTP request, while the second executes a DynamoDB transaction. In practice, pipeline resolvers are powerful: especially as the HTTP resolver can call AWS APIs, AppSync can handle a surprising amount of processes.

Pipeline resolver

On the console, functions have a dedicated menu where you can manage them:

Resolver functions have a separate menu entry

Note that each function has a data source configured.

Then for the resolver, add a Pipeline for the field:

A pipeline resolver is configured for a field

And in the resolver, specify the functions to run:

A pipeline resolver with two functions

In this chapter, we'll see the programming constructs available for pipeline resolvers and the best practices when working with them.

Previous result

Since each function returns a value and in a pipeline they run sequentially, each function has access to the previous one's result. This value is available in $ctx.prev.result and is overwritten each time a function finishes.

This value is the primary link between the functions in the pipeline: each interfaces with a data source, each gets the result of the previous function, and the last value becomes the result of the pipeline itself.

$ctx.prev.result is the result of the previous function

For example, the first function might fetch an SSM parameter:

{
  "version": "2018-05-29",
  "method": "POST",
  "params": {
    "headers": {
      "Content-Type": "application/x-amz-json-1.1",
      "X-Amz-Target": "AmazonSSM.GetParameter"
    },
    "body": {
      "Name": "${}",
      "WithDecryption": true
    }
  },
  "resourcePath": "/"
}

Then its response mapping template returns the value stored there:

#if ($ctx.error)
  $util.error($ctx.error.message, $ctx.error.type)
#end
#if ($ctx.result.statusCode < 200 || $ctx.result.statusCode >= 300)
  $util.error($ctx.result.body, "$ctx.result.statusCode")
#end
## Extract the parameter value from the SSM response
$util.parseJson($ctx.result.body).Parameter.Value

Then the next function can access this value via $ctx.prev.result:

{
  "version": "2018-05-29",
  "method": "POST",
  "params": {
    "headers": {
      "Content-Type": "application/json"
    },
    "body": {
      "chat_id": $util.toJson($ctx.args.chat_id),
      "text": $util.toJson($ctx.args.message)
    }
  },
  "resourcePath": "/bot$ctx.prev.result/sendMessage"
}

Here, the resourcePath includes the token returned by the previous function.

Results flow through the pipeline

What each function returns and how the next one uses that data is up to you. Sometimes it's just a scalar value, such as a string or a number, but it's also possible to pass more complex data through the pipeline. How a pipeline is structured to produce the expected result is a programming choice.
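For example (a hypothetical sketch; the field names userId and email are assumptions), a function's response mapping template can emit an object, and the next function's request template can read individual fields of it:

```
## First function's response mapping template: pass an object along
{
  "userId": $util.toJson($ctx.result.id),
  "email": $util.toJson($ctx.result.email)
}

## Next function's request mapping template: read one field of the object
{
  "version": "2018-05-29",
  "operation": "GetItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($ctx.prev.result.userId)
  }
}
```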

Early return

The #return directive terminates the function, resolving it with the given value. This provides an easy way to skip a step and forcibly return a value. While it can be used in the response mapping template too, it is usually placed in the request mapping template to skip the data source call.

It optionally accepts a value:

  • #return means the result is null
  • #return(value) returns with the value provided
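For example (a hypothetical sketch; the dryRun argument is an assumption), a request mapping template can short-circuit with a fixed value:

```
#if($ctx.args.dryRun)
  ## Skip the data source and resolve this function with a constant
  #return({"status": "skipped"})
#end
```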

A common pattern is to combine it with a conditional to implement skipping the function:

#if($ctx.outErrors.size() > 0)
  ## the rest of the request template runs and the data source is called
#else
  #return($ctx.prev.result)
#end

In this example, if the condition is true then the function runs; if it's false, it short-circuits and returns the previous result.


The stash

While $ctx.prev.result is the primary way to pass data through the pipeline, AppSync offers a more flexible alternative: the stash. This is an object accessible in all templates of all functions in a resolver for a single run, so if one function puts something there, later functions can read that value.

The stash starts empty on each resolver run, so it cannot pass data between queries. Also, as functions run in sequence, a value added at some point is only accessible in later steps.

The stash is available for the full pipeline

To put a value to the stash, use:

$util.qr($ctx.stash.put("test", "data"))

This works because the stash object is a Java Map, which has a put method. The $util.qr then silences the return value.

To read a value:

$ctx.stash.test
The value can be anything: strings, numbers, objects, lists. A resolver function can also overwrite existing values. In short, the stash is a traditional mutable key-value store.

A common use case for the stash is to generate a random ID in the first step and make it available throughout the resolver execution.
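Sketching that use case (the "id" key and the DynamoDB operation are assumptions for illustration): the pipeline's before template generates an ID and stashes it, and any later function can read the same value:

```
## Before mapping template: generate an ID and store it in the stash
$util.qr($ctx.stash.put("id", $util.autoId()))
{}

## A later function's request template reads the same ID
{
  "version": "2018-05-29",
  "operation": "PutItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($ctx.stash.id)
  },
  "attributeValues": $util.dynamodb.toMapValuesJson($ctx.args)
}
```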
