A/B testing in Zendesk

Hi, dear reader,

This article is one half of a larger story about how to do A/B testing. The other half of the story is here, and it covers how to add a tag in Zendesk to a new ticket if an attachment is present 🍤

Also, there is an original version of this article published on Zendesk here.

Context

The client wants to introduce an AI tool to automate some of their processes for the Chat and Email channels. For Chat, they want to serve predefined messages; for Email, they want to suggest resolutions based on their past data and documentation: the knowledge base and the macros list.

I don’t need to expand on what a big disruptor AI tools are. Whether we like it or not, they are the future.

As with any big company, you don’t have the flexibility to just plug something in and see how it works. It’s bad practice to introduce a new tool that can potentially disrupt a support system serving ~100K tickets/month and flip it upside down. You want to test first.

How do you test introducing a new tool that can potentially save you hundreds of thousands of dollars per year? Well, you create a control group to test with.

You want a way of differentiating between flows: flows that will be served by the AI tool and flows that will be handled by your support team as usual. To make the test as relevant as possible, you choose a week or a month to run it; in our case, we chose a month. Within the month of June, we want to randomly flag 50% of incoming tickets with the tag “control” and the other 50% with the tag “experiment”. Tickets tagged “experiment” will be served by the AI tool, while tickets tagged “control” will be handled as usual.

We will then run reports on the tickets created in the month of June and see which of the two halves had better results. We measure customer satisfaction, tickets reopened, number of replies, and a few other metrics.
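Once the month is over, you can pull the numbers for each bucket straight from the Zendesk Search API. Here is a minimal sketch in Python: the subdomain, credentials, and dates are placeholders, and it only counts tickets per bucket, with the satisfaction and reopen metrics layered on top in your actual reports.

import requests

# Placeholders: swap in your own Zendesk subdomain and agent credentials.
SUBDOMAIN = "subdomain"
AUTH = ("agent@example.com", "password")

def count_tickets(tag):
    # Dates are placeholders for whichever month you run the test in.
    query = f"type:ticket tags:{tag} created>=2023-06-01 created<2023-07-01"
    resp = requests.get(
        f"https://{SUBDOMAIN}.zendesk.com/api/v2/search/count.json",
        params={"query": query},
        auth=AUTH,
    )
    resp.raise_for_status()
    return resp.json()["count"]

for bucket in ("control", "experiment"):
    print(bucket, count_tickets(bucket))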

Here’s how to do it:

Step 1: Create an HTTP Target

Before you start building anything, you need to create an HTTP Target that points back to your own Zendesk account.

  1. Navigate to Admin > Extensions.

  2. Create a new HTTP Target extension.

  3. Enter https://subdomain.zendesk.com/api/v2/tickets/update_many.json?ids={{ticket.id}} as your URL, replacing subdomain with your own Zendesk subdomain.

  4. Select PUT as the target method.

  5. Select JSON as the content type.

  6. Add your username and password to the Basic Authentication section, then click Save.
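Before wiring the target into a trigger, it can be worth replaying the same request yourself to confirm the URL and credentials work. Here’s a minimal sketch in Python; the subdomain, credentials, and ticket ID are placeholders.

import requests

# Placeholders: use your own subdomain, agent credentials, and a test ticket ID.
SUBDOMAIN = "subdomain"
AUTH = ("agent@example.com", "password")
TICKET_ID = 12345

resp = requests.put(
    f"https://{SUBDOMAIN}.zendesk.com/api/v2/tickets/update_many.json",
    params={"ids": TICKET_ID},
    json={"ticket": {"additional_tags": ["control"]}},
    auth=AUTH,
)
resp.raise_for_status()
print(resp.json())  # bulk updates return a job status you can poll

If the credentials are wrong, the call fails with a 401, which is exactly the failure mode described below.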

This has a drawback, though: if the user whose username and password you're using to authenticate this request leaves the company or otherwise has their Zendesk access deactivated, your target will stop working. If at all possible, dedicate one of your Zendesk seats for these kinds of callback requests to prevent outages. (Or come up with a transition plan for when your admins change.)

Also, note that I’m using the “tickets/update_many” endpoint here rather than the regular tickets endpoint. That’s because we’ll be appending tags to the ticket: the regular endpoint’s tags property overwrites a ticket’s entire tag list, while the additional_tags property, which only the update_many endpoint supports, adds to it.
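To make that difference concrete, here’s a hedged comparison, again with placeholder values; the comments describe the documented behaviour of each property.

import requests

SUBDOMAIN = "subdomain"                   # placeholder
AUTH = ("agent@example.com", "password")  # placeholder
TICKET_ID = 12345                         # placeholder

# Regular ticket endpoint: the "tags" property REPLACES the entire tag list.
requests.put(
    f"https://{SUBDOMAIN}.zendesk.com/api/v2/tickets/{TICKET_ID}.json",
    json={"ticket": {"tags": ["experiment"]}},
    auth=AUTH,
)

# Bulk endpoint: "additional_tags" APPENDS, leaving existing tags intact,
# which matters because tickets may already carry routing tags.
requests.put(
    f"https://{SUBDOMAIN}.zendesk.com/api/v2/tickets/update_many.json",
    params={"ids": TICKET_ID},
    json={"ticket": {"additional_tags": ["experiment"]}},
    auth=AUTH,
)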

Step 2: Create Your A/B Test Trigger

Create a new trigger and add “Ticket > Is > Created” as a condition to the ALL section.

  1. Optionally, if you don’t want all of your tickets to get tossed into the A/B test, add any extra criteria. In our case, we’re only A/B testing tickets submitted with a particular tag, so we added a tag condition.

  2. Set your action to Notify Target and choose the HTTP Target you just set up.

  3. Add this JSON body:

{% assign randomizer = ticket.id | modulo:2 %}
{% case randomizer %}
{% when 0 %}
{"ticket": {"additional_tags": ["control"]}}
{% when 1 %}
{"ticket": {"additional_tags": ["experiment"]}}
{% endcase %}

Let’s break down what that payload actually does. Liquid is a templating language built by Shopify and written in Ruby that allows you to incorporate a handful of logical expressions into your Zendesk triggers. In this case, we’re using Liquid to accomplish a few things:

First, we’re assigning a randomizer value to the newly created ticket by taking the ticket ID modulo 2. This isn’t truly random: even ticket IDs yield 0 and odd ticket IDs yield 1. But because ticket IDs are assigned sequentially, the alternation approximates a random 50/50 split.

When the randomizer settles on 0, we’re calling that ticket the Control and adding a control tag to the tag list using the additional_tags property. The Control is the current behaviour that you’re testing against. These tickets should act as normal, working through your existing workflow or process.

When the randomizer settles on 1, we’re calling that ticket the Experiment and adding an experiment tag to the tag list using the additional_tags property. The Experiment will use your new behaviour.

This effectively splits your ticket volume, 50/50, into two buckets that can be routed in two different ways.
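If you want to convince yourself that the split really lands near 50/50, you can mimic the modulo logic over a run of hypothetical sequential ticket IDs:

from collections import Counter

# Hypothetical sequential ticket IDs, standing in for a month of new tickets.
ticket_ids = range(100_000, 100_100)

# Mirror the Liquid expression: ticket.id | modulo:2
buckets = Counter("control" if tid % 2 == 0 else "experiment" for tid in ticket_ids)
print(buckets)  # Counter({'control': 50, 'experiment': 50})

In practice the split is approximate rather than exact, since ticket IDs are shared across your whole account and some tickets never hit the trigger, but over a month of volume it stays very close to even.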

‼️ Please note that it’s very important to copy the above code exactly. Watch out for curly quotes, weird hyphens, and extra spaces: the payload will not work if the formatting is incorrect.

Also, note that you can ignore the validation error message that appears when you save: the JSON target is trying to validate your code as JavaScript rather than Liquid.

Step 3: Add Your New Behaviour

Now that you’ve split your volume, you can create a trigger that only acts on that experiment tag. You may want to update any existing triggers that would otherwise act on these tickets to not fire on the experiment tag, just to be safe.

That's a relatively simple use case. Anything that can act on a ticket based on the presence of a tag (triggers, automations, SLAs, skills) can be tested with this method.

If it looks complicated, no worries: drop me a line or book a call with me and I can help you.