Allison Sparrow

Tech Marketing

Author: allison.sparrow@gmail.com

Staying sane with a million marketing campaigns

When Marketing is responsible for generating pipeline each quarter, you can be sure that lots of events, emails, and webinars are going to be planned. As a Marketing team lead, it’s my job to ensure all online campaigns are set up appropriately and scheduled cohesively. This requires a lot of attention to detail, constant communication, and a foolproof process.

This process empowers 10+ managers to launch their campaigns on time, and allows me to focus on A/B testing and running demand gen campaigns.

Submitting a Campaign Request

MA requests Wiki

We get campaign requests of all types: ebooks, emails, event pages, and webinars. Each type of campaign requires different timelines, assets, and deliverables. Whenever someone asks a Marketing Automation manager for help, we point them to a Wiki page. The Wiki page holds a custom JIRA link for every campaign type, and includes the appropriate checklist necessary for that campaign. This way, it’s super simple for anyone to submit a request and ensure their campaign gets prioritized.

Here is what one of the custom links looks like:

jira request

We pre-populate the summary with the asset type, add the custom labels needed for reporting (more on that later), and append the right Google Doc checklist.
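If you’re curious how a prefilled link like this gets assembled, it’s just URL-encoded query parameters on JIRA’s create-issue endpoint. Here’s a rough sketch; the base URL, project ID, and issue type ID below are placeholders, not our actual instance:

```python
from urllib.parse import urlencode

# Placeholder values: your JIRA base URL, project ID (pid), and
# issue type ID will be different.
BASE = "https://yourcompany.example.com/secure/CreateIssueDetails!init.jspa"

params = {
    "pid": "10100",          # hypothetical project ID
    "issuetype": "3",        # hypothetical issue type ID (e.g. Task)
    "summary": "[Webinar] Campaign request: ",
    "labels": "webinar",     # custom label used later for calendar reporting
    "description": "Checklist: link to the webinar Google Doc template",
}

link = BASE + "?" + urlencode(params)
print(link)
```

Anyone who clicks the resulting link lands on a create-issue form with the summary, labels, and checklist reference already filled in.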

Planning & Implementing

Setting up a campaign in Marketo is complex, and requires a lot of attention to detail. We use two systems to ensure we don’t forget the small, important stuff (like meta-tags, alt-text, etc).

Google Doc Checklists

google template

We use Google Docs to document all copy, image assets, and anything else needed for campaign setup. I have a custom template for each campaign type, and it acts as a large, souped-up checklist for both planning and setting up the campaign.

At any point in time, we can easily look at this doc to see all of the elements required for a campaign. Here’s an example of one of our docs. If our Google Doc template interests you, please contact me with any questions! I would love to geek out with you; it’s something that has evolved over the years.

Marketo Tokens

If you are a Marketo user and you don’t use tokens for your assets, start now. You will thank me. If you need help, I recommend Etumos, a great Marketing Tech consultancy that has helped me in the past! Of course, you can always reach out to me as well!
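If you haven’t seen tokens before: you define a value once at the program level (a date, a URL, a headline), and every asset that references the token picks it up automatically. Conceptually it’s just template substitution. Here’s a toy sketch of the idea, not Marketo’s actual implementation, and the token names and values are made up:

```python
import re

# Hypothetical program-level token values, defined once.
tokens = {
    "my.Webinar Date": "June 14, 2017",
    "my.CTA URL": "https://example.com/register",  # placeholder URL
}

# An email template referencing those tokens.
email_html = 'Join us on {{my.Webinar Date}}! <a href="{{my.CTA URL}}">Register</a>'

def render(template, tokens):
    # Replace each {{token}} with its program-level value;
    # unknown tokens are left untouched.
    return re.sub(r"\{\{(.*?)\}\}",
                  lambda m: tokens.get(m.group(1), m.group(0)),
                  template)

print(render(email_html, tokens))
```

Change the date in one place and every email, landing page, and confirmation that references the token updates with it. That’s why cloning a tokenized program for the next webinar takes minutes instead of hours.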

The implementation piece is pretty easy once all of the other foundational pieces are there. Essentially, we have a Google Doc checklist to mirror each program type in Marketo. Once the Google Doc is filled in, we simply have to transfer the values from the Google Doc to Marketo!

marketo programs

Visibility & Prioritization

All Marketing campaigns are logged in JIRA. In the past, we used Asana, and that was also effective!

We’ve structured our board by Assignee, and each column represents a stage in the process.

Kanban_stages

The task stages are:

  • To Do
  • Assets Requested
  • Needs Approval
  • Ready for MA
  • In Progress
  • Out for Review
  • Scheduled
  • Complete

Marketing Calendar

Because all JIRA tickets are submitted with custom labels, we are able to present our Marketing Calendar using a JIRA calendar widget. Simply hover over the green button to see what is scheduled for that day.

marketing calendar


A/B Testing Emails in Marketo

This post outlines how to test emails when you want to:

  • test for statistical significance
  • test an email in a nurture
  • have clear reporting when using an email performance report
  • track what variation a lead received

Email Programs are great to use if you are just getting started with A/B Testing. Email programs allow you to test a variety of different elements in your email, and then declare a winner after a send.

email_program_dashboard

But what about when you want to A/B test an email in your nurture program? Email Programs come with their limitations:

email_program_limitation

I’m going to show you how I A/B test my email blasts and email nurtures. Consider this an alternative way to test if you face any of the challenges above.

An A/B test for a simple email is easy to set up, and usually your objective will be opens and/or clicks. As pre-work, I strongly recommend outlining your hypothesis, goals and variable for your A/B test. You can use the framework I’ve outlined in my other post as a reference. Here’s an example of an email A/B test outline.

In this example, we are going to

  • test an email used in a nurture program
  • create a 50/50 split test
  • test the subject line as a variable
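The 50/50 split in this example mirrors what Marketo’s Random Sample flow choice does: each qualifying lead is randomly routed to one of the two emails. Here’s a small conceptual sketch of that assignment (not Marketo itself; the lead IDs are invented):

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical pool of leads who qualify for the nurture.
leads = [f"lead-{i}" for i in range(1000)]

# Randomly pick half for the control (v1 subject line);
# everyone else gets the variation (v2).
control = set(random.sample(leads, k=len(leads) // 2))
assignments = {
    lead: ("Email.v1" if lead in control else "Email.v2") for lead in leads
}

sent_v1 = sum(1 for v in assignments.values() if v == "Email.v1")
print(sent_v1, len(leads) - sent_v1)  # → 500 500
```

Randomizing the split (rather than, say, alphabetical halves) is what lets you attribute any difference in opens or clicks to the subject line instead of to who happened to be in each group.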

Start from a default program in Marketo

First, create a program in Marketo that contains an email. If you are going to test an existing email that’s already live, skip this part and jump to the next section: Emails.

default_program

Emails

Create your control email. This is the email that you’re going to try to beat with your variation email.

You may be testing an existing email, or you may be creating a new email. You want your control email to be the content that you want to beat. So if your hypothesis is that “a funny subject line will be more effective than a serious one,” your control email will be the serious one.

Once your control email is completely reviewed, tested and approved, simply clone that email and append v2 to the new variation:

emails

Smart Campaigns

OK! Now you’re going to set up campaigns that will automate the testing for you.

Campaign: Send Email

Depending on whether this is a blast or an email nurture, the Smart List will vary. Everything else is the same. In this example, I’ll show how you set it up as a nurture email.

Smart List

Member of any Engagement Program (the filter lies in the stream itself, when you drag the entire program into the stream).

engagement_program

Flow

send_email_flow

Schedule

Do nothing

For Nurture

Add to Engagement Stream


nurture_smart

Activate the campaign

nurture_Activate

Reports

Let’s create a basic report so you can check your test at a moment’s notice!

Report: Email Performance

Smart List: none

Setup:

Sent Date: All Time

Marketing Activities Emails: Email .v1, Email .v2

email_filter

That’s it! So simple! Now when you look at your email performance report, you will easily see which one is performing better:

email_Results

Resources

Create a High Impact Nurture Program, Every Time: Implementation

Build a data-driven engagement program in Marketo

The hardest part is behind you: the pre-work not only forces your team to align, but also gives you a clear roadmap for what to track in Marketo! You will thank me, I promise. Now for the fun stuff.

In this example, we are going to create a 50/50 split test! First, create your Engagement program.* Once you’ve done that, we will add the following elements to it.

program screenshot

*If you’re interested in the naming conventions we use, I stole this from Edward Unthank, whose concepts I really like.

What we will create:

  • Smart Lists
  • Static Lists
  • Reports
  • Smart Campaigns

Smart Lists

I like having reports to reference and play with in Marketo. I’ll walk you through how I create Baseline reports and Smart Lists for the engagement program.

You’ll be making 3 Smart Lists:

  1. Baseline Target Audience
  2. Baseline Conversion TA
  3. Target Audience (All Time)

Smart List: Baseline Target Audience

You should already have identified how your target audience has performed in the past against the variable you want to improve. We are going to create two smart lists in Marketo so that you can easily refer to the leads that meet these criteria.

Here’s my Baseline Target Audience list. You won’t always need to restrict it by year, but it may make you more confident in the numbers.

baseline target audience

Smart List: Baseline Conversion Target Audience

Now from that list, I want a smart list that shows how many of those leads converted. Clone the TA 2015 list, and add an additional filter for the variable you’re tracking.

baseline target audience smart list

Smart List: Target Audience (All Time)

As a nice-to-have, I also create a Smart List that catches any lead that qualifies, regardless of the year, for easy reference.

target audience all time

Static Lists

Static lists are helpful because they don’t need to be recalculated every time you view them. Super complicated smart lists can take a long time to load.

You’ll be making 4 Static Lists:

  1. Target Audience
  2. Control Group
  3. Variation Group
  4. Converted Group

Static List: Target Audience

This list will be a catch-all bucket for all leads that, moving forward, qualify for the nurture.

For this example, I’m not going to retroactively put leads into this program. When I activate it, it will only be effective for new leads that apply moving forward.  I think it’s cleaner to start fresh.

Static List: Control Group

Half of the leads that qualify for this nurture will go into this list. This is the group that *won’t* be receiving the new email nurture.

Static List: Variation Group

The other half of the leads that qualify for this nurture will go into this list. This is the group that *will* be receiving the email nurture.

Static List: Converted Group

This is where you add any lead that qualifies for the nurture and performs the desired activity. They could be a member of either list!

Reports

Let’s create some basic reports so you’re ready to check on your nurture program’s progress at a moment’s notice! Reports can be a little clunky to set up, and if you’re using smart lists, they can take a while to load. Fortunately, you’ve set up Static Lists that will automatically be up to date once you have your Nurture Program running. Your reports will load super quickly.

You’ll create 3 reports:

  1. Baseline Conversion Lead Performance
  2. Control Group Conversion Lead Performance
  3. Variation Group Conversion Lead Performance

Report: Baseline Conversion Lead Performance

This is the report you can link to in your Nurture Roadmap doc when presenting the baseline conversion rate for your target audience.

Smart List:

Member of Target Audience 2015

baseline ta 2015

Setup:

Lead Created at: 2015, Group Leads by Account Created Date (variable metric)

baseline report setup

Report: Control Group Conversion Lead Performance

This report will allow you to monitor the Control Group’s conversion performance. Select a Lead Performance Report out of the Report options.

Smart List:

Member of Control Group List

controlgroup_sl

Setup:

Group Leads by Account Created Date (variable metric)

control group setup

Now you’ll see anyone who has that field populated vs. those who don’t!

Report: Variation Group Conversion Lead Performance

Now, clone your Control Group report and simply replace the list with the Variation Group Static List.

Smart List:

Member of Variation Group Static List

converted_ta

Setup:

Group Leads by Account Created Date (variable metric)

converted_variation_setup

Now when someone asks you how the campaign is going, you can easily look at your reports to give an answer. Proactively preparing for these sorts of ad-hoc questions is really valuable. It can be deflating to have to respond with “let me email you the results once I pull the numbers” when you’ve already done so much work!

Smart Campaigns

OK! Now you’re going to set up campaigns that will automate the testing for you. I won’t go over how to make sure the flow of your campaigns is set up correctly. Set aside at least a week to run test leads through: it will save you work in the long term.

You’ll create 4 Campaigns:

  1. Add to List
  2. Add to Test
  3. Add to Stream
  4. Converted

Campaign: Add to List

I like the “Add to List” trigger as much as the “Request a Campaign” trigger. It ensures you aren’t clogging up your campaign queue in Marketo.

This campaign is going to be your Trigger List: using whatever action the lead takes in order to qualify for this nurture program. This example nurture is for new leads created from a particular source. I do not recommend using the Lead Created trigger in engagement programs. Ideally you are only using the Lead Created trigger once in your entire Marketo instance. If your target audience is TOFU, I recommend doing a Daily Batch campaign. That way you don’t have crazy backlogs, and it’s easier to control. Here’s how I would set up this example:

Smart List

campaign add to list

Note these filters should be very similar to your Target Audience Smart List!

Flow

campaign flow

Schedule

Daily, Each Lead can flow through ONCE

campaign_schedule

Campaign: Add to Test

Smart List

add to stream

Flow

Request Welcome Email (if you want an email to go out right away), Add to List using Random Sample

Schedule

add_to_test_schedule


Campaign: Add to Stream

Smart List

add_to_stream_sl


Flow

add_to_stream_flow

Note that I’m only adding those who receive the email as members of the Engagement Program. That means I’m going to perform my A/B test analysis outside of the Program metrics. This way the Engagement Program only looks at conversions for those who actually received the emails. If the program is successful, all I have to do is deactivate the Add to Test Campaign and change the Add to Stream trigger to Add to Target Audience. (If it does go forward, however, I’d probably create a clean static list. But I’m OCD like that.)

Schedule

add_to_stream_schedule

Campaign: Converted

Smart List

Triggered, based on a field update. My filter is member of both variations.

converted_sl

Flow

If they are part of the variation (leads receiving the email), their status will change. All members will be added to the list.

converted_flow

Schedule

converted_sch

Intro | Part 1

Create a High Impact Nurture Program Every Time: The Roadmap

Create your nurture test roadmap

Having a roadmap discussion ensures program alignment before jumping into implementation phase. Set up a Roadmap meeting, use my template, and set an agenda. Before you do anything in Marketo, it’s important to understand the framework of your test. Get aligned on what part of the business you’re trying to impact, and who your target audience is. Here’s the template you can use to write out your testing roadmap.

What we’ll cover:

  • Target audience
  • Test variable
  • Baseline conversion rate
  • Your nurture goal
  • Hypothesis

Target Audience

You won’t have a successful nurture program if the target audience is “everyone in the database.”  Who will benefit most from receiving this content? The more specific the better. Ideally, it will be a larger dataset so your test will be as impactful as possible. Some examples could be: trade show sourced leads, Enterprise customers in the retail vertical, or prospects who signed up for a free trial.

For this post, our example target audience will be prospects created from trade shows.

Once you have identified who you want to target, write down the total number of people that make up this segment.

Test Variable

What are you trying to improve?

You’re not going to solve all of your business problems with one nurture. An ineffective test variable is “to get more SALs, opportunities and more customer retention. Oh, and increase product usage by 50%.” That’s actually four variables. A nurture program’s primary goal should be an action that can happen soon after someone clicks an email.

What is the primary activity you want to drive with your nurture? Click rates? Product engagement? Demo signups? Write it down on your roadmap doc.

In this example, our variable to test is accounts created.

Baseline Conversion Rate

Historically, what has the conversion rate been for your target audience and your primary variable? In this example the question would be: how many trade show leads have created an account in the past?

  1. Using the same list of people in your target audience list, find how many of those have executed the goal you’re looking to improve
  2. Divide the number of leads that completed the goal by the total number of your target audience list. Then multiply by 100. That’s your baseline conversion rate.
  3. Your baseline conversion rate will be the number you want to *beat* with your new nurture program.

E.g. Currently 20% of our new trade show leads create an Optimizely account.
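The arithmetic above is worth sanity-checking before you commit it to your roadmap doc. Using invented counts consistent with the 20% example:

```python
# Hypothetical counts for the trade show example; plug in your own
# smart list totals from Marketo.
target_audience = 1500   # leads in the Baseline Target Audience list
converted = 300          # of those, leads who created an account

# Step 2 from above: converted / total, then multiply by 100.
baseline_conversion_rate = converted / target_audience * 100
print(f"{baseline_conversion_rate:.1f}%")  # → 20.0%
```

That 20.0% is the number your new nurture program has to beat.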

Nurture Goal

Here’s where you define what success looks like for your nurture program. In this example, we want to increase that conversion rate by 15%.

Hypothesis

How will this nurture program meet your goal? You’ve done most of the work already, now you just have to crystallize it.

Example: My hypothesis is that by sending a 4 touch email nurture to our trade show leads, they will be more likely to create an account than if they weren’t sent it. By staying top of mind after the trade show is over, leads will be curious to explore what Optimizely can offer.

Resources
Roadmap Template

Email Content Template


Intro | Part 2

Create a High Impact Nurture Program, Every Time: Intro

Why your nurture programs flop.

Nurture programs should not be taken lightly. They take a long time to plan, prepare, and implement. I’m going to show you how to make every nurture program successful.

The two biggest reasons why your nurture program flops:

  1. You can’t measure the program’s success
  2. You waste time creating a nurture program that doesn’t deliver value

It is downright *challenging* to prove Marketing ROI on something you’ve built! I have been there: busting your butt for over a quarter to outline a nurture program strategy, get copy finalized, implement the HTML, and create a slide deck for the announcement, only to hear from your CMO: “Can you show me some data on how this has positively impacted the business?”

UGH.

On top of that, nurture programs require a lot of upkeep and vigilance. As a Marketing Automation Manager, it’s your responsibility to ensure you are spending time on programs that make an impact on the business.

There are millions of (valid) reasons why marketers just want to throw their hands up in defeat. To create a high impact nurture program, you need to own your success metrics and reporting, and easily determine whether it’s making an impact on the business.

In this 2 part blog post, I’ll show you how to address the two biggest challenges faced when creating a nurture program, using split testing as a framework. With the right preparation, and by owning your reporting, you will create high impact nurture programs, every time.

If you haven’t tried A/B testing your nurture programs, I cannot recommend it enough. A/B Testing allows you to report clear results, and cuts down time spent debating over what programs to run or maintain. A/B testing may sound technical and analytical, and in some ways it is. But it is the simplest way to empower yourself to be a data-driven marketer, with minimal implementation time.

Part 1 | Part 2

Increase Customer Engagement within 30 days

Platform adoption is critical for understanding how engaged your customers are. There are a lot of reasons why customers do or don’t engage with your product. Testing those hypotheses will help you better understand the challenges that customers face.

I ran a split test on a customer nurture program and increased platform engagement by 10% within 30 days. Here’s an overview of the customer nurture test, I hope it inspires you to get curious about testing!

I collaborated with the Optimizely Education team to determine how to increase product engagement with our new self-service customers. We knew that a customer was less likely to churn if they logged in multiple times within the first 30 days. We designed a customer nurture program to educate these new customers on various features and resources Optimizely had to offer. Our hunch was that by providing focused material to our new customers, they would be more inspired to use our product.

Hypothesis: If we send our self-service customers a series of resource-rich emails, they will be more likely to engage with our Testing solution within the first 30 days.
Target Audience: New self-serve customers
Primary Goal: Optimizely logins
Cadence of Nurture: Every Thursday for 4 weeks
Results: Variation saw a 9% increase in logins within the first 30 days

Our hypothesis: if we send our self-service customers a series of resource-rich emails, they will be more likely to engage with our Testing solution. There are a lot of psychological barriers to getting started with testing, and our customers may not know where to start.

The program contained 8 emails, bucketed into three offer categories: Resource, Sell, or Feature. CTAs included an ebook on building a data-driven team, a how-to guide for setting up a goal in Optimizely, and a list of popular testing ideas.
cn emails

We held back 50% of our self-service customers from receiving this nurture. After 30 days, we already saw positive results! We tracked multiple activities, and measured significant impact with Optimizely logins and dashboard views. Within 30 days the variation saw a 9% increase in logins. Because of this result, we sent the nurture program to 100% of our customers.


How to receive 165% more NPS survey responses

With one simple test, I was able to get a 165% increase in our survey responses. Here’s how I did it.

Optimizely takes customer feedback seriously. We’ve created tight feedback loops that allow customers to share their experience with us. We A/B tested the support feedback email copy that’s delivered to customers who submitted a support ticket.

Test stats and learnings
Element: Support Survey Test
Target Audience: Enterprise customers who have submitted a ticket to our support team.
Hypothesis: If we send a support survey that refers to the exact support ticket information, then the customer will be more likely to click on the survey to provide feedback. This will give the customer immediate context and empower them to act quickly.
Variable: Email copy
Goal: Increase clicks by 50%
Result: 165% increase in clicks from variation recipients at 100% Statistical Significance*


Email variations
Control: ops-control-email
Variation: ops-variation-email


Modifications made
  • We updated the subject line so the customer would find this email in the same support ticket thread.
  • We linked to the ticket so the customer could review the ticket they were asked to provide feedback for.
  • We referenced the support engineer for a personalized experience.

Due to the massive success of this campaign, the biggest learning is that it’s important to personalize. Our customers have reached a level of sophistication where they expect a catered experience with us, even with a support ticket. Every little bit a marketer can do to personalize will result in huge wins for a marketing campaign.

What do you think about this test? Have you had experiences testing out support surveys?

*I’m a geek, so I like to track statistical significance for our A/B tests. If you don’t understand statistical significance, that shouldn’t deter you from A/B testing!
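If you do want to check significance yourself, a two-proportion z-test is the standard tool for comparing click or conversion rates between a control and a variation. Here’s a sketch using only Python’s standard library; the send and click counts are invented for illustration, not the actual campaign numbers:

```python
import math

def two_proportion_z_test(clicks_a, sends_a, clicks_b, sends_b):
    """Two-sided z-test for the difference between two click rates."""
    p_a = clicks_a / sends_a
    p_b = clicks_b / sends_b
    # Pooled proportion under the null hypothesis (no difference).
    p_pool = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented example: control got 40 clicks out of 1,000 sends;
# the variation got 106 clicks out of 1,000 sends (a ~165% lift).
z, p = two_proportion_z_test(40, 1000, 106, 1000)
print(f"z = {z:.2f}, significant at 95%: {p < 0.05}")
```

With counts like these, the p-value comes out far below 0.05, so a lift that large on a reasonably sized send is very unlikely to be random noise. Small sends are where intuition fails and a test like this earns its keep.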

© 2017 Allison Sparrow
