Measuring Social Media Campaigns: A Philanthropic Case Study

Bueller? Bueller? Quantifying results need not be boring. Keep it simple to keep it interesting... and actionable. (photo: foundphotoslj)

Have you really seen ROI from Social Media?

Social media marketing! Twitter consultancies! Conversational communications! Oh, these are exciting times.

It seems like everyone and their grandma is getting into social media. On the whole, I think this is a good thing, but here's the problem: whenever technology becomes fashion, return-on-investment (ROI) tends to get lost in the excitement of the latest .com catwalk.

It's going to help "the brand"? Show me data. It's going to drive more "awareness"? Define it, isolate it, and translate it into a sales increase.

In this post, we'll look at some real numbers (total capital, conversions, redemptions, etc.) from my latest educational non-profit campaign, the Twitter-based Tweet to Beat. It was a follow-up to the blog- and leaderboard-based LitLiberation campaign, which outraised Stephen Colbert 3-to-1 with no staff and no material hard costs.

My hope is that more non-profits and hybrids (like the impressive Tom's Shoes) will share their best practices and experimental findings so that everyone can benefit.

Here was the basic pitch from the original post:

The gist: To benefit U.S. public school students, I will bribe the entire world to follow me on Twitter for $3 each.

For every new Twitter follower in the next two weeks, I will donate $1 to DonorsChoose.org, and an anonymous supporter will match $2, for a total of $3 to U.S. public school classrooms per follower. For now, the matching limit is tentatively capped at 50,000 new followers, though I'm open to increasing it later. 50,000 new followers would mean $150,000 to U.S. public school education, and I hope to double or triple this total with a few twists.

The goal is directly helping 25,000 U.S. public school students in low-income and high-need areas in two weeks. This timeline is half the time dedicated to LitLiberation. My current follower count is, at the time of this writing, 22,782, so we'll round down and begin the count at 22,500.

How many followers did I get (hint: about 17/hour), and how much was raised? We'll get there, but let's start with the fundamentals.

First, Fundamentals and Hypotheses

I run experiments for one of two reasons: 1) to produce results I feel comfortable predicting, or 2) to gather data and findings I can then (hopefully) incorporate into follow-up experiments. Let's call the latter an "investigative campaign."

Tweet to Beat was an investigative campaign, as I was less comfortable with large-scale predictions about Twitter user behavior. I have mountains of data and replicable outcomes from the blog, but I did not have either for any micro-blogging platforms.

That said, it is important to begin with a hypothesis or hypotheses (predictions) that you will test. Why? Because backwards correlation is bad science.

If you gather enough data and measure enough variables, you will inevitably find chance correlations that, in the absence of predefined hypotheses, you will be tempted to label as causal ("A causes B" when the two are coincidental). For more in-depth discussion of proper study design and fascinating phenomena like negative publication bias, I highly recommend Bad Science by Ben Goldacre.

Decide what you're testing upfront (e.g., "If A, then B") so you don't succumb to wishful thinking and forced causation. This is also true for online analytics.

For Tweet to Beat, there were just a few assumptions I wanted to test:

1) If I redistribute monies raised, it will increase the per-person donation amount. In other words, if each person helps raise an average of $3 and I return that $3 to each person in the form of a coupon, some percentage of those donors will give more than $3 when the coupons are redeemed.

2) This can be done at close to zero cost. Tweet to Beat was done in partnership with DonorsChoose.org (DC), just as LitLiberation was done in partnership with DC and Room to Read. Since their labor costs aren't zero, the value of the campaign (whether in dollars raised or actionable data) cannot be less than the return they would get from investing that time in other fundraising activities.

3) People will participate more if just a single click is required. LitLiberation required fundraising or donations, whereas Tweet to Beat would require nothing more than clicking on "follow" in Twitter.

The Findings

Ultimately, the Twitter follower count increased from the 22,500 starting mark to 31,739, for a total follower increase of 9,239 in two weeks. The total donation (at $3 per new follower) is then $27,717. Therefore, the third hypothesis, that less required action means more engagement, does not appear valid; at the least, one-click participation is not independently sufficient to overcome the other variables that create resistance.
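For anyone who wants to check the math, here is the arithmetic as a quick Python sketch, using only the figures reported above:

```python
# Quick sanity check of the figures above.
start_followers = 22_500   # rounded-down starting count
end_followers = 31_739     # count at the close of the campaign
per_follower = 3           # $1 from me + $2 anonymous match

new_followers = end_followers - start_followers
total_donation = new_followers * per_follower
print(new_followers, total_donation)   # 9239 27717
```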

So what about increased yield from distributing donations to donors themselves? The announcement of the coupon redistribution was saved for a second-wind PR effort, as I've found phased announcements to increase yield at least 10% in the past with third parties.

In a post titled "Upping the Ante," the campaign was also extended by one week, and bonuses totaling $168 in value were offered to all followers (DropBox, RescueTime, and PhoneTag).

The coupons were distributed simply (a rough automation sketch follows these steps):

1) Twitter followers were notified of a link via a protected update.

2) Interested followers clicked on the shortened URL, which took them to a Google Form asking for their e-mail address, to which the coupon code would be sent. The Google Form took 60 seconds to set up and is free.


Google Forms: Simple and Free

3) DonorsChoose automatically generated as many codes as needed and emailed out welcome letters to each donor, including their coupon code and a link to a Project Page where they could apply it to the classroom project of their choice.

4) Donors were shown via screenshots how to use Facebook Connect to share their donation with friends via Facebook status update.
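DonorsChoose handled the code generation and the welcome e-mails automatically, but if you are running a smaller campaign without a partner like that, steps 2 and 3 can be approximated with a few lines of Python. This is a hypothetical sketch: the file names, the "Email Address" column header, the sender address, and the local mail relay are all illustrative assumptions, not what we actually used.

```python
# Hypothetical sketch of steps 2 and 3: pair sign-up e-mails (exported from the
# Google Form) with pre-generated giving codes and send each person their code.
import csv
import smtplib
from email.message import EmailMessage

with open("form_responses.csv", newline="") as f:      # Google Form export (assumed name)
    emails = [row["Email Address"] for row in csv.DictReader(f)]

with open("giving_codes.txt") as f:                    # codes from your donation partner (assumed)
    codes = [line.strip() for line in f if line.strip()]

with smtplib.SMTP("localhost") as smtp:                # assumes a local mail relay
    for email, code in zip(emails, codes):
        msg = EmailMessage()
        msg["From"] = "givingcodes@example.org"        # illustrative sender address
        msg["To"] = email
        msg["Subject"] = "Your Tweet to Beat giving code"
        msg.set_content(
            f"Thanks for following! Your giving code is {code}.\n"
            "Redeem it on the classroom project of your choice."
        )
        smtp.send_message(msg)
```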

There were two batches of e-mail coupons sent out. The first was a link to a $3 coupon, available to approximately 30,000 followers, while the second was a link to a $12 coupon, available to just the first 1,000 takers.

Batch 1 (Wed. 3/25, 4pm EST, $3 Giving Codes)

To 30,000 Followers, Open to All

Total Sign-ups (coupons sent via e-mail): 1,108 (3.69% conversion)
Total Redeemed: 666 (~60% redemption)
Total Donated: $1998
4-day expiration (March 25 - 29; Wed. - Sun.)

Batch 2 (Fri. 3/27, 6pm EST, $12 Giving Codes)

To 30,000+ Followers, First 1,000 Respondents
Total Sign-ups (coupons sent via e-mail): 583 (1.94% conversion)
Total Redeemed: 352 (~60% redemption)
Total Donated: $4224
4-day expiration (March 27 - 30; Fri. - Mon.)
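If you want to reproduce the batch metrics above (or track your own batches the same way), here is a minimal Python sketch built from the raw counts; the 30,000-follower audience for both mailings is the figure given above.

```python
# Reproducing the batch metrics above from the raw counts.
batches = [
    ("Batch 1 ($3 codes)",  30_000, 1_108, 666, 3),
    ("Batch 2 ($12 codes)", 30_000, 583,   352, 12),
]

for name, audience, signups, redeemed, value in batches:
    conversion = signups / audience    # sign-ups per follower reached
    redemption = redeemed / signups    # redeemed codes per sign-up
    donated = redeemed * value         # dollars allocated to classrooms
    print(f"{name}: {conversion:.2%} conversion, {redemption:.0%} redemption, ${donated:,} donated")

# Batch 1 ($3 codes): 3.69% conversion, 60% redemption, $1,998 donated
# Batch 2 ($12 codes): 1.94% conversion, 60% redemption, $4,224 donated
```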

The block quotes below, and all block quotes from this point forward in this post, are from DonorsChoose, with some edits for space:


Total Dollars Allocated by Tim Ferriss' Twitter Followers in the Tweet to Beat Challenge: $6222 (FYI: this is higher than the number on the project page -- $5821 -- because some of the participants may not have redeemed directly via this Giving Page).

But how much more did TweetToBeat redeemers spend in addition to their GivingCards? Total to date: $1868.34

Thus...

Grand Total Dollars Allocated by Tim Ferriss' Twitter Followers in the Tweet to Beat Challenge: $8090.34

The below spreadsheet offers a bit more granularity on how many people decided to add onto their donations and the average size of donations by GivingCard denomination.



This is the eureka data set I was looking for. Things to note:

- The percentage of $3 donors who gave more than their coupon's value was higher than the corresponding percentage in the $12 donor group.

- The average donation amount for those who added funds was also greater in the $3 group than in the $12 group: $37.65 vs. $36.72. Keep in mind that the $3 coupons were worth $9 less than the $12 coupons, so measured above the coupon's face value, the $3 group averaged $34.65 vs. $24.72 for the $12 group.

- In sum, based on this limited sample size, we were able to increase total donations 30% ($6,221.66 --> $8,090.34) simply by distributing the amount raised back to supporters instead of writing a check to DonorsChoose directly. What this means: enable your supporters to "recycle" what you raise and give directly, and you could add 30 cents to every dollar you've already raised.
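To make that 30% figure concrete, here is the arithmetic as a quick Python sketch, using the totals reported above:

```python
# The redistribution uplift, from the totals above.
giving_card_total = 6_221.66   # dollars allocated via the GivingCards themselves
grand_total = 8_090.34         # GivingCards plus the extra dollars donors added on top

uplift = grand_total / giving_card_total - 1
print(f"{uplift:.0%} more than donating the same money directly")   # 30% more
```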

There are no doubt problems with the data, like the limited sample size, different days of the week for the mailings, and our inability to isolate outliers or measure overlap between the $3 and $12 groups, but this preliminary data is both counterintuitive and testable. It's also actionable if you want to throw caution to the wind, as the downside is next to nothing once you automate the coupons.

The best part is that this ROI is based on a single campaign and does not include the Lifetime Value of each new registered donor. If we factor in the figures below from DonorsChoose, getting $1.30 for each $1 distributed could well be a minimum assumption, with $2 - 2.60 per dollar distributed on the high end.

Doing these calculations is as important for for-profits as it is for non-profits, of course. Determining your "contained" LV over 3 or 6 months, for example, helps determine what you can spend to acquire each customer while still remaining profitable.

On Lifetime Value (LV or LTV):

Unfortunately, we don't have a ton of historic data around LTV since we've just recently begun capturing these figures. That said, a recent analysis that looked at a class of donors to determine how much the group would donate in subsequent years suggests an appropriate multiplier we could use for this case. This conservative multiplier would be 2. Obviously, there are some outliers, like donors who give large sums of money, but our data suggests this is pretty close.

Ex. If an average class of donors gave $100K in year 1, we can expect them to give $200K in ALL their subsequent years (~5-10 yrs) as donors. While this doesn't necessarily equate to what the average 'individual' donor would give, it does give us a general benchmark with which we can guesstimate LTV. Also note that the 5-10 yr time frame we're using is based on where the data is trending, but it's not 100% exact since we've been in existence for less than 10 years.
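Putting those two pieces together, here is one way to read the high-end estimate as a quick Python sketch; treating this campaign's ~1.3x return as the year-one base for the multiplier is my interpretation, not DonorsChoose's:

```python
# One way to read the ROI range above; the interpretation of the multiplier is mine.
campaign_roi = 8_090.34 / 6_221.66   # ~$1.30 raised per $1 redistributed, this campaign only
ltv_multiplier = 2                   # DonorsChoose: a donor class gives ~2x its year-one total
                                     # over its remaining ~5-10 years

future_roi = campaign_roi * ltv_multiplier   # ~$2.60 per $1 distributed on the high end
print(f"Now: ~${campaign_roi:.2f} per $1 distributed; later: up to ~${future_roi:.2f} more")
```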

In Summary, from DonorsChoose:

In terms of takeaways, I think this was a huge success for a number of reasons in addition to funds raised:

1) We acquired 300+ followers (~42% of our total followers) over the course of the campaign (from launch to when the last batch of codes was mailed), lending mounds of exposure to our very nascent Twitter account.

2) Even two weeks after the campaign's conclusion, your fans are still talking about it. [Note from Tim: Thank you to Jonathan Hinson for setting up this tracking page for T2B. As another data point for building Twitter followings, it shows my current follower add rate to be 17 per hour and my current Qwitter rate to be 7 per hour, for a net gain of 10 new followers per hour.]

Areas for improvement:


1) We should brainstorm further internally about how we can allocate deliverables quickly for campaigns run over social networks. These are much faster-paced than the Blogger Challenge, and there was a sense of immediacy that we were not 100% prepared for.

2) We should have implemented an official hashtag from the outset (like #tweet2beat) so as to broadcast the campaign widely in the Twitter community (e.g., as a trending topic).

Bottom line: I think we were all really pleased by the results and are anxious to do something similar on Twitter in the near future. We've already been talking about using Twitter as part of our social media strategy for our upcoming "Give-Back" Birthday Campaign [From Tim: read more about this here]. Trust that we'll be keeping an eye on participants who redeemed the GivingCards and reveal themselves as ongoing donors.

Tim's Conclusions

$29,585.34 ($27,717 + $1,868.34 additional donations) over three weeks is a non-trivial amount, but it was dramatically less than expected, and the LitLiberation response dwarfed it (more than 10x this to date). There are several reasons, subject to further testing, why I believe this was the case.

1) Everyone who participated in LitLiberation was eligible to win large prizes, and their progress up or down was publicly visible. Tweet to Beat did not make everyone eligible for prizes, only those who chose to compete as fundraisers, and there was no public accountability or recognition.

2) Using Twitter follower count seemed self-serving. Though follower count is nice, Twitter was chosen intentionally for several other reasons: one-to-many broadcasts via protected updates, a public counter for transparency, and the expected backlash from purists, which would drive additional links and participants. Alas, the public backlash never came en masse, and the net effect seemed to be less engagement, as the campaign goals got muddled with self-interest. Powerful partners also couldn't commit publicly for this reason. Much of this could have been avoided with a stand-alone site (vs. a post on my blog) like LitLiberation.

3) It was too complicated. More specifically, there were too many incentives for different groups. Once again, simple is better.

So where to from here? Simple: don't repeat the above mistakes, combine LitLib-like leaderboards with the redistribution force multiplier, and test, test, test. The next campaign will be bigger than both of these combined, a melding of best practices from the two, and it will be -- in part -- thanks to a "failed" experiment that yielded extremely valuable data.

Remember: the best vehicle for testing (gathering actionable data) is often not the best vehicle for roll out.

Isolate your variables, test predefined predictions, and -- please -- share your results. Don't expect the p-values to be ideal, but, in a world of surprisingly little measurement, good experimental design can make the difference between pure guesswork and educated sniper shots.

Elsewhere on the Internet: Tim Ferriss on TED, with Derek Sivers of CD Baby (one of the most-read entrepreneurial interviews on the web), and on investing.
