When the Clinton Bush Haiti Fund formed in 2010, one of its first tasks was to give people a way to donate online.
It quickly put up a form, then over the next three days made changes that encouraged people to use the form to give far more.
The fund changed the page layout, put new words on instruction buttons, tweaked the page’s font size, and reduced the amount of information donors were required to provide. Those small changes helped raise an additional $1-million over the following month, an improvement of 10.2 percent in dollars per page view.
“As soon as we found out what was working and what wasn’t, we would iterate and keep moving,” says Dan Siroker, co-founder of the testing software company Optimizely, who helped the Haiti Fund and worked in a similar role in President Obama’s 2008 campaign.
The process the fund used to figure out what makes a difference is called A/B testing, or variant testing, which pits two slightly different pieces of content against each other—be it a home page, donation form, advertising, or e-mail subject line—to see which performs better.
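In code, the core mechanic is small. The sketch below is a hypothetical Python illustration, not any vendor's implementation: each visitor is bucketed into one of two page variants, and views and donations are tallied per variant. The function names and hashing scheme are assumptions for illustration.

```python
import hashlib

# Hypothetical sketch of the core A/B mechanic: bucket each visitor into
# one of two page variants, then tally views and donations per variant.
views = {"A": 0, "B": 0}
donations = {"A": 0, "B": 0}

def assign_variant(visitor_id: str) -> str:
    """Hash the visitor id so the same person always sees the same page."""
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def record_view(visitor_id: str) -> str:
    variant = assign_variant(visitor_id)
    views[variant] += 1
    return variant

def record_donation(visitor_id: str) -> None:
    donations[assign_variant(visitor_id)] += 1

def conversion_rate(variant: str) -> float:
    """Donations per page view; the higher-rate variant 'wins.'"""
    return donations[variant] / views[variant] if views[variant] else 0.0
```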
Saving Money
The idea isn’t new to marketing or fundraising, but technology and new tools make the tests much easier to perform. Nonprofits are taking advantage of that ease and seeing success.
At Compassion International, the Christian aid charity, tests run since July 2011 have produced 2- to 7-percent increases in the share of visitors to the charity’s site who agree to “sponsor a child.”
“Optimizing our existing site, in many ways, is a better return on investment than spending money on ad campaigns,” says Tom Emmons, Internet marketing program director at the organization. Because the site already gets a high number of visitors, Compassion is working to turn more of them into donors.
The March of Dimes, the children’s health charity, has been testing its online display ads and search ads for its annual March for Babies campaign since 2007. Each year the charity has been able to build on what it’s learned and increase the share of people who see an ad and then walk, fundraise, or donate.
“The data that you see tells you the story,” says Robert Field, e-marketing manager at the organization. “Being able to see how some of these front-end changes trickle down into the numbers is gratifying.”
And the Pesticide Action Network, an environmental group, has had similar success testing e-mail subject lines to improve the percentage of people who open the messages and click through to the content promoted in them. In one early test, the organization sent competing subject lines to a sample group, then sent the most popular one to the entire list.
“We’re getting smarter with our subject lines, so our open rate is going up,” says Andrew Olsen, online communications manager at the nonprofit.
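A bare-bones Python sketch of that kind of subject-line test might look like the following. The helper that actually sends mail and counts opens is an assumed stand-in passed in as a callable, since that work happens inside an e-mail service; the names and sample size are illustrative assumptions.

```python
import random

def pick_winning_subject(subscribers, subjects, send_and_count_opens,
                         sample_size=1000):
    """Try competing subject lines on a sample; return the best performer.

    `send_and_count_opens(subject, recipients)` is an assumed callable
    that sends the message and returns how many recipients opened it.
    """
    sample = random.sample(subscribers, sample_size)
    # Split the sample evenly, one group per candidate subject line.
    groups = [sample[i::len(subjects)] for i in range(len(subjects))]
    open_rates = [
        send_and_count_opens(subject, group) / len(group)
        for subject, group in zip(subjects, groups)
    ]
    # The winning subject line then goes out to the rest of the list.
    return max(zip(open_rates, subjects))[1]
```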
Question Everything
Mr. Siroker, the consultant, urges nonprofits that are starting to test to question all of their assumptions.
“Nothing is sacred,” he says.
While working on the Obama campaign, Mr. Siroker and his colleagues tested several home pages featuring videos, assuming that was what would appeal to donors and supporters. As an afterthought, they added home pages with images, not videos, to the test. The images performed much better.
Mr. Olsen of the Pesticide Action Network also likes the way the testing helps him avoid relying mostly on emotional or intellectual reactions. Sometimes he isn’t in love with an e-mail subject line the organization uses, he says, but he can’t argue with results demonstrated by testing.
Among the other suggestions from experts:
- Statistics can be deceiving, and organizations have to be careful to make sure their results are statistically significant. Compassion International runs just one test at a time, Mr. Emmons says, to make sure that it is measuring only the changes that make a difference.
- Get the right tools. Mr. Siroker uses a statistical tool called a t-test, which can help determine whether a change improved or depressed responses. Other people use testing software that includes the information needed to produce such calculations. (A rough sketch of the idea follows this list.)
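As a rough illustration of that significance check, the sketch below runs a two-sample t-test on made-up donation data, treating each page view as a 0-or-1 outcome. The numbers are invented; only the shape of the calculation matters.

```python
from scipy import stats

# Each list holds one outcome per page view: 1 = the visitor donated.
# The figures below are fabricated for illustration only.
variant_a = [1] * 120 + [0] * 2880   # 120 donations in 3,000 views (4.0%)
variant_b = [1] * 150 + [0] * 2850   # 150 donations in 3,000 views (5.0%)

t_stat, p_value = stats.ttest_ind(variant_a, variant_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A small p-value (commonly below 0.05) suggests the difference in donation
# rates is unlikely to be chance; otherwise, keep the test running.
```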
Mr. Olsen says nonprofits are making a mistake if they don’t test everything they can.
“You gotta test,” he says. Before his group conducted tests, he concedes, “we were just flying blind.”