This maxim has been pounded into the heads of direct marketers for generations. For good reason.
Direct marketers who don't isolate one variable at a time can't know exactly what caused the differences they see in A/B tests. Was it the new format? The offer? They simply don't know.
On the other hand, many direct marketers lack the budget, sample size, time, and patience to experiment with one element at a time.
That's where our "Mix Everything Up" test comes in.
Let's say we've used the same self-mailer for some time, and its recent performance has been, to say the least, underwhelming. We've always wondered if an envelope package would improve the bottom line. Plus the creative concept doesn't seem all that hot. And the offer was never much to write home about.
The budget won't allow us to run a separate test cell for each variable to determine which of our test options, if any, will bear fruit. So instead, we test the "control" self-mailer format, including the current creative approach and offer, against an envelope package featuring a brand-new creative approach and offer.
To pull this test off properly, we split one or more lists in half, either randomly or on an nth-name basis; a quick sketch of both splits appears below. But always remember this, my friends: if we test Direct Mail Format/Creative/Offer Combination "A" on List #1 and Combination "B" on List #2, we will indeed have a blown test. No doubt about it. The lists themselves become one more uncontrolled variable, so any difference in response could come from the names as easily as from the packages.
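Here's a minimal sketch of both split methods in Python. The record names and quantities are hypothetical; the point is simply that cells "A" and "B" each draw from the same list.

```python
import random

def split_list(records, method="random", seed=42):
    """Split one mailing list into two comparable cells, A and B.

    method="random": shuffle first, then deal records alternately.
    method="nth":    no shuffle; every 2nd record goes to cell B
                     (the classic nth-name split).
    """
    if method == "random":
        records = records[:]            # shuffle a copy, not the original
        random.Random(seed).shuffle(records)
    cell_a = records[0::2]              # records 1, 3, 5, ...
    cell_b = records[1::2]              # records 2, 4, 6, ...
    return cell_a, cell_b

# Hypothetical 50,000-name list, purely for illustration.
names = [f"record_{i:05d}" for i in range(50_000)]
cell_a, cell_b = split_list(names, method="nth")
print(len(cell_a), len(cell_b))         # 25000 25000
```

An nth-name split has a nice property: even if the file arrives sorted by ZIP, recency, or dollar amount, every nth record samples evenly across that order, so the two cells stay balanced.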
Some direct marketing veterans will no doubt argue that even if we properly split lists, it's a pointless exercise, because if "B" beats "A" by 26%, there's no way of knowing exactly what caused the significant difference.
It's absolutely true that we won't understand the contribution of each variable. But if we receive enough responses to reach a high confidence level, we'll know that "B" is about 26% better than "A," and we'll be able to roll out "B" with confidence.
We'll definitely know that "B" works better. We just won't know exactly why.
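For the statistically inclined, here's a minimal sketch of how we'd put a confidence level on that result, using a standard two-proportion z-test. The mail quantities and response counts below are invented purely to illustrate a 26% lift.

```python
from math import sqrt
from statistics import NormalDist

def lift_and_significance(mailed_a, resp_a, mailed_b, resp_b):
    """Relative lift of B over A, plus a one-sided p-value from a
    pooled two-proportion z-test (normal approximation)."""
    p_a = resp_a / mailed_a
    p_b = resp_b / mailed_b
    pooled = (resp_a + resp_b) / (mailed_a + mailed_b)
    se = sqrt(pooled * (1 - pooled) * (1 / mailed_a + 1 / mailed_b))
    z = (p_b - p_a) / se
    p_value = 1 - NormalDist().cdf(z)    # one-sided: is B > A?
    lift = (p_b - p_a) / p_a
    return lift, p_value

# Hypothetical counts: 25,000 pieces per cell,
# 1.00% response for "A" (250) vs. 1.26% for "B" (315).
lift, p = lift_and_significance(25_000, 250, 25_000, 315)
print(f"lift = {lift:.0%}, one-sided p = {p:.4f}")
```

With counts like these, the p-value works out to roughly 0.003, comfortably past the usual 95% confidence bar. With a tenth as many responses, the very same 26% lift could easily be noise, which is why "enough responses" is doing the heavy lifting in the argument above.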
What do you so-called quants out there think?