We AdWords people live and die by the phrase "Always be testing". We test ad copy, try new keywords and tweak bids and budgets looking for improvements. We’re (mad?) scientists tinkering in a lab.
In the past I relied on a combination of the change history in AdWords, some rough notes and my memory to keep track of my testing. It worked but it wasn't great.
It became more difficult as my business grew. I started to feel like I was losing control. I couldn't answer questions about an account without having to root around in Google docs, AdWords and email.
It was even worse on accounts that I worked on with my clients' internal teams. I'd make changes, they'd make changes and we'd get ourselves lost.
I felt like I was being unfair to my clients. I was getting decent results for them, but some of the processes in my agency were, frankly, unprofessional. I'd be ashamed if they looked behind the curtain.
Research scientists in a lab run multiple complex experiments at once. They work in teams and they delegate work to their minions. That sounds a lot like running an AdWords agency.
Researchers use a lab notebook to document their hypotheses and experiments. The lab notebook records what was done, how it was done and why it was done.
Why not keep a lab notebook for AdWords experiments?
Every experiment starts with a hypothesis - a proposed explanation based on limited knowledge. For instance...
Showing the price in the ad will increase the click-through rate and conversion rate compared to the existing ads.
We don’t actually know if this is true. We've made an educated guess based on experience. We’re going to experiment to determine if showing the price will in fact increase CTR and conversions on this campaign.
Next up is recording the method we use to do the experiment. In a lab the method is recorded in enough detail to enable a competent scientist to replicate it. That's a practice worth borrowing.
Here's how I recorded the method for this experiment.
I chose the 5 ad groups with the highest number of impressions over the last 30 days as test candidates. The ad groups are [names removed].
I created a new ad in each ad group by copying the existing ads and changing headline two to "Prices from $450".
I set reminders to check the CTR and conversion rate every week so I can catch any disasters early.
I set a reminder to evaluate the experiment after a month.
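An experiment recorded this way is really just a structured entry: hypothesis, method, review dates, outcome. As an illustration only, here's a minimal sketch of what one entry might look like as a data structure. The `Experiment` class, its field names and the placeholder start date are my own invention, not part of the tool described later.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Experiment:
    """One lab-notebook entry for an AdWords test (illustrative sketch)."""
    hypothesis: str
    method: list[str]           # steps, detailed enough to replicate
    started: date
    review_after_days: int = 30
    outcome: str = ""           # filled in when the experiment ends
    conclusion: str = ""        # initial interpretation of the outcome

    def review_date(self) -> date:
        """When the evaluation reminder should fire."""
        return self.started + timedelta(days=self.review_after_days)

price_test = Experiment(
    hypothesis="Showing the price in the ad will increase CTR and conversion rate.",
    method=[
        "Pick the 5 ad groups with the most impressions over the last 30 days.",
        "Copy the existing ads, changing headline two to 'Prices from $450'.",
        "Check CTR and conversion rate weekly for disasters.",
    ],
    started=date(2017, 1, 1),   # placeholder date
)
print(price_test.review_date())
```

The point isn't the code itself; it's that each entry captures what was done, how, and when it gets reviewed, in one place.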
Next we record the outcome of the experiment - the raw numbers, whether they look good or bad. Here's my outcome.
The experiment ran for one month from [date removed] to [date removed].
The control ads received 109,876 impressions. The CTR for the control ads was 8.45% and the conversion rate was 6.93%. The ads were shown at an average position of 2.1.
The test ads received 107,456 impressions. The CTR was 6.06% and the conversion rate was 11.85%. The ads were shown at an average position of 1.9.
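With impression counts this large it's worth checking that the differences aren't just noise before drawing a conclusion. Here's a rough two-proportion z-test on the numbers above. The click and conversion counts are back-calculated from the reported rates, so they're approximate, and the test itself is my own addition rather than part of the original workflow.

```python
from math import sqrt

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """z statistic for the difference between proportions x1/n1 and x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Figures from the experiment; counts derived from the reported rates.
control_imps, test_imps = 109_876, 107_456
control_clicks = round(control_imps * 0.0845)      # ~9,285
test_clicks = round(test_imps * 0.0606)            # ~6,512
control_convs = round(control_clicks * 0.0693)     # ~643
test_convs = round(test_clicks * 0.1185)           # ~772

z_ctr = two_proportion_z(control_clicks, control_imps, test_clicks, test_imps)
z_conv = two_proportion_z(control_convs, control_clicks, test_convs, test_clicks)
# |z| > 1.96 means the difference is significant at the 95% level.
print(f"CTR z = {z_ctr:.1f}, conversion z = {z_conv:.1f}")
```

Both differences come out well past the 1.96 threshold, so the CTR drop and the conversion-rate lift are very unlikely to be chance.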
The conclusion is your initial interpretation of the outcome. For example:
Adding the price to the headline caused the CTR to decrease even though the ads were shown in a slightly higher position.
The conversion rate increased.
The price headline appears to be acting as a filter to reduce clicks from people who are looking for cheaper services.
I built a tool to record my experiments in this format. The tool shows me an overview of all the experiments we're currently running. It keeps a record of what’s worked and what hasn’t. It allows for notes about what was done and, more importantly, why it was done. It reminds me to check stats and do the regular maintenance work.
I don't have to rely on my memory or dig through a spreadsheet. Everything I need to know about a campaign is at my fingertips.
The tool will be open for public use in a month or two. Pop me a note if you'd like an early invitation.