I think people writing blogs or posting on Facebook or Reddit want to help. Their advice on optimising Google Ads comes from a genuine desire to share what's worked for them - as does mine. They don't set out to misinform.
But not all advice is good advice. In fact, following some of it will harm your campaigns.
I can think of four reasons why good intentions turn into bad advice...
Nobody starts an article on, say, choosing a bidding strategy with a detailed description of the account: how many campaigns, how many keywords, what happened in the past, the goal they're optimising for, and so on.
Instead, they launch into what worked for them.
But what worked in their account - their context - might be wrong for yours.
A tactic that increased conversion rates for a nationwide campaign with 500,000 impressions a month might break a local campaign that only gets 1,000 impressions a month.
A routine that works well in an agency might not be a good fit for an in-house PPC manager.
A campaign structure that supports 10 000 keywords might be too complex for a small account.
You get the picture. Advice is correct in the context it was learned in.
We humans tend to accept statements at face value if they're made by people with authority. We don't question a doctor the way we'd question some average bloke saying the same thing.
Don't feel bad about this. Even highly educated scientists have been caught out by it.
In 1923, the leading American zoologist Theophilus Painter declared that humans had 24 pairs of chromosomes. He had made a mistake, but his influence and standing were so great that the error persisted for the next 30 years. Scientists who counted differently assumed they were the ones who were mistaken and altered their conclusions. Even textbook photos showing 23 pairs of chromosomes were labelled as having 24.
You'll recognise an appeal to authority - and possibly bias - whenever advice leans on who said it rather than on evidence that it works.
We humans tend to follow the majority, even if we think they’re wrong.
In the 1950s, Solomon Asch documented this by experimenting on college students in what are now known as the Asch conformity experiments.
Groups of eight took part in a simple task. Seven in each group were actors. Number eight was the unsuspecting lab rat: the subject of the experiment.
Here's how it worked.
The actors were introduced to the subject as fellow participants.
Each group was shown a card with a line on it. They were then shown a second card with three lines labelled A, B and C. One of those lines was exactly the same length as the first line; of the other two, one was much longer and one much shorter. There was no way anyone could get it wrong.
Each person in the group had to say aloud which line matched the first - A, B or C. The group was seated so that the actors spoke first and the subject spoke last.
They repeated this exercise 18 times with each subject.
On the first two rounds the actors gave the correct answer.
On the third round, all seven actors gave the same wrong answer. They did the same on 11 of the remaining 15 rounds.
The focus of the experiment was to see how many subjects would change their correct answer to match the group's wrong answer.
About three quarters of the subjects followed the crowd and gave the wrong answer at least once. Only a quarter stuck to their correct (but unpopular) answers in every round.
For every winner in an experiment or split test there must be a loser.
I bet you there are important lessons in articles that never get written. I could write one right now, based on something that didn't work: "I moved the best performing search terms into an alpha campaign and the cost per click doubled". But I won't. Nobody writes these articles.
Losers aren’t sexy and they don’t make good traffic-pulling headlines.
Instead we hear only about the winners: the small number of ideas, experiments and changes that led to an improvement.
The missing losers are just as important, because optimisation is as much about doing less of the wrong things as it is about doing more of the right things.
We all want to apply best practice, but the truth is that there is no all-embracing best practice.
What’s best practice for one account isn’t always best practice for another account.
There are some generally accepted good practices.
Nobody would argue with you if you said "always split test your ads" or "try to reduce your cost per acquisition". But even these generally accepted good practices have exceptions.
There would be little point spending lots of expensive hours writing three great ads for an ad group that only gets 5 impressions a month. It'd take years to get enough data to find a CTR winner, and decades to find a conversion rate winner. Those hours could be better spent elsewhere.
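If you want to put rough numbers on that, the standard two-proportion sample-size formula will do. Here's a minimal Python sketch - the 3% baseline CTR and the 4.5% challenger are made-up numbers, and the point is only the order of magnitude:

```python
from statistics import NormalDist

def impressions_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Impressions each ad needs for a two-proportion z-test to
    detect a CTR change from p1 to p2 (standard approximation)."""
    z = NormalDist().inv_cdf
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z(1 - alpha / 2) + z(power)) ** 2 * variance / (p1 - p2) ** 2

n = impressions_per_variant(0.03, 0.045)  # made-up 3% vs 4.5% CTR
months = n * 2 / 5                        # two ads sharing 5 impressions a month
print(f"~{n:,.0f} impressions per ad, ~{months / 12:,.0f} years to run")
# ~2,514 impressions per ad, ~84 years to run
```

And that's with only two ads and a generous 50% relative lift. Conversions are far rarer than clicks, so a conversion rate winner would take correspondingly longer.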
The same goes for reducing CPA. On the surface it makes sense, but what if the only way to reduce the CPA meant dropping the total number of leads by half? You might cut the ad spend by half, but half the leads might also mean half the sales. Nobody would thank you for that.
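To make that concrete, here's a toy calculation with made-up numbers - assume leads close into sales at a fixed rate, and each sale is worth the same amount:

```python
# Toy numbers, purely illustrative
CLOSE_RATE = 0.25          # one in four leads becomes a sale
REVENUE_PER_SALE = 1_500   # revenue per sale

def cpa_and_profit(spend, leads):
    """Cost per lead, and profit after ad spend."""
    cpa = spend / leads
    profit = leads * CLOSE_RATE * REVENUE_PER_SALE - spend
    return cpa, profit

print(cpa_and_profit(10_000, 200))  # before: CPA 50.00, profit 65,000
print(cpa_and_profit(5_000, 110))   # after:  CPA ~45.45, profit 36,250
# The CPA fell by nearly 10%, but with only 110 of the original
# 200 leads, profit nearly halved.
```

A lower CPA looks like a win on the dashboard, but the business is worse off.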
There are no secrets, no magic, no "one simple trick that gets a 15% increase in CTR". Anyone who tells you otherwise is probably trying to sell something, or doesn't yet know enough about Google Ads.
But there is one universal principle: Do more of what works and less of what doesn’t.