With the rise of artificial intelligence and machine learning, the one word that comes to mind is “automation”. Especially in marketing, algorithms such as next-best-offer, customer-churn and cross-sell prediction are taking much of the guesswork out of targeting, moving towards a day when shotgun advertising is a thing of the past and we engage with organisations on a one-to-one level.
John Wanamaker once said, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half”. Fortunately, these days we can say with far more confidence “which half” is wasted, and rectify the problem, resulting in a stronger return on marketing spend than ever before.
These innovations have given marketers more power and credibility within an organisation; however, there is a catch. Machine-learning algorithms are, broadly speaking, “interpolation methods”: they are good at predicting data similar to what they have seen before, but they can become wildly inaccurate when asked to predict data that falls outside that realm. Time is one such example: machine-learning algorithms are often used for forecasting, yet using the date as a predictor is fundamentally flawed, because every future prediction is, by definition, an extrapolation beyond the training data.
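A minimal sketch of this failure mode, using a toy 1-nearest-neighbour regressor as a stand-in for interpolation methods generally (the data and numbers here are invented for illustration):

```python
import numpy as np

# Illustrative data: a steadily rising sales trend over 24 months.
train_x = np.arange(24)          # months seen during training
train_y = 100 + 10 * train_x     # sales rise by 10 each month

def predict(x):
    """1-nearest-neighbour prediction: return the target of the
    closest training point (an interpolation method)."""
    idx = np.abs(train_x - x).argmin()
    return train_y[idx]

# Inside the training range, the model tracks the trend exactly.
print(predict(12))   # 220, matching the true trend value

# Beyond the training range, every prediction collapses to the
# last observed value -- the upward trend is lost entirely.
print(predict(36))   # 330, but the trend would put it at 460
print(predict(48))   # still 330
```

Tree-based models behave the same way in practice: outside the range of dates they were trained on, predictions flatline rather than continuing the trend.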
The same limitation applies to extraneous factors. Consumer behaviour, for example, is vastly different from what it was 10 or 20 years ago: we used to treat the salesperson in a store as the source of truth for product information; now we turn to the internet. A model trained on yesterday’s behaviour can therefore be fundamentally mismatched with tomorrow’s, and automating it does not make the mismatch go away.
Initial cross-validation can only take things so far. Cross-validation does one thing well: it identifies when an algorithm overfits its training data. It does not identify poor research design, nor does it reliably recognise when extrapolation becomes inappropriate.
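The blind spot can be demonstrated with a toy example (again a 1-nearest-neighbour model on invented trending data, purely for illustration): a shuffled train/test split scatters test points between training points, so the model only ever interpolates and looks healthy, while a chronological split forces it to extrapolate into the future and the error explodes.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(100.0)
y = 5 * x + rng.normal(0, 1, 100)   # rising trend plus noise

def one_nn_error(train_idx, test_idx):
    """Mean absolute error of 1-nearest-neighbour predictions."""
    preds = [y[train_idx[np.abs(x[train_idx] - xi).argmin()]]
             for xi in x[test_idx]]
    return np.abs(np.array(preds) - y[test_idx]).mean()

# Shuffled split: test points sit between training points.
perm = rng.permutation(100)
shuffled = one_nn_error(perm[:80], perm[80:])

# Chronological split: the test set lies entirely in the future.
chrono = one_nn_error(np.arange(80), np.arange(80, 100))

print(shuffled, chrono)   # the chronological error is far larger
```

This is why time-aware validation schemes (training strictly on the past, testing on the future) matter for forecasting problems, even when ordinary cross-validation scores look excellent.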
The solution is to rinse and repeat. The paradox of automation is that it should never be set-and-forget; it should merely remove the heavy lifting. Machine learning may be the industry’s shiny new toy, but it carries the same limitations as traditional methods. Algorithms should be benchmarked, reviewed and retrained where required.
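In practice, “rinse and repeat” can be as simple as recording a model’s error at training time and flagging a retrain when live performance drifts past a chosen tolerance. A minimal sketch, where the function name, tolerance and numbers are all illustrative assumptions rather than any standard API:

```python
import numpy as np

def needs_retraining(baseline_error, recent_errors, tolerance=1.5):
    """Flag a retrain when the mean of recent live errors exceeds
    the training-time baseline by more than `tolerance` times.
    (Hypothetical monitoring rule for illustration only.)"""
    return float(np.mean(recent_errors)) > tolerance * baseline_error

print(needs_retraining(10.0, [9.8, 10.5, 11.0]))   # False: still healthy
print(needs_retraining(10.0, [18.0, 22.0, 25.0]))  # True: drifted, retrain
```

The point is not the specific threshold but the discipline: the benchmark is computed automatically and reviewed on a schedule, so the human effort goes into judging the drift, not into hunting for it.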