Every once in a while, we get a question about how Tracking First differs from Adobe’s built-in Classification Rule Builder (CRB). It’s a good question, and a fairly easy one to answer. The simplest and most important thing to say is this: using Tracking First and Adobe CRB is not an either/or proposition. In fact, a best practice would be to use both. The main difference is whether your tracking codes will be prepared and described before the campaign goes live (using Tracking First), or only after the campaign goes live (using just Classification Rule Builder).

If you’re going to use Adobe CRB, it’s important to take care that your regular expression (aka your RegEx) does not overmatch. In other words, you must not use too broad a brush in your matching logic. If you do, you run the risk of inadvertently destroying or overwriting your previous rules -- creating values for classifications that never should have been created. In designing your rules, you have to think not only of what data matches your logic, but of what might accidentally match, and of what doesn’t match. You can ruin good data accidentally by using a logic expression that isn’t constrained enough.
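To make overmatching concrete, here is a minimal sketch in Python (the tracking-code names and prefix scheme are invented for illustration, not Adobe CRB syntax). A broad pattern meant to catch email codes also sweeps in an unrelated legacy code; anchoring and constraining the pattern prevents that:

```python
import re

# Hypothetical tracking codes: "em" = email, "sm" = social (assumed naming scheme).
codes = ["em-spring-001", "em-spring-002", "sm-spring-001", "emx-legacy-999"]

# Too broad: any code that merely starts with "em" matches,
# so the unrelated "emx-legacy-999" is swept in by accident.
broad = re.compile(r"^em")

# Constrained: require the full expected shape -- prefix, delimiter, three digits.
strict = re.compile(r"^em-[a-z]+-\d{3}$")

print([c for c in codes if broad.match(c)])   # includes "emx-legacy-999" by mistake
print([c for c in codes if strict.match(c)])  # only the true email codes
```

The same discipline applies inside CRB itself: test your rule against codes that should match, codes that should not, and codes that almost match.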
This past weekend, my 15-year-old was mowing the grass in our yard -- a Memorial Day tradition for generations of American teenagers. About halfway through the job, the lawn mower died. It turns out he had used the wrong fuel for the engine. Though it was taken from a can sitting in the garage next to the mower, it was fuel intended for a chainsaw. Using the wrong fuel not only cut short that day’s mowing -- it appears to have burned out the motor permanently.

The experience reminded me of the much-discussed challenge in marketing analytics of “garbage in, garbage out.” We are at a stage in the marketing automation revolution where we have a multitude of sophisticated tools. They can handle audience segmenting, attribution tracking, re-targeting and micro-targeting, allowing us to use consumers’ past behavior and preferences to predict their behavior to the finest level of detail and market to them just when they are at the point of considering a purchase.
Have a look at this image. Sound familiar? Web analytics has held out the elusive promise of being a set-it-and-forget-it kind of thing. “Set up your reports, and the data will fill itself in.” That promise has largely held true -- for every part of web analytics except marketing. That’s because with marketing, the web page you have today isn’t the one you had yesterday. There’s constant change: new information, new deals, new parameters. What everyone wants is a system that runs itself. Otherwise, as the figure shows, you spend all your time making sure the reporting is right. Data correction is a necessary evil, but every hour spent on it is an hour taken away from the analysis that will really help the company. Wouldn’t it be great if we could get marketing data to the same set-and-forget place as the rest of our web analytics?
In the 2017 world of IT and systems engineering, test-driven development (TDD) is quickly becoming the new mantra. No one writes a line of code these days without the intent to have that code check/test itself. If there is a bug in that code, it gets caught and fixed before it goes live, reducing any risk of breakage.

This kind of system has never been deployed on the analytics side. By convention, analytics work has relied on hacks: quick-and-dirty patches that frequently go awry, and are just as likely to backfire and cut down the analyst as to cut down her obstacles. An analyst who is winging it, trying to fill in a little gap in the proverbial data wall, can unwittingly create a huge chasm with a single stroke. Bringing a TDD approach to analytics would go some way toward changing that. It would require that whenever you make any change to your analytics, you make sure the change is fully tested before it’s deployed. This method takes more time -- and may frustrate management -- but will result in better quality control.
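As a rough illustration of what test-first analytics could look like, here is a sketch in Python. The classify_channel helper and its prefix scheme are hypothetical; the point is that the assertions are written alongside the change and must pass before anything is deployed:

```python
def classify_channel(tracking_code: str) -> str:
    """Map a tracking-code prefix to a marketing channel; 'unknown' otherwise.

    Hypothetical prefix scheme for illustration only.
    """
    prefixes = {"em": "Email", "sm": "Social", "pd": "Paid Search"}
    prefix = tracking_code.split("-", 1)[0]
    return prefixes.get(prefix, "unknown")

# Tests written with (or before) the rule -- the change ships only if these pass.
assert classify_channel("em-spring-001") == "Email"
assert classify_channel("sm-spring-001") == "Social"
assert classify_channel("mystery-code") == "unknown"
print("all classification tests passed")
```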
Often, when companies release a marketing campaign, their analytics teams spend the next few days scrambling. As quickly as possible, they need to make sure all the data is pulling correctly. Are the tracking codes working? Is the expected data reportable? When something goes wrong, as it often does, it’s hard to know who made the mistake and where. An experienced analyst can sometimes decipher from context -- seeing, say, that the broken code came from an email or a specific social media channel -- but it’s challenging detective work, and it’s a huge pain. Whatever you learn may not help anyway, because the data is already damaged.

Some companies have taken the lead and tried to solve this by creating their own governance systems to monitor the generation and management of tracking codes. Companies like Salesforce and HP have developed their own tools; that has been their only option up to this point. However, these systems are typically expensive and not core to the business. With maintenance and development time devoted solely to propping up investments they’ve already made, these systems can become a real money pit.
Anyone who knows me knows that I work in data analytics because I enjoy it. For people whose brains work differently, this can be hard to understand. Occasionally someone will ask me, “What intrigues you so much about this field?” I look at it like a game of chess. There are problems to be solved and questions to be answered. The analyst’s challenge begins as a blank slate, with every conceivable route to data reporting available. Then the specific metrics we want to derive act as constraints that pare the possibilities down to just a handful. Once we know what data picture we want, there are still multiple ways to get there: the challenge and the fun of analytics work is creating an efficient path that doesn’t break and that produces trusted data.

Then it’s about bending the technology to serve us (instead of us serving the technology), and it’s about getting the people who work with you to ask the right questions, and to do the right analyses to answer those questions.
I’m pleased to be one of the featured speakers at the upcoming ObservePoint Analytics Summit. It’s a free, virtual event, and I hope you’ll sign up for my session. To get you excited about it, here’s a sneak peek at what I’ll be talking about: Closing the Loop on Data Validation.

Everybody knows the secret to delivering quality data: you check it. You check it right before release. You check it every time a change is made to the campaign or the website, whether through a dev release or a Tag Management release. You check it once it’s pushed to production. Then you put it on the list of things to check again periodically.
So many people have written about the pros and cons of Adobe versus Google Analytics (GA). A quick search on the comparison will turn up a huge number of opinions. As a tracking code solutions provider with a considerable interest in the debate, what fresh perspective can we offer? Let’s start with a bit of background.

Origin of GA -- launched in November 2005
Task 3: Embed the Tracking Code within Landing Page Links

Tracking codes are typically appended to landing pages in the query-string parameter section of the URL. Any time you click on a sponsored ad anywhere on the internet, you’ll see not just your destination’s domain in the URL, but usually a question mark followed by a long, unintelligible character string. Somewhere in that morass you’ll find the tracking code, but its placement is something each company configures independently.
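As a sketch of how a tracking code lands in that query string, here is one way to append it in Python. The parameter name cid is an assumption for illustration -- as noted above, each company configures its own placement:

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def add_tracking_code(url: str, code: str, param: str = "cid") -> str:
    """Append a tracking code as a query-string parameter, preserving existing ones."""
    parts = urlparse(url)
    query = parse_qsl(parts.query)   # existing parameters, as (name, value) pairs
    query.append((param, code))      # add the tracking-code parameter
    return urlunparse(parts._replace(query=urlencode(query)))

print(add_tracking_code("https://example.com/landing?lang=en", "em-spring-001"))
# → https://example.com/landing?lang=en&cid=em-spring-001
```

Because existing parameters are parsed and re-encoded rather than string-concatenated, the tracking code survives whether or not the landing page already has a query string.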