‘Viewable Ads’ and Brand Dollars: We’ve Seen This Movie Before

September 20th, 2012

This article originally appeared in AdExchanger: http://www.adexchanger.com/data-driven-thinking/viewable-ads-brand-dollars/

Ever since my March column addressing viewable impressions, I’ve been tagged as “that anti-viewable impressions guy.” This misses the point. I’m not against viewable impressions. That’s like being against vegetables – they’re probably good for you. What I am against is the idea that if we can just move the standard to viewable impressions, we’ll unlock the flood of brand advertising dollars that are too afraid to move online.  Thing is, we’ve seen this movie at least twice before.

Here’s what I mean:

Back in the mid to late ’90s, some advertisers started to realize that many impressions and clicks were being created by robots and spiders deployed by huge companies like Excite, Infoseek and Lycos. Industry experts predicted that if we could just make sure every impression was seen by a human, we’d get those brand dollars. Publishers worried that their impression counts and revenues would drop precipitously. With time and effort the industry adopted some standards that mostly solved the problem, but it didn’t unleash the flood of ad dollars.

Then in the early 2000s, third-party ad servers really started to take off, and discrepancies began to arise. Advertisers started to insist on paying according to their numbers, not the publisher’s counts.  Again, industry experts predicted that using advertiser numbers would unleash brand dollars.  Again, publishers agonized over losing impressions and revenue. Now, the majority of premium ad buys are paid on third-party ad server numbers, and the money flood hasn’t materialized.

Here we are again.  Everyone knows measuring viewability — despite its obvious flaws — is better than just measuring delivery. Industry groups are pushing for it, because it’s a small but marketable win. Publishers are dragging their feet because of the cost to deploy it and the potential impact on revenue. Digital agencies see it as a bright shiny object they can use to prove to their clients that they “get” digital. For companies in the space, supporting it is essential – for example, by letting clients forecast viewability. But in the end, it won’t solve the real problems of brand advertisers online, so it’s more of a distraction than a panacea.

As I stated before, the real challenges we need to solve are laid out in the other four principles of Making Measurement Make Sense: rationalizing measurement across media, understanding online’s contribution to brand building, and generally making it easier to spend big budgets online and get ROI that makes sense. Let’s focus our efforts on these challenges, so we can grow the market to $200 billion for everyone.

Defaults matter

July 11th, 2012

Disclosure: Yieldex is not directly in the interest-based targeting business, although some of our customers are.  Nevertheless, the opinions below are my own, not my employer’s.

A very few people care a lot about online cookies, on both extremes: some are outraged and some think everything’s fine.  And an extremely large percentage of people don’t seem to care much one way or the other.  I don’t think anybody seriously argues with the idea of choice – if you want to make a choice about your privacy, you should be able to.  And I’d argue opting out of interest-based advertising has been made very easy – every major browser allows opt-out and/or has an incognito mode for anonymous browsing.  If you stipulate that the choice exists, then the debate becomes not about choice, but about defaults.

Defaults are really important.

Here’s a real life example from a study by Eric Johnson and Dan Goldstein: In Germany, citizens check a box to opt-in as an organ donor, but in Austria (a very similar country), they check a box to opt-out. The results? In Germany only 12% consent to organ donation, while in Austria nearly 99% do. Defaults in Austria are saving lives – that’s pretty important.

The default for the first 15 years of the Internet has been for browsers to accept third party cookies, which are often used for interest-based advertising.  Apple’s Safari browser was the first to alter that default, and now Microsoft is effectively changing it (by enabling Do Not Track) in Internet Explorer 10.  There is also legislation that has passed in the EU and pending in the US to restrict the use of cookies.  This means there is a very real possibility that the majority of browsers in use in a few years will have the default reversed.  This could have important repercussions, so let’s analyze the pros and cons of changing this default:

On the “change” side, no data can be collected.  A few people are happy, most don’t really care. Ads are less relevant, and less valuable to the advertiser.  Publishers make less money.  New legislation often has unforeseen and usually negative consequences.  And one more thing: we don’t actually know what the internet looks like without third party cookies, so we’re pretty much flying blind.

On the “leave it alone” side, non-personally-identifiable data is collected.  A few people are forced to click an extra time to opt-out.  Most people don’t really care.  Ads are more relevant. Publishers make more money. Does some of that money go into the pockets of corporate fat-cats who then buy yachts?  Yes.  But does the competitive market also provide incentive for reinvesting a good chunk of that money into more and better free content? You bet. This is not a zero-sum game. By the way, no new laws. And we already know it works.

So, let’s recap.

Incognito default:
  • Most people don’t care
  • Sites might require people to opt-in to get content
  • Less relevant ads
  • Less high-quality free content
  • Fewer yachts
  • More legislation, with possible negative consequences
Anonymous data default:
  • Most people don’t care
  • A few people have to click to opt-out
  • More relevant ads
  • More high-quality free content
  • More yachts
  • Less legislation
I believe that the potential downsides of changing these defaults far outweigh the potential danger of privacy violations. Don’t change the defaults.

Some things actually do change

June 10th, 2012

Anyone who has met me knows that one of the most memorable things about me is that I’m 6’7″ tall, which leads to the most common small-talk question I get: “Do you play basketball?”  For most of my life, the answer has been very simple.  No.  And if I’m feeling particularly vexed, I’ll answer “No, do you play miniature golf?”

When I was in ninth grade, I grew six inches in one year, which resulted in a tall and comically uncoordinated teenager. On the playground, team captains would always pick me first to be on their basketball team, even though I would protest that I was not good at basketball. “You’re tall – just stand there and shoot!” they would say.  Then, about halfway through the game, after I missed my 10th layup, somebody would say “Dude, you suck at basketball!” Needless to say, it didn’t take long before I swore a vow to never play basketball again.  Ever.

This was one of those rash teenage decisions that somehow stuck.  I kept that vow for nearly 30 years. I rowed crew in high school and college, then learned volleyball after I moved to California, but never played basketball again.  Until last year.

Last year, a friend organized a group of dads more as a social experiment than as a sporting event, and the activity was basketball.  I tried to bow out, but was persuaded to join by the fact that a number of the guys had never played basketball before. Much to my surprise, I found I really enjoyed the game.  I was still terrible, but playing with friends – real friends, who would keep passing to me even as I missed shot after shot – made me want to keep trying. I now play with this group as often as I can, about every other week or so, combining great exercise and bonding time. And now I don’t suck at basketball nearly as much.

This experience made me think about what other assumptions I am carrying from 30 years ago (or 20, or even 10 years ago). Ruts come in many different shapes and sizes, and sometimes can be hard to recognize, but they all benefit from regular re-visiting to see if they’re still taking you in the direction you want to go.

Now when people ask me if I play basketball, I still answer “not really.”  But now, instead of reminding me of failure, it makes me think of spending time with friends and learning new things.  Some things actually do change.

Moving to “viewable impressions” isn’t the answer

March 29th, 2012

I am biased, I’ll admit it.  I wrote the first technical impression counting standards for the IAB in 1998.  And I think that trying to move the industry to “viewable impressions” is a bad idea, for three reasons: it won’t make any difference to marketing ROI, it doesn’t help bring dollars online, and it will be expensive and confusing to adopt.

Let’s start with the argument that using “viewable impressions” improves marketing ROI.  Measurement vendors trumpet “CTRs are higher!” for the marketer, while “CPMs will rise!” for the publisher.  Let’s do a little math.  C3 Metrics claims that CTRs are understated by 179% because so many ads aren’t in view.  Wow – CTRs will double!  Except, publishers will charge double the CPM for “viewable impressions”, so the CPC (and ROI) is actually the same.  On the publisher side, Magid Abraham presented to the IAB an example of 35m premium impressions selling at $5 CPM netting $175k to the publisher.  However, only 75% of those are “viewable” according to ComScore, so the eCPM is “actually” $6.67.  Wow – CPMs will rise!  Except that the publisher can only charge that higher CPM (CPV, actually) for “viewable impressions”, so their revenue stays the same.   And somebody has to pay the measurement vendor.  This is progress?
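
To make the arithmetic concrete, here is a minimal sketch in Python using only the figures quoted above (35 million impressions, a $5 CPM, and 75% viewability). It simply shows why the “higher” viewable eCPM nets out to exactly the same revenue.

```python
# Revenue-neutrality of the "viewable impressions" re-labeling,
# using the ComScore example quoted above.
impressions = 35_000_000
cpm = 5.00                                    # dollars per 1,000 impressions
revenue = impressions / 1_000 * cpm
print(f"Revenue on all impressions:   ${revenue:,.0f}")       # $175,000

viewable_rate = 0.75
viewable = impressions * viewable_rate
ecpm_viewable = revenue / viewable * 1_000
print(f"eCPM on viewable impressions: ${ecpm_viewable:.2f}")  # $6.67

# But the publisher can only charge the higher rate on the smaller base:
revenue_after = viewable / 1_000 * ecpm_viewable
print(f"Revenue at the 'higher' CPM:  ${revenue_after:,.0f}")  # still $175,000
```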

These “increases” may improve the perception of online advertising, but marketers and publishers are smart enough to know they don’t make any real difference.  Yes, the current impression standard is flawed in many ways, but we have over a decade of experience in setting rate cards, negotiating deals, and measuring results with it.  A new standard will have new as-yet-unknown flaws.  More importantly, it means creating new rate cards for CPV, and then redefining CTR (should it be VCTR?) with viewable impressions as the denominator, so people don’t compare apples and oranges when looking at historical data.  The cynic in me says that this apples/oranges comparison is the main reason this idea is getting traction, but I can’t imagine anyone I know falling for that.  Other cynical reasons for the excitement may be that many agencies see this new metric as just the ticket to demonstrate to their clients that they “get” digital, and a few technology vendors see this as their path to revenue.  But in my view, this metric just adds another tax in Tolman Geffs’ slide of ad ecosystem intermediaries, without creating any real value.

The real challenges we need to solve are laid out in the other four principles of Making Measurement Make Sense: rationalizing measurement across media, understanding online’s contribution to brand building, and generally making it easier to spend big budgets online and get ROI that makes sense.  If we can focus our efforts on these challenges, we can get to the $200 billion market we all want to see.

This article originally appeared in AdExchanger.

A Sermon on Cognitive Bias

January 26th, 2012

This is a talk I gave to a group of friends who periodically gather to present TED-style talks. I dressed in a dark suit (uncharacteristic for me) and adopted a solemn demeanor.

Welcome brothers and sisters, to the sermon of the evening. As with many sermons, let us start with a reading from scripture. In this case, my bible is a book called “Thinking Fast And Slow” by Nobel prize winner Daniel Kahneman.

A reading from the book of Daniel: (slightly edited for brevity)

An inconsistency is built into the design of our minds. We want pain to be brief and pleasure to last. But our memory has evolved to represent the most intense moment of an episode of pain or pleasure (the peak) and the feelings when the episode was at its end, while ignoring the duration of pain or pleasure. These are called duration neglect and the peak-end rule.

We designed an experiment using a mild form of torture that I will call the cold-hand situation. Participants are asked to hold their hand up to the wrist in painfully cold water until they are invited to remove it. Each participant endured two cold-hand episodes: The short episode consisted of 60 seconds of immersion in water at 14° Celsius, which is experienced as painfully cold, but not intolerable. The long episode lasted 90 seconds. Its first 60 seconds were identical to the short episode. At the end of the 60 seconds, the experimenter opened a valve that allowed slightly warmer water to flow into the tub. During the additional 30 seconds, the temperature of the water rose by roughly 1°, just enough for most subjects to detect a slight decrease in the intensity of pain.
After the second trial, the participants were given a choice about the third trial. They were told that one of their experiences would be repeated exactly, and were free to choose whether to repeat the experience they had had with their left hand or with their right hand. Of course, half the participants had the short trial with the left hand, half with the right; half had the short trial first, half began with the long, etc. This was a carefully controlled experiment.

Fully 80% of the participants who reported that their pain diminished during the final phase of the longer episode opted to repeat it, thereby declaring themselves willing to suffer 30 seconds of needless pain in the anticipated third trial. If we had asked them, “Would you prefer a 90-second immersion or only the first part of it?” they would certainly have selected the short option. We did not use these words, however, and the subjects did what came naturally: they chose to repeat the episode of which they had the less aversive memory. The subjects knew quite well which of the two exposures was longer—we asked them— but they did not use that knowledge. Rules of memory – the peak-end rule, and duration neglect – determined how much they disliked the two options, which in turn determined their choice.

Now think about this in the context of some of the decisions you have seen at work, at home, in your social life. When your friend decides to manage another product launch, they are remembering that the last product launch was successful, not the fact that they worked with jerks every day for 3 years. People endure tedium in line for hours to get a glimpse of their favorite celebrity, but only remember how exciting the experience was.

So, now that you know about the peak-end rule, and duration neglect, and I’ve given you some examples, you’ll never make these kinds of mistakes again, right? Wrong.

Here’s the real problem: not only are we blind to these cognitive biases, but we are blind to our blindness. We are confident, even when we are wrong. The only way to avoid them is to constantly question our own thinking, which would be impossibly tedious and inefficient.

But here’s the good news: it is easier, and more fun, to recognize other people’s mistakes than our own. The problem is that it is hard to talk about errors in judgment with other people. Most people naturally respond with rationalization and defensiveness.

So here’s what I am giving you: a name for this cognitive bias, duration neglect. By naming it, we make it easier to recognize and talk about. And instead of being a mistake, it’s more like a medical condition. Gently saying “you’re suffering from duration neglect” is more like saying you have a runny nose: it’s something over which you have no control, and we can work together to mitigate the effects. And hopefully make better decisions.

To come back to the book of Daniel: “My hope is that by giving you the vocabulary to discuss these and the ability to recognize and name them, we can create a community in which we can help each other avoid or at least mitigate these challenges and limit the potential damage.”

Your brain has blind spots; one of them is duration neglect, which can sometimes result in poor decision making. Avoiding these kinds of biases is almost impossible – they are hardwired into your brain. However, you can often see them in others, and with a little humor and humility, you can sometimes help them avoid bad decisions.

So here is the lesson I leave you with: to err is human. To – gently, candidly, humbly – help someone avoid an error, divine.


Different Strokes for Different Folks

May 27th, 2011

I was a co-founder and CTO of NetGravity back in 1995, and I spent a lot of years building ad servers. So I can say this from experience: ad servers are hard to build. There’s a lot of expertise that goes into delivering the right ad to the right person where response times are measured in milliseconds and uptimes are measured with five nines. Now it’s been almost four years since I co-founded Yieldex, and I can say another thing from experience: predictive analytic systems are also hard to build. Moving, managing and processing terabytes of data every day to enable accurate forecasting, rapid querying, and sophisticated reporting takes a lot of expertise. And, as it turns out, it’s very different from ad server expertise.

When I ran an ad server team, all we wanted to work on was the server. We delighted in finding ways to make the servers faster, more efficient, more scalable, and more reliable. We focused on selection algorithms, and real-time order updates, handling malformed URLs and all kinds of crazy stuff. Our customers were IT folks and ad traffickers who needed to make sure the ads got delivered. The reporting part was always an afterthought, a necessary evil, something given to the new guy to work on until someone else came along.

Contrast that with the team I have now, where we obsess over integrating multiple data sources, forecasting accurately, and clearly presenting incredibly complex overlapping inventory spaces in ways that mere humans can understand and make decisions about. Reporting is not an afterthought for us, it’s our reason for being. Our success criteria are around accuracy, response time, and ease of getting questions answered through ad-hoc queries. Our customers are much more analytical: inventory managers, sales planners, finance people, and anyone else in the organization working to maximize overall ad revenue.

These teams have completely different DNA, so it’s not surprising that a team good at one of them might not be so great at the other. This is why so many publishers are unhappy with the quality of the forecasts they get from their ad server vendor, and one of the reasons so many are signing up with Yieldex. Good predictive analytics are hard to build, and nobody has built the right team and spent the time and effort to get them right for the digital ad business. Until now.

It’s All In The Data

February 18th, 2011

Okay, I admit it, I stole the headline for this post from an excellent article by Ben Barokas in the AdMeld Partner Forum Executive Outlook booklet. In fact, I’m going to quote his entire paragraph:

For most digital publishers, ‘ad operations’ really means ‘revenue operations’ and that means a lot of data analysis. Data is the key to exceeding the expectations of clients, partners, and shareholders. Data tells ops how much revenue is being generated from specific sales channels, ad agencies, advertisers, site sections, and even users. Data enables them to make smarter decisions, not only on behalf of their own company, but for the benefit of their clients: what to sell, who to sell it to, and under what conditions to maximize value for everyone.

At Yieldex, we couldn’t agree more. Many major publishers use both Yieldex and AdMeld as complementary tools to get the “revenue radar” they need to dramatically increase CPMs and revenue. AdMeld co-sponsored our own 2010 Yield Executive Summit, leading the agency panel. Their 2011 AdMeld Partner Forum was a terrific event, and we appreciated the opportunity to attend.

Greater than the Sum

January 5th, 2011

Any great partnership should be greater than the sum of its parts. Whether a marriage, artistic collaboration, or business partnership, great partners build on each other’s strengths to achieve much more than either could alone.

In that spirit, I’m delighted to announce Andrew Nibley as the new CEO of Yieldex. I’ve spent quite a bit of time with Andy over the last few months, and I feel that our skills complement each other perfectly to take Yieldex to the next level. More importantly, I like working with him, and we have good chemistry, which is an essential ingredient for any great partnership.

Andy has a terrific background in digital media, having been CEO of five different media companies. Most recently he ran WPP’s Marsteller, a digital agency, and among many other successes he was a co-founder of Reuters New Media. He is also an excellent leader of people, with experience managing rapid growth and many other skills we will take full advantage of as Yieldex drives tremendous growth in 2011.

I look forward to focusing on my strengths in product vision, strategy, and business development. My favorite part of the day is when I get to solve hard (and valuable) customer problems with cutting-edge technology. Yieldex is well positioned to continue to drive dramatic uplift in revenues for our customer base as we broaden our platform, and I look forward to helping drive that.

I am proud of the success Yieldex has enjoyed so far, and humbly grateful to the team that has worked so hard to make it happen. I am excited that Andy and I are partnering together to continue to drive value for our customers and for Yieldex, through 2011 and beyond!

For more info, see our press release and PaidContent article. Thanks to Ryan Graves and his blog 1+1=3 for the idea for this post.

Ditching the Droid 2 for the Blackberry Torch

August 2nd, 2010

I’ll start with this: I’m a confirmed Blackberry addict. Have had one since they were two-way pagers. But since they seem to be falling behind on the software side, and I’m fed up with AT&T’s crappy service, it seemed like a good time to look for a new phone.

My one non-negotiable point was the physical keyboard. I have had an iPhone and an iTouch, and I simply can’t stand typing on the virtual keyboard. I need to be able to make a quick, sometimes one-handed reply to an email, and I found the iPhone actually prevented me from replying to email out of sheer frustration. I also hate the unpredictability of the auto-correct, and the fact that I can’t type without looking closely at the screen. I type fast enough on my Blackberry to take notes at conferences, and write relatively long emails, and I just can’t give that up.

Given that, I was pretty excited by the Droid 2, which seemed to meet my specs perfectly. Improved keyboard, cool new Android OS, and Verizon service. So I really wanted to love it. I tried to love it. Really.

Since the keyboard was the main thing, let’s talk about that first. The raised bumps are nice – I can feel the keys – but I still find them hard to detect without looking. A huge problem is the top row of keys, which sit hard up against the raised bezel of the screen, so it takes extra effort to squeeze my fingers against the bezel to hit them reliably. Often-used keys like backspace and the numbers are up there, making it doubly annoying. A few design decisions are strange – the Alt Lock key is right next to the A, so I hit it by accident all the time. The arrow keys are great for quick editing, and the search key is nice, but I couldn’t figure out when to use the OK button and when I could just use Enter. Bottom line: I could not get comfortable or fast typing on the keyboard, even after carrying it for two weeks, and using it exclusively for 3-4 days. I even tried Swype for a while, and I think it’s very cool, but I never got that fast with it either. This was ultimately the deal-breaker.

With the punchline out of the way, there were a few other deal-breakers as well. First, the battery life is abysmal. When I was using it as my main phone, with 1-2 hours of talk, lots of email, and some browsing, it lasted only 8 hours. Who works only 8 hours? I can’t charge my phone twice a day, that’s ridiculous.

Another deal-breaker was the fact that I couldn’t search my Exchange email. Not just the email on the server – you can’t even search the email on the *phone*! Completely unacceptable. I also found it annoying that I had to tap several times to edit an appointment. Other than that, I thought the Exchange support was decent.

Finally – and this one surprised me – the speed wasn’t actually that fast. Sure, the graphics were nice and all, but actually working on the phone often required annoying waits between task switches. And twice I did that most important of speed comparisons: how long from the time the airplane’s wheels hit the tarmac to the time when emails start hitting your phone? The Blackberry crushed the Droid 2 each time.

Have to say, there were several things I absolutely loved about the Droid 2 as well, and will miss. The voice commands and voice search are terrific. Just being able to say “Navigate to 123 Maple Street” and get turn-by-turn voice nav is an awesome thing. Also, all the apps are great fun to play around with, and I happily wasted several hours trying out lots of them. The running app in particular was great – listening to Pandora while mapping your run is very cool.

The Verizon data speed is definitely better, and my main regret is that I’m still stuck with crappy AT&T service. So, I’ll keep my Verizon MiFi and use it when necessary, but it’s a hack I don’t like. Wish Verizon had the Torch.

I’m still fairly new to the Torch, but it seems to work pretty well. The keyboard is not quite as awesome as the Bold, but still much better than anything else out there. Love the trackpad and the touch-screen combo, I found all the things I expected to work just *worked*. Not quite as fun or new as the Droid 2, but for getting work done, seems like the right choice for me.

Real-Time Selling: Your Buyers Know How Much Your Inventory Is Worth – Why Don’t You?

March 8th, 2010

Let me be the first to introduce you to the hot new TLA (three-letter acronym) for 2010: RTS. 2009 brought you RTB (real-time bidding) done by DSPs (demand-side platforms). On the other side, we’ve just been introduced to SSPs (supply-side platforms), so that means they should do – you guessed it – Real-Time Selling!

I am only partly joking here. As I’ve written before, RTB continues to skew the power in favor of the buyers. A healthy ecosystem requires a balance of power between buyers and sellers. To help restore the balance, publishers need a complementary system – an SSP – and part of what an SSP should provide is real-time selling.

So, what is real-time selling? We define it as the ability for sellers to set floor prices in real time to make sure they’re getting full value for their inventory, in the same way buyers can set bids in real time based on available information at the time of the impression request.

One of the major selling points of DSPs is they can incorporate information from many sources – agency and advertiser data, third party providers, and the publishers themselves – and use it to decide how much an impression is worth. Somewhat less obvious is that the worth of the impression is actually just a ceiling on what the buyer is willing to pay, and sophisticated bid management programs often bid much lower in an attempt to win the impressions at the lowest possible price. This means the publisher may get much less than the buyer would have been willing to pay.

This is why publishers need to use all available information to set their floor price for an impression. Some of the information they might want is unavailable – agency or advertiser information, or certain re-targeting cookies – but any third party and publisher data can be incorporated. Then, sophisticated analysis is required to detect segments of inventory where floors should be raised or lowered – the closer to real-time the better. These algorithms will no doubt continue to be refined over the next several years.
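
To illustrate what such floor-setting analysis might look like, here is a deliberately simplified sketch in Python. The function name, the percentile target, and the damping step are all hypothetical choices for illustration – this is not any vendor’s actual algorithm, just one plausible rule: nudge each inventory segment’s floor toward the low end of recently observed winning bids.

```python
# Hypothetical floor-adjustment rule for one inventory segment.
# Illustrative only; real floor-setting algorithms would weigh far
# more signals (time of day, user segment, channel mix, etc.).
def adjust_floor(current_floor, recent_winning_bids,
                 target_percentile=0.25, max_step=0.10):
    """Return an updated floor CPM for a segment.

    current_floor       -- the segment's current floor CPM (> 0)
    recent_winning_bids -- winning bid CPMs seen in a recent window
    target_percentile   -- aim near the low end of observed demand
    max_step            -- cap the relative change per cycle, so
                           floors adapt smoothly instead of jumping
    """
    if not recent_winning_bids:
        return current_floor  # no demand signal; leave the floor alone
    bids = sorted(recent_winning_bids)
    target = bids[int(target_percentile * (len(bids) - 1))]
    step = (target - current_floor) / current_floor
    step = max(-max_step, min(max_step, step))
    return round(current_floor * (1 + step), 2)

# A segment floored at $1.00 whose bids clear between $1.50 and $2.40
# gets nudged up 10% this cycle:
print(adjust_floor(1.00, [1.80, 2.10, 1.50, 2.40]))  # 1.1
```

Whatever the specific rule, the key point stands: the floor should track real demand signals, and the more frequently it is recomputed, the less money is left on the table.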

Premium publishers have a much more complex problem: they also sell guaranteed impressions over a period of time. As long as those up-front CPM-based buys were much more valuable than the secondary channel, this was easy to solve: serve the guaranteed ads first, then look at the exchange or network. But as the secondary channel begins valuing some impressions more highly, the publisher needs to manage global yield optimization across multiple channels, and the floor-setting algorithms need to incorporate that data as well. This may seem like it’s years off, but several of our customers have asked us about this capability, and we’ve applied for patents in this area.
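
Here is an equally simplified sketch of that cross-channel decision, under the same caveat: the weighting scheme below is a hypothetical illustration, not a real yield-optimization algorithm. The idea is that an exchange bid should win an impression only when it beats the opportunity cost of delivering a guaranteed deal, and that opportunity cost rises as the remaining guaranteed commitment gets tight relative to forecast supply.

```python
# Toy channel-selection rule for a single impression. Illustrative
# only; production systems optimize across many deals at once.
def choose_channel(exchange_bid_cpm, guaranteed_cpm,
                   projected_supply, remaining_guaranteed_goal):
    if remaining_guaranteed_goal <= 0:
        return "exchange"  # guarantees satisfied; take any paying bid
    # How tight is the remaining commitment vs. forecast supply?
    scarcity = remaining_guaranteed_goal / max(projected_supply, 1)
    # Hypothetical weighting: the scarcer the supply, the closer the
    # opportunity cost gets to the guaranteed deal's full CPM.
    opportunity_cost = guaranteed_cpm * min(1.0, scarcity)
    return "exchange" if exchange_bid_cpm > opportunity_cost else "guaranteed"

print(choose_channel(8.00, 5.00, 1_000_000, 900_000))  # exchange
print(choose_channel(2.00, 5.00, 1_000_000, 900_000))  # guaranteed
```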

Lots of three-letter acronyms have been proposed in the last couple of years, and it’s way too early to tell what the final landscape will look like. That said, forward-thinking publishers who are looking at technology platforms to serve them for the next decade or so should be asking about their plans for Real-Time Selling.

This post also appears at AdExchanger.com in “The Sell-Sider,” a column written by the sell-side of the digital media community.