Archive for the 'Advertising' Category

Moving to “viewable impressions” isn’t the answer

Thursday, March 29th, 2012

I am biased, I’ll admit it.  I wrote the first technical impression counting standards for the IAB in 1998.  And I think that trying to move the industry to “viewable impressions” is a bad idea, for three reasons: it won’t make any difference to marketing ROI, it doesn’t help bring dollars online, and it will be expensive and confusing to adopt.

Let’s start with the argument that using “viewable impressions” improves marketing ROI.  Measurement vendors trumpet “CTRs are higher!” for the marketer, while “CPMs will rise!” for the publisher.  Let’s do a little math.  C3 Metrics claims that CTRs are understated by 179% because so many ads aren’t in view.  Wow – CTRs will double!  Except, publishers will charge double the CPM for “viewable impressions”, so the CPC (and ROI) is actually the same.  On the publisher side, Magid Abraham presented to the IAB an example of 35m premium impressions selling at $5 CPM netting $175k to the publisher.  However, only 75% of those are “viewable” according to ComScore, so the eCPM is “actually” $6.67.  Wow – CPMs will rise!  Except that the publisher can only charge that higher CPM (CPV, actually) for “viewable impressions”, so their revenue stays the same.   And somebody has to pay the measurement vendor.  This is progress?
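The arithmetic in those examples is easy to check. Here is a quick sketch (figures taken from the examples above; the helper function name is mine):

```python
def cpm_revenue(impressions, cpm):
    """Revenue for a CPM-priced buy: (impressions / 1000) * rate."""
    return impressions / 1000 * cpm

# Publisher-side example: 35M premium impressions at a $5 CPM.
revenue = cpm_revenue(35_000_000, 5.00)        # $175,000
viewable = 35_000_000 * 0.75                   # 75% measured as viewable
ecpm_viewable = revenue / (viewable / 1000)    # the "actual" eCPM on viewables
print(revenue, round(ecpm_viewable, 2))        # 175000.0 6.67

# Marketer-side: if CTR doubles but the CPM charged doubles too,
# the effective cost per click is unchanged.
clicks_per_1k, cpm = 1.0, 5.00
cpc_before = cpm / clicks_per_1k
cpc_after = (cpm * 2) / (clicks_per_1k * 2)
print(cpc_before == cpc_after)                 # True
```

Either way you slice it, the money ends up in the same place.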

These “increases” may improve the perception of online advertising, but marketers and publishers are smart enough to know they don’t make any real difference.  Yes, the current impression standard is flawed in many ways, but we have over a decade of experience in setting rate cards, negotiating deals, and measuring results with it.  A new standard will have new as-yet-unknown flaws.  More importantly, it means creating new rate cards for CPV, and then redefining CTR (should it be VCTR?) with viewable impressions as the denominator, so people don’t compare apples and oranges when looking at historical data.  The cynic in me says that this apples/oranges comparison is the main reason this idea is getting traction, but I can’t imagine anyone I know falling for that.  Other cynical reasons for the excitement may be that many agencies see this new metric as just the ticket to demonstrate to their clients that they “get” digital, and a few technology vendors see this as their path to revenue.  But in my view, this metric just adds another tax in Tolman Geffs’ slide of ad ecosystem intermediaries, without creating any real value.

The real challenges we need to solve are laid out in the other 4 principles of Making Measurement Make Sense: rationalizing measurement across media, understanding online’s contribution to brand building, and generally making it easier to spend big budgets online and get ROI that makes sense.  If we can focus our efforts on these challenges, we can get to the $200 billion market we all want to see.

This article originally appeared in AdExchanger.

Different Strokes for Different Folks

Friday, May 27th, 2011

I was a co-founder and CTO of NetGravity back in 1995, and I spent a lot of years building ad servers. So I can say this from experience: ad servers are hard to build. There’s a lot of expertise that goes into delivering the right ad to the right person where response times are measured in milliseconds and uptimes are measured with five nines. Now it’s been almost four years since I co-founded Yieldex, and I can say another thing from experience: predictive analytic systems are also hard to build. Moving, managing and processing terabytes of data every day to enable accurate forecasting, rapid querying, and sophisticated reporting takes a lot of expertise. And, as it turns out, it’s very different from ad server expertise.

When I ran an ad server team, all we wanted to work on was the server. We delighted in finding ways to make the servers faster, more efficient, more scalable, and more reliable. We focused on selection algorithms, and real-time order updates, handling malformed URLs and all kinds of crazy stuff. Our customers were IT folks and ad traffickers who needed to make sure the ads got delivered. The reporting part was always an afterthought, a necessary evil, something given to the new guy to work on until someone else came along.

Contrast that with the team I have now, where we obsess over integrating multiple data sources, forecasting accurately, and clearly presenting incredibly complex overlapping inventory spaces in ways that mere humans can understand and make decisions about. Reporting is not an afterthought for us; it’s our reason for being. Our success criteria are accuracy, response time, and ease of getting questions answered through ad-hoc queries. Our customers are much more analytical: inventory managers, sales planners, finance people, and anyone else in the organization working to maximize overall ad revenue.

These teams have completely different DNA, so it’s not surprising that a team good at one of them might not be so great at the other. This is why so many publishers are unhappy with the quality of the forecasts they get from their ad server vendor, and one of the reasons so many are signing up with Yieldex. Good predictive analytics are hard to build, and nobody has built the right team, and spent the time and effort to get them right for the digital ad business. Until now.

It’s All In The Data

Friday, February 18th, 2011

Okay, I admit it, I stole the headline for this post from an excellent article by Ben Barokas in the AdMeld Partner Forum Executive Outlook booklet. In fact, I’m going to quote his entire paragraph:

For most digital publishers, ‘ad operations’ really means ‘revenue operations’ and that means a lot of data analysis. Data is the key to exceeding the expectations of clients, partners, and shareholders. Data tells ops how much revenue is being generated from specific sales channels, ad agencies, advertisers, site sections, and even users. Data enables them to make smarter decisions, not only on behalf of their own company, but for the benefit of their clients: what to sell, who to sell it to, and under what conditions to maximize value for everyone.

At Yieldex, we couldn’t agree more. Many major publishers use both Yieldex and AdMeld as complementary tools to get the “revenue radar” they need to dramatically increase CPMs and revenue. AdMeld co-sponsored our own 2010 Yield Executive Summit, leading the agency panel. Their 2011 AdMeld Partner Forum was a terrific event, and we appreciated the opportunity to attend.

Greater than the Sum

Wednesday, January 5th, 2011

Any great partnership should be greater than the sum of its parts. Whether a marriage, artistic collaboration, or business partnership, great partners build on each other’s strengths to achieve much more than either could alone.

In that spirit, I’m delighted to announce Andrew Nibley as the new CEO of Yieldex. I’ve spent quite a bit of time with Andy over the last few months, and I feel that our skills complement each other perfectly to take Yieldex to the next level. More importantly, I like working with him, and we have good chemistry, which is an essential ingredient for any great partnership.

Andy has a terrific background in digital media, having been CEO of five different media companies. Most recently he ran WPP’s Marsteller, a digital agency, and among many other successes he was a co-founder of Reuters New Media. He is also an excellent leader of people, with experience managing rapid growth and many other skills we will take full advantage of as Yieldex drives tremendous growth in 2011.

I look forward to focusing on my strengths in product vision, strategy, and business development. My favorite part of the day is when I get to solve hard (and valuable) customer problems with cutting-edge technology. Yieldex is well positioned to continue to drive dramatic uplift in revenues for our customer base as we broaden our platform, and I look forward to helping drive that.

I am proud of the success Yieldex has enjoyed so far, and humbly grateful to the team that has worked so hard to make it happen. I am excited that Andy and I are partnering together to continue to drive value for our customers and for Yieldex, through 2011 and beyond!

For more info, see our press release and PaidContent article. Thanks to Ryan Graves and his blog 1+1=3 for the idea for this post.

Real-Time Selling: Your Buyers Know How Much Your Inventory Is Worth – Why Don’t You?

Monday, March 8th, 2010

Let me be the first to introduce you to the hot new TLA (three-letter acronym) for 2010: RTS. 2009 brought you RTB done by DSPs. On the other side, we’ve just been introduced to SSPs, so that means they should do – you guessed it – Real-Time Selling!

I am only partly joking here. As I’ve written before, RTB continues to skew the power in favor of the buyers. A healthy ecosystem requires a balance of power between buyers and sellers. To help restore the balance, publishers need a complementary system – an SSP – and part of what an SSP should provide is real-time selling.

So, what is real-time selling? We define it as the ability for sellers to set floor prices in real time to make sure they’re getting full value for their inventory, in the same way buyers can set bids in real time based on available information at the time of the impression request.

One of the major selling points of DSPs is they can incorporate information from many sources – agency and advertiser data, third party providers, and the publishers themselves – and use it to decide how much an impression is worth. Somewhat less obvious is that this worth is actually just a ceiling on what the buyer is willing to pay, and sophisticated bid management programs often bid much lower in an attempt to win the impressions at the lowest possible price. This means the publisher may get much less than the buyer would have been willing to pay.
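To make that gap concrete, here is a toy sketch of bid shading – the 40% shade factor and the function name are invented for illustration:

```python
def shaded_bid(impression_value, shade=0.40):
    """Bid well below the buyer's estimate of the impression's worth.
    The value estimate is a ceiling, not the bid. (Toy heuristic.)"""
    return impression_value * shade

value = 10.00                      # buyer's estimate of worth (CPM terms)
floor = 2.00                       # publisher's floor price
bid = max(shaded_bid(value), floor)
print(bid)                         # 4.0 – far below the $10 ceiling
```

With a smarter floor – say $8 – the publisher would capture far more of that $10 ceiling.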

This is why publishers need to use all available information to set their floor price for an impression. Some of the information they might want is unavailable – agency or advertiser information, or certain re-targeting cookies – but any third party and publisher data can be incorporated. Then, sophisticated analysis is required to detect segments of inventory where floors should be raised or lowered – the closer to real-time the better. These algorithms will no doubt continue to be refined over the next several years.
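As a toy illustration of the kind of floor-setting logic described above – the thresholds and multipliers here are invented, not the actual algorithms:

```python
def adjust_floor(floor, fill_rate, clear_to_floor_ratio):
    """Nudge a segment's floor price based on recent auction outcomes.
    fill_rate: fraction of impressions that sold.
    clear_to_floor_ratio: average clearing price divided by current floor.
    (Illustrative heuristic only.)"""
    if fill_rate > 0.95 and clear_to_floor_ratio > 1.5:
        return floor * 1.10   # selling out well above floor: raise it
    if fill_rate < 0.50:
        return floor * 0.90   # mostly going unsold: lower it
    return floor              # otherwise leave it alone

print(adjust_floor(1.00, 0.98, 2.0))   # raises the floor
print(adjust_floor(1.00, 0.40, 1.0))   # lowers it
```

The real work is in segmenting the inventory and reacting quickly; the closer to real-time the loop runs, the less money is left on the table.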

Premium publishers have a much more complex problem: they also sell guaranteed impressions over a period of time. As long as those up-front CPM-based buys were much more valuable than the secondary channel, this was easy to solve: serve the guaranteed ads first, then look to the exchange or network. But as the secondary channel begins valuing some impressions more highly, the publisher needs to manage global yield optimization across multiple channels, and the floor-setting algorithms need to incorporate that data as well. This may seem like it’s years off, but several of our customers have asked us about this capability, and we’ve applied for patents in this area.
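A stripped-down sketch of that per-impression channel decision – the pacing heuristic and the names are invented for illustration, not our patented approach:

```python
def choose_channel(guaranteed_cpm, exchange_bid_cpm, delivery_pressure):
    """Decide whether to serve a guaranteed ad or sell to the secondary channel.
    delivery_pressure > 1.0 means the guaranteed line is behind pace,
    raising the opportunity cost of giving the impression away."""
    opportunity_cost = guaranteed_cpm * delivery_pressure
    return "exchange" if exchange_bid_cpm > opportunity_cost else "guaranteed"

# A high exchange bid can win the impression away from a guaranteed line...
print(choose_channel(5.00, 12.00, 1.0))   # exchange
# ...but not when the guaranteed campaign is badly behind pace.
print(choose_channel(5.00, 12.00, 3.0))   # guaranteed
```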

Lots of three-letter acronyms have been proposed in the last couple of years, and it’s way too early to tell what the final landscape will look like. That said, forward-thinking publishers who are looking at technology platforms to serve them for the next decade or so should be asking about their plans for Real-Time Selling.

This post also appears at AdExchanger.com in “The Sell-Sider,” a column written by the sell-side of the digital media community.

Publishers Must Demand Real Openness

Monday, January 11th, 2010

Google’s release of the DoubleClick Ad Exchange 2.0 has introduced Real-Time Bidding (RTB) to a much wider audience. While they were not the first, they are probably the biggest, and their entry is starting to legitimize RTB as more than just a niche.

Neal Mohan’s introductory blog post emphasizes the three main principles behind the development of their exchange: simplify the system for buying and selling, deliver better performance, and open up the ecosystem. It’s this last point – openness – that I’d like to explore.

Real-time bidding offers some openness for the buyers: they are delivered each impression, with the floor price, URL, and cookie, and have a fixed amount of time to bid. They are then notified if they win and a request is made to deliver the advertisement. What’s surprising is that unlike a standard auction, at eBay for example, if they lose, the potential buyer has no idea what the winning bid was. Google gets to keep all that information.

Even more incredible is the fact that publishers also aren’t told the winning bid amount. They get an aggregate value for their earnings, but can’t see the value of each impression. It’s as if you auctioned 10 items on eBay and, at the end, eBay sent you $100 but refused to tell you what item 1 sold for vs. item 2 or item 3.

This information asymmetry is largely to the benefit of Google, but also skews to the buyers. Savvy buying systems can tweak bids up and down in real-time to do crude discovery of the “true” value of different kinds of inventory and how it varies over time. Publishers have no such ability to discover their inventory value at an impression level. Worse yet, while buyers can bid different prices for each impression, publishers have no ability to re-set floor values on each impression to push the bids up. Of course, they would need new tools to do this (SSP, anyone?), but it is much harder without data.

One final point on how the system is stacked against the publishers: any buyer can participate in any of the exchanges, and indeed many of them do. But since Google does not give publishers their impression values, it is very hard for publishers to find out if some of their inventory would perform better on a different exchange. And to add insult to injury, Google makes it almost impossible for non-DFP publishers to participate at all.

Publishers should be wary of using any ad exchange until they get real openness, and the tools – like an SSP – they need to ensure the deck isn’t stacked against them.

This post also appears at AdExchanger.com in “The Sell-Sider,” a column written by the sell-side of the digital media community.

Publishers: Get The Most From The Exchange

Monday, November 30th, 2009

The new real-time bidding (RTB) exchanges seem to skew power further in favor of buyers. They can see each impression in real time, enhance it with their own data, and then bid on it. The problem is that most publishers don’t know where the pockets of value exist in their remnant inventory, so they can’t intelligently allocate that inventory in a way that makes them the most revenue.

Publishers need 3 things to maximize the value of their inventory on an exchange:

  1. Inventory value. Publishers need to get back from their exchanges the value of their inventory, on an impression-by-impression basis. Just getting high-level average CPMs by section or zone isn’t good enough; it masks the high-value impressions that may exist within those sections.
  2. Third-party data. Your buyers are using data to understand the value of your inventory; you need data to fight back. You should be setting floors on your inventory based on these data segments, so buyers can’t just cherry-pick your inventory at an overall low value, and you can’t do this without the data.
  3. An analytic system to maximize yield. You will need a system that can capture and analyze all this data – terabytes of it – and spit out a set of floor prices by inventory segment. In an ideal world, this system operates in real time, re-setting floors based on the most recent data. I call this real-time selling – the antidote to real-time bidding.
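A quick sketch of point 1, showing how a section-level average masks a pocket of value (the numbers are invented):

```python
# Impression-level values for one site section (invented):
# 90% of impressions clear around $0.50, 10% around $8.00.
values = [0.50] * 90 + [8.00] * 10

avg = sum(values) / len(values)
print(avg)   # 1.25 – the section-level "average CPM" the exchange reports

# Segmenting (e.g. by an audience-data attribute) exposes the pocket:
low  = [v for v in values if v < 1.0]
high = [v for v in values if v >= 1.0]
print(sum(low) / len(low), sum(high) / len(high))   # 0.5 8.0
```

A single $1.25 floor for this section would let buyers cherry-pick the $8.00 impressions at a fraction of their worth; per-segment floors would not.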

Not all of these items are easily attainable. The first is not generally available from exchanges, so publishers need to demand it. The second is becoming more available, from vendors such as AudienceScience, BlueKai, and eXelate. The third will be provided by Yieldex, among others. Forward-thinking publishers who put this stack together should be able to dramatically increase revenue from their unsold inventory.

This article was also published on AdExchanger.com.

Cloud Computing Lessons

Wednesday, May 6th, 2009

Cloud computing means lots of different things, and much of it is hype. At Yieldex, we’ve been using cloud computing, specifically Amazon Web Services, as a key part of our infrastructure for the better part of a year, and we thought we’d pass on a few of our lessons learned. As you might expect, the services we use have trade-offs. If your challenge fits within the parameters, cloud computing can be a huge win, but it’s not the answer for everything.

All of these lessons are the result of the hard work of our entire engineering team, most notably Craig and Calvin. These guys are among the best in the world at scaling to solve enormous data and computation problems with a cloud infrastructure. We could not have built this company and these solutions without them.

For a startup, there are a number of compelling reasons to use a cloud infrastructure for virtually every new project. You don’t get locked into a long-term investment in hardware and data centers, it’s easy to experiment, and easy to change your mind and try a different approach. You don’t have to spend precious capital on servers and storage, wait days or weeks for them to arrive, and then spend a day or two setting them up. If your application scales horizontally, then you can scale additional customers, storage, and processing with minimal cost and time delay. All these things are touted by cloud providers, and basically boil down to: focus on your business, not your infrastructure.

Sometimes, however, you do need to focus on the infrastructure. We provide our customers with analytics and optimization based on our unique and proprietary DynamicIQ engine. Our first customer was a decent sized web property, and we were able to complete our DynamicIQ daily processing on several gigabytes of data using just one instance in less than an hour. Our next customer, however, was 10x the size. And the one after that, 10x more – hundreds of gigabytes per day. Fortunately, we had designed our DynamicIQ engine to easily parallelize across multiple instances. We spent some time learning how to start up instances, distribute jobs to them, and shut them back down again, but because we had designed the engine for this eventuality, we were able to use the cloud to cost-effectively scale to even the largest sites on the web.
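The pattern is the classic partition, fan out, combine. A minimal sketch, using a local thread pool to stand in for the cloud instances the post describes (function names are invented; the real engine starts and stops instances rather than threads):

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    """Stand-in for per-instance work: aggregate one partition of log data."""
    return sum(chunk)

def run_daily_job(records, workers=4):
    """Split the day's records into one chunk per worker and fan them out."""
    size = max(1, len(records) // workers)
    chunks = [records[i:i + size] for i in range(0, len(records), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(process_chunk, chunks))
    return sum(partials)   # combine the per-chunk results

print(run_daily_job(list(range(1_000))))   # 499500
```

Because the per-chunk work is independent, scaling to a 10x-larger customer is mostly a matter of spinning up more workers.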

We also have BusinessIQ, which is basically an application server that provides query processing and a user interface into our analytics. Initially we started with this server in the cloud too, but as we bumped up against other scalability issues, we found that the cloud doesn’t solve every problem. For example, we provide a sophisticated scenario analysis capability. To calculate a “what-if” scenario requires processing a huge amount of data in a very short time. For our larger customers, a single cloud instance did not have enough memory to perform this operation. Trying to stay true to the cloud paradigm, we implemented a distributed cache across multiple instances, but this didn’t work well because of limitations on I/O. We ended up having to go to a hybrid model, where we bought and hosted our own servers with large memory footprints, so we could provide this functionality.

We have been very happy users of the Amazon Web Services cloud, and not just because we won the award. We would not have been able to get our business off the ground without the cost-effective scalability of the Amazon infrastructure. While it’s not for every application, for the right application, it truly changes the game.

Yieldex announces $8.5m Series B

Tuesday, February 17th, 2009

We are delighted to announce our Series B financing. The $8.5m round was led by Madrona Venture Group, a really smart group of investors. We met them through the Amazon AWS Start-up Challenge, and as it turns out they had been looking for a company like ours, so they had done a lot of research on the space. We were impressed by their industry knowledge and their experience, and are excited to be working with them.

We also are excited to have Amazon participate. They are the clear leader in their space, and their vision has proven out time and time again. We certainly hope they are right about this investment, too! Their AWS platform is what enables us to scale so cost-effectively.

Finally, a big thank-you to our Series A investors Sequel Venture Partners and First Round Capital, for their advice, counsel, introductions, and yes, their support in the Series B. We could not have built this complex and innovative technology without them.

This is just the beginning for us. We are now well positioned to succeed, and our success is largely within our own control – just the way we like it. Great athletes always want the ball when the game is tight – we now have the ball, and we will win this game.

Coming out of our shell

Tuesday, February 10th, 2009

This is an exciting week for us at Yieldex: we are finally launching our first product, BusinessIQ! We’ve been working hard for quite a while on this, so it’s very liberating to be able to tell the world about it. Read our press release, or see the MediaPost article, for the details.

Having built enterprise software before, we know how important it is to have real customers banging on the product to make it solid. We are delighted to have Martha Stewart Living Omnimedia as our debut customer, and look forward to announcing more in due course.

MSLO has been a great beta partner for us, giving us tons of constructive feedback and being patient through our inevitable growing pains. We have been able to iterate the product very rapidly to address their needs, and we are continuing to improve by leaps and bounds. We are excited by the value we are providing to them; we love seeing our hard work start to bear fruit.

Congrats to the team for this milestone – let’s enjoy it! Okay, that’s enough, get back to work. 🙂