The transition from waterfall to header bidding was the most important architectural change in publisher monetization. I have worked on both sides: at Google where the ad server sat at the center of the waterfall, and at Glance and InMobi where I built unified auction systems leveraging header bidding.
How the Waterfall Worked
The publisher ad server called demand sources sequentially in priority order: direct campaigns first, then the highest-priority partner (say, AdX at a $3.00 floor), then the next (Magnite at $2.00), then the next (PubMatic at $1.50). The fundamental problem: a demand partner lower in the waterfall might have been willing to pay $4.00 but never got the chance, because a higher-priority partner had already filled the impression at $3.00. The waterfall optimized fill rate at each level, not total revenue.
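The revenue leak is easy to see in a minimal sketch. Partner names, floors, and bid amounts below follow the example above; the bid values for Magnite and PubMatic are illustrative assumptions.

```python
# Sketch of a waterfall: partners are called one at a time in priority
# order, and the first bid that clears its floor wins -- even when a
# lower-priority partner would have paid more.
def waterfall(partners):
    """partners: list of (name, floor, bid) in priority order."""
    for name, floor, bid in partners:
        if bid is not None and bid >= floor:
            # Filled here; partners lower in the list never see the request.
            return name, bid
    return None, 0.0

partners = [
    ("AdX",      3.00, 3.00),
    ("Magnite",  2.00, 4.00),   # would pay $4.00 but is never called
    ("PubMatic", 1.50, 1.75),
]
winner, price = waterfall(partners)
# AdX fills at $3.00; Magnite's $4.00 bid is lost revenue.
```

The loop returning on the first fill is exactly the structural flaw: sequencing, not price, decides the winner.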
How Header Bidding Fixed It
Header bidding runs parallel auctions. Prebid.js fires bid requests to multiple SSPs simultaneously before the ad server is called. Each SSP auctions against its DSPs, and the winning bids from all SSPs then compete in the ad server. When publishers at Glance switched from waterfall to header bidding, effective CPMs increased 25-40% on average, driven not by new demand but by existing demand competing fairly.
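Using the same illustrative partners and bids as the waterfall example, the parallel version lets price decide:

```python
# Sketch of a header-bidding auction: every partner is asked
# simultaneously, and all returned bids compete purely on price.
def header_bidding(partners):
    """partners: list of (name, bid); conceptually called in parallel."""
    bids = [(name, bid) for name, bid in partners if bid is not None]
    return max(bids, key=lambda b: b[1]) if bids else (None, 0.0)

partners = [("AdX", 3.00), ("Magnite", 4.00), ("PubMatic", 1.75)]
winner, price = header_bidding(partners)
# Magnite wins at $4.00 -- the same demand, now competing fairly.
```

The only change from the waterfall is replacing first-fill-wins with max-bid-wins, which is where the 25-40% uplift comes from.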
Client-Side to Server-Side
Early header bidding was client-side: Prebid.js ran in the browser, and each additional partner added request overhead and latency on the user's device. Server-side header bidding (Prebid Server, Amazon TAM) moved the auction to a server, dramatically reducing client-side latency. At InMobi, we recommend server-side for performance-focused publishers. The trade-off is cookie matching: server-side requests lack direct browser cookie access, which depresses match rates. But as the industry moves toward cookieless identity, this disadvantage diminishes.
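A toy latency model makes the trade-off concrete. The overhead figures below (30 ms of browser overhead per client-side partner connection, 50 ms for a single server round trip) are assumptions for illustration, not measurements.

```python
# Illustrative latency model for client-side vs server-side header bidding.
# Client-side: the browser opens a connection per partner, so overhead
# grows with partner count even though the requests run in parallel.
# Server-side: the browser makes one request; fan-out happens on a server.
def client_side_ms(partner_latencies, per_partner_overhead_ms=30):
    return max(partner_latencies) + per_partner_overhead_ms * len(partner_latencies)

def server_side_ms(partner_latencies, single_request_overhead_ms=50):
    return max(partner_latencies) + single_request_overhead_ms

latencies = [120, 180, 150, 200, 90]  # five partners, response times in ms
# client-side: 200 + 30*5 = 350 ms; server-side: 200 + 50 = 250 ms
```

The key property the model captures: client-side cost scales with the number of partners, while server-side cost is roughly flat, which is why the gap widens as publishers add demand.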
Unified Auctions: The Next Step
At Glance, I built a unified monetization stack: a single auction layer considering all demand — direct, PMP, header bidding, and open exchange — in one real-time decision. Google moved in this direction with Open Bidding (formerly EBDA), which lets exchanges compete natively within Google Ad Manager, though with Google controlling the auction.
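Conceptually, a unified auction normalizes every demand type to a comparable effective CPM and makes one decision. This is a simplified sketch under that assumption; real systems also handle pacing, deal priority obligations, and floors, and the names and values below are hypothetical.

```python
# Sketch of a unified auction: direct, PMP, header-bidding, and exchange
# demand are normalized into one bid pool and compete in a single
# real-time decision instead of a fixed priority ladder.
def unified_auction(demand):
    """demand: list of (source_type, name, effective_cpm)."""
    return max(demand, key=lambda d: d[2])

demand = [
    ("direct",   "SponsorCo",   3.50),  # direct deal expressed as an eCPM
    ("pmp",      "RetailDeal",  2.80),
    ("hb",       "Magnite",     4.00),
    ("exchange", "OpenAuction", 2.10),
]
src, name, cpm = unified_auction(demand)
# The header-bidding bid wins because it clears every other source on price.
```

The hard product work is not the `max` call but the normalization: pricing guaranteed direct campaigns as effective CPMs so they can compete honestly against real-time bids.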
What Comes After Header Bidding
The next evolution is fully server-side, AI-optimized auctions. Instead of firing requests to a fixed list with static timeouts, an intelligent system dynamically routes each impression to demand partners most likely to bid competitively, with timeouts calibrated per partner per impression. This reduces unnecessary bid requests, increases fill rates, and optimizes latency dynamically. At InMobi, we are building toward this: an exchange that does not just run auctions but intelligently orchestrates them.
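The orchestration idea can be sketched as a routing layer in front of the auction. Everything here is a hypothetical illustration: the per-geo bid-rate lookup stands in for a learned bid-probability model, and the p95-based timeout rule is one plausible calibration, not a description of InMobi's system.

```python
# Sketch of AI-optimized demand routing: instead of calling a fixed
# partner list with static timeouts, score each partner's likelihood of
# bidding competitively on this impression and call only the promising
# ones, each with its own calibrated timeout.
def route(impression, partners, min_score=0.2):
    calls = []
    for p in partners:
        # Stand-in for a learned model: historical bid rate by geo.
        score = p["bid_rate"].get(impression["geo"], 0.0)
        if score >= min_score:
            # Per-partner timeout: a margin above observed p95 latency.
            timeout_ms = int(p["p95_latency_ms"] * 1.2)
            calls.append((p["name"], timeout_ms))
    return calls

partners = [
    {"name": "SSP-A", "bid_rate": {"US": 0.8, "IN": 0.1}, "p95_latency_ms": 150},
    {"name": "SSP-B", "bid_rate": {"US": 0.05},           "p95_latency_ms": 300},
]
calls = route({"geo": "US"}, partners)
# Only SSP-A is called for this US impression, with a 180 ms timeout;
# SSP-B is skipped, saving a bid request it was unlikely to win.
```

Skipping low-probability partners is what cuts unnecessary bid requests, and tightening timeouts per partner is what reclaims latency without sacrificing fill.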
Building Toward the Future
At InMobi, where I lead Web and CTV Exchange product strategy, every aspect of this topic connects to our exchange product roadmap. The decisions we make about auction design, signal enrichment, demand routing, and yield optimization are all informed by deep understanding of these fundamentals. Having built monetization systems scaling to $200M+ at Glance, I know that getting the basics right compounds into massive revenue impact at scale.
The programmatic industry is evolving toward AI-native, server-side, cross-surface architecture. By 2030, exchanges will consolidate, AI agents will participate in auctions, attention-based signals will supplement viewability, and CTV will be the dominant ad surface. The product builders who understand today's fundamentals deeply — and invest in building for tomorrow's requirements — will lead this transformation. That is exactly what I am doing at InMobi and at adsgupta.com, where I am building AI-powered advertising intelligence tools drawing on everything I have learned across Google, Automatad, Glance, and InMobi over the past decade.
If you are building in programmatic advertising, I encourage you to go beyond surface-level understanding. Read the OpenRTB specification. Study bid request logs. Analyze auction dynamics. Trace the supply chain from publisher to advertiser. This depth of understanding is what separates good ad products from great ones — and it is the perspective I bring to everything I build.