Adam Landis, Author at Branch
Unifying user experience and attribution across devices and channels

ChatGPT Apps Give Us a Glimpse into the Future of the Internet
October 10, 2025

When the mobile app store started its meteoric rise, the growth was unprecedented. Mary Meeker would publish 100-page reports, full of graphs marching up and to the right, about how access to mobile devices was changing the world. Those of us in the industry would marvel at the growth and pat ourselves on the back for being part of such an amazing time.

Now, Mary Meeker publishes reports about AI, and the growth rates are again unprecedented; it's a word she uses 64 times. You can't avoid AI news on the web today, and a lot of the rhetoric is analysis (arguing) about how it will — or won't — take shape. This week at OpenAI's DevDay, Sam Altman announced that ChatGPT has 800 million weekly active users; that's roughly the size of Apple's App Store. As part of this presentation, he announced the GA release of the ChatGPT Apps SDK, an SDK that allows brands to create applications that run natively inside ChatGPT. This can't be overstated: it will likely be a larger shift than the one that happened when Apple released an SDK and opened up the App Store. It represents OpenAI's attempt to launch — and monetize — the coming agentic web.

The emergence of the agentic web

The newly coined term "agentic web" describes a world where AI leads users across the internet: finding new products or content, doing our shopping, and giving us instant access to the right information. Today, over half of US adults regularly use LLMs, but — as with satellite orbital mechanics or fiber optic wave propagation — most don't know how the underlying technology works. So what does an internet of AI agents look like?

We're seeing it unfold. A group of researchers has outlined three fundamentals that will power an agentic web:

  1. There needs to be an intelligence layer. An AI needs to be able to find the product. Anyone who has used an LLM can attest to how quickly this is happening, if it hasn't already. LLMs now regularly outperform humans at a wide variety of tasks. But it's not just book smarts; the more a user interacts with an LLM, the more context the AI has about that user. ChatGPT knows my location, my preferences, and my favorite brands. It's not just smarter; it's better prepared.
  2. Next, the AI needs a baseline communication protocol. Just as the internet needed shared protocols (e.g., HTTP, HTML), AIs need a shared language that lets them communicate despite differing underlying technologies, frameworks, or even the languages they were trained in. MCP and A2A are already widely adopted and deployed (a minimal sketch of an MCP-style message follows this list), and Altman recently launched the Agentic Commerce Protocol to handle monetization.
  3. The final fundamental is the economic layer. Pundits — including myself — have argued for advertising, affiliate fees, and any number of other potential monetization models. Altman's announcement of the Agentic Commerce Protocol is OpenAI's opening salvo: a revenue-share model similar to the Apple App Store's, in which OpenAI takes a cut of each transaction.
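
To make the protocol layer concrete, here is a minimal sketch of the kind of JSON-RPC 2.0 envelope an MCP client sends to invoke a tool on a server. The "search_products" tool and its arguments are hypothetical stand-ins for whatever a brand might expose; only the envelope shape follows the JSON-RPC convention MCP builds on.

```python
import json

# Illustrative MCP-style tool invocation. MCP messages are JSON-RPC 2.0
# envelopes; the tool name and arguments below are invented for this example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_products",  # hypothetical brand-exposed tool
        "arguments": {"query": "warm-weather shirt", "size": "M"},
    },
}

print(json.dumps(request, indent=2))
```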

There are doubters

I've read compelling analysis that agentic commerce won't happen: E-commerce walled gardens won't allow agents to penetrate the fortress of customer ownership. Eric Seufert, in "Agentic Commerce is a Mirage," compellingly argues that the titans of the e-commerce industry (e.g., Shopify and Amazon) are already blocking AI agents from accessing their pages. And why shouldn't they? There are few incentives — and actually more downsides — to allowing another company to control the discovery, experience, and purchases of the user. Andrew Lipsman, in "Agentic Commerce is (still) a collective hallucination," goes further, arguing that because e-commerce itself represents only ~15% of U.S. retail sales, agentic commerce is overhyped: What does it matter if AI takes over a tiny slice?

These articles make compelling points, but they don't take into account consumers' incredible and growing reliance on AI. Amazon and Shopify blocking LLMs is short-term defensive behavior. Of course they want to control the consumer's purchase. But Blockbuster wanted us to keep coming by its neighborhood stores, too. Once digital delivery arrived, Blockbuster could put all kinds of blocks on user behavior, only to find itself blocked from consumers as their preferences and practices changed. Consumer preference will force the online retail juggernauts to open up — or be passed by. As consumers come to rely on LLMs for decision-making and information, the juggernauts will either change their tune or run the risk of being Blockbustered. And yes, e-commerce may only represent ~15% of retail spending, but its impact on our economy and the retail space is much larger: Over 80% of shoppers use digital resources to inform their offline purchases.

The GPT app store swings the vote

By launching an app store on ChatGPT, OpenAI is opening up the opportunity for cutting-edge brands to jump to the forefront of the fastest-growing technology trend of all time.

Like Apple at the launch of the App Store, OpenAI will take a cut of revenue earned on its app store. It will likely open up an advertising model that lets apps be front-and-center in search, and — if it's ambitious — an advertising network that lets developers monetize their apps through in-app advertising.

Much like Apple's App Store birthed the dawn of mobile, always-on user connection, an LLM-enabled app store will enable brands that lean into the framework to capture the zeitgeist of user attention and the utility of LLMs. OpenAI has the users, brands will lean into the medium, and OpenAI will be able to bring the promise of the agentic web to the world.

This week we've witnessed a glimpse into the future of the internet. In the coming months and years, we'll see exciting changes as brands take advantage of this tremendous opportunity. This week may determine the birth and death of some of the largest companies of our future.

Have questions about the future of the internet? Reach out anytime.

How To Prepare Your Brand for Personalized AI
May 18, 2025

We're witnessing large language models (LLMs) change the world: From search to automation to productivity, the possibilities of harnessing this rapidly evolving technology are staggering. With these advancements comes the emerging promise of personalized artificial intelligence (AI). Though not yet ubiquitous, and evolving more slowly than promised, early examples like Apple Intelligence and Alexa+ hint at what's on the horizon for personalized assistants, while protocols like the Model Context Protocol (MCP) already demonstrate how LLMs can trade context across services. All of these examples provide frameworks for third-party integration and opportunities for a brand to integrate with, and supply data to, these AI systems.

You shouldn't sleep on this potential. Consumers are excited about agentic AI delivering tailored information retrieval, automated actions, and context-aware decision-making. But for a modern brand, personalized AI represents a fundamental, momentous shift in user interaction. How will it affect discoverability and user retention? Understandably, this shift is met with trepidation, even fear — and that's warranted. Most analyses agree we're on the precipice of a profound change in how users engage with the world of information, and it's easy to see a future where AI threatens the status quo, dramatically changing today's advertising and marketing channels. Yet with proper preparation, these changes can become opportunities for a motivated brand — provided you're ready.

History as a primer: SEO changed how brands reach consumers

Before readily available digitized information, brands relied on traditional advertising: print, mailers, billboards, and television. In-person stores and massive distribution networks were essential to physically reach customers. These routes favored brands with big budgets, broad appeal, and strong distribution — like Coca-Cola, which dominated the soft drink sphere by airing expensive TV ads and monopolizing retail shelf space. Consumer demand tilted toward large, established players.

As the world digitized, consumers turned to search engines for access to brands. This behavioral shift birthed search engine optimization (SEO), and brands began optimizing discovery and engagement for search intent. High-quality, relevant content became king, making it possible to tie searches like "best organic juice near me" to purchases, along with measurable ROI. A niche player like Joe and the Juice could outmaneuver Coca-Cola by ranking well, iterating faster on offerings, and adapting to new advertising methods like paid search placements. Success in SEO brought a new set of demands: mobile-friendly, fast-loading websites and esoteric tweaks to climb the algorithms. The status quo flipped: Agile, content-focused, digitally native companies thrived, while large established companies fell behind. SEO's lesson? Changing the medium changes the game.

How personalized AI is taking shape

Much like the shift to SEO, personalized AI will redefine how users discover and interact with brands. In broad strokes, personalized AI is a context-aware LLM, leveraging user data and historical interactions to craft an ever-evolving, personalized model for each user. For the user, this promises better access to tailored information, faster adaptation to their unique preferences, and superior connectivity to the world’s information and actions. Think of personalized AI as a highly customized, highly aware agent acting on the user’s behalf.

But here's the opportunity: Personalized AI is a proxy, not the system. It handles end-user interactions, but it doesn't own the user. What does this mean for brands? The AI agent may manage how users engage with brands, but brands can still influence what users see. Personalized AI might know a user's habits — like ordering pizza — but brands hold the business data: inventory, purchase history, item popularity. The AI agent doesn't enforce business logic like pricing, eligibility, or special offers. It's a smart UI layer interacting with your brand's ecosystem, surfacing information or taking action based on what you provide. For example, if Alexa+ suggests a dinner recipe, Instacart can feed it real-time data ("20% off pizza") to shape the outcome. The brand's role? Supply the raw material for AI to personalize.
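
The division of labor can be pictured in a few lines of Python: the agent asks, and the brand applies its own business logic (inventory, eligibility, offers) before returning structured data for the AI to surface. This is a hypothetical sketch; the inventory, loyalty rule, and field names are all invented for illustration.

```python
# Illustrative only: the brand, not the AI agent, enforces business logic.
# Inventory, pricing, and eligibility rules below are hypothetical.
INVENTORY = {"margherita_pizza": 12}
LOYALTY_MEMBERS = {"user_123"}

def build_offer(user_id: str, item: str) -> dict:
    """Return structured offer data an AI agent can surface as-is."""
    in_stock = INVENTORY.get(item, 0) > 0
    discount = 0.20 if user_id in LOYALTY_MEMBERS else 0.0  # brand-side rule
    return {
        "item": item,
        "available": in_stock,
        "base_price_usd": 14.00,
        "discount_pct": discount,
        "message": "20% off pizza" if discount else None,
    }

print(build_offer("user_123", "margherita_pizza"))
```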

Making personalized AI work for your brand

If AI acts as a proxy for the end user, brands must still engage the end user and influence them toward ideal outcomes. To do this effectively, consider three steps:

1. Focus on the medium used to reach the end user

SEO taught businesses to distill purchase intent from search. "Mexican food near me" became a monetizable term mapping directly to business return on investment (ROI). A restaurant that ignored search and instead invested in a new storefront awning missed the new medium — and became more or less invisible to a large number of potential customers. Personalized AI shifts the focus again. It isn't impressed by flashy webpages, slick animations, or pretty pictures — it consumes data: text, customer reviews, product specs, and availability. A streaming platform like Netflix might have a stunning site, but if its API doesn't expose what's currently available to watch, AI could skip it for Peacock's structured data (e.g., "Die Hard, 4.8 stars, available to stream"). When it comes to AI, clean, accessible API access trumps UI. Wikipedia just proved this point by launching a cleaned, pre-parsed dataset explicitly for API access. Audit your infrastructure — is your data labeled, accessible, and AI-understandable? Can AI parse your catalog effortlessly?
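
As a quick self-audit, consider whether your catalog can be serialized like this: clean, labeled fields an agent can parse without scraping a webpage. The schema below is a made-up example to illustrate the idea, not any particular standard.

```python
import json
from dataclasses import asdict, dataclass

# Hypothetical catalog record: the point is labeled, machine-readable
# fields, not this particular schema.
@dataclass
class CatalogItem:
    title: str
    rating: float
    available_to_stream: bool

feed = [CatalogItem("Die Hard", 4.8, True)]
print(json.dumps([asdict(item) for item in feed], indent=2))
```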

2. Determine the best data to surface

Beyond data accessibility is data applicability. Can you rely on user identity for context? Take a clothing brand scenario: I ask my AI agent for "clothes for my upcoming vacation." It checks my calendar, sees the forecast, and searches for a "warm-weather shirt in men's medium." Without context, a brand like Nike might suggest a generic bestselling T-shirt. But a brand with my purchase history, like J.Crew, knows I prefer blue, collared linen shirts, and wins my purchase by offering exactly that. Building user profiles (with consent) and integrating them into your data layer via APIs or JSON feeds can increase the value of your end-user offering. The brand surfacing the best offering using contextual data can have the edge.
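
One way to picture "surfacing the best offering using contextual data" is a simple preference match against consented purchase history. Everything below, from the profile fields to the scoring, is a hypothetical sketch rather than a real system.

```python
# Hypothetical: score catalog items against a consented user profile so the
# most contextually relevant product is the one surfaced to the AI agent.
profile = {"color": "blue", "fit": "collared linen", "size": "M"}

catalog = [
    {"name": "Generic bestseller tee", "color": "white", "fit": "crew", "size": "M"},
    {"name": "Linen vacation shirt", "color": "blue", "fit": "collared linen", "size": "M"},
]

def score(item: dict) -> int:
    """Count how many of the user's known preferences an item satisfies."""
    return sum(item[key] == want for key, want in profile.items())

best = max(catalog, key=score)
print(best["name"])  # -> Linen vacation shirt
```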

3. Measure and iterate

Any marketer can attest to the need to test to find optimal results. The battle lines for personalized AI are just starting to form, however, and the eye-watering funds being deployed by the heavyweights indicate we're looking at a long, drawn-out fight. The good news: this extends the timeline for a clear winner, giving brands room to experiment. The bad news: brands will need to integrate with multiple platforms during the shakeout. A brand like Spotify might approach this by feeding Apple Intelligence "recently played" data while sending Alexa+ "mood-based" metadata, then using both channels to discover which approach drives more listens. Like SEO, success demands constant measurement — it will just need to be spread across multiple platforms while the winners emerge. Start small: Expose a feature on each (e.g., "start streaming" via Siri's App Intents, "play again" on Alexa+) and optimize based on results. The agile will adapt; the rigid will fade.
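
A cross-platform measurement loop can start as simply as tallying exposures and outcomes per channel and comparing rates. The event data and platform labels below are invented for illustration; in practice these events would come from your analytics stack or an MMP.

```python
from collections import Counter

# Hypothetical event log: (platform, converted) pairs from two AI surfaces.
events = [
    ("apple_intelligence", True), ("apple_intelligence", False),
    ("alexa_plus", True), ("alexa_plus", True), ("alexa_plus", False),
]

exposures, conversions = Counter(), Counter()
for platform, converted in events:
    exposures[platform] += 1
    conversions[platform] += converted  # True counts as 1

for platform in exposures:
    rate = conversions[platform] / exposures[platform]
    print(f"{platform}: {rate:.0%} conversion over {exposures[platform]} exposures")
```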

Opportunities amid disruption

We're witnessing a fundamental shift in brand-user interaction — it's scary. Some companies will adapt, new ones will emerge, and others will fail. SEO changed the world by enabling faster iteration, broader reach, and new engagement methods. Personalized AI promises the same at warp speed. Imagine Peet's Coffee feeding my AI agent "large latte, extra shot" for frictionless reordering, locking in loyalty, or Instacart winning my attention by suggesting my favorite meal when it's time to reorder. The risks are real: Sloppy data or misread intent could render your brand invisible. But the opportunities are vast. By learning from SEO's past, brands can turn personalized AI's disruption into a competitive edge — master the changing medium, leverage your data, iterate relentlessly. Opportunity is knocking. Start prepping now.

We’re here for you. Connect with our team if you have any questions!

How is AI Being Used for Ad Creative Today?
April 3, 2025

There is an eye-watering amount of AI content released daily, and you're probably getting numb to the deluge of postulations and grand suggestions about how AI will change the world. But there's far less insight into how AI is actually being applied to advertising creative. So I reached out to an old friend, Alexei Chemenda, CEO of Poolday.ai — a company actually using generative AI (GenAI) at scale — to understand what's working (and what's not) in the industry.

This article builds on insights from our recent webinar “The Role of AI in Creative: The Current and Future State of How Companies Leverage AI for Marketing Creative.”


Thanks for joining us, Alexei. Today we’re diving into how AI is shaping creative in advertising, and you have a ton of experience here. Can you share a bit about yourself and what your company does for clients?

Poolday is an AI-enabled platform that allows advertisers to produce a high volume of videos and iterations in minutes, whether it’s AI user-generated content (UGC) or general videos for performance advertising. Our goal is to quickly and cheaply create videos for marketing to help companies scale their advertising creative. 

One thing worth highlighting: You didn't just come up with this idea, right? You actually built this based on need?

Yes, I run a mobile app studio, and as part of our user acquisition (UA) efforts, we were creating a high volume of social media ad videos with content creators to market our apps. This was a super important part of our growth strategy. Last year, when AI started making big strides in effectiveness, we began building and incorporating these tools into our processes. After seeing their impact, we built a self-service platform to help businesses all over the world. 

AI is already proving its value by making advertising creative better, faster, and cheaper — especially for performance advertisers. While gaming companies are the most obvious adopters of this technology, what other performance-focused companies are exploring this space? And what results are they seeing?

Mobile gaming customers are at the forefront of adoption for most advertising, especially on mobile-dominant social media platforms. This is largely due to their ability to drive performance in a closed-loop system and scale at no incremental cost. So, it’s no surprise that many of our earliest and largest customers are gaming studios. 

However, we're seeing interest and traction from other leading-edge companies — what I'd call "next-gen" performance advertisers. These are companies well-schooled in performance buying and hungry for the gains that cheap, fast creative iteration brings.

Something that not every customer understands is that initial results may vary. But that's by design. The purpose of using GenAI for creative isn't to press a button and instantly create a winning video; it's rapid iteration, constantly refining creative to find what performs best over time. Many advertisers already know creative is one of the most effective levers for increasing campaign performance; if you can produce better creative faster and at a lower cost, you can scale your campaign success in the same way.

If you were a marketing leader at one of these forward-thinking performance advertisers and looking to explore how GenAI can benefit your business, what key fundamentals would you consider? Or put simply — where would you start?

We see inbound interest from a wide range of people — everyone from C-level executives looking to build an AI strategy to video editors on large teams and founders of small companies. There’s no single “right” approach to AI within a company. What matters most is having a team that’s flexible, intellectually curious, and results-driven. 

It's also important to note that AI is not a magic bullet. If it were, everyone would already be using it successfully. Step back and look at AI as a significant shift in your processes: How can you integrate it to improve your existing workflows? Remember that any AI platform requires calibration to produce great outputs. If someone expects perfect outputs on day one with no effort, they're setting themselves up for failure.

The companies seeing the most success on our platform are using AI to address bottlenecks, streamline iteration, and move faster. 

In our webinar session, you mentioned the downsides of AI: While it can remove bottlenecks in traditionally impacted workflows like asset generation, it can completely overwhelm other parts of the business, such as asset management, testing, and legal approvals. For companies just starting this process, how should they identify and circumvent these challenges up front? Put another way, how do you avoid future issues?

We're constantly finding new ways to break existing systems. It's really not surprising: When creative has traditionally been the slowest-moving bottleneck in an organization, turning that bottleneck into the largest producer stress tests the entire system. A great example is simple asset management: A creative team may suddenly need to upload and test up to 100 times more videos than before. Do you have the budget, time, people, and tagging mechanisms to do that effectively?

The short answer is you need to figure out how to introduce this into your existing systems in a way that enhances them without breaking them.

I recently read an article arguing that AI, for all of its interest, skipped some steps in achieving product-market fit — suggesting that while exciting, we have yet to see a truly world-changing application. Since you started Poolday as a GenAI creative company, you obviously disagree. Why?

Our biggest challenge right now is keeping up with inbound demand. Last year, we produced over 100K video ads for a single major platform, and we’ve already surpassed that this year. We recently had a customer post about producing 15 videos for under $7 each in under an hour — four of which tested as winners. To some organizations, this may not be interesting. For others, it absolutely is game-changing. 

One key to our success has been taking a more vertically integrated approach. If you try to use OpenAI as an out-of-the-box video production engine for your campaigns, you’ll run into major challenges and spend a ton of time just learning. Instead, we started building AI mechanisms into already-working campaign optimization workflows. While AI can seem difficult to integrate, we’ve found that when applied within well-bounded workflows and tuned tools, it delivers amazing work for our customers. 

You projected that we’d see GenAI applied at scale to performance media in around 12 months, and that the future leaders should start learning about it today. If a marketing executive asked where to begin, what would you tell them?

I’ve seen enough success with advertisers to know this shift is inevitable, and forward-thinking leaders are already figuring out how it works. You could wait six months, but the companies investing in early learning now will have a massive advantage in accelerating and outpacing the competition. By 2026, I foresee every major brand having some type of AI interaction with consumers — allowing them to rapidly test formats and messages, adapt to new environments, and customize messaging. This is a major change, and those who start learning early will be best positioned to leverage AI at scale as it reshapes the industry. 

Speaking of the future, I have a strong hypothesis that over the next decade, we’ll see a new wave of industry leaders — those who learn to harness AI, much like the rise of internet-driven direct-to-consumer (DTC) brands that mastered retargeting. What do you think will define the companies leading the AI race? Put another way, when we look back 10 years from now, what will be the obvious strengths of those who successfully leveraged AI?

We’re already seeing these traits emerge, even in the smallest ways. For example, we’ve seen many cases where a single word in a video can impact performance. With AI-generated content, these adjustments are faster, easier, and cheaper — allowing companies to iterate at unprecedented speed. And that’s the key: fast-acting companies.

Companies looking to quickly iterate and explore how to use AI to augment their workflows are seeing real, tangible results today and will undoubtedly continue to expand their use of AI. The winners of this new phase will be companies that invest in rapid iteration, leveraging AI to achieve true personalization for their customers.

Thank you, Alexei! How can people get in touch with you?

Thank you, Adam! You can find me on LinkedIn or get in touch at Poolday.ai.

How Will AI Actually Change Advertising?
October 29, 2024

Like you, I’m getting tired of artificial intelligence (AI) dominating the narrative of every news article, shareholder update, and new product announcement. You can wade through 100-page manifestos where OpenAI researchers claim AI will be “able to automate basically all cognitive jobs,” or watch Elon Musk haltingly hand-wave an assertion that AI will “provide universal high-income [and make] work optional” for all of humanity, but there’s a decided lack of intelligent discourse on how it will actually change an advertiser’s daily life.

So to help ground our understanding of how AI will shape the landscape of advertising, I've outlined the four themes most likely to impact advertisers. Some are happening today; some are yet to take shape. My goal is to impart some salient tips on what you, as a marketing leader, can do to position yourself for the changing ecosystem.

1. Machine learning takes on targeting and optimization

The most commonly deployed, active subset of AI in-market today is applied machine learning (ML).

Google's relatively new Performance Max (PMax) and Meta's Advantage+ Shopping Campaigns (ASC) are perhaps the best examples of how advanced, self-tuning products are changing the shape of performance advertising. Both employ broad-based machine learning for targeting and for the feedback loops that increase the efficacy of performance advertising. In the old world, this was largely handled by a campaign manager. Meta lookalike campaigns are a great example: A campaign manager would upload their ideal audience, and Meta would go find similar audiences; meanwhile, the campaign manager chose the destination, bid strategy, exclusions, and so on. In the new world, an algorithm controls the targeting and bid optimization.

For nonprogrammers, the best analogy I can offer for how this mechanism operates is the Roomba robotic vacuum. In most cases, a human being can vacuum a floor more efficiently and faster than a Roomba. Likewise, until recently, a human performance media manager was the best bet for managing your campaign.

While increasing scale, multiplying formats, and other factors have steadily made human-controlled systems more difficult to run, one trend in particular is driving the rise of ML-enabled performance buying: signal degradation. With the deprecation of deterministic identifiers, we've lost the most important part of performance advertising's optimization loop: the tracking data. In our analogy, that's like turning the lights off in the room you need to vacuum. All of a sudden, the Roomba has a huge leg up on the human. It isn't affected by the dark (it navigates with infrared sensors), it has a perfect memory, and, if not initially then over time, it will use that memory and those sensors to vacuum far more efficiently than a human ever could.

This is the new reality of performance media. Without deterministic identifiers, we're feeling our way in the dark. No longer can we map users across the web. Instead, we rely on machines that constantly experiment with bid prices, placements, ad types, and targeting strategies — at a scale that's unprecedented and impossible for a human — without needing to follow a user across the web. Because Meta and Google make the majority of their money through advertising, these companies have invested huge amounts over the last couple of years into addressing this new paradigm, building highly sophisticated ML tools that can run advertising "in the dark."

What does this mean for the advertiser? The most effective thing you can provide these new ML tools is data. Specifically, data on the value of the users a campaign delivers. This data provides a feedback loop that increases the efficacy of the advertising. There are two main themes to consider when architecting this feedback loop: 1. send broad signals of success, and 2. send signals early.

Broad-based signals of success (or failure) help the algorithm uncover the attributes of success with less data. Consider: If your success metric is 1 out of 100 users converting — a 1% conversion rate — the algorithm has a single user to learn from. If you can instead point to 10 out of 100 users (10%) who are likely to convert, you give the algorithm 10x the users from which to build the attributes of success. Your campaign will learn how to optimize 90% more efficiently.

Sending signals early helps the algorithm iterate and optimize more quickly. If your success signal is a user conversion 30 days after install, the algorithm must wait 30 days to determine whether its optimization changes are effective. If you can send a signal back on day 1, the campaign can optimize and iterate at 30x the speed.

In both cases, early predictive success signals can help these new ML-based advertising tools find and iterate toward success faster.
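
In practice, "broad and early" often means deriving a day-1 proxy for eventual value and sending that to the ad platform instead of waiting for the day-30 conversion. The thresholds and the send step below are placeholders; a real integration would go through a platform's conversions API or an MMP rather than a print statement.

```python
# Hypothetical day-1 proxy signal: flag users whose early behavior predicts
# an eventual conversion, so the platform's ML gets feedback 30x sooner.
def is_directionally_good(day1: dict) -> bool:
    # Invented thresholds; in practice these come from your own modeling.
    return day1["sessions"] >= 2 and day1["added_to_cart"]

def send_signal(user_id: str, event: str) -> None:
    # Placeholder for a conversions-API or MMP event call.
    print(f"sending '{event}' for {user_id}")

day1_behavior = {"user_42": {"sessions": 3, "added_to_cart": True}}
for user_id, day1 in day1_behavior.items():
    if is_directionally_good(day1):
        send_signal(user_id, "predicted_purchaser")
```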

2. GenAI runs ad creation and optimization

When it comes to asset creation, things get interesting. Creative is often the biggest return-on-investment (ROI) lever in your paid media strategy. Think about it: You're buying ad space at a fixed price. The creative you put into that slot doesn't materially impact the price of the media, but it does impact user conversion. You can serve a creative that works, or one that doesn't. This means that with a fixed delivery cost, creative is where you control performance. Getting better ROI from paid media is therefore often about producing more effective creative. But creative is traditionally time-consuming and expensive to produce, and iteration and experimentation are slow and costly. Generative AI (GenAI) is a clear opportunity.

Unfortunately, advertising outcomes — like statistics — aren't always intuitive. If you've ever wondered why advertisements are so ugly, it's because ugly ads get noticed, and noticed ads are more effective. The biggest challenge currently facing the application of GenAI is letting advertisers maintain brand integrity: Brands are unwilling to risk their brand for unchecked performance. Even Google's explanation of one of its app campaign settings, "automatically created assets," includes an example of the issues auto-generated text can cause for a smartphone campaign.

[Screenshot: Google's "About automatically created assets" page. Advertiser-provided assets read "The new Smart Phone 7 | Our most innovative Smart Phone | Safe and secure | Order online and pick up today | Contact-free delivery available"; Google's automatically created headlines include "Impressive from every angle" and "Trade-in offers available."]

One of the machine-created headlines is "Trade-in offers available," which is clearly not part of the advertiser-provided text. It's a perfect example of why guardrails are needed before AI can be set loose creating media assets.

These limitations are temporary and surmountable — companies are already working around them today. One of the most novel ideas I've come across is using AI to iterate on, customize, and localize existing video assets. Clearly, asset creation stands to gain massive benefits from GenAI once the kinks are worked out.

There is a clear use case for GenAI in augmenting the ad creative process. Leaders in this space are already testing how GenAI can help them, whether by generating ideas, iterations, variations, or localized options. Future leaders will find methods to add guardrails and protections to ensure these systems help them iterate toward winning advertising strategies without causing disruption.

3. SEO will become “AIO”

You’ve probably already encountered Google’s Gemini-created search summaries. Unsurprisingly, Google has announced the insertion of sponsored results into AI Overviews. Clearly the future of AI-generated search summaries will contain advertising by way of branded AI suggestions. As the nature of search changes with the adoption of large language models (LLMs), so will the nature of search advertising. I’m thinking of this as a shift from “search engine optimization (SEO)” to “artificial intelligence optimization (AIO).”

How do AI-generated search summaries change consumer behavior? Think about your own experience with ChatGPT, Gemini, or Copilot. You ask a question — one that may include quite a bit more detail and context than a general Google search — and expect a singular outcome. Instead of providing a generalized query, expecting to parse multiple existing pieces of content for an answer, you’re looking for a singular, customized, interpretative result to your specific query. Think of a Google search for “Remove wine stain” versus “How do I get this cabernet stain out of my white linen shirt at the restaurant?”

Initially, the search paradigm shift will advantage existing search-advertising methodologies — namely Google's — because Google can lean heavily on its existing keyword framework, the backbone of its incredibly successful search ads business. Keywords chosen by advertisers will provide some context for insertion into AI search summaries with no need to change workflows or buying methodologies; Google can immediately apply keywords to LLM responses.

However, as this use case matures, I expect innovators and fast movers to quickly learn methods for measuring and optimizing against the new paradigm. The most expensive Google keyword today is "insurance." This (complete) search term is extremely valuable because performance advertisers have found that it leads to conversions on new policies. But with LLM-generated answers, user search behaviors will change and may no longer include previously expected keywords. A more effective search query for selling insurance may turn out to be "How do I reduce my living expenses?" or "What are the financing rates for a new car?" or "What is the best auto loan available in my area?"

The point is, as a user’s method of engagement with information changes, so will the most effective route of advertising. Successful advertisers in a world with AI will need to understand how to better optimize their content for these changing methods of engagement.

4. Personalized AI will create new routes for user discovery

Apple has announced an on-device small language model (SLM) — dubbed Apple Intelligence — that will use protected personal information to help users better interact with their devices. At first, this will be applied to device-centric tasks like emotive text messages, on-device search, and text editing — all things Apple largely controls. But it will also govern engagement with third-party apps.

The App Intents framework allows apps to surface content and actions to Spotlight search — and eventually, Apple Intelligence — for user engagement opportunities. At this year's Worldwide Developers Conference (WWDC), the example used was searching for a certain hiking trail without opening an app. In the very real possibility that Apple Intelligence becomes a primary method for users to interact with their devices, this will change how apps surface, measure, and optimize user engagement.

It's difficult to overstate how much impact this is likely to have on discovery and user engagement. Once a competent AI can start stringing together these App Intents, it will change how users find and select vendors and content. Consider: "Have Thai food delivered when my mom's Uber gets to the house." This request requires interaction and content across multiple vendors, and the AI must interpret intent and gather context from third-party apps. How that context is surfaced will be up to and controlled by the apps themselves. Those who innovate alongside this trend will be rewarded with increased AI capabilities and, by extension, engagement.

This marks a coming gold rush of opportunity as apps learn how to choose, surface, measure, and optimize the way App Intents drives user interaction and behavior. App Intents will initially take a shape similar to app store optimization, with apps vying for popular search terms, and it's not a stretch to envision an evolution into something like App Search Ads, where apps bid on the more popular terms for a chance to get in front of the user. Even if it doesn't turn into an ad network, it will most certainly bring opportunities for analytics and optimization around how apps interact with App Intents.

Conclusion

While parsing the actual impact of AI may seem ethereal and vague, there can be no doubt it will fundamentally change the way humans interact with technology — and with brands. These changing user behaviors will, in turn, dictate shifts in advertiser strategy. The future of AI might be tough to grok, and consumer patterns are still evolving, but it's clear the coming changes AI is bringing to our lives can't be overstated. Much like mobile changed how consumers fundamentally connect with the internet, AI will do the same. And as with mobile, some companies will innovate and emerge as winners, quickly adapting to these fundamental changes, while others will struggle with the shifts and fall behind their peers.

The Role of Data Clean Rooms in the World of Advertising
September 4, 2024

Ad tech never suffers from stagnation; there are constantly new technologies, paradigms, and strategies to explore. While the concept of a data clean room isn’t new, it’s gaining traction and garnering interest as the world grows increasingly concerned about privacy and user-level tracking. In this article, we’ll explore the concept of a data clean room, how it works, and — most importantly — how it is used in the world of advertising.

First, what exactly is a data clean room?

In short, a data clean room is essentially a database with enhanced security controls governing data access. The idea is that you can merge two (or more) datasets without revealing personally identifiable information (PII).

[Diagram: a Venn diagram of Sunglass Co. and CableVision customer lists, with CableVision's customer emails shared in the clear and the overlap marking joint customers.]

For instance, let's say an eyeglasses brand, Sunglass Co., wants to advertise on a major streaming provider, CableVision, to reach new customers. If privacy were no concern, Sunglass Co. could simply compile a complete list of customer email addresses and compare it against CableVision's subscribers to understand how many joint customers it could reach.

[Diagram: the same Venn diagram, but each party uploads one-way hashed emails; the keys to the original addresses stay with their owners, and only the overlapping hashes are reported.]

But CableVision doesn't want to reveal its subscriber information, and Sunglass Co. doesn't want to share its customer data with CableVision; each is a third party that hasn't been authorized to have the other's information. Instead, Sunglass Co. and CableVision can upload hashed email addresses (a one-way coded string that cannot be traced back to the original email) into a data clean room. They can configure the clean room to report only on the hashed emails that appear on both client lists. This way, Sunglass Co. sees shared customers without the risk of exposing any customer's data.
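
The matching step itself is simple to picture: both sides apply the same one-way hash, and only the intersection of hashes is compared. The sketch below shows just that core idea; a production clean room adds salting, access controls, and aggregation thresholds on top.

```python
import hashlib

def hash_email(email: str) -> str:
    """One-way SHA-256 hash, normalized so both parties hash identically."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

sunglass_customers = {"ana@example.com", "bo@example.com"}
cablevision_subscribers = {"bo@example.com", "cy@example.com"}

# Each party uploads only hashes; the clean room reports overlap counts.
overlap = {hash_email(e) for e in sunglass_customers} & \
          {hash_email(e) for e in cablevision_subscribers}
print(f"shared customers: {len(overlap)}")  # -> shared customers: 1
```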

Like a database, a clean room has nearly endless potential applications. The primary difference between the two is that a clean room tightly controls data access. The benefit: brands can mutually understand their customer overlap without exposing sensitive data. The main uses of clean rooms typically involve merging customer datasets to check for overlap, whether for ad targeting or reporting purposes.

Understandably, in a period of increased scrutiny of data sharing, data clean rooms are gaining renewed interest as a mechanism for sharing data in a privacy-centric manner. There are many vendors in this space that primarily specialize in data storage: AWS, Snowflake, Google Cloud, and LiveRamp.

Branch works with nearly all clean rooms

Because most data clean rooms operate similarly to data warehouses, Branch can readily connect and populate most existing clean rooms today. For instance, if the clean room is hosted by your cloud provider, we can use Scheduled Log Exports to populate the designated clean room bucket within your cloud provider with Branch data. The mechanisms for matching and shielding the data exist within the clean room solution, so its functionality isn’t dependent on Branch or the data.

The role of data clean rooms

The market largely agrees that rules around user privacy and data sharing will only become stricter, so don't expect interest in clean rooms to fade anytime soon. I see three clear use cases emerging:

    1. Targeting and audience insights: Traditional ad targeting relied on publishers measuring — in some cases estimating — the demographics of their audience and pitching advertisers on those demographics. As digital performance advertising emerged, advertisers could upload their audiences and the platform would actively find and serve ads to those users. (See the highly effective Facebook lookalike audiences for an example of how this works in practice.) Data clean rooms allow advertisers to continue to retarget users, but instead of uploading a list of names, they upload a more secure list of hashed, one-way identifiers that respect user privacy.

    2. Measurement and campaign attribution: In the heyday of user tracking, advertisers could literally attribute the source of every single user who interacted with a campaign. With user opt-outs, this is no longer possible. Clean rooms offer the advertiser and publisher the option to compare newly acquired users with a list of users who've seen the ad, in a privacy-centric manner. Note that depending on your platform or region, matching user data — online or offline — may still be prohibited.

    3. Third-party ad bidding and serving: In a more complex example, we're seeing a variant of clean rooms used to create partnerships that let advertisers interact with publishers without sharing data. The best example is Amazon, which has created relationships with Meta, Snap, and Pinterest that allow advertisers to serve targeted ads, and users to purchase products, without leaving those publishers' properties. Eric Seufert does an excellent job outlining how this may work in a recent post (paywall).

Clean rooms are powerful, but it's important to recognize their shortcomings; they aren't designed to solve every issue that identifier deprecation creates. A clean room alone won't make up for the loss of deterministic identifiers. However, with increasingly creative, complex, and combined efforts, advertisers and publishers are making headway in using the first-party data they have in safe, protected ways to increase the efficacy of their paid media strategies.

How To Maximize Paid Media Performance With Privacy-Centric Ad Products: Google's PMax and Meta's ASC
June 18, 2024

Google's Performance Max (PMax) and Meta's Advantage+ Shopping Campaigns (ASC) are routinely credited as today's most effective performance advertising products to emerge from the shift to privacy-centric advertising. The rise of these ML-driven, broad-targeting ad products is rapidly changing marketing behavior. I sat down with Jonathan Yantz, Managing Partner at M&C Saatchi Performance, to talk through these changes and how advertisers can remain effective in this changing world.

— Adam Landis, Head of Growth at Branch


Hi Jonathan, thank you for sitting down with me today. So, I know M&C Saatchi Performance has authority in this space. Can you share a little bit about the work you’re doing with customers?

Sure! Thanks for having me, Adam. We’re actively helping both established and challenger brands navigate the ever-changing media landscape. Whether that be across SKAN 4 (soon to be 5) and GA4 adoption, measurement in a cookieless world (it is possible), or how to get the most out of PMax, ASC, and similar campaign types, it’s clearly a dynamic time in media. But when is it not?

Today I really wanted to talk about the broad-based machine learning ad products offered by Meta and Google. Can you give us a very high-level idea of these products and how (and why) they work?

PMax is a comprehensive solution that leverages Google’s advanced machine learning algorithm and a single campaign to reach multiple channels (YouTube, SEM, display, etc.). Think of it as a one-stop shop to reach consumers across most of Google’s ecosystem, letting Google find them with minimal human intervention and direction. Similarly, Meta’s Advantage+ Shopping Campaigns (A+ or ASC) also put the algorithm in the driver’s seat, leveraging broad targeting and the best possible creative permutations based on the assets you provide to find the right consumer at the right time. Both simplify the overall process. I see Google’s key benefit being tapping into their plethora of inventory sources, while Meta’s strongest benefit is the ML-enabled audience targeting. 

These solutions are great for advertisers just starting out or trying to gauge effectiveness at lower budget levels, since combining everything into one campaign is meant to help exit the “learning phase” faster than with manually segmented campaigns. However, these campaigns offer a limited “look under the hood” at how and why decisions are made. Therefore, they require trust from the advertiser and typically provide limited ability to apply deeper learnings to other channels or campaigns. Notably, both companies state they are working on solutions to offer more transparency, where possible.

I've heard this from clients. It's a very different process, and I appreciate you sharing your experience. Readers may already know this, but the genesis of these products is the shift toward user privacy, which degrades advertising signal and granularity. From what I understand, it's a pretty radical departure from how performance marketing has historically operated, requiring a drastic change in approach and workflow — and on top of all of this, it reduces insight into what's actually working. How are performance advertisers responding?

The past decade has shown a major shift from hyper-segmentation and manual “hands-on-keyboard” control to nearly full ML-led campaigns that aim to simplify things for advertisers. After all, aren’t these algorithms equipped to make real-time shifts quickly and at a larger scale when compared to humans attempting to optimize activity 24/7? Well, yes and no. A lot of factors are at play here. Marketers give up certain levels of transparency in exchange for faster optimizations, which requires a mental shift for advertisers. However, the algorithms can only be as effective as the data signals they are receiving, and experienced manual inputs are very much still required, especially when it comes to creative optimization. 

For example, Meta can optimize effectively off its pixel or SDK, but people are required to make sure these optimizations are in line with a brand-specific ultimate source of truth, whether that be GA4, a mobile measurement partner (MMP) like Branch, or their own data warehouse. As another example, we work with a few brands in heavily regulated industries such as sports betting and financial services, and in these instances, ML-driven creative permutations, GenAI creative, and even audience targeting need to be strictly monitored. These are just some reasons why campaigns like these can’t just be “set and forget” and do require expertise from people well-versed in managing campaigns to ensure success.

I remember when Google’s UAC (Universal App Campaigns) rolled out. We heard many complaints because it forced advertisers to run on YouTube, which back then wasn’t testing very well. Nowadays, we’re hearing similar challenges because PMax is designed to take away advertisers’ choice of channels.

I hadn’t considered regulation coming into play. Can you tell me a little bit more about how that affects advertisers? Is it messaging compliance?

Messaging compliance is a big part of it. Since you can’t see all possible permutations that the system generates, much like with Google App Campaigns (GAC), you may lose confidence in your ability to validate everything being served. For example, Meta encourages brands to try Advantage+ Creative, which brings a whole suite of dynamic optimizations. These can include visual touch-ups, text improvements, music, 3D animation, and image expansion. If you’re in a heavily regulated industry, you may be unable to take advantage of this. Another example: Some of our clients are in the entertainment space where they promote the IP of a different brand (think: audiobook services, CTV platforms, etc.), and contractually they need to ensure their partner logos, IP, etc. are accurately represented in all advertising. In these cases, these new approaches could be tricky to successfully implement, particularly around creative.

That’s a really good point. So, these categories, whether it’s brand safety or regulation, are just prohibited from operating with these new products?

As of now, only to a certain degree. In both cases of Meta and Google, the creative enhancements — or Google’s version within PMax, which takes it a step further as “automatically created assets” — are entirely opt-in. You can still benefit from the machine learning that comes with automatic bidding, automatic placements, etc., as long as your data and measurement infrastructure is properly set up. 

Right, and GenAI could invent something totally new.

Exactly. And that seems to be where the industry is heading, with Google's PMax really trying to push ahead with Google AI. While PMax pushes for better asset-combination transparency compared to GAC, marketers need to be aware of the best ways to set up campaigns when leveraging GenAI tools. There are some limitations, such as campaign-level asset reporting being the best available view (versus ad set-level) when using automatically created assets. It is important to review new iterations frequently, and you can manually pause inappropriate ones. However, when it comes to automated bidding strategies, targeting, and the like, these platforms are less transparent about their decision-making, which is something to consider.

That's actually a super interesting point. Eric Seufert makes a good related point in a recent article about these types of products: These platforms have a negative incentive to offer targeting transparency. They are incentivized only to meet your overall targets, not to expose any of the targeting or buys that are falling well short. So, where a human operator might stop a buy when performance starts to dip — but is still well above the target — the algorithms will keep buying into declining performance until return on ad spend (ROAS) sinks to that target. So what do you do? Do you juice the targets?

That is a consideration, as the systems will seek the most efficient consumers or segments. Marketers should have a holistic strategy in place that will enable them to find other avenues for incremental growth. That’s why it’s important to think carefully and differently about the shift toward ML automation and pure algorithmic-led buying. At a surface level, it just sounds easier, and a big part of that is true, but if you’re not careful, you could miss out on significant revenue over time.

That reminds me of when UAC campaigns were a brand-new concept. People started a lot of little campaigns, which let them tweak the levers a bit more. The problem is that this flies directly in the face of how machine learning works: It needs a lot of data to target effectively. So breaking things into little campaigns may actually rob the machine learning algorithm of its ability to succeed.

Finding a healthy balance that allows “test and learn” while the models continue to develop is one approach. It’s clear that hyper-segmentation isn’t the way to go either, at least not for most brands. The volume of data points needed, especially to exit the learning phase, is a great reason for this. Some brands we work with can easily clear the learning phase early on with an install + registration + free trial (on GAC, specifically), while others with either a higher price point, longer consumer journey, or limited awareness will rarely exit the learning phase when optimizing toward their ultimate goal. 

So that’s where a lot of testing and a watchful eye on cost per action (CPA), ROAS, or whatever your revenue metric is comes into play. For an entertainment brand, we tried a variety of strategies to drive ROI-positive results from PMax but were unable to achieve the goal. While this is a somewhat rare case, upon reverting to running our main Google activity separately (YouTube, SEM, display), YouTube quickly became the most efficient aside from our low-level branded SEM. It came in under our CPA goal at scale despite launching as a brand new campaign. So you really just need to be willing to constantly try something new. 

So, would YouTube work where PMax doesn’t? Wouldn’t they be using the same tools and methodologies? 

Although it may seem counterintuitive, we were able to drive success on YouTube by using bid modifiers for age and gender. With PMax, we could either choose “all” or segment into different campaigns, but we found that the best consumers by age and gender weren’t consistent across Google’s channels. 

Ah, so here’s a real-world example where targeting is helping drive success. In this case, you know the audience but are driving too few conversions for the machine learning systems to figure it out. So, what is the outcome of this for ML-enabled buying? Do you need to back away from the ideal outcome because it happens too late or too infrequently and try to predict what will turn into a conversion? Or do you need to use an ad product where you can force targeting because you know better than the algorithm? Because I assume we’re not always going to have the ability to target with future products.

That might help, but it’s not the entire picture; it’s possible that a big part of this example was driven by creative. The videos we had at our disposal spoke to a different audience than our display assets or search copy. In this sense, PMax was struggling to find consumers across the board who would convert within our bid levels and ultimately spent only a fraction of the budget on video. Sometimes, it can be more valuable to have the right type of creative versus focusing purely on volume of creative. Ideally, you’d have quality and quantity. But to your point, when struggling to exit the learning phase, one of the few recourses you typically have is to move the optimization event up the funnel to something that will drive more event fires.

This is actually very similar to what we run into when optimizing signals from SKAN outcomes. Effectively, out of 100 users, it’s much better to have 10 “directionally good” outcomes than one “ideal” outcome, because it allows the algorithms to collect more signals that serve as inputs for the model. So it’s a similar conclusion: you need to figure out how to get early, usable output from the campaign, even if it’s not perfect, to feed the statistical or machine learning model that’s driving optimization.
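A back-of-envelope sketch of that signal arithmetic, with event rates assumed purely for illustration:

```python
# Why "directionally good" beats "ideal" for model feedback: more labeled
# examples per cohort, even if each label is noisier. The 10% and 1%
# event rates below are assumptions, not measured figures.

def expected_signals(n_users: int, event_rate: float) -> float:
    """Expected number of conversion signals a platform receives."""
    return n_users * event_rate

n = 100
print(expected_signals(n, 0.10))  # shallow event (e.g., registration): 10.0
print(expected_signals(n, 0.01))  # deep event (e.g., purchase): 1.0
# Ten noisy-but-directional signals train the bidding model faster than
# one perfect signal per hundred users.
```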

So, how do you get started with finding “directionally good” signals?

You can’t start on day one trying to optimize toward the most expensive conversion or, say, 10 steps down the user journey. You need to go back to the early days of GAC, when you would start by optimizing for the install, then graduate to optimizing toward an action, potentially giving the system multiple actions of similar importance. Get people in the door, start building up signals in general, and then slowly move down the funnel. Granted, not all brands have the time or budget for this, so you can either look to “directionally good” signals from your organic behavior and other paid channels or create a testing approach starting with fairly frictionless events assumed to show consumer intent. One such approach is sketched below.
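Here is a minimal sketch of that graduated approach. The event names, weekly volumes, and the 50-signal threshold are all invented (platforms don’t publish an exact number); the point is the selection rule:

```python
# Hypothetical sketch of graduated optimization: choose the deepest
# funnel event that still clears a minimum weekly signal volume.

funnel = [  # ordered shallow to deep, with assumed weekly event volumes
    ("install", 4000),
    ("registration", 900),
    ("free_trial", 120),
    ("purchase", 18),
]

MIN_WEEKLY_SIGNALS = 50  # rough stand-in for "enough to exit learning"

def pick_optimization_event(events, threshold):
    """Return the deepest event whose weekly volume clears the threshold."""
    eligible = [name for name, volume in events if volume >= threshold]
    return eligible[-1] if eligible else events[0][0]

print(pick_optimization_event(funnel, MIN_WEEKLY_SIGNALS))  # free_trial
# As volumes grow, re-run the check and graduate the campaign deeper
# down the funnel toward the true business goal.
```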

Thank you, Jonathan. We could go on for hours, but can you leave us some parting thoughts on what your firm has learned?

Sure. It’s obvious the AI revolution has significantly impacted the advertising industry and will continue to drive new technological enhancements for marketers. New products like PMax and ASC are exciting developments that will open up opportunities for testing, but there will always be a need for experienced people overseeing campaigns. Working with experienced partners makes it possible to evaluate results for learnings and update campaigns accordingly. Marketers should set long-term goals, such as user retention, alongside acquisition targets to avoid being too short-term in their approach.

The post How To Maximize Paid Media Performance With Privacy-Centric Ad Products Google’s PMax and Meta’s ASC appeared first on Branch.

]]>
https://www.branch.io/resources/blog/how-to-maximize-paid-media-performance-with-privacy-centric-ad-products-googles-pmax-and-metas-asc/feed/ 0
The Future of Measurement https://www.branch.io/resources/blog/the-future-of-measurement/ https://www.branch.io/resources/blog/the-future-of-measurement/#respond Tue, 18 Jun 2024 11:13:18 +0000 https://branch2022stg.wpenginepowered.com/?p=19092 Discover how privacy policies are reshaping digital marketing measurement. Learn key trends and strategies to stay ahead in this evolving landscape.

The post The Future of Measurement appeared first on Branch.

]]>
Introduction

I was recently sitting with an exasperated technology executive at a well-known quick-serve restaurant chain, and he said something I think sums up the frustration of anyone in digital marketing: “Why can’t we just measure things accurately?” Truthfully, I was at a loss. Beyond a pandering, evasive-sounding “It’s complicated,” how do you convey an encyclopedia of technology and policy evolution spanning a decade, along with the context necessary to understand the signal degradation that’s pulling measurement efficacy backward?

The truth is privacy policies make marketing measurement harder and more opaque by design. Ever since that conversation, I’ve been ruminating on how to succinctly summarize the state of our industry. The closest I’ve heard is: “Measurement is the past; the future is signal.” But beyond pithy statements, executives need context to understand how to prepare for industry changes in marketing measurement. This article aims to discuss current market trends and postulate on the likely future of digital marketing measurement.

Seven industry trends

1. The self-attributing network (SAN) starts to grade its own homework

Starting in 2017, Meta — then Facebook — rolled out a new concept in advertising reporting, declaring itself a self-attributing network (SAN). As last-touch attribution became the default methodology for marketing measurement, Facebook decided it needed to show advertisers its value to acquisition beyond simply being the advertiser that last touched the consumer. So Facebook started to “self-report” whether it had influenced (i.e., shown an ad to) a user within a lookback window. Google and Twitter quickly followed suit. Obviously, SAN reporting has merit: Facebook can and does influence customer acquisition, even when it isn’t credited with the last touch. But self-reporting also comes with drawbacks. When these leading ad platforms — which make up approximately 80% of ad spend growth — report their own outcomes, you invariably have multiple platforms claiming credit for a single customer. This raises the question: How do you measure each platform’s efficacy if they all claim ultimate responsibility for the sale? This paradigm left advertisers with the dubious task of deciding how to weigh the output of SAN reporting.
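A tiny sketch makes the over-claiming mechanic visible. The users and claims below are invented for illustration:

```python
# Illustrative sketch (assumed data): why SAN self-reporting inflates
# totals. Each platform claims any converter it showed an ad to within
# its lookback window, so claimed credit can exceed actual conversions.

actual_conversions = {"u1", "u2", "u3", "u4", "u5"}

san_claims = {
    "Meta":   {"u1", "u2", "u3"},
    "Google": {"u2", "u3", "u4", "u5"},
}

total_claimed = sum(len(users) for users in san_claims.values())
double_counted = san_claims["Meta"] & san_claims["Google"]

print(f"Actual conversions: {len(actual_conversions)}")  # 5
print(f"Sum of SAN claims:  {total_claimed}")            # 7
print(f"Claimed by both:    {sorted(double_counted)}")   # ['u2', 'u3']
```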

2. User privacy creates the concept of platform attribution

Apple’s App Tracking Transparency (ATT), announced in 2020 and enforced in 2021, started the deprecation of identifiers for deterministic user tracking on iOS. In 2024, Google Chrome started to deprecate third-party cookies, beginning a slow march toward eliminating deterministic tracking of users on the web. Android’s deprecation of the Google Advertising Identifier (GAID) is expected to follow. This represents a fundamental shift in how digital advertising works: advertisers can no longer deterministically track a user across the digital landscape.

To meet the industry’s need for a new method of measurement, Apple and Google have introduced their own attribution methodologies — SKAdNetwork (SKAN) and Google Privacy Sandbox, respectively. In short, these methodologies return aggregate attribution results that preserve user privacy. Because the privacy controls obfuscate individual identifiers, the output is, by design, incomplete and imperfect. The advertiser is left with coarse, aggregate results that increase marketing measurement and optimization uncertainty.
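As a simplified illustration of what “aggregate by design” means (a loose analogy, not Apple’s or Google’s actual scheme):

```python
# Simplified sketch of the aggregate idea behind SKAN-style attribution:
# exact revenue is compressed into coarse buckets, and the advertiser
# sees only per-campaign bucket counts. Bucket edges are assumptions.
from collections import Counter

def coarse_value(revenue: float) -> str:
    """Map exact revenue to a coarse bucket, discarding user detail."""
    if revenue == 0:
        return "none"
    if revenue < 5:
        return "low"
    if revenue < 50:
        return "medium"
    return "high"

cohort = [0.0, 2.99, 2.99, 12.99, 79.99]  # assumed per-user revenue
print(Counter(coarse_value(r) for r in cohort))
# Counter({'low': 2, 'none': 1, 'medium': 1, 'high': 1}): no way to tell
# which user, device, or click produced which bucket.
```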

3. SANs develop ML-driven advertising

In the wake of user ID deprecation, Facebook lost $12B in ad revenue, mainly because it lost efficacy in driving advertiser return on investment on the iOS platform. In just two years, Meta and Google — the undisputed leaders in the advertising space — rebounded by building targeted advertising driven by machine learning (ML) to overcome, or at least mitigate, privacy-centric issues. This was a fundamental shift for these companies; Meta’s Advantage Shopping Campaigns and Google Performance Max are different from traditional performance media products in that they require much less input, such as optimization and targeting, from advertisers. 

A byproduct of these ML-driven products is the need for new data streams. Both Meta and Google have released APIs — Meta’s Aggregated Event Measurement and Conversions API and Google’s GBRAID — that feed them rich, privacy-centric data signals from the advertiser, helping these algorithms react and optimize campaigns. These APIs are game changers for the platforms and their customers, as they provide data for campaign optimization that is far more effective than privacy-centric attribution methodologies such as SKAN and Privacy Sandbox.

4. Advertisers increasingly use broad-stroke modeling

Another consequence of decreasing signal, increased confusion, and multiple sources of truth is that advertisers are increasingly leaning on macro modeling to understand the holistic impact of their marketing efforts. These statistical models use broad inputs like marketing spend and earnings to provide comprehensive outcome predictions. Media mix modeling, while certainly not a new methodology, is the most widely explored example and represents an interesting companion to the more granular, channel-specific methodologies.

Traditionally, these broad-stroke models were primarily used for budgeting and benchmarking, requiring heavy lifting and yearly or quarterly refinement. But thanks to the ready availability of low-cost compute resources and the erosion of measurement efficacy by privacy factors, these methods are receiving renewed attention for more practical performance applications. Leading companies are using the outputs of these broad-stroke statistical models to contextualize, and correct for, the eroding deterministic methods of measurement.

5. Advertisers are developing in-house data analysis capabilities

Forward-looking organizations recognize the added market complexity and are developing a new expertise within their marketing performance teams. The role of this so-called marketing economist is to become adept at reconciling multiple data sources to arrive at a trusted outcome. This expertise isn’t developed easily, and it certainly doesn’t happen overnight. Much like 15 years ago, when a team solely focused on mobile user acquisition didn’t exist, this is a new paradigm. A few leading companies have already started investing in-house to build out teams with this expertise; the rest will outsource the knowledge. Over time, most players will develop the expertise in-house, while technology platforms will grow to offer tools and applications to support this emerging discipline.

6. The deprecation of cookies will combine web and mobile app measurement

Mobile web measurement has traditionally remained at arm’s length from mobile app measurement. Generally, marketing channels on web and mobile have remained very distinct, often using different tools to measure marketing outcomes among channels.

However, with Google’s deprecation of cookies, measurement of these channels will slowly be combined. Google Privacy Sandbox, the technology and framework introduced by Google as an alternative attribution methodology, provides a single API that functions across both platforms. This is the first large-scale unification of digital measurement across channels and is an early harbinger of more holistic measurement. 

7. Data clean rooms and other PETs emerge

While not new, the purpose of a clean room is to combine and share information in a controlled way (i.e., match user overlap without revealing user identities). At its core, a data clean room is a data warehouse that restricts the granularity and types of information accessible. In essence, the applications of a clean room are about as broad as those of a data warehouse; in practice, it’s used wherever restrictions on data access are necessary.

The most common application is between an advertiser and a publisher, where neither party wants to reveal the identities of its users. A clean room allows an advertiser to understand, serve ads to, or report on overlapping users. In most cases, both the advertiser and the publisher hold first-party data (e.g., an email address) and, through hashing, can match overlapping users without revealing their identities.

A data clean room is the best known of the many emerging privacy-enhancing technologies (PETs), whose shared goal is to maintain operational effectiveness while adhering to privacy policies. A clean room is therefore usually the best fit when datasets need to be joined but restrictions on that data are required.
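To make the hashed-match idea concrete, here is a minimal sketch. Real clean rooms layer governance, aggregation thresholds, and stricter matching protocols on top; a bare hash exchange on its own is not a complete privacy guarantee:

```python
# Minimal sketch of a hashed identity match between two parties.
import hashlib

def hash_email(email: str) -> str:
    """Normalize, then hash, so raw identities are never exchanged."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

advertiser = {"ana@example.com", "bo@example.com", "cy@example.com"}
publisher = {"Bo@Example.com ", "dee@example.com"}  # note messy casing

overlap = {hash_email(e) for e in advertiser} & {hash_email(e) for e in publisher}
print(f"Overlapping users: {len(overlap)}")  # 1, without either side
# ever seeing the other's raw email list.
```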

Learn more in our blog: The Role of Data Clean Rooms in the World of Advertising

The arc of change

Let’s now explore how change is permeating — and will likely continue to permeate — our industry.

In 2020, the overwhelming theme was panic. Over time, fear has been alleviated by technical solutions, but, as you can imagine, with seven significant and deeply complex thematic trends in play, today’s overarching shared feeling in the industry is one of confusion. At this stage, virtually every marketing professional knows the current state of privacy affects the industry. Unfortunately, the marketing executive needs to understand not only the conceptual and technical changes but also what to do about them.

Adoption of multiple sources of truth

While no measurement is perfect, the evolutionary winner of attribution over the last decade was last touch. This gave marketers an imperfect-but-precise methodology of measurement. And, where the marketer was more advanced, it allowed for a highly customized multi-touch methodology to be built. The deprecation of deterministic identifiers makes the former more difficult and the latter impossible. 

The advent of technologies to overcome these measurement restrictions means marketers are now faced with multiple sources of data, from multiple providers, that must be reconciled. Consider privacy-centric platform attribution like SKAN; the subset of opt-in data that still allows deterministic tracking; and SAN reporting. The performance marketer now has three sources of data to triangulate into a holistic view of performance.
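One illustrative way to triangulate those three sources. Every number, the opt-in rate, and the median rule are assumptions a team would replace with its own calibration and incrementality tests:

```python
# Hypothetical triangulation of three imperfect sources for one campaign.
import statistics

skan_installs = 1200          # aggregate and delayed, but opt-in-agnostic
deterministic_installs = 450  # opt-in users only, so it undercounts
san_claimed_installs = 2100   # platform self-reporting, tends to overcount
att_opt_in_rate = 0.30        # assumed share of users who allow tracking

estimates = {
    "skan": skan_installs,
    "deterministic_scaled": deterministic_installs / att_opt_in_rate,  # 1500
    "san_claimed": san_claimed_installs,
}

print(f"Triangulated estimate: {statistics.median(estimates.values()):.0f}")
# The median (1500 here) is one crude blending rule; the point is having
# a repeatable way to reconcile sources that will never agree exactly.
```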

Exploration of alternative sources

Inevitably, when facing signal loss, marketers will explore alternative methodologies for measurement. 

On one end of the spectrum, we’ve seen this take the form of in-house teams building statistical models that refine — or are refined by — other data sources. One of the most common examples is developing incrementality tests to help normalize multiple data sources. In a more direct approach, we’ve seen advertisers poll their end users at the point of sale to help refine and align the attribution of marketing spend.

An emerging trend to follow is mega retailers partnering with ad platforms through proprietary data connections for retargeting and measurement, as this offers valuable insight into where the space’s technology may head next.

Building expertise

Many of these emerging trends require intellectual horsepower applied in-house. Many advertisers already have some in-house or agency-provided data analysis capability. However, industry leaders are considering how new measurement paradigms will shift existing budgetary planning and operational workflows. This often takes shape in the form of an in-house data science team tasked with helping the marketing organization with budgeting and reporting. While this is an expensive and time-consuming project, it represents an investment in a complex and uncertain world of marketing measurement.

What this means for your business

Unfortunately, the shift to privacy pointedly makes marketing measurement more difficult. Ignoring the problem won’t make it go away. To remain effective amid the changes, successful businesses will adapt to continue to understand where and how to market to their users.

The first step is to understand how the changes will affect your business. The key is understanding how changes will impact your measurement and marketing efficacy before the changes negatively impact your business.

The second step is planning how to make appropriate changes with minimal impact on your business. No one buys privacy for privacy’s sake, but you do need to adhere to privacy rules while keeping your business running. This challenge isn’t limited to the marketing department; it spans the entire business. All across digital marketing, leaders are reporting to their companies that marketing measurement is getting less precise and that tracking return on ad spend will get worse, not better. Ideally, you’re having these conversations before the changes hit, not explaining to the board why you couldn’t track return on advertising spend (ROAS) against last year’s budget.

Step three: Once you have a plan, you’ll need to remain flexible. Changes will continue to impact these emerging paradigms, so adaptation, experimentation, and testing will be key components in helping you gain and retain an edge in your marketing efforts. 

And last, consider your resources. Companies like Branch put tremendously talented people to work on solving these problems for — and with — our customers. Lean on your vendors to understand what others are doing — and what you could be doing better.

The good news is that a paradigm shift serves as an opportunity for innovation and a catalyst for emerging leaders in digital marketing. How you adapt today will set your brand up for success — or failure — in the future. To the marketing executive asking, “Why can’t we just measure things accurately?” perhaps the best response is, “Neither can anyone else, so how do we use that to our advantage?”

The post The Future of Measurement appeared first on Branch.

]]>
https://www.branch.io/resources/blog/the-future-of-measurement/feed/ 0
WWDC 2024: Predictions on Apple’s Announcements https://www.branch.io/resources/blog/wwdc-2024-predictions-on-apples-announcements/ https://www.branch.io/resources/blog/wwdc-2024-predictions-on-apples-announcements/#respond Thu, 23 May 2024 20:40:48 +0000 https://branch2022stg.wpenginepowered.com/?p=18915 Get ready for WWDC 2024! Discover our predictions on Apple's upcoming announcements, including SKAN 5, Privacy Manifest, Vision Pro, and AI integration

The post WWDC 2024: Predictions on Apple’s Announcements appeared first on Branch.

]]>
Reading this after WWDC? Check out From Predictions to Reality: AdAttributionKit Unveiled at WWDC 2024 to see how Apple’s announcements stacked up against our predictions!


As the tech world turns its eyes towards Apple’s Worldwide Developers Conference (WWDC) 2024, expectations and speculations are at an all-time high. Here are some predictions for what we expect from Apple at this year’s event:

SKAN 5

Current state: Adoption of SKAN 4 has been lukewarm, with only about 30% of advertisers on board. The upcoming SKAN 5 promises to enable retargeting measurement but falls short of improving targeting capabilities, focusing instead on reidentification of users post-install.

Prediction: The impact of SKAN 5’s launch will likely be minimal. We believe the industry will largely ignore the update due to its limited improvements compared to previous versions and the bevy of complexities it introduces. 

Edit (5/30): We’re still weeks away from the conference, but there’s already compelling evidence that this prediction is incorrect. Our friends at Dataseat published a strong prediction that SKAN 5 will be renamed (and reissued) as AdAttributionKit. Our take? Probably correct. Call it whatever you want; it’s SKAN 5 with alternative app store support. Why drop the StoreKit Ad Network moniker? To protect the Apple App Store brand, of course!

Privacy Manifest

Current state: As of May 1, Privacy Manifests require apps to declare their data usage. Apple’s enforcement seems primarily focused on apps that do not properly declare their Required Reason APIs; notably, we’ve seen numerous App Store submission rejections of apps that haven’t declared their own or their third-party SDKs’ usage of Required Reason APIs.

Prediction: It’s unlikely we’ll see any major announcements regarding Privacy Manifest. However, there is anticipation that third-party tracking declarations will directly correspond with App Privacy Nutrition Label declarations.

App Store adjustments

Current state: In January, Apple announced a number of changes to its App Store policies as a result of the Digital Markets Act (DMA), most notably the ability to sideload apps from third-party app stores in the EU. The changes generated significant buzz, though they’ve been perceived as overly restrictive.

Prediction: Apple will likely announce slight changes to its policies to align with these regulations, but don’t expect them to make alternative app stores any more viable. Apple’s stance appears firm, unless future legal actions force it to reconsider.

Vision Pro

Current state: Expectations around Apple’s first spatial computing headset are tempered. Predictions of slow sales, compounded by limited content availability, deter developers from investing in the platform.

Prediction: Apple is likely to adopt a conservative approach to the iteration of Vision Pro devices — similar to its strategy with iPads — focusing on long-term development.

Artificial intelligence (AI)

Current state: Although Apple has shown signs of AI innovation, notably through its published research papers, general sentiment suggests that its announcements will focus on what’s planned rather than what’s currently available. 

Prediction: AI is poised to be the centerpiece of this year’s announcements. Expect to see enhancements in how AI integrates with the Apple ecosystem, increased access through developer frameworks, and more on-device AI capabilities for Siri, like real-time analysis using the iOS 18 camera. There is also a distinct possibility that OpenAI and Apple will announce a partnership to enable Apple hardware users direct, customized access to OpenAI’s large language models (LLMs).

Questions we hope WWDC answers

As always, while we make predictions based on current trends and information, the actual announcements are sure to surprise. This year, we hope Apple clears a few things up:

  1. What is the long-term plan with SKAN? It’s imperfect, and that has impacted adoption. How does Apple plan to address the shortcomings?
  2. Will there be additional methods of policing fingerprinting? Apple has continually made it clear fingerprinting is not allowed, but it’s still widely practiced. What does Apple intend to do about this? 
  3. Would Apple ever enforce a “nuclear option”? If Apple really wanted to put an end to device fingerprinting, it could turn on its iCloud+ Private Relay feature for all, or a subset of, iOS devices. This would mask device IP addresses for enough web and in-app network traffic to render fingerprinting moot.

Stay tuned for what promises to be an exciting event at WWDC 2024 and our annual post-event recap!

The post WWDC 2024: Predictions on Apple’s Announcements appeared first on Branch.

]]>
https://www.branch.io/resources/blog/wwdc-2024-predictions-on-apples-announcements/feed/ 0
How To Combine Attribution Data With Media Mix Modeling https://www.branch.io/resources/blog/how-to-combine-attribution-data-with-media-mix-modeling/ https://www.branch.io/resources/blog/how-to-combine-attribution-data-with-media-mix-modeling/#respond Tue, 02 Apr 2024 13:26:09 +0000 https://branch2022stg.wpenginepowered.com/?p=18464 Branch's Adam Landis recently sat down with Michael Kaminsky, co-CEO and co-founder of Recast, to chat about how companies are combining holistic statistically-driven models like MMM with attribution data.

The post How To Combine Attribution Data With Media Mix Modeling appeared first on Branch.

]]>
Marketing measurement is changing; privacy policies are forcing companies away from deterministic last-touch measurement, and a variety of alternatives — SKAN, Google Privacy Sandbox, modeled SAN outputs — are emerging. Aside from the complexities these changes introduce, I’ve noticed some companies take advantage of this shift by combining traditionally separate measurement methods. Media mix modeling (MMM) has been around since the 1960s but has historically remained distinct from more modern, deterministic digital measurement methods. However, now that marketing teams are forced to reconcile multiple sources of measurement to decide what to do, there is an opportunity to combine MMM with more granular attribution methods. I recently sat down with Michael Kaminsky, co-CEO and co-founder of Recast, to chat about how companies are combining holistic, statistically driven models like MMM with attribution data.

— Adam Landis, Head of Growth at Branch


Michael, thanks for joining us today. To provide some context, can you give us some background of what you folks are up to at Recast?

Absolutely! Recast is a modern media mix modeling (MMM) platform focused on speed and verifiability. We combine best-in-class statistical modeling and modern machine learning methods so companies can develop MMMs faster, more flexibly, and more accurately. This is a new paradigm that pulls MMMs out of PowerPoint decks and puts powerful tools into the hands of marketers. They can plan, optimize budgets, and verify model accuracy — all on an ongoing basis. In some ways, we think about what we’re doing as helping to “bridge the gap” between last-generation MMM analyses that only happened a few times a year with always-on digital tracking attribution methods.

For those who don’t know, what is MMM?

MMM generally stands for “media mix modeling,” or sometimes “marketing mix modeling.” The idea is to construct a top-down statistical or econometric model that statistically links marketing activity to business outcomes (normally something like sales or conversions). MMMs work by looking at a time series of historical marketing data, and then identifying patterns in the data. So the model might try to answer a question like, “When we spend relatively more on Facebook ads, controlling for our other marketing activity, how many additional sales do we drive?” Good media mix models need to account for all of the complexity associated with marketing in practice (e.g., time shifts or adstocks, seasonality, diminishing marginal returns, etc.), so good models tend to be quite complex.

Check out our introductory blog post on MMM for more details.
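To make those ingredients concrete, here is a minimal single-channel sketch with synthetic data, assuming geometric adstock and a logarithmic saturation curve. A production MMM (Recast’s included) is far more sophisticated than this, but the moving parts are recognizable:

```python
# Minimal MMM-flavored sketch: adstock (time shift) plus diminishing
# returns, fit by ordinary least squares. Everything here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
weeks = 104
spend = rng.uniform(50, 150, weeks)  # weekly spend in $k (synthetic)

def adstock(x, decay):
    """Carry a fraction of each week's effect into following weeks."""
    out = np.zeros_like(x)
    for t in range(len(x)):
        out[t] = x[t] + (decay * out[t - 1] if t > 0 else 0.0)
    return out

def saturate(x):
    """Diminishing marginal returns: each extra dollar buys a bit less."""
    return np.log1p(x)

X = saturate(adstock(spend, decay=0.5))
sales = 200 + 30.0 * X + rng.normal(0, 20, weeks)  # synthetic outcome

# Ordinary least squares: sales ~ intercept + beta * transformed spend
design = np.column_stack([np.ones(weeks), X])
intercept, beta = np.linalg.lstsq(design, sales, rcond=None)[0]
print(f"Recovered channel effect: {beta:.1f} (true value: 30.0)")
```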

This might be a dumb question, but I often hear incrementality mentioned alongside MMM. How does incrementality relate to MMM outputs?

A properly-tuned MMM should provide marketers with a measure of incrementality. The word “incrementality” is just marketer speak for “causality.” When we talk about incrementality, we are really talking about the true causal impact of our marketing activity. If we refer to the incrementality of “branded search” advertising, what we really mean is, “How many conversions would not happen if not for our branded search advertising?” The idea is that some customers might click on a branded search ad who were going to purchase anyway, and so the spend on that branded search click isn’t “incremental” — since the conversion wasn’t caused by the search click. We elaborate more on the topic of branded search incrementality in this blog post.

Incrementality is the most important concept in marketing measurement because incrementality tells us the true return on investment of our marketing activity and is how we can actually optimize our marketing budgets. Media mix models, when built correctly, should provide results that are estimates of incrementality. These can be used to drive budget optimization and should be consistent with experimental results that also attempt to measure incrementality.
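As a worked illustration of the concept, here is the basic geo-holdout arithmetic; all numbers are assumed:

```python
# Hypothetical geo-holdout arithmetic for incrementality.
# A comparable held-out region estimates what would have happened anyway;
# the difference is the causal, incremental contribution of the ads.

test_conversions = 1150     # region where branded search ads ran
control_conversions = 1000  # comparable region kept dark
spend = 7_500.0             # branded search spend in the test region ($)

incremental = test_conversions - control_conversions
lift = incremental / control_conversions
cost_per_incremental = spend / incremental

print(f"Incremental conversions: {incremental}")                    # 150
print(f"Lift: {lift:.1%}")                                          # 15.0%
print(f"Cost per incremental conversion: ${cost_per_incremental:.2f}")  # $50.00
```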

That’s interesting. I didn’t know the statistical outcomes of an MMM would provide estimated lift for an individual channel. I know this will probably be a huge “it depends,” but can you talk about accuracy: both how accurate these models are and the factors that go into the variability?

You are right that it’s a huge “it depends.” When done well, MMM models can be highly accurate, and their lift estimates can be corroborated by other experimental and quasi-experimental results (e.g., geo-holdout tests, randomized controlled trials, go-dark tests, interrupted time series, etc.). However, when done poorly, these models can be incredibly inaccurate and actually misleading, to the point where it’s better not to use an MMM at all.

The accuracy of an MMM model depends on exactly how the model is specified statistically and how well that specification matches a particular business. Simpler specifications will tend to be more biased (i.e., less accurate) than more complex specifications. We like to say that running an MMM is trivially easy, but running a good MMM that is actually accurate and can be validated is incredibly difficult.

Does an MMM model need to be holistic to be valuable? Or can you apply MMM on a specific channel?

In general, media mix models only work if you can include all marketing activity in the model. If you attempt to build an MMM with only a subset of marketing channels included, you risk over-crediting the included channels. So, for example, it could be the case that TV ads drive a lot of your sales, but if you don’t include TV marketing in your MMM model, those conversions driven by TV could end up being credited to Facebook.

There are other types of models you can build to look at the impact of a change in marketing activity for a single marketing channel (e.g., via an interrupted time series design), but in general, an MMM needs to see all of your marketing activity in order to work.

How is MMM employed by a brand versus a more traditional, channel-specific attribution output like last-touch? 

The most sophisticated marketers know that they should use the right tool for the job and that means using different measurement methods in different contexts. Digital tracking methods like MTA or first- and last-touch attribution are valuable but are mostly used for day-to-day (or hour-to-hour) channel management. They’re the tools that individual channel managers use to track, manage, and optimize their channel day in and day out.

However, those same digital tracking methodologies are not generally able to measure incrementality and can’t be relied upon for forecasting or cross-channel budget allocation. So that’s where a tool like MMM comes in: It helps marketing leaders measure marketing impacts more holistically and allows them to allocate budget across different marketing channels because it can compare those channels on an apples-to-apples basis.

So while channel-specific attribution is used operationally, MMM is generally used from a macroscopic impact perspective. Can the output of channel-specific or deterministic attribution help refine MMM models?

Many people want to build hybrid models that combine data from digital tracking methods with a top-down MMM model. I generally think this is a bad idea, or at the very least you want to be very, very careful with how you implement it.

The reason why it’s not a good idea is because digital tracking methods tend to be inherently biased towards the channels that are 1) at the bottom of the funnel and 2) easiest to track. One of the main reasons to use an MMM is to give you an (ideally) unbiased view of how your marketing channels are performing, so you don’t want to pollute your model with biased results from your digital tracking system!

So biased inputs will reduce the model’s effectiveness. What about the reverse? Attribution is shifting with multiple sources of data. Can the MMM output help with attribution data?

Yes! Sophisticated marketing teams do have a process that “triangulates” the results from different measurement methods. For example, they might build a spreadsheet that lines up the results from the MMM with last-touch and first-touch attribution as well as platform metrics in order to get a sense for where the different methods agree (and disagree), which they can then use for operational decision-making.

One easy example of this is creating incrementality “coefficients” that channel managers can use operationally. That might look something like, “We know we need to multiply the in-platform CAC for Facebook by 1.25 to hit our true incremental CAC-LTV payback targets.”

I’ve seen this too. A macro view of unbiased, holistic reporting can be used to create coefficients that recalibrate the granular, more continually available reporting. Essentially: “We know Facebook in-platform reporting runs about 25% high, so we divide reported ROAS on those campaigns by 1.25 until we test again.”
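A worked version of that coefficient idea, with assumed values. Note that a 25% over-report is corrected by dividing ROAS by 1.25 (a 20% haircut), the exact counterpart to multiplying CAC by 1.25:

```python
# Worked sketch of applying an incrementality coefficient (values assumed).
# If the MMM says a platform over-credits by 25%, scale its reported CAC
# up, and equivalently its reported ROAS down, before comparing to targets.

platform_cac = 20.00   # CAC the platform dashboard reports ($)
coefficient = 1.25     # from MMM / incrementality testing

print(f"Adjusted CAC:  ${platform_cac * coefficient:.2f}")  # $25.00

platform_roas = 3.0
print(f"Adjusted ROAS: {platform_roas / coefficient:.2f}x")  # 2.40x
```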

Thank you for your time, Michael. This was a very enlightening look at how brands are combining these traditionally separate methodologies of measurement. We’ll definitely have to do a follow-up webinar. In the meantime how can folks get in touch with you?

You can follow me on LinkedIn for my writing on marketing effectiveness, and make sure to check out Recast if you want a closer look at a modern MMM platform.

To learn more about Branch’s attribution solutions, request a demo with our team.

The post How To Combine Attribution Data With Media Mix Modeling appeared first on Branch.

]]>
https://www.branch.io/resources/blog/how-to-combine-attribution-data-with-media-mix-modeling/feed/ 0
How Are E-commerce Brands Harnessing the Power of CTV? https://www.branch.io/resources/blog/how-are-e-commerce-brands-harnessing-the-power-of-ctv/ https://www.branch.io/resources/blog/how-are-e-commerce-brands-harnessing-the-power-of-ctv/#respond Tue, 13 Feb 2024 15:50:20 +0000 https://branch2022stg.wpengine.com/?p=18174 Increasing opportunity begs the practical question, how exactly should advertisers view the CTV opportunity? Our Head of Growth sat down with Dimitri Souffan, who leads Business Development at the CTV platform Vibe, to ask some pointed questions about this emerging opportunity.

The post How Are E-commerce Brands Harnessing the Power of CTV? appeared first on Branch.

]]>
Connected TV (CTV) is one of the fastest-growing sectors in digital media, and there’s finally a line of sight for CTV ad spend to overtake linear TV. As digital streaming platforms wage war for this emerging channel, the battle has shifted from acquiring users at all costs to retaining, and eking profitability out of, these expensively acquired users; Netflix introduced an ad model in 2023, and Amazon will soon introduce advertising to Prime users. This shift will open up ad inventory and, combined with growing viewership, will create an opportunity for advertisers. But increasing opportunity raises the practical question: How exactly should advertisers view the CTV opportunity? I sat down with my good friend Dimitri Souffan, who leads business development at the CTV platform Vibe, to ask some pointed questions about this emerging opportunity.

— Adam Landis, Head of Growth at Branch


Hi Dimitri, thanks for taking the time. To start, can you tell us a little about your company?

Vibe is a bit of an outlier. We’re a CTV ad platform with all the power, agility, and performance focus you would expect from legacy digital channels while remaining laser-focused on simplifying the ad-buying process for brands of all sizes.

The feedback we often get is a huge “thank you!” for creating a simple, streamlined interface, even from savvy, veteran mobile and e-commerce marketers. There’s a real appetite for agility and transparency right now, and Vibe is all about empowering our customers to take control of their ad-buying experience.

We’re a product-first company. Over half of our team are veteran developers, constantly pushing performance forward, backed by a very lean sales and admin team. The platform speaks for itself — and that’s how we like it.

CTV is a relatively new space, and we could easily talk about an approach to all clients. So why are we talking specifically about e-commerce? What are the specific opportunities for these types of companies?

E-commerce brands capitalized on a kind of golden age during COVID and are quickly adapting their marketing efforts to a new economic and digital landscape. Online shoppers are now pickier about how and where they spend their money. Meanwhile, digital signal loss and cookie deprecation have dramatically decreased e-commerce campaign return on advertising spend (ROAS).

The top challenges facing e-commerce marketers this year — effective targeting, increased consumer trust, and coherent omni-channel campaign deployment — all fit quite nicely in the CTV framework.

CTV advertising has a huge advantage today in that it doesn’t need third-party cookies or approved identity signals from Apple products to target audience household IPs across devices, so e-commerce ads can run in front of highly engaged audiences on premium channels. Demographics, geography, context, and first-party audience targeting on CTV significantly contribute to e-commerce campaign conversion, while retargeting capabilities mitigate cart abandonment rates.

Meanwhile, your ad creative runs on the most trusted medium by far — premium television — reassuring customers about your brand’s legitimacy and your product’s capabilities, no matter where they’re watching (e.g., mobile, tablet, smart TV).

It’s a huge opportunity for e-commerce brands. Not only is CTV a performance channel in its own right, but it also supercharges other channels. Social and display campaigns all perform significantly better when running concurrently with CTV.

When looking at CTV, what are the biggest hurdles e-commerce companies face?

It’s important for larger companies who have worked with linear TV in the past to understand that although the look and feel of the TV experience will remain the same for viewers, CTV advertising is a completely different animal. One of the most exciting aspects of CTV advertising today — especially programmatic CTV — is probably its pricing, which is quantified in CPMs rather than inflexible, long-term contracts. Budgets that never would have covered the cost of a national TV campaign can get you incredible results on CTV, thanks to advanced targeting capabilities like CRM targeting, retargeting, MMP integration, and more.

Integration with Branch’s measurement solutions can help advertisers measure incremental reach, brand lift, cross-device attribution, and campaign ROI.

Right, we’ve talked about the CTV measurement that Branch provides. What are the targeting options that e-commerce companies have, and how are they using them?

Vibe offers geo-targeting (and exclusion) down to ZIP code, audience interest targeting (e.g., golf, beauty, fashion, automotive), demographic targeting (e.g., income level, political affiliation), contextual targeting (e.g., channel, live sports game, device type, time slot), retargeting, and CRM targeting. The point is to spend precious marketing dollars speaking to audiences that are actually interested in your brand, not spray and pray to massive (and expensive) audiences. It’s also essential to leverage real-time campaign results segmented by targeting dimension and act on those insights.

Audience interest targeting is an interesting one. How does this work?

We’ve built some very powerful audience segments for our customers, and we are constantly adding to our interest targeting pool to help advertisers find in-market audiences. We create proprietary segments in-house, based on behavior observed over thousands of campaigns, and we also work with third-party data providers like Oracle or Lotame to achieve real scale for our customers.

How does the third-party data work?

The predefined segments we work with give our customers great targeted reach: for example, 250K+ basketball lovers. Meanwhile, first-party data can get very specific and really push effective performance forward.

You say “effective.” How would the advertiser know what works best?

Well, as much as advertisers would love a straight, one-size-fits-all answer, it’s really all about testing and acting on tangible insights. Fortunately, multivariate campaign deployment is super simple with Vibe and allows advertisers to optimize their campaigns in real time, based on those insights. The outcome then becomes your benchmark (e.g., sales, installs, add-to-carts).

Let’s come back to time-slot targeting. How does time-of-day work for e-commerce?

Again, it’s back to driving your ideal outcome with the best ROI. A simple example: we’ll launch and schedule during the day from 6 a.m. to 11 a.m. After some time, we look at the data and evaluate when it’s most effective to drive conversions. Then we can limit our timeframe to maximize ROI (the sketch below shows the basic arithmetic). Remember, frequency capping is also a key component of campaign impact, and Vibe advertisers can determine the times at which and the frequency with which their customers interact best with their brand.
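Here is a hypothetical sketch of that dayparting workflow. The spend, conversion counts, and the $40 CPA goal are invented for illustration, not Vibe data:

```python
# Dayparting sketch: group results by hour, then keep only the hours
# beating the CPA goal.

hourly = {  # hour of day -> (spend $, conversions)
    6: (500, 10), 7: (500, 14), 8: (500, 18),
    9: (500, 16), 10: (500, 9), 11: (500, 7),
}
CPA_GOAL = 40.0

winners = {
    hour: spend / conv
    for hour, (spend, conv) in hourly.items()
    if conv > 0 and spend / conv <= CPA_GOAL
}
for hour, cpa in sorted(winners.items()):
    print(f"{hour:02d}:00  CPA ${cpa:.2f}")
# Hours 7, 8, and 9 clear the goal; restrict the schedule to those slots
# and re-test as performance shifts.
```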

I remember hearing targeting options years ago for linear TV, and it was something like, “your ad will show sometime on this day.” I assume the same logic also applies for geo?

Yes, although your CPM might go up with more granular targeting, your dollar will go further and you will see better ROI.

Channel is a little more obvious, but can you give us some examples of how e-commerce would look at targeting?

Channel targeting is what you might call the “old school” method, used in linear TV, but today you have hundreds of premium channels at your fingertips at unprecedented prices.

Interestingly, you may think channel-specific targeting is effective, but we’ve found by observing thousands of clients that audience targeting is actually much more effective. I think this is because of legacy targeting limitations. Another example of why CTV is more powerful than linear.

Retargeting is a well-known, effective methodology on the web, where you have cookies and IDs that you can show ads to again. How does this work for CTV?

We touched on this a bit earlier, but basically CTV platforms are able to target audiences by IP address and then graph those households for even further impact. Advertisers then place a custom pixel on their websites to monitor CTV-enabled web visits, clicks, purchases, app installs, etc. Those same pixels also enable the retargeting of past customers or the exclusion of those same customers for acquisition campaigns.

Interesting. That could be super effective. How does this apply to e-commerce companies?

There are a bunch of ways in which retargeting applies to e-commerce advertising, if only because it adds an impactful touchpoint to an increasingly fragmented customer journey. We have seen truly astounding ROAS figures over the past year, mostly with retargeting campaigns, which engage high-intent audiences in a premium, brand-safe environment. And don’t forget, those same audiences can also be excluded to focus campaigns on new segments that make a real impact on incremental reach and brand awareness.

Last but not least, you can create that “TV effect” that builds trust and legitimacy in your brand. Think about the experience: a user comes to your website, browses a bit, and then leaves. A few days later, while watching TV, they see an ad for your website, come back feeling more confident, and make a (trackable) purchase. How cool is that?!


For more on how to take advantage of CTV advertising, request a demo!

The post How Are E-commerce Brands Harnessing the Power of CTV? appeared first on Branch.

]]>
https://www.branch.io/resources/blog/how-are-e-commerce-brands-harnessing-the-power-of-ctv/feed/ 0